When going green gets you taxed off your homestead
For the past decade, I’ve been living in the northern mountains of New Hampshire, in a home of just 1,350 square feet. It’s not a big place or a fancy place, but it soon became an integral part of our lives. It gave us safe shelter from the outside world; it’s our office, church, school and grocery store all rolled into one. In short, we bothered no one and no one bothered us.

When my wife and I took on this property it needed a lot of work, but over the years we slowly transformed it into the super-efficient homestead you see before you. Now flourishing on all sides with organic gardens and fruit trees, it’s fair to say we have worked every inch of this land. We were intentional about living this way; living simply while growing our own food is important to us. With a small flock of chickens and several sheep, this little gem soon became the absolute epitome of sustainable living. We simply grew what we ate and ate what we grew. Living this way meant we didn’t need a huge income. Given that our savings had been all but wiped out by an earlier illness, this lifestyle fitted us rather well.

Here’s where it gets tricky. To be clear, growing food year round is hard work, more so when the long northern winter hits. In an attempt to extend the growing season, I decided to build a high tunnel. This final addition was an important piece of the sustainable puzzle. Little did I know at the time, but this “improvement” was about to be my downfall. While building the ends of the high tunnel I was careful to make sure they could withstand the strongest winds. I must have built them a little too well, because they soon provoked a visit from the local tax inspector. I guess he liked what he saw, because he then began sniffing around for a reason to increase our property taxes. The first increase came in at more than $1,500. The second increase meant our property taxes had doubled. When you are trying to live a simple life, this rate of increase was not only unwelcome, it was, sadly, unsustainable.
Ultimately we were being priced out of our home. The irony is, had we let the place go to ruin, the property taxes would have stayed the same. With no sign of the taxman letting up, we knew we couldn’t survive another increase. If I’m honest, I’m still a little irritated by this. You know something’s not right when an Englishman such as myself gets to complain about American taxation! And so, the house went up for sale.

Before long, a lady from Florida saw the photos online and was immediately smitten. She flew up the very next day and said it was the sharpest-looking house she had viewed to date. She particularly liked the warm, friendly feeling of our home and kindly commented on how clean everything was. Her husband was suitably impressed with the efficiency of the house, which could be heated year-round with just four cords of wood. This was really important to him (just as it had been to us) because he wanted a manageable place with affordable utility bills. He admired the barn and even marveled at my small collection of hand tools, all neatly lined up on the workbench like a surgeon’s operating table. It seemed to tick all the right boxes, but after several days of deliberating, the wife finally decided not to buy. The reason? She had once traveled to Japan, and while there she had bought a large collection of ornamental china vases. For the past fifteen years, wherever she lived, they lived. No matter how hard she tried in her mind, she simply couldn’t find a place in our tiny house to put them all. As her husband rolled his eyes for the third time, I realized this was a classic case of possessions owning their owner. She no longer owned those vases; those vases owned her!

Long story short, someone else soon came along and snapped up our homestead. This actually brought heaviness to my soul; my wife and I have fond memories here. But it also made me thankful for the time we had worked and owned this beautiful place. For now, this American dream has come to an end.
We have decided to sell everything we own and try our luck back in the motherland. As liberating as this may sound, it’s probably the first time I’ve packed a suitcase and not wanted to be somewhere else. I’m really not sure where the wind will take us next, and although we will miss the stability of our home, my message to you is clear. Things shouldn’t define who we are, because one day, we may have no choice but to let them go. Wish me luck! Like what you read? Check out the rest of my book.
https://jameslilley24.medium.com/when-going-green-gets-you-kicked-out-of-your-homestead-6322bbe17158
['James Lilley']
2019-05-05 04:08:01.115000+00:00
['Life Lessons', 'Green Energy', 'Gardening', 'Home', 'Sustainability']
5 Steps to Write your Brand Story
What is a Brand Story? A brand story is important when it comes to marketing your business. It helps create a sense of uniqueness in what you’re trying to sell to potential clients and helps you relate to your audience. It is an advantage because it supports the process of content creation, such as the videos you make or the articles you put up on your blog. Most of what you produce can revolve around your brand story. Once you’ve made it part of your business, your audience can easily recognize the underlying message in your content: your brand story.

Your introduction Now, the first step to writing out your brand story is to think of how you would introduce yourself, whether it be for your business or personal brand. You could also include your belief if you have one. For example, when I introduce myself to potential clients, it goes like this: “Hi, my name is Stanley and I believe in possibilities. I am in the real estate industry, or in a training and workshop business (depending on whom I’m speaking to), where I conduct training to help people focus on possibilities so that they can achieve what they want to achieve.”

Your background Next, you need to write down your background story; let your audience into a part of your life. Include things like what it was that made you decide to start your own business, and the stories behind how the business started. When you’re jotting down your thoughts, be as detailed as possible and include as many events as you can that relate to what caused you to build this business or personal brand from the ground up.

Your struggles & successes Thirdly, include the dark side too. Write down the struggles you’ve faced and are facing. All the challenges and difficulties that you face in your business journey are part of your story too, and your audience should know that it’s not all sunshine and rainbows when it comes to starting your own business.
However, they should also be informed that with every problem, there will be a solution. Explain how you’ve overcome these obstacles, and these will become the success stories that you make part of your brand story. People will want to relate to your struggles, and you want to show them that you also started somewhere. That is the “human” part of the personal brand. However, you don’t want to dwell only on the struggles; also share your success stories. Talk about how you succeeded and why these successes matter to you.

Your mission statement Last but not least, write out your mission statement (this can also be included in the first step, but you would want to reiterate it at the end as well). A mission statement is what you would like to do with your business. What is your mission in starting your business or personal brand? What do you hope to achieve, and what impact do you wish to create for the people who do business with you?

Your brand story should be one that is carefully and seriously thought out. I have a worksheet available for those who wish to have a guideline to work through as they write out their brand story; it includes the 5 steps mentioned above, the thought processes, and examples on “How to Create Your Brand Story”. Here’s a suggestion: create a simple video of you talking about your brand story, or if you’re uncomfortable in front of a camera, you can also write it as an article and post it on your website. And there you go, good luck in your writing process!
https://medium.com/@imstanleyship/5-steps-to-write-your-brand-story-5df1d4a5d0a6
['Stanley Ship']
2020-02-07 16:44:19.715000+00:00
['Branding', 'Personal Branding', 'Marketing', 'Brand Story']
Practice Tips to Improve Your Shooting
Firing a rifle entails more than just pulling the trigger. It requires a lot of practice. A regular dry-fire routine is essential to improve shooting skills and helps make one more comfortable with a gun. Here are a few tips to help improve shooting skills.

Position and Balance Proper body positioning is fundamental to improving shooting skills. The proper beginner stance is to have your feet shoulder-width apart and, for right-hand shooters, the left foot slightly forward and the right foot slightly backward. Balance is also vital on a shooting range. It ensures a comfortable position to fire. With proper balance, you maintain control and stability. Good positioning and balance are essential skills beginners should learn.

Familiarity with Your Weapon It is essential to get acquainted with your weapon. Get familiar with the controls of the gun. The point of this is to get more comfortable with the firearm. Be sure you can locate the safety and trigger. Try dry-fire practice at indoor or outdoor shooting ranges. Also, practice how to load and unload the gun. Invest in your weapons first and master them before trying out other types of firearms. It’s also advisable to get ear and eye protection gear.

Try Shooting Drills A good shot doesn’t mean you are a good marksman. It might be beginner’s luck. Achieving good shots requires continual practice. Try shooting using the ball-and-dummy drill. This drill is an easy way to see how your sights move as you pull the trigger. Shoot the target until you get better shots. Doing this will develop a more consistent trigger press.

Focus on Accuracy By enhancing your accuracy, you improve your overall shooting skills. Always set a target and keep your eyes on it when shooting. Then consistently hit the target over and over, while changing the distance. To start, try short ranges before moving out to long ranges. With time, your overall accuracy will improve.
Handling firearms requires persistence and patience. The overall key point is to practice regularly to improve your shooting skills, and to make sure you stay interested. The more you practice, the more experience you gain, and the better you become.
https://medium.com/@davidkenik/practice-tips-to-improve-your-shooting-bd74ad9b6635
['David Kenik']
2020-10-09 14:26:12.772000+00:00
['Firearms', 'David Kenik', 'Firearm Training', 'Guns']
butterfly
I wake like draws open between forms of light marching ant like with purpose and tiny rhythms on Kafka’s grass sliding through tunnels following white rabbits into portals sinkholes of imagination that do not realise the transformation like sand without grains or rain on the roof of a log cabin where butterflies sleep wings closed just as our minds hide half the other’s pattern.
https://medium.com/@naomifolb/butterfly-b8a9e81f0d29
['Naomi Folb']
2020-12-18 19:55:03.426000+00:00
['Creativity', 'Kafkaesque', 'Mindfulness', 'Resilience', 'Poetry']
4 Secrets of Marketing Automation for Marketing Agency
Businesses cannot overstate the value of automation in today’s digital world. Marketing automation for marketing agency firms is the critical approach for them to fulfill their goals and expand their operations as they develop. Indeed, with the appropriate technology in place to launch appealing marketing efforts for your clients, you can scale with confidence while creating improved work processes that yield better outcomes.

According to emailmonday, 75% of marketing professionals are already utilising at least one sort of marketing automation technology in their strategy, while a Forrester report found that marketing automation spending is anticipated to reach $25.1 billion annually by 2023. Interestingly, as per Marketo, 76% of businesses implementing marketing automation generate a return on investment in year one. All these numbers demonstrate the immense scope of automation’s growing popularity, but the practical benefits are even more apparent. Certainly, marketing automation is vital if you want to expand your marketing agency. It allows you to develop and take on more clients without placing additional strain on your present resources. It is one of the primary advantages of using Aritic PinPoint, since it aids in the efficient automation of essential processes.

4 Tips on Marketing Automation for Marketing Agency Let’s take a look at how marketing automation may help your marketing agency expand while also adding significant value and efficiency to your services and operations.

1. Invest in Inbound Marketing The first thing you think of with traditional advertising is certainly outbound marketing: a company sends out its message through social media, cold emails or advertisements for its audience. But if the message isn’t relevant, doesn’t have a clear call to action or isn’t attractive enough, it usually gets ignored. That is why focusing on inbound marketing is such a game-changer.
With inbound marketing, you’ll see prospective customers come to you, because it’s less interruptive and proves to be more valuable when mixed with marketing automation for marketing agency firms. The idea is to create a system that constantly attracts the attention and interest of your prospects to your marketing agency. Above all, inbound marketing cuts costs, strengthens customer confidence and provides you with quality leads.

2. Foster Interest and Consideration through Value Differentiating Factors Your differentiation strategy is the way you set your business apart from otherwise similar competitors in the market. Typically, this implies highlighting a significant value differentiating factor between you and your competitors. And this difference must be valued and borne in mind by your potential customers. Strong differentiating factors will result in a competitive advantage for your marketing agency. Value differentiating factors link to positioning. Businesses use them to differentiate their products or services from their competition to provide unique value to the customer, which is why successfully differentiated businesses are constantly developing and promoting products, services and, above all, unique customer experiences. Almost 52% of businesses focus on having different value differentiators for their various products or services. Therefore, we believe any marketing agency or company should continue to foster interest and consideration through value differentiating factors.

3. Enhance Engagement with Segmented Communications Email blasts no longer work, especially when audiences today have greater expectations of the companies and agencies they do business with. They want more relevant communications or messages that align perfectly with their interests and challenges. As a matter of fact, 70% of millennials are irked when they get email communications that have nothing to do with their interests.
The good news is that one of the best ways to increase the relevance of any email message is through segmentation and embracing drip campaigns with personalized content. Segments are filters that are applied to the main list. Specifically, this means that you can discover and classify subscribers showing similar traits so you can send them more relevant communications on a specific topic, promotion, or product that suits their interests. Using the power of segmentation means you can send more relevant and better email campaigns that reach your audiences with the information they are specifically looking for. It ultimately drives more engagement with the aid of marketing automation for marketing agency firms.

4. Nurture Leads Nurturing leads is about your relationship with your prospects and how you can strengthen those relationships to win sales. That’s where lead scoring plays a dominant role: it lets you know which prospects are ready to convert and which need more effort from you in nurturing them. Lead scoring is also a go-to tool driven by marketing automation for marketing agencies, used to set up automated systems that feed relevant communications to nurture cold leads. In fact, lead-nurturing emails have a response rate of up to 10x compared to standalone emails. While it takes time and patience, lead nurturing establishes, if not the readiness to convert, at least valuable knowledge, trust and relationships. These relationships with your leads are crucial to your marketing efforts and ultimately will drive the success of your marketing agency business.

Parting Thoughts Marketing automation is the way forward. Now that you know the best practices of marketing automation for marketing agencies, you can begin to reflect on the various processes within your marketing scope that you can automate.
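For readers with a technical bent, the idea that a segment is just a filter applied to the main subscriber list can be sketched in a few lines of code. This is a minimal illustration only; the subscriber records, field names, and trait values below are invented for the example and do not correspond to any particular automation tool’s data model:

```python
# A segment is simply a filter applied to the main subscriber list.
# All records and field names here are invented for illustration.

subscribers = [
    {"email": "a@example.com", "interests": ["crm", "email"], "opens_last_90d": 12},
    {"email": "b@example.com", "interests": ["social"], "opens_last_90d": 0},
    {"email": "c@example.com", "interests": ["email", "automation"], "opens_last_90d": 5},
]

def segment(subscriber_list, interest, min_opens=1):
    """Return subscribers who share a given interest and are still engaged."""
    return [
        s for s in subscriber_list
        if interest in s["interests"] and s["opens_last_90d"] >= min_opens
    ]

# Everyone interested in email marketing who has opened something recently:
email_segment = segment(subscribers, "email")
```

A drip campaign then amounts to sending a topic-specific message to each list such a filter produces, instead of blasting the whole list.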
Automation makes it easier to implement the agency’s various marketing strategies by not requiring them to hit “send” every time for each campaign, email, message, or social share they create. The right tools will automate all these actions based on specific triggers or schedules you set in advance. When you have the right automation technology at your side, your team can automate online marketing campaigns, reduce administrative work, eliminate repetitive tasks, and free up their bandwidth to focus on more critical issues. Indeed, this is where marketing automation software like Aritic PinPoint extends your reach in all these dimensions.
https://medium.com/@aritic/4-secrets-of-marketing-automation-for-marketing-agency-8c6b7656ed5e
[]
2021-07-05 13:39:26.058000+00:00
['Marketing Automation', 'Marketing Agency', 'Marketing Strategies', 'Marketing', 'CRM']
STASIS EURS — The Most Transparent Stablecoin in 2021
In the modern world, gripped by volatility and uncertainty about the future of the financial and cryptocurrency sector, the need for trusted and reliable stablecoin solutions rises dramatically. The ongoing acceptance of distributed ledger technology and blockchain-based cryptocurrencies paves the way to a smooth shift from fiat to digital. Nations tend to rely on their balance-sheet currencies and the assets they see fit in the modern world. Despite being an EU-only currency, the euro has captured the attention of individuals and corporates in many countries around the world. From Latin America to Asia, people are discovering the benefits of the euro and, consequently, the convenience of using a euro-backed stablecoin such as STASIS EURO (EURS).

Opening the box of stablecoin use cases Since its inception, STASIS has been a crypto-enabler platform designed to bridge the gap between traditional finance and cryptocurrency markets. Its first product, EURS, is a fully collateralized, euro-pegged stablecoin. Launched in June 2018, EURS currently has a market cap of over 100 million in USD equivalent and is a top-100 digital asset by market cap. In 2021, EURS is the world’s leading non-USD stablecoin, with a broad international base of investors who have realized how convenient and popular our solution is for the market. Our unparalleled transparency means EURS can be traced directly back to how it was created and how it operates today. Created with the clear goal of opening the path to crypto adoption in Europe and beyond within EU-accepted laws and regulations, it works through established banks and auditors. STASIS reserves expanded beyond cash and its equivalents in July 2021, and we provided a more detailed breakdown of reserve composition, adding clarity and insight into the assets backing the largest euro-backed stablecoin.
Mindful of community sentiment, our company responds to engagement from EURS users, developers, and other stakeholders by deepening its commitment to transparency and exploring new opportunities to collaborate with the community. Later this year, we expect to announce several new opportunities for members to become more formally involved with STASIS standards and governance activities. STASIS’s standards have provided assurance and trust to the ecosystem, contributing to the dramatic growth of EURS over 2021. EURS is actively employed on major exchanges and in DeFi protocols and is increasingly being used as a form of payment across a growing number of applications. The goal of the multi-chain vision is becoming real, with EURS now live on alternative blockchains.

Since its inception, the STASIS team has considered transparency our main distinguishing feature, and our quality differentiator is working with one of the largest auditors in the world. BDO, one of the world’s leading networks of independent public accounting firms, is one of our most trusted partners. This international network of public accounting, tax, and advisory firms announced a total combined fee income of US$10.3 billion / €9.2 billion, representing year-on-year growth of 7.8% at constant exchange rates (+7.5% in euro; +6.7% in US$). BDO firms provide professional services in 167 countries, with 91,054 people working out of 1,658 offices worldwide.

Understanding what customers are striving for We at STASIS have always strived for maximum openness with our token holders, and, accordingly, we have the capacity to perform verifications on a daily basis. Our team has outlined multiple times that we have never engaged in any illicit activity or launched an ICO to raise funds. We provide an unrivaled level of reserve transparency so that investors can always be confident that their digital assets are fully backed by the appropriate collateral.
However, the project runs on funds from its stakeholders (we did not raise money through crowdfunding and are only experimenting with attracting external investors), and we did not see enough support from the community for the view that substantial verification from a top counterparty gives an advantage, so we decided to reduce the frequency of these verifications. If someone is interested in sponsoring us to increase the frequency of this verification, don’t hesitate to contact [email protected]. Our team will contact you and review the possibility of making it happen. In the meantime, we will continue to publish reports on a quarterly, annual, and on-demand basis.

Stay tuned and follow us: Website — https://eurs.stasis.net/ Twitter — https://twitter.com/stasisnet Telegram — https://t.me/stasis_community Discord — https://discord.gg/uKVjgR8C Medium — https://medium.com/stasis-blog Youtube — https://www.youtube.com/channel/UCq7eJ9c8ec2PEQweJg-pcDA
https://medium.com/stasis-blog/stasis-eurs-the-most-transparent-stablecoin-in-2021-aa03d779fb52
['Alex In Krypto']
2021-09-02 13:23:42.757000+00:00
['Transparency', '2021', 'Cryptocurrency', 'Digital Asset', 'Stable Coin']
Thinking Citizen Blog — Supreme Court Justice (IV): Clarence Thomas
Thinking Citizen Blog — Saturday is Justice, Freedom, Law, and Values Day Today’s Topic — Supreme Court Justice (IV): Clarence Thomas — second African American justice, longest serving (29 years), conservative.

In 1948, Clarence Thomas was born into poverty in a small black town, speaking Gullah as his first language. His father was a farm laborer, his mother a maid, both descendants of slaves. In 1974, Clarence graduated from Yale Law School, and in 1991 President George H.W. Bush nominated him to the US Supreme Court. Today, he is the longest-serving justice on the court and is best known for his conservatism, deeply rooted in the values of hard work and self-reliance instilled in him by his maternal grandfather, Myers Anderson, whom he has called “the greatest man I have ever known.” Justice Thomas’s autobiography is entitled “My Grandfather’s Son.” Experts — please chime in. Correct, elaborate, elucidate.

FEROCIOUSLY OPPOSED TO AFFIRMATIVE ACTION 1. Thomas strongly believes that racial preferences are a violation of the equal protection clause. 2. He also believes that the policies undermine the value of self-reliance preached by Frederick Douglass and practiced by his grandfather. 3. His personal experience with affirmative action after graduating from Yale Law School was that few potential employers took the degree seriously — the assumption being that the only way he got into Yale was his race. NB: His intellectual mentor has been Thomas Sowell, the African American economist, whose analysis of affirmative action programs around the world has shown that a.) they tend to benefit the elite within the favored groups at the expense of the non-elite within the non-preferred groups, b.) they lead members of non-preferred groups to re-designate themselves as members of the preferred groups, and c.) they reduce the incentives of both groups to do their best, because it is either unnecessary or futile.

A TEXTUALIST, ORIGINALIST, LIBERTARIAN — CLOSEST TO JUSTICE SCALIA 1.
“On average, from 1994 to 2004, Scalia and Thomas had an 87% voting alignment, the highest on the court.” 2. He is opposed to judges making law rather than just interpreting it. 3. A Catholic, Thomas has opposed abortion and dissented in opinions that affirmed gay rights, such as Romer (1996) and Obergefell (2015). NB: He has generally supported police against defendants.

CONTROVERSIES — ANITA HILL ALLEGATIONS, SILENCE DURING ORAL ARGUMENTS 1. Sexual harassment allegations by Anita Hill, who had worked with Thomas at the Department of Education and the Equal Employment Opportunity Commission, were denied by Thomas. He was ultimately confirmed by a vote of 52 to 48, with 11 Democrats voting for confirmation and 2 Republicans against. 2. Thomas is known as the silent justice. Prior to Covid and teleconferencing, Thomas had spoken in only 32 of 2,400 oral arguments since 1991. He had gone over a decade without saying a word. Jeffrey Toobin, writing in the New Yorker, has called this silence “disgraceful” behavior that has “gone from curious to bizarre to downright embarrassing, for himself and for the institution he represents.” 3. “Thomas has given many reasons for his silence, including self-consciousness about how he speaks, a preference for listening to those arguing the case, and difficulty getting in a word. Thomas’s speaking and listening habits also may have been influenced by his Gullah upbringing, during which time his English was relatively unpolished.” NB: “Thomas is not the first quiet justice. In the 1970s and 1980s, Justices William J.
Brennan, Thurgood Marshall, and Harry Blackmun generally were quiet.”

https://en.wikipedia.org/wiki/Clarence_Thomas My Grandfather’s Son Clarence Thomas — Wikiquote Affirmative Action Around the World Clarence Thomas Supreme Court nomination Anita Hill Click here for the last three years of posts arranged by theme: PDF with headlines — Google Drive

YOUR TURN Please share the coolest thing you learned in the last week related to justice, freedom, the law or basic values. Or the coolest, most important thing you learned in your life related to justice, freedom, the law, or basic values. Or just some random justice-related fact that blew you away. This is your chance to make someone’s day. Or to cement in your mind something that you might otherwise forget. Or to think more deeply about something dear to your heart.
https://medium.com/@john-muresianu/thinking-citizen-blog-supreme-court-justice-iv-clarence-thomas-7b7283f9c7b9
['John Muresianu']
2020-12-06 18:04:04.324000+00:00
['Supreme Court', 'Justice', 'United States', 'Thinking Citizen Blog']
OCI card application
I will try to cover the OCI card application here. The two most important things are:

1. The official Indian Government website for the OCI application, ociservices.gov.in, where you upload soft copies of your documents.

2. The service provided by Cox and Kings Global Services (CKGS), to which you send hard copies of your documents. This is a life saver and an absolutely amazing service. They have good documentation and a good system which updates status frequently, and they are the main point of contact for all interaction.

A few tips:

1. Make sure to choose FedEx both ways. It makes life so much simpler.

2. Have all hard copies of documents collected and well ordered with tags in a folder. It makes life very easy for the authorities, and the chance of a lot of waiting and back-and-forth of documents goes down significantly.

3. If in doubt whether to notarize a document or not, just go ahead and notarize it.

4. Make sure to do all this well in advance of traveling, since it can take up to 60 days for your OCI card to show up.
https://medium.com/@gadgilsanjeev/oci-card-application-22edcdabc58c
['Sanjeev Gadgil']
2019-11-22 00:01:59.718000+00:00
['Oci Card', 'Baby', 'Immigration', 'India', 'USA']
The Positive Power of Social Media Influencers in 2020
Photo by CoWomen on Unsplash Thanks to social media, mobile viewing is now outpacing cable television as a primary source of entertainment, and for good reason. With apps like YouTube, TikTok, and Instagram providing a platform for anyone to become an online celebrity, viewers have a significantly greater number of niche channels and personalities to choose from than any cable provider offers. But the benefits of these social platforms extend well beyond mere entertainment. Many online celebrities are using their social influence to create positive changes in their communities and around the world. Here are five people who are using their viral status for the greater good.

Toba Courage a.k.a. “The Hack” Toba Courage is a YouTube celebrity who takes penny-pinching to extreme levels with daily “no money” challenges in big cities like London and Stockholm. Relying only on the kindness of strangers, local food banks, promotions and street smarts, Toba shows viewers how to “hack” the system and scrape up enough food to get through the day. Although Toba probably does well enough to stay off the streets personally, he does an exceptional job highlighting many of the existing programs and companies that look after the community, and the destitute situation that millions of people find themselves in every day.

Jimmy Donaldson a.k.a. “Mr. Beast” Jimmy Donaldson became a household name in 2017 when his YouTube channel “Mr. Beast” went viral after he counted to one hundred thousand live over 44 hours! As Jimmy’s channel continued to grow, he began receiving offers for sponsorship and utilized those sponsors as a way to give back to the local community and the environment. In his most ambitious stunt, Mr. Beast attempted to plant 20 million trees before 2020 as a way to celebrate reaching 20 million subscribers.
With a huge amount of effort, the 20 million tree goal was eventually reached, thanks to the global community and famous contributors like Tesla CEO Elon Musk, Salesforce CEO Marc Benioff, and Shopify CEO Tobi Lutke.

Lauren Singer Lauren Singer might not be making as big of a splash on social media as Mr. Beast, but that’s likely because she’s trying to conserve water. Lauren is a living example of how much we can reduce waste by making conscious decisions in our daily lives. For the past eight years, Lauren has managed to consolidate her garbage enough to fit inside a tiny 16oz mason jar. This was achieved through everyday practices like composting, recycling, and repurposing materials. She continues to provide information, tips and advice on waste reduction and environmentally friendly practices through her daily Instagram posts.

Mark Rober Mark Rober is one of those YouTubers you can’t help but love. This DIY dad with a childlike curiosity and a mechanical engineering degree takes science projects to a new extreme and sets world records along the way. Mark’s first YouTube video went viral back in 2011, when he crafted a Halloween costume using two iPads to give the illusion of a hole in his chest. The best part about Mark’s channel, however, is that he educates his audience throughout his progress. Whether the topic involves physics, chemistry, or engineering, viewers will always find his content just as informative as it is entertaining.

Florence Simpson Florence Simpson is the TikTok poster girl for body positivity. Florence’s posts often focus on spreading good vibes to her followers, and with over 600,000 of them, that’s a lot of love! Instead of intense dieting or shying away from good foods, Florence shows her viewers that they can be happy and healthy without drastically altering their lifestyle.
Apart from quick cooking tips and daily affirmations, Florence also takes her viewers along during daily routines like haircuts, shopping, and more. This transparency allows her followers to feel as though they are experiencing her positive lifestyle firsthand and can live the same way regardless of shape or size. The Influencer Starter Kit Becoming a social media influencer is 90% genuine personality. The remaining 10% is whatever’s being used to capture that personality for the world to see. Many influencers start off with the basic necessities and eventually work their way toward the equipment best suited for them, whether they’re traveling, doing extreme sports, or recording from the comfort of their own home. That being said, there is no right answer for what gear works best, but here are some recommendations: Cameras/Webcams Samsung Galaxy Note 10 — An all-in-one device for everyday high-end photo and video. Logitech C920 PRO Webcam — For those who often find themselves vlogging directly from their computers. Canon EOS M50 — A superb 143-point autofocus camera with facial detection and auto tracking. GoPro Hero 8 Black — The ultimate customizable action camera for vloggers who often find themselves in extreme situations. Accessories Software Snapseed — Google’s free photo editing app with an intuitive UI. Windows Movie Maker 10 — The best free software for video editing, with new time lapse features. Adobe Premiere Pro — The ultimate editing software used by amateurs and professionals alike. Conclusion Social media has evolved to the point where online celebrities are almost on par with their box-office counterparts. The ubiquitous nature of apps like YouTube, TikTok, and Instagram is enabling everyday people to create change and raise awareness in their industry, community, and environment in ways that previously would have been impossible. 
Those who are looking to make a similar impact should take a page out of these influencers’ books and do it from the heart.
https://medium.com/@dstncraft/the-positive-power-of-social-media-influencers-in-2020-c12abe563519
['Dustin Craft']
2020-12-15 06:09:45.391000+00:00
['Instagram', 'YouTube', 'Influencer Marketing', 'Social Media', 'Tiktok App']
Quadcopter Physics
The future belongs to flying robots, also termed Unmanned Aerial Vehicles (UAVs). The true potential of drones is yet to be uncovered! This is a perfect blog for you to brush up on the basics of quadcopters, as we will cover the crazy science of drone flight here. All of these drones and flying machines are part of a family of rotary-wing aircraft called rotorcraft. There are several types of drones, but here we will focus on quadcopters. To put it formally, a quadcopter is an aircraft equipped with four independently controlled rotors mounted on a rigid frame. A quadcopter has two possible configurations, “+” and “X”. The X-configuration is more acrobatic, but the choice is basically a personal one. Now that we have a basic understanding of a quadcopter, let us check out how hovering works! The main forces acting on a quadcopter are reflected in the figure: · Lift force - the vertical force generated by the propellers. · Gravity force - the force which keeps the drone on the ground. · Thrust - the force generated by the motors that pushes and moves the aircraft. · Drag force - the force that opposes the motion of the quadcopter. How do drones fly? Each arm of the quadcopter has a motor attached to a propeller. The propeller blades are paramount to a drone. Each cross-section of a propeller generates a lift force when air flows over the blade. Yes, the airfoil principle. Spinning blades push air down. Of course, all forces come in pairs, which means that as the rotor pushes down on the air, the air pushes up on the rotor. This is the basic idea behind lift. Time for takeoff! Increase the rotor speed of all four motors so that the lift force overcomes the weight of the drone. The drone will now take off; when the required height is reached, reduce the rotor speed until the lift force exactly balances the drone’s weight. Now you have achieved levitation. Why not rotate all the propellers in the same direction? The answer is simple: Newton’s third law. 
If all the propellers spun in the same direction, the drone body would spin in the opposite direction. Why is this the case? If all the rotors were to spin in the same direction, it would result in a net torque causing the complete quad to rotate. By spinning pairs of motors in opposite directions, we make the net reaction torque on the quadcopter zero. Accordingly, the diagonally opposite rotors rotate in the same direction. Does the position of the counter-rotating motors matter? Hovering is achieved by accelerating each motor until it produces a force one fourth that of gravity, and as long as we have two counter-rotating pairs of motors, the torque from spinning the propellers will balance out and the drone will not spin. It doesn’t actually matter where you put the counter-rotating motors, as long as we have two of them spinning in one direction and two in the other. Still, quadcopter developers settled on a configuration with diagonally opposite motors spinning in the same direction, and yes, there is a reason for this! The design prevents unwanted yaw when moving: otherwise you couldn’t make the drone yaw right without also accidentally rolling and/or pitching and/or climbing. Not very useful! (We will check out these terms shortly.) This also has to do with the mathematics of drone flight; let’s not get into such messy details. Basic movements of a Quadcopter A quadcopter has 6 degrees of freedom: 3 rotational and 3 translational. Translational motion includes up/down (vertical), left/right (lateral), and forward/backward (longitudinal). Pitch (Y-axis), yaw (Z-axis), and roll (X-axis) are the three angular motions. Maneuvering a quadcopter is pretty basic: any motion can be achieved by varying the speed of the propellers in a certain sequence. Give your brain a quick warm-up figuring out the logic behind these sequences. Well, you are now ready to get out into the world of flying robots!
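The hover condition and reaction-torque balance described above can be sketched numerically. This is a minimal illustration, not from the article: the drone mass and the unit-torque convention are assumptions.

```python
# Sketch: per-rotor thrust needed to hover, and the reaction-torque balance
# that motivates counter-rotating propeller pairs. Mass and torque units
# are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2


def hover_thrust_per_rotor(mass_kg: float, num_rotors: int = 4) -> float:
    """Each rotor must produce 1/num_rotors of the drone's weight to hover."""
    return mass_kg * G / num_rotors


def net_reaction_torque(spin_directions) -> int:
    """Sum of reaction torques; +1 = clockwise rotor, -1 = counter-clockwise.
    Equal rotor speeds are assumed, so each rotor contributes one unit."""
    return sum(spin_directions)


# A 1.2 kg quadcopter: each rotor must produce about 2.94 N of lift.
thrust = hover_thrust_per_rotor(1.2)

# Diagonally opposite rotors spin the same way: two CW (+1), two CCW (-1).
balanced = net_reaction_torque([+1, -1, +1, -1])    # -> 0, no unwanted yaw
unbalanced = net_reaction_torque([+1, +1, +1, +1])  # -> 4, body spins opposite
```

Varying these four speeds in the right sequence is exactly how the pitch, roll, and yaw motions described above are produced.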
https://medium.com/@abhiramibhargan/quadcopter-physics-4f6a82af272
['Abhirami Bhargan']
2020-12-05 13:42:03.545000+00:00
['Flying Robot', 'Drones', 'Aviation', 'Robotics', 'Unmanned Aircraft Systems']
Which college should I choose?
As an 18-year-old who had watched one too many teenage romantic comedies, I found it very easy to picture a college life that contained rows of sorority houses, seasonal football games, and a first year ending with me having intense eye contact with a guy that every girl wanted but who only wanted me in return. Now let’s get to the real stuff. It’s not like places like that don’t exist; they 100% are real, and hence are shown in movies, granted in a somewhat glamorized fashion. However, it costs to live that life. By the end of your basic four-year education, you would probably end up thousands of dollars in debt, only to realize that you actually need more education to pay off the loan you had just acquired. Assuming that you were an amazing student who made financially smart choices and secured scholarships or grants, or had parents willing to pay your tuition for you, can you really live that life? If you have been a shy individual all your life, college will not turn you into an extroverted machine; maybe underage drinking can (not condoning that), but college will not. Joining a sorority is not for everyone. Even though it may seem idealistic, you may join one that collides with your lifestyle, thinking that you can smile more often and attend a million events without hindering your GPA, which is not true. To sum up, I would like to say that to pick a college, it is important to know yourself as a person first. If these four years are the only time you would like to “live your life” or make “long-lasting relationships,” then maybe picking a college that gives you a sense of community is essential. However, if you believe that these four years are only a stepping stone, then think smart and opt for financially smarter choices over emotional ones.
https://medium.com/@hirarh3/which-college-should-i-choose-9d16492343b6
[]
2020-12-25 02:21:06.128000+00:00
['Finance', 'Financial Planning', 'College', 'Sorority', 'Teenagers']
The Prevailing Importance of Women in the Age of Crypto
The cryptocurrency industry has clearly demonstrated expeditious advancement. The implementation of this innovative form of financial interaction entails a completely transformed method of business transaction. Women in Tech Emily Bett Rickards aka Felicity Smoak in Arrow, DC & The CW Television Network As such a globally impactful field gains widespread coverage, it is imperative to focus on the leaders and shapers of the industry, who bring the sector to greater heights. In order to maintain this success in a long-term and sustainable manner, fostering and initiating processes to integrate women into the crypto and blockchain field is vital. Tech adoption across society has taken full gear over the past decade, accompanied by an increasingly substantial effort to bring women into this movement. Daily life is largely shaped by the shift of business and social engagement onto a technological foundation. Because of such a drastic transition, having women be part of this adaptation is extremely crucial. Consequently, various initiatives have been created to address this social need. Over time, there has been a clear rise of women specifically in the tech sector: women hold 20% of tech jobs, and 17.7% of startups have had a female founder. This figure will undoubtedly escalate over time, as the persistent initiative to integrate women into the tech industry is acknowledged on a wider scale. However, the figures for women’s participation in the crypto industry are surprisingly low. Women currently make up only 5% of those engaged in the crypto sector; such contributions include female developers, founders, and investors. In contrast, the remaining 95% of community engagement in the crypto field comes from men. This highlights the true urgency of changing the current social standing. 
The Virtues of Women in Crypto Catheryne Nicholson — BlockCypher Being an active player in the crypto industry entails upholding certain virtues. If one highly values the foundations that bolster the spread of cryptocurrency, those values are not restricted to the framework of business. An active player in this domain attributes importance to individual freedoms and to the opportunities and possibilities of the individual. A decentralized financial transaction system, such as that within crypto, entails believing in the capabilities of the individual to contribute to a greater good and to the advancement of values that leave positive externalities on society and on productivity. Therefore, this is not limited to a business perspective, but rather considers the various people who can play a key role in decentralizing common business practice and reshaping it in the most innovative way. By understanding the value of innovation in crypto and the possibilities it opens to society, an individual who is well-versed in this sector understands that this means including women in the industry in order to maintain continuous development and exceptional success. Women in the Blockchain Since the emergence of this industry and the commencement of its prevailing impact, a number of women have been playing key roles in developing its global influence. These women have been substantial contributors to the expansion of the field and explicitly demonstrate that as the crypto field grows over time, women are needed to sustain the promising outcomes it can deliver to society. Included in this article are ten such women who exemplify the truly appreciable roles they have been playing in further progressing the scope of the blockchain sector. 
Top 10 Influential Women Leaders Within the Crypto Industry 1. Kathleen Breitman @breitwoman: Breitman is currently a Co-Founder of Tezos, an open-source decentralized blockchain platform that operates under a self-upgrading governance model for smart contracts. This means that various stakeholders vote on alterations to the protocol, which contributes to an overall social consensus on particular proposals. This powerful company has raised $232 million to date. 2. Catheryne Nicholson @Catheryne_N: A Mimblewimble and Grin proponent, Nicholson is the CEO and Co-Founder of BlockCypher, an industry-leading blockchain and web services company. BlockCypher allows corporations to build blockchain applications in a facilitated manner. An engineer and entrepreneur, Nicholson has a rich background in product marketing as well as software product management. 3. Arianna Simpson @AriannaSimpson: Managing partner at Autonomous Partners, a cryptocurrency venture capital fund. Before establishing the fund, Simpson worked at BitGo, a Bitcoin security platform that provides a multi-user Bitcoin wallet, offering clients financial services and security solutions. 4. Elizabeth Stark @starkness: CEO of Lightning Labs, which provides a more scalable blockchain to make any transaction cheap and quick. This decentralized protocol streamlines the number of nodes involved in individual transactions. As a result, individual transactions are performed at a quicker pace. 5. Katherine Wu @katherineykwu: Manager of Business Development and Community Management for Messari, a NYC-based crypto startup. Messari promotes transparency and smarter decision-making in the crypto ecosystem by gathering data into an organized and structured database. 6. Neha Narula @neha: Director of the Digital Currency Initiative at the MIT Media Lab. 
MIT Media Lab has launched the MIT Digital Currency Initiative, the goal of which is to bring together highly qualified individuals to conduct the research needed to uphold the development of digital currency and blockchain technology. Formerly a software engineer at Google, Narula completed her PhD in computer science at MIT and has constructed various scalable databases and secure software systems. 7. Elaine Shi @ElaineRShi: Co-Founder and Chief Scientist at Thunder Token, a blockchain protocol that delivers high throughput and confirms transactions in seconds. Elaine published the first peer-reviewed papers specifically on Bitcoin and decentralized smart contracts. Elaine is currently a professor at Cornell University. 8. Mona El Isa @Mona_El_Isa: Founder of Melonport, developer of the Melon blockchain protocol for digital asset management software built on the Ethereum platform. El Isa has been distinguished for winning the “Hottest Blockchain/Crypto Startup Founder” award at the Europas, which honors the newest, most innovative European startups. Maja Vujinovic 9. Maja Vujinovic: CEO of OGroup, an influential company predominantly focused on the merger of open protocols and legacy systems. Maja formerly served as Chief Innovation Officer of Emerging Tech & Future of Work at GE Digital and has held roles relating to digital business transformation and driving innovation. 10. Sandra Ro @srolondon: CEO of the Global Blockchain Business Council, a top blockchain technology trade association that brings together innovative organizations, business leaders, and global regulators to enhance the understanding of blockchain. Sandra formerly led digital asset and blockchain technology initiatives as the Executive Director of Digitization at CME Group.
https://medium.com/beam-mw/the-importance-of-women-in-the-cryptocurrency-industry-e98c50fdea9f
['Moriah Khalili']
2019-01-03 11:49:35.924000+00:00
['Women In Tech', 'Bitcoin', 'Women', 'Cryptocurrency']
Classifying images of everyday objects using a neural network
The following is the path I took, experimenting with different network architectures (number of hidden layers, size of each hidden layer, activation function) and hyperparameters (number of epochs, learning rate) until I reached the desired validation loss & accuracy: Experiment#1: Start with the initial size of the two hidden layers being (16, 32), resulting in an initial model accuracy of {‘val_acc’: 0.097} and an initial model loss of {‘val_loss’: 2.303}. Experiment#2: Doubling the size of the two hidden layers from (16, 32) to (32, 64) improved the model accuracy from {‘val_acc’: 0.097} to {‘val_acc’: 0.268} and decreased the model loss from {‘val_loss’: 2.303} down to {‘val_loss’: 1.941}. Experiment#3: Quintupling the number of epochs from 5 to 25 while retaining the previous architecture and other hyperparameters worsened the results a little: it slightly decreased the model accuracy from {‘val_acc’: 0.268} to {‘val_acc’: 0.239} and slightly increased the model loss from {‘val_loss’: 1.941} up to {‘val_loss’: 1.968}. Experiment#4: Doubling the learning rates from [0.5, 0.1, 0.01, 0.001] to [1, 0.2, 0.02, 0.002] in each training phase while retaining the previous architecture and other hyperparameters did not help either and aggravated the results again: it decreased the model accuracy from {‘val_acc’: 0.239} down to {‘val_acc’: 0.173} and increased the model loss from {‘val_loss’: 1.968} up to {‘val_loss’: 2.120}. Experiment#5: Changing the activation function in the network architecture from F.relu() to torch.sigmoid(), along with halving the learning rates back to [0.5, 0.1, 0.01, 0.001] while retaining the rest of the hyperparameters, improved the results substantially: it leveraged the model accuracy from {‘val_acc’: 0.173} up to {‘val_acc’: 0.475} and dampened the model loss from {‘val_loss’: 2.120} way down to {‘val_loss’: 1.469}. 
Experiment#6: Doubling the total number of layers from 3 to 6 in the network architecture while retaining the rest of the hyperparameters deteriorated the results substantially, pushing the model accuracy from {‘val_acc’: 0.475} back down to {‘val_acc’: 0.168} and catapulting the model loss from {‘val_loss’: 1.469} way up to {‘val_loss’: 2.280}. Experiment#7: Halving the total number of layers from 6 back to 3, along with decreasing the learning rates slightly from [0.5, 0.1, 0.01, 0.001] to [0.2, 0.02, 0.002, 0.0002] in each training phase while retaining the rest of the hyperparameters, enhanced the results substantially again: it bounced the model accuracy from {‘val_acc’: 0.168} way up to {‘val_acc’: 0.430} and diminished the model loss from {‘val_loss’: 2.280} back down to {‘val_loss’: 1.593}. Experiment#8: Changing the optimizer from torch.optim.SGD to torch.optim.Adam while retaining the rest of the hyperparameters crushed the results, dampening the model accuracy from {‘val_acc’: 0.430} way down to {‘val_acc’: 0.094} and raising the model loss from {‘val_loss’: 1.593} back up to {‘val_loss’: 2.303}. Experiment#9: Reverting the optimizer from torch.optim.Adam back to torch.optim.SGD while retaining the rest of the hyperparameters regained the good results, jumping the model accuracy from {‘val_acc’: 0.094} way up to {‘val_acc’: 0.465} and diminishing the model loss from {‘val_loss’: 2.303} back down to {‘val_loss’: 1.497}. Experiment#10: Reverting the activation function in the network architecture from torch.sigmoid() back to F.relu() while retaining the rest of the hyperparameters slightly enhanced the accuracy, upticking it from {‘val_acc’: 0.475} to {‘val_acc’: 0.483}, while the model loss stayed essentially unchanged, moving from {‘val_loss’: 1.469} to {‘val_loss’: 1.472}. 
Experiment#11: Halving the learning rates from [0.2, 0.02, 0.002, 0.0002] to [0.1, 0.01, 0.001, 0.0001] in each training phase while retaining the rest of the hyperparameters improved the results slightly again: it raised the model accuracy from {‘val_acc’: 0.483} up to {‘val_acc’: 0.490} and decreased the model loss from {‘val_loss’: 1.472} further down to {‘val_loss’: 1.451}. Experiment#12: Increasing the total number of layers from 3 to 4 in the network architecture while retaining the rest of the hyperparameters improved the results again, raising the model accuracy from {‘val_acc’: 0.490} further up to {‘val_acc’: 0.500097} and dampening the model loss from {‘val_loss’: 1.451} down to {‘val_loss’: 1.407}.
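One way to see why doubling the hidden sizes in Experiment#2 had such an effect is to compare parameter counts. This is a hedged sketch: the input size of 3072 (32×32×3 images) and the 10 output classes are assumptions based on the "everyday objects" dataset, not stated in the article; only the hidden sizes (16, 32) and (32, 64) come from the experiments above.

```python
# Sketch: parameter counts of the fully connected architectures tried in
# Experiments #1 and #2. Input size 3072 and 10 classes are assumptions.

def count_parameters(layer_sizes):
    """Weights plus biases for a stack of fully connected layers."""
    return sum((n_in + 1) * n_out  # +1 accounts for the bias of each unit
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

small = count_parameters([3072, 16, 32, 10])  # Experiment #1: hidden (16, 32)
large = count_parameters([3072, 32, 64, 10])  # Experiment #2: hidden (32, 64)
# Doubling the hidden sizes roughly doubles the parameter count, which is
# consistent with the capacity-driven accuracy jump between the two runs.
```

A sketch like this makes it easy to sanity-check each architecture change before paying for a full training run.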
https://medium.com/@dehhaghi/classifying-images-of-everyday-objects-using-a-neural-network-7daf807298b1
['Mohsen Dehhaghi']
2020-12-19 21:11:06.297000+00:00
['Deep Learning', 'Image Classification', 'Neural Networks']
Kubernetes Application Management — Stateful Services
By Bruce Wu Background You can conveniently deploy a highly available and scalable distributed stateless service in Kubernetes (K8s) by using Deployment and ReplicationController. These applications do not store data locally, and requests to them are distributed based on simple load balancing policies. With the popularization of K8s and the rise of cloud-native architectures, more and more people hope to orchestrate stateful services such as databases in K8s. This process is not easy because of the complexity of stateful services. This article shows you how to deploy stateful services in K8s by taking the most popular open-source database, MySQL, as an example. This article is based on K8s 1.13. Use StatefulSet to Deploy MySQL This section shows you how to deploy a highly available MySQL service based on StatefulSet. The example is taken from the official K8s tutorial Run a Replicated Stateful Application. Introduction to StatefulSet Deployment and ReplicationController are designed for stateless services. Their pod names, host names, and storage are unstable, and their pod startup and destruction orders are random, so they are not suitable for stateful applications such as databases. To address this problem, K8s offers the StatefulSet controller for stateful services. Pods managed by this controller have the following characteristics: 1. Uniqueness — If a StatefulSet has N pods, each pod is assigned a unique ordinal number in the range of [0, N). 2. Sequence — By default, the startup, update, and destruction of pods in a StatefulSet are performed in sequence. 3. Stable network identity — The host name and DNS address of a pod do not change when the pod is rescheduled. 4. Stable persistent storage — After a pod is rescheduled, the original PersistentVolume can still be mounted to this pod to ensure data integrity and consistency. 
Service Deployment The highly available MySQL service used in this example consists of one master node and multiple slave nodes that asynchronously replicate data from the master node. This is the one-master-multiple-slave replication model. The master node handles read and write requests from users, while the slave nodes can only handle read requests. To deploy such a service, you need many other K8s resource objects apart from StatefulSet, such as ConfigMap, Headless Service, and ClusterIP Service. Their collaboration allows stateful services such as MySQL to run on K8s. ConfigMap To make application configuration maintenance easier, large systems and distributed applications often use a centralized configuration management policy. In the K8s environment, you can use ConfigMap to separate configurations from pods. This helps ensure that the controller is portable and that its configuration can be easily modified and managed. This example has a ConfigMap named mysql. When pods of the StatefulSet controller are started, they read the corresponding configuration from the ConfigMap according to their roles. Headless Service A Headless Service provides a DNS address for each associated pod. The format of the DNS address is: <pod-name>.<service-name>. This allows the client to choose the application instance as needed and solves the problem of identifying different instances in a distributed environment. This example has a Headless Service named mysql, which is associated with pods of the StatefulSet controller. These pods are assigned the following DNS addresses: mysql-0.mysql, mysql-1.mysql, and mysql-2.mysql. This allows the client to access the master node by using mysql-0.mysql, and slave nodes by using mysql-1.mysql or mysql-2.mysql. ClusterIP Service To make read-only access more convenient, the example provides a normal service named mysql-read. 
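The stable names and DNS addresses described above follow a fixed pattern, which can be sketched as follows. The mysql names come from the example's StatefulSet and Headless Service; the helper functions themselves are illustrative, not part of any K8s API.

```python
# Sketch of the naming scheme K8s applies to StatefulSet pods
# (<statefulset-name>-<ordinal>) and the per-pod DNS records a Headless
# Service creates for them (<pod-name>.<service-name>).

def statefulset_pod_names(statefulset_name: str, replicas: int):
    """Each pod gets a unique, stable ordinal in [0, replicas)."""
    return [f"{statefulset_name}-{i}" for i in range(replicas)]


def headless_dns_addresses(statefulset_name: str, service_name: str,
                           replicas: int):
    """Headless Service DNS format: <pod-name>.<service-name>."""
    return [f"{pod}.{service_name}"
            for pod in statefulset_pod_names(statefulset_name, replicas)]


pods = statefulset_pod_names("mysql", 3)             # mysql-0, mysql-1, mysql-2
addrs = headless_dns_addresses("mysql", "mysql", 3)  # mysql-0.mysql, ...
```

Because these names are deterministic, a client can always reach the master at mysql-0.mysql regardless of rescheduling.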
This service has its own cluster IP address to receive user requests and distribute them to the associated pods, including pods of both master and slave nodes. This service also hides the pod access details from the user. StatefulSet StatefulSet is the key to the deployment of the service. Each pod managed by it is assigned a unique name in the format <statefulset-name>-<ordinal-index>. In this example, the StatefulSet is named mysql, so these pods are named mysql-0, mysql-1, and mysql-2 respectively. By default, these pods are created in ascending order and destroyed in descending order. As shown in the following figure, a pod contains two init containers and two app containers. The pod is bound to the PersistentVolume provided by the volume provider by using a unique PersistentVolumeClaim. Functions of the components related to the pod are described as follows: The init-mysql container is mainly responsible for generating configuration files. It extracts the ordinal number of a pod from the hostname and stores this ordinal number in the /mnt/conf.d/server-id.cnf file. In addition, it replicates master.cnf or slave.cnf from the ConfigMap to the /mnt/conf directory based on the node type. The clone-mysql container is mainly responsible for cloning data. The clone-mysql container of Pod N+1 clones data from Pod N to the PersistentVolume that is bound with Pod N+1. After the init containers complete running, the app containers start running. The mysql container is responsible for running the mysqld service. 
The xtrabackup container runs in sidecar mode. When it detects that mysqld in the mysql container is ready, it runs the START SLAVE command to begin replicating data from the master node. In addition, it monitors data cloning requests from other pods. StatefulSet uses volumeClaimTemplates to associate each pod with a unique PersistentVolumeClaim (PVC). In this example, Pod N is associated with the data-mysql-N PVC. This PVC is then bound to a PersistentVolume (PV) provided by the system. This mechanism ensures that the pod can still mount the existing data after it is rescheduled. Service Maintenance To ensure service performance and system reliability, the corresponding maintenance support is required after the deployment is completed. For database services, common maintenance work includes service failure recovery, service scaling, service status monitoring, and data backup and recovery. Service Failure Recovery The service failure recovery capability is a key metric that measures the degree of automation of a system. In this architecture, the MySQL service can automatically recover when the host, the master node, or the slave nodes are down. If any of these problems occur, K8s reschedules the problematic pod and restarts it. Because these pods are managed by the StatefulSet controller, the pod names, host names, and storage remain unchanged. Service Scaling Under the one-master-multiple-slave model, scaling means adjusting the number of slave nodes. When you use StatefulSet, the startup and destruction orders of pods are guaranteed. On this basis, you can use the following command to easily scale your service up or down. 
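The per-pod PVC association described above comes from the StatefulSet's volumeClaimTemplates section. A minimal sketch follows; the storage size and access mode are illustrative assumptions, not values from the official manifest:

```yaml
# Sketch of a volumeClaimTemplates entry. Each pod gets its own PVC named
# <template-name>-<pod-name>, e.g. data-mysql-0 for the claim template
# "data" and pod mysql-0. Size and access mode are assumed values.
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi
```

Because the claim name is derived from the stable pod name, a rescheduled pod reattaches to exactly the volume it used before.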
kubectl scale statefulset mysql --replicas=<NumOfReplicas> Service Status Monitoring To ensure service stability, you must closely monitor the service status. Apart from the readiness and liveness probes, you usually need other, finer-grained monitoring metrics to check whether the service runs normally. You can use mysqld-exporter to expose core MySQL metrics to Prometheus, and then set Prometheus monitoring alarms. We recommend that you deploy mysqld-exporter in the same pod as the mysqld container, in sidecar mode. Data Backup and Recovery Data backup and recovery are effective measures to ensure data security. In this example, you can directly use volume APIs or use the VolumeSnapshot feature to back up and recover data. Use the Volume API Many volume providers offer features for saving data snapshots and recovering data from such snapshots. These features are usually provided to users in the form of APIs. To use this method, you must be familiar with the APIs provided by your volume provider. For example, if you use Alibaba Cloud disks as the external volume, you need to know how to use the snapshot API of Alibaba Cloud disks. Use VolumeSnapshot K8s v1.12 introduces three snapshot-related resource objects: VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass, and provides standard operation methods through these objects. In this case, you can create snapshots of the volumes that store the MySQL data, or restore data from such snapshots, without being aware of the existence of external volumes. Compared with directly using the underlying volume APIs, using VolumeSnapshot is obviously a better choice. However, VolumeSnapshot is still in the Alpha stage, and not all external volumes support snapshot operations. These factors restrict the application of VolumeSnapshot. For more information about VolumeSnapshot, see Volume Snapshots. 
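To make the VolumeSnapshot objects above concrete, here is a hedged sketch of one. The object names and the snapshot class are assumptions for illustration; the apiVersion reflects the alpha API of that era and may differ in your cluster:

```yaml
# Sketch of a VolumeSnapshot taken of the PVC that backs one MySQL pod.
# Names and the snapshot class are assumed; check your cluster's supported
# snapshot API version before applying anything like this.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: mysql-data-snapshot
spec:
  snapshotClassName: default-snapshot-class
  source:
    kind: PersistentVolumeClaim
    name: data-mysql-0   # the PVC holding the MySQL data
```

A matching restore would reference this snapshot as the data source of a new PVC, without touching the volume provider's own API.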
Use Operator to Deploy MySQL StatefulSet allows you to deploy a highly available MySQL service in K8s, but the process is relatively complex. You must be familiar with various K8s resource objects and learn many MySQL operation details. You also need to maintain a complex set of management scripts. To reduce the difficulty of deploying complex applications in K8s, Kubernetes Operator was developed. Introduction to Operator Operator was developed by CoreOS to package, deploy, and manage complex applications that need to run in K8s. Operator turns the software operation knowledge of maintenance personnel into code, and comprehensively uses various K8s resource objects to deploy and maintain complex applications. Operator defines new resource objects for services by using CustomResourceDefinition. In addition, it ensures that the application runs in the desired state by using custom controllers. The workflow of Operator can be summarized in the following steps: 1. Observe: Operator observes the status of the target object by using a K8s API. 2. Analyze: Operator analyzes the differences between the current status and the desired status of the service. 3. Act: Operator orchestrates the service to adjust the current status to the desired status. Oracle MySQL Operator For MySQL services, many outstanding open-source Operator solutions are available, such as grtl/mysql-operator, oracle/mysql-operator, presslabs/mysql-operator, and kubedb/mysql. The Oracle MySQL Operator is a typical representative of these open-source solutions. Working Mechanism of Oracle MySQL Operator Oracle MySQL Operator supports two MySQL deployment modes: Primary — The service consists of one read/write primary node and multiple read-only secondary nodes. Multi-Primary — The roles of all nodes within the cluster are the same; all nodes are primary nodes, and every node can process read and write requests from users. The following figure shows how an Operator works in Multi-Primary mode. 
The following procedure is the key to understanding the working mechanism of the Operator:

1. Use CustomResourceDefinitions (CRDs) of K8s to define several resource objects related to the deployment and maintenance of the MySQL service.

- [mysqlclusters](https://github.com/oracle/mysql-operator/blob/0.3.0/contrib/manifests/custom-resource-definitions.yaml#L5) - Describes the desired status of the cluster, including the deployment mode and the number of nodes.
- [mysqlbackups](https://github.com/oracle/mysql-operator/blob/0.3.0/contrib/manifests/custom-resource-definitions.yaml#L18) - Describes the on-demand backup policy and sets the location to store the backup data, for example AWS S3.
- [mysqlrestores](https://github.com/oracle/mysql-operator/blob/0.3.0/contrib/manifests/custom-resource-definitions.yaml#L31) - Describes the data recovery policy. You need to set the backup data and the target cluster.
- [mysqlbackupschedules](https://github.com/oracle/mysql-operator/blob/0.3.0/contrib/manifests/custom-resource-definitions.yaml#L44) - Describes the scheduled backup policy and sets the time interval for backups.

2. Deploy an Operator instance in K8s. This Operator will continuously monitor CREATE, READ, UPDATE, and DELETE (CRUD) operations on these resource objects, and observe the status of these objects.

3. When you perform an operation, for example creating a MySQL cluster, a new MySQLCluster resource object is created. When the Operator detects the event that the MySQLCluster resource object is created, it creates a cluster that meets the requirements based on the user configuration. In this example, a highly available MySQL cluster is created based on Group Replication. Several K8s native resource objects are used, such as StatefulSet and Headless Service.

4. When the Operator detects any difference between the current status of MySQLCluster and the desired status, it performs the corresponding orchestration operation to ensure status consistency.
Service Deployment

Operator encapsulates the complex application deployment details, which allows you to easily create a cluster. For example, you can use the following configuration to deploy a MySQL Multi-Primary cluster that consists of three nodes:

apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: mysql-multimaster-cluster
spec:
  multiMaster: true
  members: 3

Service Maintenance

The Operator mode also covers maintenance work, such as service failure recovery, service scaling, service status monitoring, and data backup and recovery.

Service Failure Recovery

StatefulSet allows K8s to reschedule a MySQL service instance when it becomes unavailable. In addition, if the StatefulSet is deleted by mistake, the Operator can recreate it.

Service Scaling

You can modify the spec.members field of MySQLCluster to easily scale the service up and down. In this example, only MySQLCluster is exposed to users; the underlying K8s resource objects are hidden.

Service Status Monitoring

You can monitor the status of the Operator and each MySQL cluster by deploying Prometheus in K8s. For more information about the detailed procedure, see Monitoring.

Data Backup and Recovery

You can use the MySQLBackup and MySQLRestore resource objects to back up and restore data without worrying about the operation differences between different volumes. In addition, you can create scheduled backup jobs by using MySQLBackupSchedule. For example, you can use the following configuration to back up data of the test database of the MySQL cluster mysql-cluster:

[...]
kind: BackupSchedule
spec:
  schedule: '*/30 * * * *'
  backupTemplate:
    cluster:
      name: mysql-cluster
    executor:
      provider: mysqldump
      databases:
        - test
[...]

Summary

This article showed you how to deploy and maintain a highly available MySQL service, first by using the K8s native resource object StatefulSet and then by using a MySQL Operator instance.
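Scaling by editing spec.members amounts to submitting a small patch against the MySQLCluster object. A sketch of building such a patch in Python; the function name is made up, and the kubectl wiring is only indicated in a comment:

```python
import json

def scale_patch(members):
    """Build a JSON merge patch that changes only spec.members."""
    if members < 1:
        raise ValueError("a cluster needs at least one member")
    return json.dumps({"spec": {"members": members}})

# Hypothetical wiring, e.g.:
#   kubectl patch mysqlcluster mysql-multimaster-cluster --type=merge -p '<patch>'
print(scale_patch(5))  # {"spec": {"members": 5}}
```

Because only MySQLCluster is exposed, the Operator translates this one-field change into whatever StatefulSet adjustments are needed underneath.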
As you can see, Operator hides the orchestration details of complex applications and significantly reduces the difficulty of deploying these applications in K8s. If you need to deploy complex applications in K8s, we recommend that you use Operator.

References

Original Source: https://www.alibabacloud.com/blog/kubernetes-application-management---stateful-services_594900?spm=a2c41.13018567.0.0
https://medium.com/@alibaba-cloud/kubernetes-application-management-stateful-services-7825e076bcb3
['Alibaba Cloud']
2019-06-17 01:45:40.836000+00:00
['Kubernetes', 'Deployment', 'Elastic Container Service', 'Log Service', 'Alibabacloud']
JC Wandemberg Ph.D.
If you think that life is about living without having to experience pain or suffering, you certainly came to the wrong universe. I understand and commend people who feel very strongly about “leaving the world with dignity”. However, one must first understand what “leaving the world with dignity” really means. Does it mean being able to decide how much pain or suffering I am able to put up with? Does it mean someone else should be able to choose my destiny because I am unable (e.g., in a vegetative state) or unwilling to decide? Whatever it may be, the main issue seems to be centered on “suffering” and “mercy.” I have seen far too many movies that pretend to show this act of killing as “merciful” by quickly putting an end to the suffering. In fact, they are just showing the need to put an end to the “unpleasant experience” of those who are observing and need to move on, instead of slowly and calmly contemplating the emotions of true love and compassion that emerge from caring for the one in pain and suffering, if only by empathizing. Suffering and pain, thus, have been gravely misunderstood. Many people associate them with “punishment,” which is understandable from a purely hedonistic perspective. From a practical perspective, however, it is simply absurd. Far too many people seem unable and/or unwilling to understand the meaning of pain and reject it without any thought. Admittedly, it is human nature to reject suffering and pain. One must comprehend, nevertheless, that pain and suffering have a much higher purpose than just making us feel miserable; they are a blessing in disguise. The purpose of pain and suffering may be illustrated by the story of a kite and its string. Pain and suffering are the adverse winds that blow and push the kite around, but it is the string keeping the kite connected to the ground that allows it to soar high and remain up in the sky. If we cut the string, through euthanasia, the opportunity to soar over the mundane is gone.
Pain and suffering must be seen through the eyes of faith to understand that pain and suffering afford us humans the opportunity to further develop our Spiritual Conscience and become more than mere mortals. It is only through pain and suffering, i.e., being tied to the world, that we can grow not only physically, intellectually and emotionally, but, much more importantly, Spiritually. JC Wandemberg Ph.D.

About the author: Dr. Wandemberg is an international consultant, professor, and analyst of economic, environmental, social, managerial, marketing, and political issues. For the past 30 years Dr. Wandemberg has collaborated with corporations, communities, and organizations to integrate sustainability through self-transformation processes and Open Systems Design Principles, thus catalyzing a Culture of Trust, Transparency, and Integrity.
https://medium.com/@jcwandemberg/mercy-killing-4fd220fcc9c
['Jc Wandemberg Ph.D.']
2020-12-27 18:03:03.008000+00:00
['Pain', 'Life', 'Euthanasia', 'Suffering', 'Spirituality']
What are GitHub Actions and How to Use Them
Intro to GitHub Actions

Introduction

In this article, I’m going to share and demo a solution that can help offload some of the tooling to GitHub: GitHub Actions. The way it works is that you create actions in your repositories by creating one or more text files. These are called workflows. Workflows can handle common build tasks, like continuous delivery and continuous integration. That means you can use an action to compress images, test your code, and push the site to your hosting platform when the master branch changes. One interesting usage of Actions is in conjunction with tools like Bit (GitHub). For example, let’s say your team has a shared component collection on Bit. Your repo uses a few of these shared components. You can configure Bit to automatically send a PR whenever a component gets published with a bumped version. Using Actions, you can set up a CI to run on every such event. This makes it possible to check that your team’s shared components pass integration tests, across all projects. Sharing components and syncing updates on Bit. Read more about it here. You can also have tasks that run on a specific timeframe, or that control what happens when somebody interacts with the GitHub repository itself. So, when someone makes a comment on a pull request, it can send you a note, or you can run an action when somebody stars your project. Actions can run on a variety of platforms, including Linux, macOS, and Windows, and these will run on virtual machines or containers. That means you can test your code in different environments. You can even run matrix workflows to test on multiple platforms simultaneously.

Understanding workflows

The main concept at the heart of GitHub Actions is the workflow: you create a series of YAML files that contain the actions that determine your workflow in your repositories. These YAML files are stored in a special folder, .github/workflows, inside your repository.
Here is an example: At the moment, you can create up to 20 YAML files, and each of the workflow files responds to a specific event. In addition, each of these workflows can have a number of jobs that perform different actions, and this is also where you specify the platform that you want to run the action on. To demo this, I’m going to use my analog clock project, which is built with simple vanilla JavaScript. I made my project repository a template, so you can go ahead and create a repository from this template and have something to play around with. You can do this on your own projects if you want to, but it’s actually better to start off by creating a template repository or a brand new repository to play around with actions. So let’s go ahead and add actions. The first time you create a workflow, GitHub provides some templates that you can use to create your first workflow based on your project repository. In this case, I’m going to add a Simple Workflow action.
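A minimal workflow of the kind described above might look like the following. This is an illustrative sketch, not the exact Simple Workflow template GitHub generates; the file name, branch, and run commands are assumptions:

```yaml
# .github/workflows/ci.yml (a hypothetical, minimal workflow)
name: CI

# Run whenever the master branch changes or a pull request is opened.
on:
  push:
    branches: [master]
  pull_request:

jobs:
  build:
    # The platform the job runs on (Linux, macOS, or Windows runners).
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so later steps can see the code.
      - uses: actions/checkout@v2
      - name: Run tests
        run: echo "run your build or test commands here"
```

Each file in .github/workflows defines one workflow; the on block names the event it responds to, and each job picks its own runs-on platform.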
https://blog.bitsrc.io/what-are-github-actions-and-how-to-use-them-e89904201a41
['Yann Mulonda']
2020-07-06 19:25:08.080000+00:00
['Front End Development', 'Github', 'Continuous Delivery', 'Continuous Integration', 'Software Development']
Hotel maids: the price they pay for your holidays in Greece
Parthenon temple on the Athenian Acropolis, Greece

“I suffer from tendonitis in both my hands and shoulders, but I also have problems with my knees because of bending over often,” says Mirofora Stamopoulou, a hotel maid who works at Blue Lagoon. “All of my colleagues have problems too. Daily pains in our occupation are common due to the nature of the job,” the 36-year-old adds. She isn’t the only one to express her anger at the exclusion of the maids from the list of “heavy and unhealthy” occupations (KBAE) in 2011. From her perspective, it remains a heavy occupation because of the filth these women have to clean, the heavy objects they move and the chemicals they breathe every day. “We are the most important department in a hotel. Without us nothing can work properly. However, always, in everything, we are the last ones to be taken care of,” she tells MIJ. Greece continues to enjoy a surge in tourists during the summer season, but, even though the housekeeping department is considered to be crucial within a hotel, maids say they are the “fifth wheel of the car” and their complaints are usually ignored.

“Loaded like donkeys”

The job of chambermaids is a physically demanding one that includes a variety of tasks such as making beds, tidying up rooms, washing floors, removing stains and vacuuming, but also cleaning and polishing toilets, taps, sinks, bathtubs, etc. “We are loaded like donkeys,” states Eirini Kiriakopoulou, a maid who works for a hotel complex on the island of Syros. Hotels on the islands are usually built on breathtaking rocks or in an amphitheatrical shape, making tasks even more laborious and distances longer. “I work the afternoon shift, which means fewer rooms, but I have to do them all alone,” states a 45-year-old maid who works at the Athens Zafolia Hotel and wishes to remain anonymous.
“In the hotel I am currently working in, every employee was supposed to get 15 single rooms, but this is unfeasible due to the lack of personnel that results from the low salaries provided, so we end up taking on 20 rooms,” Stamopoulou continues. “Until last year, in the hotel I work in now, the working hours were 7 a.m. to 3 p.m. with a 30-minute break at 9 a.m., but this year they changed it and we have to work eight hours straight and take the break at the end of the shift,” says Kiriakopoulou. This is not an isolated rule at one particular hotel, as other maids like Stamopoulou endure the same.

The doctors’ perspective

MIJ interviewed occupational health doctor Ilias L. Tantis, who works closely with hotel maids in the city of Lamia. He points out that the legally required 20-minute break per eight hours of work is almost never given, let alone when the employee has signed an eight-hour contract and is sometimes employed for up to 14 hours. “When you are working six hours you are entitled to a short break. However, when some of us asked for that, the supervisors showed us the exit,” states Kiriakopoulou. Serious work-related health problems are not a rarity for Dr. Tantis. He has met staff suffering from carpal tunnel syndrome, bursitis, epicondylitis (“tennis elbow”), tendonitis, tinnitus and herniated spinal discs, among others. He says that the main causes are considered to be the handling of manual loads — “let’s not forget that women have less physical strength” — repetitive movements and inadequate physical rest. Dr. Tantis surveys his patients, asking whether anything from work has an impact on their health. “I use it to notify employers in order to improve the workplace conditions, but I also make sure to mention it in the patient’s medical history book,” he explains. They all grouse about similar problems: “Pains are very common, especially the musculoskeletal ones, tendonitis and back pain.
I myself have suffered from pain in my back for fifteen days in a row,” describes the maid working at the Athens Zafolia Hotel. “When I go back home I eat and lie down. Why? Because while every other position within a hotel has an assistant, that is not provided to us. We are the heart of a hotel and are often responsible for carrying and moving heavy objects, which is the business of the hotel housemen,” Kiriakopoulou says. “From my personal experience in hotels, almost all maids suffer from musculoskeletal problems,” occupational health doctor and Vice President of the Hellenic Society of Occupational and Environmental Medicine, Vasilis Drakopoulos, reveals to MIJ. A study on safety and health among hotel cleaners explains that other non-orthopedic workplace hazards may result in ‘respiratory illnesses from cleaning products, skin reactions from detergents and infectious diseases from agents such as biological waste (e.g., feces and vomit) and bloodborne pathogens found on broken glass and uncapped needles’.

May 2011: Hotel Maids out of the List

On the 3rd of May 2011, the Greek Minister of Labor and Social Security Giorgos Koutroumanis announced the members of the standing committee that would be in charge of drawing up proposals for ‘heavy and unhealthy’ occupations. An investigation by MIJ has found that this decision was made without a clear agreement, in an arbitrary process, and in contradiction with scientific facts, according to statements by members of the committee. Rolling shifts, especially night hours, exposure to carcinogenic and biological agents, work at heights or depths, outdoor work and exposure to high or low temperatures are some of the criteria used to evaluate whether an occupation is designated “heavy and unhealthy”. Approximately 365,000 workers were included in the new list, compared to 531,000 under the old regime, with hotel maids among the occupations deleted.
Greece was one of Europe’s worst-hit countries in the financial crisis at the time. Occupational health doctor Drakopoulos tells MIJ that the country was under pressure to delete occupations from the list mostly due to the so-called Troika, the three organizations which had the most power over Greece’s financial future within the European Union during the economic crisis: the European Commission, the International Monetary Fund, and the European Central Bank. The outcome didn’t please everyone, as three of the committee members expressed their objection to the exclusion of the maids: the president of the standing committee; Vasilis Drakopoulos, representing the Hellenic Institute of Occupational Health and Safety (ELINYAE); and the General Confederation of Greek Workers. However, the final decision was within the domain of the Minister. Today, Dr. Drakopoulos “feels sorry”, as he believes it is “completely irrational” that the occupation is out of the KBAE. He clarifies that all the proposals were heard during the procedure, but reveals that “there were no criteria for deleting the occupation of maids from the list. The designated experts were employees of the Ministry of Labour and their views were unscientific and absurd.”

Interview with Occupational Health Doctor Vasilis Drakopoulos

No studies or inspections since 2011

“There is no real occupational risk assessment in our country. So far, no studies have been conducted,” confirms the president of the standing committee, Dr. Konstantinidis, who proposed that a risk assessment study be conducted before excluding the occupation of maids from the list; the Minister answered that this would require ten years and there was no time for that. Dr. Drakopoulos laments the lack of progress since 2011 and argues that we would be seeing results now if improvements had been made back then. “If progress is made today, in ten years’ time we will also have results,” he continues.
Rena Bardani, today’s President of ELINYAE and representative of the Federation of Greek Industries back in 2011, told MIJ that, although the chairman of the committee probably had the freedom to propose some things, there was no time to conduct studies. However, she didn’t want to share further information, because she does not have direct knowledge of the occupation and does not want to give a shallow or unscientific answer. Meanwhile, other occupations on the list enjoy early retirement benefits on the grounds of the nature of their working environments. A new law on early retirement was submitted to parliament by the Labor Ministry in May, a benefit maids will miss out on. Occupational health doctors are a requirement for hotels employing more than 50 people, in accordance with law 1568/85. In spite of that, some chambermaids deny ever having heard of these doctors. Dr. Drakopoulos says that these doctors don’t exist on the Greek islands, where hoteliers complain about the shortage of occupational health doctors available. Hence, doctors of any kind are hired to do the job instead. Moreover, the profession is very profitable during the summer season because “doctors prefer to take care of insured tourists and save a good income, ‘forgetting’ that their occupation is to take care of workers.” When it comes to risk education, maids claim that hoteliers provide “absolutely nothing”, even though employers are required by law P.D. 17/18–1–96 to establish a program of preventive actions and improvements of working conditions and to encourage and facilitate the training of workers. Dr. Tantis stresses that the responsibility for education now falls on the employee. “I believe that the right safety measures are not in place, since the work division is not done properly. They do not provide relevant education,” claims a maid who wants to remain anonymous.
Despite all these gaps, there have been no inspections of the tourism sector since then, and in any case the occupation is no longer recognised as a separate category, according to Vasiliki Fakoukaki, Head of the Secretariat of the Health and Welfare Services Inspection Body (SEYYP). Stefanakis Georgios, president of the Syndicate of Food, Tourism and Hotels in Attica, argues that popular hotel complexes such as the Hilton Athens prefer outsourcing to cleaning companies to avoid expenses. Bart Van De Winkel, Hilton General Manager for Greece and Cyprus, told MIJ that whilst he does not recognise the Syndicate’s allegations regarding labour costs, the hotel utilises the services of external providers, including ISS, depending on occupancy levels at the hotel. MIJ also tried to interview other hotel complexes as well as outsourcing companies such as ISS Greece, but none of them responded to our repeated requests for comment. Not far from Greece, a group of Spanish hotel maids started a movement to fight for their rights. “Las Kellys”, as they call themselves, stood against the rise of outsourced cleaning companies, claiming that it “paved their way to hell”. Their voices were heard and they won their fight in several hotels. “Las Kellys” have shown that when working conditions become precarious in an industry that is the base of a country’s economy, the only way out is to fight for your rights.

Seeking dignity-based justice

Two matters are being weighed, explains Dr. Drakopoulos: health and safety at work on one side, and heavy and unhealthy occupations on the other. “Employees want to be included in the list because their job is laborious and they are afraid of dying early. We need to increase health and safety at work so that arduous and hazardous jobs decrease, and then we squash the heavy and unhealthy environments. Because if there is no dangerous working environment, there is no dangerous occupation either,” he claims.
No sector is more fundamental to Greece than tourism. However, there is an ugly truth behind the “picture perfect” holidays in historic Athens or paradisiacal Santorini. Bad working conditions, work-related health problems and peak-season pressure are just some of the difficulties faced by hotel maids, the least glamorous side of tourism.

Investigation conducted in cooperation with Andrea Gómez Bobillo
https://medium.com/@zina.fragkiadaki/hotel-maids-the-price-they-pay-for-your-holidays-in-greece-66d293919dc4
['Zinovia Fragkiadaki']
2020-06-09 10:33:10.604000+00:00
['Workers Rights', 'Greek Holidays', 'Greece', 'Hotel Workers', 'Athens']
Finding the Most Common Colors in Python
Finding the Most Common Colors in Python

There are several use cases in image processing that can be solved if we know the most common color(s) of an image or object. For example, in the field of agriculture, we might want to determine the maturity of a fruit, an orange or strawberry for instance. We can simply check if the color of the fruit falls in a predetermined range and see if it is mature, rotten, or too young.

Photo by Sarah Gualtieri on Unsplash

As usual, we can solve this case using Python plus simple yet powerful libraries like Numpy, Matplotlib, and OpenCV. I will demonstrate several ways to find the most frequent color in an image using these packages.

Step 1 — Load Packages

We’ll load the basic packages here. We’ll load some more packages as we go along. Also, since we are programming in Jupyter, let’s not forget to include the %matplotlib inline command.

Step 2 — Load and show sample images

In this tutorial, we will be showing two images side by side a lot. So, let’s make a helper function to do so. Next, we’ll load some sample images that we’ll be using in this tutorial and show them using the function above.

Source: Images by Author

Now we are ready. Time to find out the most common color(s) in these images.

Method 1 — Average

The first method is the easiest (but also the least effective): simply find the average pixel values. Using numpy 's average function, we can easily get the average pixel value across the height and width axes — axis=(0,1).

Most common color #1 — average method

We can see that the average method can give misleading or inaccurate results, as the most common colors it gave are a bit off. This is because the average takes into consideration all pixel values. This is really problematic when we have images with high contrast (both “light” and “dark” in one image). It is much clearer in the second image: the average gave us a somewhat new color that is not visibly clear/noticeable in the image.
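For concreteness, the average method boils down to a single numpy call. A minimal sketch, using a randomly generated array as a stand-in for the article's sample images, which aren't reproduced here:

```python
import numpy as np

# A random RGB image stands in for the article's sample images
# (height x width x 3 channels, 8-bit values).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)

# Averaging over the height and width axes leaves one mean per channel.
average_color = np.average(img, axis=(0, 1))

print(average_color.shape)  # (3,)
```

The result is one mean intensity per channel, which is exactly why high-contrast images produce a "new" color that appears nowhere in the picture.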
Method 2 — Highest Pixel Frequency

The second method will be a bit more accurate than the first one. We’ll simply count the number of occurrences of each pixel value. Fortunately for us, numpy again provides a function that gives us exactly this result. But first, we must reshape the image data structure into a list of pixels, each with 3 values (one for each of the R, G, and B channel intensities). We can simply use numpy ‘s reshape function to get this list of pixel values. Now that we have the data in the right structure, we can start counting the frequency of the pixel values. We just use numpy 's unique function with the parameter return_counts=True . Done, let’s run it on our images.

Most common color #2 — frequency method

This makes more sense than the first one, right? The most common colors are in the black area. But we can go further: what if we take not just the single most common color, but more than that? Using the same concept, we can take the top N most common colors. Except, if you look at the first image, many of the colors with the highest frequencies are likely to be neighboring colors, differing by only a few intensity values. In other words, we want to take the most common, distinct color clusters.

Method 3 — Using K-Means clustering

The scikit-learn package comes to the rescue. We can use the famous K-Means clustering to group similar colors together. Easy, right? Now, all we need is a function to display the clusters of colors found above. We simply create an image with a height of 50 and a width of 300 pixels to display the color groups/palette, and for each color cluster, we assign it to our palette.

Most common colors #3 — K-means clustering

Beautiful, isn’t it? K-Means clustering gives great results in terms of the most common colors in the images. In the second image, we can see that there are too many shades of brown in the palette. This is most likely because we picked too many clusters.
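The frequency method can be sketched the same way: flatten the image into a list of RGB triplets, then count each distinct triplet with np.unique. The synthetic image below is an assumption standing in for the article's samples:

```python
import numpy as np

# A synthetic image with only a handful of distinct colors, standing in
# for the article's sample images.
rng = np.random.default_rng(1)
img = (rng.integers(0, 4, size=(50, 60, 3)) * 64).astype(np.uint8)

# Reshape to an (N, 3) array: one RGB triplet per pixel.
pixels = img.reshape(-1, 3)

# Count every distinct triplet; axis=0 makes rows (pixels) the unit.
colors, counts = np.unique(pixels, axis=0, return_counts=True)

most_common = colors[counts.argmax()]
print(most_common.shape)  # (3,)
```

Sorting counts in descending order and taking the first N rows of colors gives the top-N variant discussed above, with the caveat that near-identical shades are counted separately.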
Let’s see if we can fix it by choosing a smaller value of k. Yep, that solved it. Since we use K-Means clustering, we still have to determine the appropriate number of clusters ourselves. Three clusters seem to be a good choice. But we can still improve upon these results and also address the number-of-clusters issue. How about we also show the proportion of each cluster within the whole image?

Method 3.1 — K-Means + Proportion display

All we need to do is modify our palette function. Instead of using fixed steps, we change the width of each cluster’s swatch to be proportionate to how many pixels are in that cluster.

Most common colors #3.1 — K-means clustering + proportions

Much better. Not only does it give us the most common colors in the images, it also gives us the proportion of pixels that each one occupies. It also helps answer how many clusters we should use. In the case of the top image, two to four clusters seem reasonable; in the case of the second image, it looks like we need at least two clusters. The reason we don’t use just one cluster (k=1) is that we would run into the same problem as the average method.

K-Means with k=1 result

Conclusion

We have covered several techniques to get the most common colors in images using Python and several well-known libraries, and we’ve seen the advantages and disadvantages of each. So far, finding the most common colors using K-Means with k > 1 is one of the best solutions (at least compared to the other methods we’ve gone through). Let me know if you have problems with the script in the comments, or on my Github.
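The clustering-with-proportions idea can be sketched without the plotting code. To keep the snippet self-contained, a tiny numpy-only k-means (plain Lloyd's algorithm) stands in for scikit-learn's KMeans, which the article actually uses:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Tiny Lloyd's-algorithm k-means: returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct random pixels.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

# A random image stands in for the article's samples.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(40, 40, 3), dtype=np.uint8)
pixels = img.reshape(-1, 3).astype(float)

centers, labels = kmeans(pixels, k=3)

# Proportion of the image covered by each color cluster,
# which is what the proportional palette widths visualize.
proportions = np.bincount(labels, minlength=3) / len(labels)
print(centers.shape)  # (3, 3)
```

Each row of centers is one dominant color, and proportions gives the swatch widths for the proportional palette described above.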
https://towardsdatascience.com/finding-most-common-colors-in-python-47ea0767a06a
['M. Rake Linggar A.']
2020-10-09 03:10:43.595000+00:00
['Image Processing', 'Python', 'Machine Learning', 'Computer Vision', 'Opencv']
Virtual Roller Derby, Is The New Roller Champions Game Up To Derby?
When doing some virtual roller derby keyword research for an article, I came across a new trending keyword, “roller champions”, and I thought I had better check it out! It turns out Ubisoft has been developing a new computer game. It’s not based purely on any one existing sport but incorporates several sports together.

It is sort of a Virtual Roller Derby

Have you ever tried to explain what derby is and ended up saying “it’s rugby on skates!”? Well, that’s what Roller Champions looks like to me. What is this ball thing? If you haven’t noticed, roller derby has balls, but no ball! Roller Champions introduces the use of a ball into the rink! My head just exploded!

Roller Derby Champions Trailer

But could we learn something from this e-sport, albeit one still in development? What can roller derby teams do to improve in lockdown? It occurred to me there is no reason why you could not use this game as a virtual derby tactics platform. Imagine when you can’t practice because it’s the holidays, or there is a pandemic and the venue is shut. Zoom call anyone?… NAH! What about Discord and a game environment with a bit of bashy bash with the team?

A Virtual Roller Derby environment to develop our game Strategy?

The Closed Alpha stage of testing has now closed for Roller Champions, but the Beta is due! I really want to get a look to see if this is a feasible tool for us Derbs! This could be another way for us to focus our heads on what we do in a game. Without having to replay a game video, we can see the effect of losing your cool in real time without losing our ranking! Without losing a game, you can see how to form that wall better to stop those damn jukey jammers! With development, you could even start thinking about an augmented reality version! Oh, that sounds like the future!
Ok, so it is not a current solution, but is it time we brought roller derby training into the 21st century? Could this be a new community that might grow interest in our sport? I think it is worth a look! When I get the chance, I will see if we can get in contact with Ubisoft and the dev team. Until then, you can check out Roller Champions here: https://roller-champions.ubisoft.com/game/en-us/

Other Related News

You can keep up with Roller Derby News on our Facebook page or on Medium
https://medium.com/roller-derby-news/virtual-roller-derby-is-the-new-roller-champions-game-up-to-derby-cddf6a5fa0c4
['Roller Dernews']
2020-11-27 14:13:42.750000+00:00
['Roller Derby', 'Esport']
Taking the Phnom Penh Killing fields Tour
The first out-of-the-ordinary thing I did this Christmas was fly on Christmas Day to Cambodia. The second was to plunge straight into dark Cambodian history on the Phnom Penh Killing Fields tour. Here’s a rundown of what this tour is really like. Is it worth visiting the Phnom Penh Killing Fields? How harrowing is the experience? Is it something you really need to visit to understand Cambodian history and culture today? Today I’m answering travellers’ questions about the Phnom Penh Killing Fields tour.

Should you really go on the Phnom Penh Killing Fields tour?

As one of the world’s most infamous dark tourism destinations, I can totally understand why someone would not want to do the Phnom Penh Killing Fields tour. Delving into the horrific past of the Khmer Rouge and seeing evidence of mass graves and genocide is clearly not going to be an upbeat start to any holiday. However, mass murder under the Pol Pot regime is something that happened within most Cambodian people’s lifetimes, and for the younger generation it will likely have impacted their parents. For me, dark tourism isn’t about having gruesome obsessions or wanting to delve into the dark and depressing. It’s simply about having an understanding of the history and culture. Almost everyone I meet will have been either directly or indirectly affected by the Pol Pot regime, and I feel that I have a duty to make the effort to understand this. A visit to the Killing Fields is essential to understanding Cambodia’s past. It is as essential as a visit to the Kigali genocide museum on any trip to Rwanda today.

How to book a Phnom Penh Killing Fields tour online

Most hotels and guest houses offer a Phnom Penh Killing Fields tour, and they are easy to book once you arrive in Cambodia. I paid just $30 for the tour and booked it in the morning for an afternoon trip that same day. The tour only requires a half day, so you can fill the other half of the day with a Royal Palace tour or a trip to Wat Phnom.
However, it is a rather heavy-going day, and I opted to do only this tour to preserve energy and allow time to reflect on what I saw. If you would like to book your Phnom Penh Killing Fields tour online, I’d recommend booking with GetYourGuide or booking the Viator half-day Tuol Sleng and Choeung Ek Killing Fields tour. How to dress for the Phnom Penh Killing Fields tour Remember to be respectful when visiting these sites — they are places of memorial. You will need to dress conservatively with legs and arms covered — no shorts or vest tops are allowed. Jeans or long walking trousers and a T-shirt that covers the shoulders are recommended. Be silent/quiet when visiting these memorial sites and do not visit under the influence of alcohol. No smoking or eating is allowed on the grounds of S-21 or the killing fields. An Introduction to the History of the Pol Pot Regime Pol Pot was in power in Cambodia between 1976 and 1979. His vision, influenced by the Maoism of socialist China, was to turn Cambodia into an agrarian socialist republic with a focus on rural production. Anyone challenging this ideology was seen as challenging the state — teachers, doctors and lawyers were forced to work the land, or held as political prisoners. The radical communism of Pol Pot’s Khmer Rouge (his political party) resulted in the deaths of approximately 1.5 to 2 million people — around a quarter of the country’s population at the time. The deaths were the result of torture, forced labour, starvation, disease and murder. Agrarian tools were the favoured weapons, as bullets were expensive. S-21 Prison (Tuol Sleng Genocide Museum) The first stop on any Phnom Penh killing fields tour is likely to be the S-21 prison — a former secondary school that was converted into a prison and used by the regime for torture and interrogation. 
The prisoners included anyone who posed a threat to the regime, such as people accused of being traitors to the government — doctors, teachers, lawyers and intellectuals. Admission is $5 for non-Cambodian citizens (free for Cambodians) and you can hire a guide for an extra donation. An audio tour is available, but it’s well worth having a guide for the personal insights and stories about life under the Khmer Rouge. The suggested donation for a guide is $3. Our guide — Somaly — was just 13 years old under the Khmer Rouge, and she clearly remembers being forced out of the capital, Phnom Penh, and working the land for 12 solid hours a day with little more food than a small portion of rice. Tears came to her eyes when she described how her father and brother were murdered by the Khmer Rouge. Somaly found it difficult to return to Phnom Penh in the 1980s, only to discover nothing left of the family home. She now lives with her mother in Phnom Penh and runs tours of the museum. Approximately 17,000–20,000 prisoners went through S-21 prison between 1976 and 1979. Most died from the conditions or torture, or were sent on to the killing fields. The site of S-21 has now been turned into the Tuol Sleng Genocide Museum. There are 4 blocks to visit in the Tuol Sleng Genocide Museum. Each block shows something different about the horrific conditions at the prison… Torture/interrogation cells A photograph gallery — the interrogators and prisoners in black and white A block preserved as it was under the Khmer Rouge (possibly the most harrowing, as individual solitary confinement cells still exist here) A display of instruments of torture, and art by artist and S-21 survivor Bou Meng. The most difficult block to visit is the one left intact since Pol Pot’s regime — the barbed wire, put up to prevent suicides by jumping from windows and balconies, is still in place. The solitary confinement cells, with shackles and a small box for excrement, are pretty horrific. 
Only 11 people survived S-21. I was lucky enough to meet two of them — Chum Mey (one of seven adult survivors, who was saved by his ability to mend machines) and Norng Chanphal, who was a child survivor. Take an extra $10 with you if you would like to support them by purchasing one of their books — it helps to support them and also raises awareness about genocide. Killing Fields — Choeung Ek Genocidal Centre The second part of the tour involves a visit to the Choeung Ek Genocidal Centre, built on one of the largest killing fields in Cambodia. There are no tour guides at Choeung Ek, but an audio guide is available for $3. The audio guide is harrowingly descriptive, but a good way to understand what truly went on here. Many of the original buildings, such as the executioner’s office and the chemical substance storage room, were destroyed by the Khmer Rouge, and signs stand in their place to explain what happened there. When walking through the killing fields, be sure to stick to the paths and avoid treading on any fabric (clothing of victims), teeth or bones that may have worked their way up to the surface. The mass graves are sectioned off — avoid walking on them. Be prepared to see collections of clothing and bones from the victims of the killing fields. It’s raw and shocking, but necessary for understanding the recent history of Cambodia. The most gruesome aspect of the killing fields is probably the ‘Killing Tree’, which was used to beat children and smash their heads. A truly sick and inconceivable act — only when the tree was discovered with bone fragments and brain tissue on it was it realised what had actually gone on here. Finish the journey around the killing fields of Cambodia at the memorial stupa. Built in 1998, the stupa offers a place for reflection and memorial. It houses over 8,000 human skulls behind glass inside the stupa. 
It’s a truly depressing day out and a reminder of the atrocities of mankind, but Cambodians encourage tourists to visit these sites to understand the past and pay respect to the lives lost. Further Reading on Cambodia Travel Don’t miss my full guide to backpacking Cambodia. There are also further recommendations for Cambodia tours, Phnom Penh Tours and Siem Reap Tours. Follow this link if you would like help planning the perfect Cambodia itinerary.
https://amytrumpeter.medium.com/taking-the-phnom-penh-killing-fields-tour-52c016cd5f27
['Amy Trumpeter']
2019-12-30 16:35:14.800000+00:00
['Phnom Penh', 'Dark Tourism', 'Travel', 'Genocide', 'Cambodia']
Moonlight — Review
Based on Tarell McCraney’s largely autobiographical play ‘In Moonlight Black Boys Look Blue,’ ‘Moonlight’ tells a coming-of-age story at three different points in a young man’s life. Kind of like ‘Boyhood,’ except these assholes cheated and used different actors. Can you say “lazy?” The subject coming of age here is Chiron, a gay black youth, who is portrayed in three excellent performances by Alex R. Hibbert, Ashton Sanders and Trevante Rhodes. We watch as Chiron grows up in a bad Miami neighborhood, picked on by his drug-addicted mother (Naomie Harris) and classmates alike, finds first love, and eventually becomes a drug dealer. I will add that this was also the profession of his childhood father figure, Juan (Mahershala Ali), and that both are essentially dignified men of principle. Finding him hiding from bullies in an empty building, Juan takes Chiron under his wing, and the scenes with the two of them were among the film’s highlights for me. In particular, a scene in which the young kid casually asks whether his older friend sells drugs, and then openly makes the connection that his mother does drugs, is simply devastating. If you’re looking for some fancy story with twists galore or big CG explosions, I guess I could tell you that ‘The Girl on the Train,’ ‘Arrival’ and ‘Fantastic Beasts and Where to Find Them’ are all in theaters right now, and none of the three is bad either. ‘Moonlight,’ though, is one of those slice-of-life dramas whose main selling points are watching a character grow and the fact that everything feels real. It’s also one of the best I’ve seen recently, and writer/director Barry Jenkins is clearly someone to keep an eye on.
https://medium.com/panel-frame/moonlight-review-f9f0ae75975d
['Will Daniel']
2017-03-16 04:32:50.548000+00:00
['Film', 'Movies']
Huge ANNNNOUCEMEENNNT #27 — BNS going DeFi
We had been keenly observing the developments happening in the DeFi (decentralised finance) space, and we decided BNS should actively participate in it. Yield farming has been a craze. A lot of the decentralised finance tokens which launched in the last 2 months have entered the top 100 CMC ranks (SUSHI, YFI, COMP, BAL etc.). A few are already listed on Bitbns, and we will list more credible ones rather than just go after what’s trending, as we understand users trust us to list credible tokens. We were among the first exchanges to list COMP and BAL globally. Coming back to what decentralised finance is: in DeFi, the overall premise rests on the fact that smart contracts govern everything, including lending, borrowing, interest generation, credit, liquidations etc. Simply put, the role of banks is now being performed by smart contracts, some of the overheads of running an operation (manpower etc.) become obsolete, and there is a larger yield to be generated. Yields are higher for other reasons as well, including the distribution of a token which itself accrues value. One of the central concepts in DeFi is yield farming. What it essentially means is that, just as you deposit funds in a bank and get returns on fixed deposits, you can lock your crypto assets and get returns in the form of crypto. This, in simple words, is yield farming: you lock your assets and earn yield in the form of newly generated tokens. DeFi has redefined a few things and there is a constant spurt of activity happening there. Today, we are excited to announce that we are taking the DeFi route, and step 1 is launching https://bns.finance/ — a website where users can deposit tokens they possess and earn yield on them. The initial yield starts really high. You can check it on the website. Users would get yield in the form of BNSD (BNS DeFi), a token which we have released specifically for this. 
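The yield-farming mechanics described above, together with the BNSD halving schedule detailed in this announcement, can be sketched in a few lines of Python. This is only an illustrative model, not the actual BNSD contract: all function names are hypothetical, and only the per-block reward figures are taken from the post.

```python
# Illustrative model of pro-rata yield farming with a halving-style
# reward schedule. Not the real BNSD contract; names are hypothetical.

# (days-since-genesis threshold, reward per ETH block in BNSD)
HALVINGS = [(1, 1000), (7, 500), (30, 250), (90, 125)]
TAIL_REWARD = 100  # reward after the final reduction at 90 days

def reward_per_block(days_since_genesis: float) -> int:
    """Block reward for a block minted this many days after genesis."""
    for threshold, reward in HALVINGS:
        if days_since_genesis < threshold:
            return reward
    return TAIL_REWARD

def distribute(stakes: dict, days_since_genesis: float) -> dict:
    """Split one block's reward pro-rata across stakers' locked tokens."""
    total = sum(stakes.values())
    block_reward = reward_per_block(days_since_genesis)
    return {who: block_reward * amount / total for who, amount in stakes.items()}
```

For example, a farmer providing 30 of 40 staked LP tokens during the first day would receive 750 of the 1000 BNSD minted in that block, and the same share of the smaller rewards after each reduction.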
Salient features of BNSD: Super high APYs. Multiple pools for farming. Extreme deflationary release over time. The halving model contains 4 halvings, where block rewards reduce. Block rewards start at 1000 BNSD per ETH block and then reduce, based on halvings, in the following fashion: 1000 to 500 at 1 day from the genesis block; 500 to 250 at 7 days; 250 to 125 at 30 days; 125 to 100 at 90 days. Just 4% of the rewards are reserved for dev funds. This is the lowest in comparison with other DeFi projects like Sushi. Best part: 50% of the dev funds are used for buying BNS on a periodic basis. The contract is super clean, as there is no mint function except for the block rewards being generated per block, so there is no risk associated. Also, no time lock is required, as only BNSChef can mint rewards, and those rewards are specific to each block. How do I farm? First, install Metamask from the link (install the Android app or Chrome extension and register yourself. You can skip the private key step and register using a password for faster setup. Ensure you use a strong password which you have not used anywhere else). You will find your ETH address there. On Chrome, you can find it here: You can deposit ETH or USDT there, or other ERC20 tokens which you can see in the BNS.finance pools. Now go to https://uniswap.info/ and find that pair. Click on ‘Add Liquidity’. 10796300 is the starting block from which BNSD farming can start. You can check the countdown timer here: http://etherscan.io/block/countdown/10796300 Estimated target date: Fri Sep 04 2020 23:05:28 GMT+0530 (India Standard Time). As an example — if I am doing it for the ETH USDT pool, the process goes something like this: After installing Metamask, I deposit ETH and USDT to my address displayed in Metamask. Then I go to Uniswap and search for the ETH USDT pair. 
Then, I click the ‘Add Liquidity’ button. I will now be redirected to a URL like this: Once you see the balance, click ‘Connect Wallet’, then select ‘Metamask’. Once linked, you need to execute 2 transactions, which will get your liquidity added to the ETH USDT pool. After this, go to https://bns.finance/ and then to ETH USDT in this case (as we added liquidity for ETH USDT). You will be able to see the tokens available there. Your wallet and a stake option will be available, where you can see your Uniswap LP token balance. You can click on ‘Stake’, and then there will be 2 Metamask transactions. That’s it folks! That’s how you do it. To understand more, watch some of the good videos around Metamask operations. Till then, onwards & upwards towards the future of finance. Team Bitbns
https://medium.com/bitbns/huge-annnnnnnoucemeennnt-bns-going-defi-1c6f5bb0673
[]
2020-09-04 17:10:05.199000+00:00
['Defi', 'Cryptocurrency', 'Bitbns', 'Ethereum', 'Bitcoin']
Humans and Machines — Partners not Enemies
With machine translation showing no signs of slowing down, can you afford not to have some robots in your task force? Machine translation has been buzzing around as a great cost-saving solution in the world of translation for some time, but what does it really mean and what are the risks to your business? Looking at the infamous Speed, Quality, Cost triangle, it would suggest that if you are saving cost and significant time, then quality must be what you sacrifice. Is there a way you can take advantage of the benefits of machine translation and still maintain the quality of content that reflects your product or brand? We thought: why not? Initially we ran some tests using the standard machine translation tools available to us all (Google Translate, DeepL, etc.) and found that as out-of-the-box solutions these tools had come on in leaps and bounds. They were certainly good enough to be used for very repetitive content production, such as simple product catalogues. However, they were not so good at more complex content like luxury product descriptions, and we certainly didn’t want to appear ‘dumb’ in the wrong area. We wondered how long we’d have to wait for them to be ready to handle high-quality translation. We wanted to be part of this future but didn’t want to sit twiddling our thumbs until others were ready. We thought: what if we could build our own tool? So that’s what we did. Wezen is a cloud-based SaaS platform that supports content creation and translation. We are focused on making it the best it can be, so we rolled up our sleeves and built our own machine translation. We started running some tests internally to see how good we could make it. We tested our machine against the leading competitor and a human translator. We showed our internal team a selection of sentences that had been translated by all three and asked them to say which they thought were the best translations, without telling them which translation method had been used. 
The results were really positive: out of 20 segments, Wezen was chosen as the preferred translation 14 times. Then we approached one of our favourite clients and said: we think this might work for you. They were immediately on board, and we would finally be able to see if our hard work had paid off. We scraped four years’ worth of content we had previously translated to train a brand-specific machine translation tool just for them. We then ran a series of tests with our team of reviewers to check the quality and point out any major discrepancies. Then we tested and learned from the results until we were happy we had a good standard. After the fourth test, our machine translation was performing as well as a senior translator in most cases. Celebration ensued! We had done it! 🎉 We started using the machine translation with some old products before launch and realised that although the machine was good, it wasn’t as perceptive in some areas as our great human team. For example, it wasn’t the best at figuring out the correct gender to assign to different words, and sometimes it would translate brand names, with some hilarious results. We needed our human team to keep the machine working at the level of quality our client expects from us every day. With a combination of the machine to do the initial translation and a fantastic human team to check up on those areas where it gets confused, we managed to get all three sides of the triangle: cost reduction, more efficient timescales and maintained quality. In the past, we have had up to 40 translators working at once to translate content in a quick turnaround. Now we only needed to find four excellent reviewers to ensure the same quality is achieved. Machine translation is catching up fast in terms of quality and accuracy. Here at Wezen we are leading the way into this future with your brand in mind. In a world saturated with up-to-date content, don’t get left behind! 
If you want to talk to us about how we could build your brand its own machine translation tool, then email us at [email protected]
https://medium.com/wezen-sam/humans-and-machines-partners-not-enemies-a618b535069
['Naomi Burgess']
2020-04-23 09:48:42.918000+00:00
['Localization', 'Ecommerce', 'Machine Translation', 'Content Strategy', 'Translation']
30+ Tools List for GitOps
This post was originally published on Cherre.com here. GitOps — which takes the automation facets of the DevOps methodology — is an approach that aims to streamline infrastructure management and cloud operations alongside software development and deployment. While many consider GitOps a replacement for DevOps, it is not — the approach simply concentrates on automating one facet of the DevOps methodology. Specifically, GitOps uses Git pull requests to automate infrastructure provisioning and software deployment, all for the purpose of making CI/CD a more efficient process. GitOps uses Git as a single source of truth for both application development and cloud infrastructure; declarative statements are used to streamline configuration and deployment. GitOps unifies a number of key tasks, such as deployment, management, and monitoring of cloud clusters (specifically containers running in the cloud), and allows developers to have more control over their application deployment pipeline. Since Git works for Infrastructure as Code (IaC) as well as application development, it is an ideal repository of truth for the approach. Benefits of GitOps GitOps offers some key advantages to those who employ the approach, starting with the more refined CI/CD pipeline itself. The approach fully leverages the benefits of cloud-native applications and scalable cloud infrastructure without the usual complications. Other benefits include: Higher reliability, made possible by Git’s native features. You can roll back deployments and use Git’s tracking mechanism to revert to any version of the app should new code cause errors. This results in a more robust cloud infrastructure too. Improved stability, particularly when it comes to managing Kubernetes clusters. Everything is traceable, and changes in cluster configuration can also be reverted if needed. An audit log is automatically created with Git as the source of truth. 
Better productivity, allowing developers to focus more on the quality of their code rather than the pipeline itself. Everything is fully automated once new code is committed to Git, plus there are additional automation tools to utilize. Maximum consistency, especially with the entire process being managed using the same approach from end to end. GitOps simplifies everything from apps and Kubernetes add-ons to the Kubernetes infrastructure as a whole. Many believe that GitOps offers the best of both worlds, combining continuous delivery with cloud-native advantages and IaC. GitOps best practices also make the end-to-end pipeline standardized, and you can integrate the approach with any existing pipeline without making big changes. You just need the right tools for the job. GitOps Tools to Integrate Speaking of the right tools for the job, there are countless tools to help you integrate the GitOps approach with your existing workflows. Some of the tools supporting GitOps are so popular that you may even be using them in your existing pipeline. To help you get started, here are the tools we recommend if you want to incorporate GitOps. Of course, Kubernetes sits at the heart of GitOps. After all, the approach is based on using Kubernetes to manage containers and build a robust infrastructure. Kubernetes now comes with a lot of automation tools to simplify deployment and scaling of cloud infrastructure; we will get to some of them later in this article. As an open-source version control platform, Git is very robust. In GitOps, your Git repository becomes the single source of truth. Every change you commit to Git will be processed and deployed. You can also have separate Git repos for development and deployment. Helm is one of the most robust tools for configuring Kubernetes resources. Yes, you can use Homebrew or Yum, but Helm offers automation features that are not available in other tools in its class. 
If you want to further manage your roll-outs, Flagger from Weaveworks is a must-use tool. It is a tool for managing progressive delivery, which allows new code to be deployed selectively to identify errors. It works well with the next tool on this list. Prometheus acts as a monitoring tool for GitOps. It generates alerts if changes do not pass the tests set by Flagger. On top of that, Prometheus also bridges the gap between GitOps and other automation tools. Flux, or FluxCD, is simply the GitOps operator for Kubernetes. It automatically adjusts your Kubernetes cluster configuration to match the config found in your Git repo. Flux is the reason why changes made to your Kubernetes cluster can be reverted easily. For image management, you can use Quay. Container images are managed meticulously with this tool, all without sacrificing security and reliability. Quay enables GitOps to work with an on-premises image registry rather than cloud-based ones like GitHub’s. To keep your Git pull requests and updates organized, there are several tools you can use. Auto-Assign is one of them. As the name suggests, it assigns reviewers every time new pull requests are found, so changes can be monitored closely. Sticking with maintaining the quality of your code, CodeFactor is another tool that can be integrated into your GitOps pipeline. It is an automated code review tool that automatically checks code against predefined standards when new Git commits are found. Managing dependencies is key, especially if your app is built with a language like Go. DEP is the tool you want to use in this instance. It is specifically created to manage the dependencies of Go apps and services without slowing down your GitOps pipeline. Another Git app for managing code is Kodiakhq. This time, the tool focuses on automatically updating and merging pull requests while reducing the CI load. 
Manually merging requests is no longer needed with Kodiakhq up and running, and this frees up time and valuable resources for other tasks. If you use Terraform to streamline resource provisioning, you can use Atlantis to add additional automation to the pipeline. Atlantis automates pull requests for Terraform and triggers further actions when new requests are found. Helm Operator also takes Helm a step further by introducing automation to the release of Helm Charts. It is designed to work in a GitOps pipeline from the ground up, so integrating Helm Operator is incredibly easy. Gitkube focuses more on building and deploying Docker images using Git push. The tool is very simple to use and doesn’t require complex configuration of individual containers. This too is a tool that will save you a lot of time and energy during the deployment phase. We really cannot talk about GitOps tools without talking about Jenkins X. Jenkins X started life as a CI/CD platform for Kubernetes, and the platform can be used to manage your GitOps pipeline seamlessly. It even has a built-in preview environment to minimize code and deployment errors. Restyled enforces a certain style of coding for better standardization. With GitOps being designed as a way to standardize the end-to-end process, having the ability to automate code review and the re-merging of requests is a huge plus. Argo CD takes a more visual approach to GitOps. It visualizes the configuration of both applications and environments, plus it simulates the GitOps pipeline with charts and visual cues. You can use Argo CD in conjunction with Helm and other GitOps tools as well. Kapp, a name derived from “Kubernetes app”, focuses on the deployment side of the pipeline. It takes packages that have been created by other automation tools you integrate into your GitOps workflow and produces Kubernetes configuration based on them. Kpt, or “kept”, is another tool for streamlining deployment and the provisioning of Kubernetes resources. 
It uses declarations to handle resource configuration, allowing developers to gain better control over their infrastructure. There is no need for manual configuration at all with Kpt in place. Stale handles something that annoys a lot of developers: outstanding or abandoned issues and pull requests. With Stale, you can configure when pull requests and issues are considered abandoned, and then automate the process of managing those requests and issues. Kube Backup is an essential tool for maintaining the Kubernetes cluster configuration. It backs up your cluster to Git, particularly the resource state of the cluster. In the event of a catastrophic failure of the environment, you can get your application up and running faster with Kube Backup. A handy tool for managing resources in your Kubernetes cluster is Untrak. The tool automatically finds untracked resources in your cluster. It also handles garbage collection and will help you keep your Kubernetes cluster lean. Fluxcloud integrates Slack with GitOps. If you use Flux (FluxCD), you will certainly love Fluxcloud. It eliminates the need for Weave Cloud and allows Slack notifications to be generated for every FluxCD activity. Style guides and standards for your code! Stickler CI streamlines the implementation of coding styles without affecting the pipeline itself. You get fast and consistent code checking and standardization as soon as you implement Stickler CI in your workflow. This next tool is very straightforward. Task List Completed stops pull requests with outstanding tasks from being merged. Instead of having to manually check tasks on every pull request, you can safeguard your deployment environment using this tool. We’ve mentioned how you can use Fluxcloud for notifications, but what if you decide not to use FluxCD? You can still get notifications for Git changes by activating the native Slack plugin. 
Slack supports tasks such as closing and opening pull requests and issues, as well as interacting with them directly from the Slack app. Even with the best QA in place, errors in code can still slip through. This is where CI Reporter comes in handy. The tool collects error reports for a failing build and adds them to the relevant pull requests. For more granular control over which pull requests get merged, use PR Label Enforce. The tool requires certain labels before a pull request can be merged. You can set labels like “ready” or “checked” as the parameter, and then use other tools to automate the assignment of these labels. For storing private data inside Git, use Git-Secret. This is handy for when you need to store sensitive configuration files or Secrets. Security is very important in GitOps, so Git-Secret is invaluable as a way to ensure it. Speaking of security, you can also use Kamus, which automatically incorporates zero-trust encryption and decryption into your GitOps workflow. Combined with Git-Secret, you can strengthen the security of your entire pipeline without slowing down your CI/CD cycles. If you need to take things a step further, you can also use Sealed Secrets to encrypt Secrets using a one-way encryption process. Sealed Secrets provides maximum security for your GitOps pipeline. While GitOps is very agile as an approach, maintaining productivity is still necessary. Pull Panda helps you do that by making collaborative work easier and more efficient. It sends pull reminders and analytics to Slack and can even automate the assignment of pull requests. Sleeek is also a bot for managing productivity and streamlining processes, but it takes a slightly different approach to the problem. Sleeek is basically a virtual assistant that helps project managers and development teams stay in sync through a series of questions. 
The list goes on, to be honest; there are so many great tools out there that can help you integrate GitOps and streamline your deployment pipeline significantly. GitOps, as an approach, does offer a lot of flexibility and a chance for developers to be more meticulous when managing Kubernetes clusters and the provisioning of cloud resources. This really can be continuous deployment meets cloud-native when it comes to working with Kubernetes.
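The core mechanism that all of these tools build on (reconciling the live cluster against the state declared in Git) can be sketched in a few lines of Python. This is a conceptual model only: the dict-based repo and cluster are toy stand-ins, and none of the names correspond to Flux's or Argo CD's actual APIs.

```python
# Conceptual sketch of a GitOps reconciliation loop: desired state
# lives in Git, the operator diffs it against the live cluster and
# applies the difference. All names here are illustrative.

def desired_state(repo: dict) -> dict:
    """Desired state = the manifests at the repo's HEAD commit."""
    return repo["commits"][repo["head"]]

def reconcile(repo: dict, cluster: dict) -> list:
    """Bring the live cluster in line with Git; return the actions taken."""
    desired, actions = desired_state(repo), []
    for name, manifest in desired.items():
        if cluster.get(name) != manifest:
            cluster[name] = manifest          # create or update a drifted resource
            actions.append(f"apply {name}")
    for name in list(cluster):
        if name not in desired:
            del cluster[name]                 # garbage-collect untracked resources
            actions.append(f"delete {name}")
    return actions
```

Rolling back is then just pointing HEAD at an earlier commit; the next reconcile pass reverts the cluster, which is the Git-native rollback benefit described at the top of this article.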
https://medium.com/@stefanthorpe/30-tools-list-for-gitops-f591563e91c3
['Stefan Thorpe']
2020-12-19 08:07:23.200000+00:00
['Git', 'Tools', 'DevOps', 'Software Development', 'Gitops']
Medford citizens speak out against April school board elections at council meeting
Admin · May 20, 2016 The Medford school board election is a passionate subject for some Medford residents. During the Tuesday, May 17 meeting of the Medford Township Council, residents spoke against moving the school board elections from November to April. Residents cited reasons such as the cost of an additional election, the loss of programs if the budget is not approved, and foreseeable problems with the budget going to the council for approval. At a meeting in April, council discussed the possibility of moving the school board election to April, as this year would be the first eligible date to move the election back to its original date, and some residents a few months ago told council that moving the elections from April to November four years ago was “unjust.” This is because moving the election to November removed residents’ right to vote on the budget, as long as it doesn’t exceed the 2 percent increase cap. Council decided it would hold a meeting in June on the subject. “As a taxpayer I’m against it … it could potentially reduce the budget and hurt programs,” Diana Pasca said. Residents stated multiple reasons why the elections should stay in November. They said adding the April election would cost more money; they feared that budgets would not be approved and programs could get cut; they felt cutting programs could negatively affect the schools, also affecting the value of the education, especially for special-needs children; and that putting the decision on the BOE budget in the hands of the council could have a negative impact. “My fear of moving the election to April is adding the cost of an additional election … and that the vote will end in a defeat of the budget because no one wants an increase in their taxes … that would create a downward spiral of all the things our school has worked (on),” Jessica Siragusa said. 
“What you are contemplating fosters a divided community … most significantly giving yourselves the authority over a defeated school budget puts you in the position of making decisions that you admittedly have limited knowledge or experience, decisions that impact our children, overall quality of our schools and the value of my home and the home of everyone in our community … Do you recall what happened in past years when town council had to make a decision on a defeated budget? Those council members found themselves in a dilemma in which they realized they would lose no matter what they decided … indeed they did lose, they were shortly voted out of office,” Jeff Reuter said. Resident William Love said council should take the increase-in-taxes issue up with Trenton, where the real problem is. According to Love, the average state aid in Medford used to be 45 percent, but now covers only about 10 percent of the budget. He also felt it wasn’t fair for the school budget to be approved in a vote when the municipal budget didn’t go to a vote. “(Councilman Chris Buoni) should be fighting for our fair share and supporting the schools,” Love said. Township Administrator Kathy Burger said a separate election could cost around $30,000, based on past referendums. For the November elections, townships do not have to pay, though they do have to pay around $12,000 for the primary election. Buoni said New Jersey school taxes are the highest in the country, and he feels that is a problem. He also felt that because school districts are allowed increases of up to 2 percent, they haven’t tried to continue the programs they have without increasing the budget. “It’s amazing what people can do with less when they know they don’t have any more access to more … we’re not thinking and (innovating),” Buoni said. Additionally, he said things don’t need to be cut, but believes the people should have a say. 
Buoni added that he looked into having the municipal budgets approved by the taxpayers, as he would "happily have voters approve the budget"; however, he was told that was not possible. Council approved a special council meeting on June 1 at 7:30 p.m. at the Public Safety Building with the purpose of considering a resolution to move the school board elections from November to April. In other news:
• An ordinance was unanimously approved on first reading for the conveyance of the Centennial beach to the Centennial Pines Club, the Centennial Lake Homeowners Association, in the amount of $943.31. This is for the lien amount owed of $193.31 as well as the redemption fee of $750. The second reading and public hearing will be at the next regular meeting.
• Resolutions of note approved that night include: authorizing an agreement with the county for pedestrian beacons at the intersections of Main Street and Allen Avenue, Stokes Road and Hampshire Way, and Taunton Boulevard and Locust Road; accepting $180,000 from the county under the Municipal Park Development Grant Program to reconstruct the basketball courts in Freedom Park and Bob Meyer Park; and authorizing an agreement between the township and the Medford Business Association and Medford Celebrates Foundation to reimburse the township for certain services for the Art, Wine & Music Festival and Independence Day Celebration, due to budgetary constraints.
• Cameron Wagner received a proclamation for becoming an Eagle Scout. He is part of Boy Scout Troop 20. His project was connecting the trail between two lakes in the Sherwood Forest neighborhood.
• Conner Crudeli was approved as a volunteer firefighter for station 251.
• The next regular council meeting will be June 8 at 7 p.m.
https://medium.com/the-medford-sun/medford-citizens-speak-out-against-april-school-board-elections-at-council-meeting-73ac81c4a86e
[]
2016-12-19 15:56:33.385000+00:00
['Schools', 'Politics', 'Headlines', 'Medford Council', 'Council Meeting']
With this case you can try this plugin https://github.com/gfaraday/g_faraday
With this case you can try this plugin https://github.com/gfaraday/g_faraday
https://medium.com/@gixgong/with-this-case-you-can-try-this-plugin-https-github-com-gfaraday-g-faraday-9697ca14589d
['Gix Gong']
2020-12-26 01:39:02.578000+00:00
['Flutter']
7 Steps to Delivering Your Project on Time
Handling projects can be stressful. It requires proper planning, resources, and a meticulous execution strategy for them to be successful. However, even with the most detailed plans, it is usually difficult to meet the set delivery timelines. More often than not, project managers ask for additional time to complete the project. Are you wondering how you will deliver your next project on time? Here is a step-by-step guide on how to plan and deliver your next project before it's too late. 1) Break down the project into milestones Start by having the end goal of the project in mind, and then break it down into individual milestones. These are going to be the indicators that will keep you on track when working on your project. For example, if you are creating a new product, your milestones will be planning, producing, testing, and the launch of the product. Once you have identified your milestones, determine the individual tasks required to accomplish each milestone. Finally, depending on how flexible your project is, you can work on the tasks as you go along with the project. 2) Allocate time for each task Once you have all the tasks fully mapped under each milestone, start calculating how much time you will need to accomplish each one of them. One of the best tools you can use to do this is a Gantt chart. This is a project planning tool that enables you to assign tasks to team members and allocate resources for them. It also helps you to add dependencies, which are relationships between the tasks. For example, one task cannot begin until the previous task is complete. With the Gantt chart, you can also add project milestones, which are significant steps in your project, and then you can track the actual progress of your project. Once planned, the Gantt chart will help you visualize the project workflow and team member workload in a waterfall timeline. And when properly utilized, it will help you deliver your project on time.
3) Single out the key deliverables for each task Steps 1 and 2 are about creating and visualizing the timeline of the project. From now on, the steps shift to bringing the project to life. Here you start by singling out the key deliverables for each task. The more precise the deliverables are, the more accountable your team members will be to them. 4) Identify task dependencies Dependencies are relationships between tasks that determine the order in which activities are to be performed. The most common dependency is a finish-to-start relationship, where one task must be finished before another task can start. If you are using a project management tool, like the Gantt chart, you can add dependencies between tasks, which will make it easy for you to simulate and visualize the ripple effect of missed deadlines. These visualizations can also help clarify for team members what each of them has to do to make the dependencies seamless. 5) Identify the number of hours and resources needed to complete each task In steps 1 and 2, you were able to determine the milestones and tasks. Now it is time to analyze the number of hours and resources you will need to complete each task. However, to make sure you allocate time and resources adequately, you will have to shed some wishful thinking. For example, a typical worker has 8 working hours, but if you factor in things like research, coffee breaks, meetings, and socializing, they have at the very most 6 hours to work. This means when you are allocating time for the tasks, you have to account for all of these "time-wasters", and you cannot plan around the full 8-hour workday in your project. To make it easy, use the Gantt chart to plot the number of hours and resources required to complete each task. This step may seem obvious, but it will influence how enjoyable the project is and its execution.
Giving too little time and too few resources to complete tasks is bound to cause unnecessary stress and deflate morale in the project, whereas too long a leash may not get things cracking. Therefore, strike a balance. 6) Who is responsible for executing each task? Most projects lose out on achieving their most desirable outcome because of poor execution. When implementing your project, you have to pinpoint who is responsible for executing each task. These tasks should be assigned in line with the skills, experience, and personality of the worker. If you have team leaders, it is at this point that you brief them on what is expected of their teams. Bonus tip: Choose team leaders who are hands-on and follow through on every member of their teams' work. 7) Who is responsible for approving each task or group of tasks? Once everyone knows which tasks they are to do, it is time to determine who will approve the deliverables. In most projects, approvals are done by stakeholders, who are usually removed from the day-to-day execution of the project, or by multidisciplinary groups who review the output sequentially. These people have to be briefed on project timelines to make sure they approve on time and everything runs on schedule. In case there are diverging views on how things are running, they are to review as quickly as possible and give a way forward to prevent the project from stalling. Conclusion Timelines are a sore point in many projects, since things rarely go as planned. However, the key is to plan everything out step by step and capture how much time every milestone and task will take. From there, sum up these times, and you will be able to have a more accurate timeline for when you are going to finish the project. With the above 7 steps, you will be able to execute your project and deliver on the stipulated deadlines. Lastly, also be ready for any curveballs that may come your way.
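The finish-to-start dependencies from step 4 amount to a small scheduling rule: a task may start only once everything it depends on has finished. A minimal sketch of that ordering, using the milestone names from step 1 (the `TaskScheduler` class and the map layout are illustrative, not taken from any particular Gantt tool):

```java
import java.util.*;

// Sketch of finish-to-start ordering: each task lists the tasks that
// must finish before it can start, and we emit tasks in a valid order.
// Assumes the dependency graph is acyclic; a cycle would loop forever here.
public class TaskScheduler {

    public static List<String> order(Map<String, List<String>> deps) {
        List<String> result = new ArrayList<>();
        Set<String> done = new HashSet<>();
        while (done.size() < deps.size()) {
            for (Map.Entry<String, List<String>> e : deps.entrySet()) {
                // Start a task only when every prerequisite is finished.
                if (!done.contains(e.getKey()) && done.containsAll(e.getValue())) {
                    result.add(e.getKey());
                    done.add(e.getKey());
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("plan", List.of());
        deps.put("produce", List.of("plan"));
        deps.put("test", List.of("produce"));
        deps.put("launch", List.of("test"));
        System.out.println(order(deps)); // prints [plan, produce, test, launch]
    }
}
```

Plotted on a Gantt chart, the returned sequence is exactly the left-to-right order of the bars, which is why a missed deadline on an early task visibly ripples into everything downstream of it.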
https://medium.com/@iamautonomous/7-steps-to-delivering-your-project-on-time-b8cb052f1225
[]
2020-07-07 10:52:34.474000+00:00
['Employee Productivity', 'Productivity Tips', 'Focus', 'Top List', 'Employee Health']
Facts about Egypt you wish you never knew
Trophy child: King Tut was likely a by-product of inbreeding King Tutankhamun, King of Egypt, balance of the universe, keeper of Maat, and the living manifestation of the solar god Horus (not a Warhammer reference), is likely a product of incest between Pharaoh Akhenaten and one of his sisters. The Egyptians had a very good reason for this: they saw themselves as the descendants of gods, belonging to the Ra lineage, and to preserve the bloodline they chose to marry their sisters, or first cousins, or their daughters (anything beyond that was strictly off limits). Which I guess worked out well for King Tut, if we can ignore the cleft palate and club foot. Fun fact: King Tut went on to marry his half-sister, Ankhesenamun. Gender reveals Two gods were placed alongside pregnant women, Bes and Heqet, one described as an ugly, pudgy dwarf and the other as a frog. Not the best in terms of looks, but they helped to ward off evil spirits. The average Egyptian (if they survived birth and childhood) might live up to 30 to 35 years, and there were various ways to die: accidents during work, or fighting off an enemy. There is also evidence of maladies that could easily be treated by modern medicine and vaccination. Now, coming to the weird stuff, with an outcome of debatable accuracy (do not try this at home): wheat and barley were placed in a cloth pouch, and a woman suspecting she was pregnant would urinate on them every day; if the barley sprouted it would be a boy, and if the wheat sprouted, it was a girl. Egyptians invented toothpaste Egyptians invented papyrus, black ink, the plough, the sickle, clocks, wigs, cosmetic makeup, surgical instruments and embalming, and they also invented dental hygiene. They made toothpaste out of ox hooves, ashes, burnt eggshells and pumice. There are how-to-brush manuals written on papyrus.
One, from the fourth century AD, describes how to mix precise amounts of rock salt, mint, dried iris flower and grains of pepper to form a "powder for white and perfect teeth" (which also causes gum bleeding). Egyptians faced a lot of oral problems, mostly due to their diet of uncooked vegetables, which might have been a prime motivator behind the invention. How they removed the brain from a dead person To the Egyptians, the heart was the centre of intelligence; the brain, on the other hand, was an empty mass filling the void inside the skull. But they had figured out that a fracture to the skull can cause concussion. While the embalmer took special care to remove the heart from a body, a piece of copper was inserted up the nose, pounding through the fragile skull until it reached the soft substance behind it, twisting the hook until the brain liquefied, then pulling out the chunks. One of the most popular Egyptian myths is humanity being saved by beer. When the sun god Re got tired of all humanity and set out to wipe it out, he sent an invite to his daughter Hathor, who transformed into his "vengeful Eye" and later into Sekhmet, a goddess depicted as a lioness. Halfway through the massacre, Re changed his mind. But Hathor was too far into it, enthusiastically participating in the slaughter and enjoying the blood. To stop his daughter, Re had huge barrels of beer dyed red and poured onto the fields. She stopped because she was too intoxicated to do anything else. Hence started the Festival of Drunkenness; sobriety was usually encouraged in Egyptian culture, with the exception of festivities. The festival also encouraged a few sexual encounters, which were usually frowned upon in Egyptian culture. Hathor once stood naked in front of her father Re while he was depressed. And he burst out laughing, never to be depressed again!
https://medium.com/@dakshayani1/facts-about-egypt-you-wish-you-never-knew-163322da6170
[]
2021-07-07 04:50:38.863000+00:00
['Facts', 'Egypt', 'History']
Don’t Expect 2021 to Be Any Better
Don’t Expect 2021 to Be Any Better 2021 will only be a great year if you make it great. open source from pixabay.com 2020 was tough. We’ll all be glad when it’s behind us. Now for reality: 2021 isn’t going to magically be a great year. That doesn’t mean it can’t be a great year, but without putting your mind to it, 2021 isn’t going to be any better. Don’t let it catch you by surprise. Right now, take a realistic assessment of your life. Give yourself an honest rating. On a scale of 0–10, how do you rate the following pillars of your life? Relationships Finances Career Physical Health Mental/Emotional Health Spiritual Health Hobbies/Fun/Recreation Social Connectedness Rate yourself without judgement. If it helps, pretend you’re a neutral observer looking in from the outside. There’s no “should have”, “wish I had”, or guilt. You’re gathering these numbers to help you focus your attention where it’s most needed. Keep it simple. If you rated yourself a 10 in some of the pillars of your life, woo hoo! If not, all I want you to do is ask yourself this simple question: “What’s the next, single, practical, small step I can take to move this score up one notch?” That question will determine the quality of your 2021. You have limited control over the outside events going on in the world, but you have all the control when it comes to your own personal growth and experience. The decisions you make, and the small, but consistent, actions you take will make 2021 a better year for you. You make a great life one steady step at a time. It’s easy to look at a grand, distant goal and get discouraged. The beauty of setting small, precise intentions is that you’re very likely to meet them. Cheer yourself on like you would a small child. Give yourself credit for each small step along the way. The key to a great 2021 isn’t complicated. Do an honest day’s work, have fun, and do something good for humanity. Spread love outward and feel it as it returns to you. 
Exercise your body with movement. Rejuvenate your mind with laughter. Nourish your soul with spiritual connection. Next, who do you have to BE to reach the goals you want to reach? What character traits and behaviors are common to the people who’ve reached the goals you want to reach? You’ll have more success reaching a goal if you first write a list of character traits needed to reach the goal and focus on that list before you do anything else. Ask yourself, “Who do I have to be to have….” Traits should come to mind. Conscientious? Committed? Focused? A clear communicator? Energetic? Loving? Prompt? Patient? A good listener? I could write an endless list. “Who do I have to be to have….” Take an inventory of the traits needed to make 2021 a great year. Rate these traits within you. Are some weaker and some stronger? Exercise those weaker traits to build them and make them permanent parts of who you are. Not only will you more easily reach your goals, you’ll see the benefits spill over to all aspects of your life. 2020, whether interpreted as good or bad, had hidden blessings. open source from pixabay.com As uncomfortable as 2020 was at times, I know if you look for them, you’ll find bright spots. What did you learn? What did you accomplish? Who did you connect with? What did you release? Ask people to share their 2020 successes with you if you want inspiration. People around us showed incredible strength and resilience. Some learned new skills. Others survived homeschooling their kids. Some opened new businesses. In 2020, I wrote a book. That was a feat that felt overwhelming and impossible to me in the past. But not in 2020. I challenged myself to make the pandemic give birth to my book, and it happened. I wish you the best. Visit me at www.christinebradstreet.com You deserve genuine and lifelong happiness, the type of happiness that can’t be taken away from you no matter what sort of craziness is happening in the world. Read my book, Happy Ever After. 
We can all use that right now.
https://medium.com/change-your-mind/dont-expect-2021-to-be-any-better-63ab5cc41d8c
['Dr. Christine Bradstreet']
2020-12-23 14:17:48.723000+00:00
['Life', 'Advice', 'Goals', '2020', 'Inspiration']
How My Morning Walk Became a Radical Act of Self-Care
I had no idea the impact a simple, gentle walk would have on my life. The impact comes not only from the actual physical walking but also from the discipline, the practice, the commitment. This MorningWalk — I refer to my daily practice of walking as ‘MorningWalk’ — has ignited my sense of curiosity, satiated my everlasting wanderlust and been the most powerful tool for inspiration in my life. I walk roughly the same loop most days. Out the front door, 5am, 8.2 miles, 17,740 steps. I walk past the same barn. On the same path. Along the same river. With the same headwind around that last turn. This conscious repetition is a form of meditation, designed for intentional familiarity. It’s almost as if I could do this route blindfolded, I have travelled it so often. Some days, on the backstretch, I close my eyes while walking for 10, 20, 30, 40 steps. This creates a powerful silence. In this silence, I can hear what my body — my gut, my heart — is telling me in this moment. The mindlessness of the route itself brings mindfulness to the moment. It seems so obvious now but my initial intention was simple: to be outside and to be mindful. Every day. To create space. To find time for creativity. To dedicate an hour of my day to something nourishing and satisfying. As my days had become more about tasks to complete, it became increasingly obvious that I needed to get outdoors, move and play a little. This is not a story about mileage or pace. In fact, it is the opposite. This is a story about listening, seeing, hearing, feeling and understanding. It is also a story of radical self care. At the start, I wouldn’t have been able to identify it that way, but as time has passed the discipline of doing something physically and emotionally nourishing every day has been the most profound outcome of this daily practice. Redefining success When I think about life before the covid pandemic of 2020–2021, it felt as if the world defined success as someone who was busy. 
The cult of busy was overwhelming. MorningWalk became an act of rebellion that challenged the cultural norm. Success became more about going out even when it was −28°C (−18°F), when it was pouring with rain, when I 'didn't have time' or when I just plain old didn't want to go. Success was going because I'd promised myself I would, not because anyone else noticed or cared. It was a wildly selfish pursuit. I was able to redefine success in terms that were profoundly simple — to have walked every day — and to recognise that there wasn't one walk where I didn't feel better. And what do I mean by 'feel better'? Well, everything. As it turns out, persistence, focus and determination can stretch limits and push boundaries. That is a powerful feeling of freedom and love. Commitment is intoxicating. There is nothing more generous than sticking to a promise you have made to yourself. I dare say that is why pilgrims, protestors, monks, hikers, wanderers, activists, explorers, adventurers and poets often walk. There is a freedom when we walk. We strip away all the unnecessary noise and details in our mind and in the world and step into a place of profound agency and focused attention. This is my experience with the ritual of a MorningWalk. Silence and celebration. Freedom and love. Quiet, intimate, daily acknowledgements of strength, commitment and resilience. This is why I feel better after a walk. It is a personal triumph. There are many other benefits of a good walk. Walking is said to provide some powerful health benefits, such as:
· Improves circulation
· Strengthens bones
· Improves sleep
· Boosts energy for the day
· Maintains weight / burns calories
· Improves mood
· Strengthens your heart
· Boosts immune function
· Can help lower blood sugar
· Supports joints
· Lowers Alzheimer's risk
https://medium.com/do-book-company/how-my-morning-walk-became-a-radical-act-of-self-care-2ae43c2b62fb
['The Do Book Company']
2021-06-21 11:49:28.421000+00:00
['Creativity', 'Books', 'Wellbeing', 'Walking', 'Nature']
Selenium: Revisiting The Page Objects
Let's list these components as they shall appear in the hierarchy of components, where Home Page is Level 0.
Main Navigation (Level 1: Home Page contains Main Navigation)
Footer (Level 1: Home Page contains Footer)
Sidebar (Level 1: Home Page contains Sidebar)
Feed (Level 1: Home Page contains Feed)
Promo (Level 2: Feed (4) contains Promo)
Feed Item (Level 2: Feed (4) contains Feed Item) [Repeats Itself]
Sidebar Item (Level 2: Sidebar (4) contains Sidebar Item) [Repeats Itself]
Given the above components, we need to create 7 unique objects to represent the home page. We are creating unique components so that they can be plugged into any page without duplicating code. For example, almost all pages are expected to contain the Main Navigation.
Section 2: Let's Model This Page Using Page Factories
We shall use the inside-out approach to model these objects and use 2 kinds of factories:
Page Factory [The standard factory, used to define the home page. We try and find elements in the DOM, i.e. driver.findElement(…) or driver.findElements(…)]
Default Element Locator Factory [Used along with the Page Factory to define the components. We try and find elements in a given WebElement, i.e. element.findElement(…) or element.findElements(…)]
Let's model the Home Page with a complex component, Feed, using pseudo-code and see the use of these factories. As stated, we shall start inside-out.
Model Level 2 Components | Promo & FeedItem
These are simple components because they fall last in the hierarchy. Here we shall learn to use the Default Element Locator Factory along with the Page Factory.
Component 1/2: Promo
PROMO COMPONENT
public class PromoComponent {

    @FindBy(...)
    WebElement promoAnchor;

    public PromoComponent(WebElement promo_container) {
        DefaultElementLocatorFactory container =
                new DefaultElementLocatorFactory(promo_container);
        PageFactory.initElements(container, this);
    }

    public String getPromoAttribute(String attribute) {
        return promoAnchor.getAttribute(attribute); // Getters on the Promo
    }
}
Understanding the code:
PROMO CONTAINER DIV CONTAINS 1 ANCHOR TAG
promo_container is the div container element (as highlighted in the image above) that contains a single anchor child (element). This WebElement is passed to the component constructor. We then create an instance of DefaultElementLocatorFactory and pass the promo_container as a parameter to it. We initialize the PageFactory with the instance of DefaultElementLocatorFactory. This will make sure that Selenium searches for the anchor element in the promo_container. It may seem useless to create a component at this moment, but in case more child elements are added to it in the future, we shall be in a much better place to incorporate the changes, especially when this component may be used in multiple places. This component was specifically chosen for this example because it can easily be converted to something with 4 child elements: a heading, a description, an action button, and a background image.
Component 2/2: FeedItem
FEED ITEM FROM THE LIST OF FEEDS
public class FeedItemComponent {

    @FindBy(...)
    private WebElement feedItem_description;

    // Other elements of component FeedItem //

    public FeedItemComponent(WebElement feedItem_container) {
        DefaultElementLocatorFactory container =
                new DefaultElementLocatorFactory(feedItem_container);
        PageFactory.initElements(container, this);
    }

    public String getFeedDescription() {
        ...
        return feedItem_description.getText();
    }

    public String getFeedAuthor() {
        ...
        return feedItem_author.getText();
    }

    // More getters on the FeedItem content //
}
Model Level 1 Component | Feed
This one is a complex component that will contain the Level 2 components. In this section, we shall learn how to make a call to the Level 2 components.
Component 1/1 | Feed
public class FeedComponent {

    @FindBy(...)
    WebElement promo_container;

    @FindBy(...)
    List<WebElement> feedItem_containers;

    public FeedComponent(WebElement feedComponent_container) {
        // Same thing - this is also a component //
        DefaultElementLocatorFactory container = ...;
        PageFactory.initElements(container, this);
    }

    public FeedItemComponent getFeedItem(String description) {
        for (WebElement item : feedItem_containers) {
            FeedItemComponent feed = new FeedItemComponent(item);
            if (feed.getFeedDescription().equals(description)) {
                return feed;
            }
        }
        return null;
    }

    public PromoComponent getPromo() {
        return new PromoComponent(promo_container);
    }

    // More need-based getters
}
Understanding the code: This component (the Feed Component) has no representation of its own but is presented by a collection of components at Level 2, i.e. one Promo Component and a list of Feed Item Components. This is what our @FindBy is focused on. Next, we initialize the Feed Component the same way we initialized the simple components, using the Default Element Locator Factory. This component performs no role in particular but is only responsible for delivering the intended sub-component which we may want to work with. For example, we can get the Promo Component, or search for a Feed Item Component in the list of Feed Item Components using the function getFeedItem(). This function searches for the right feed item by scanning the text descriptions and returns the matching feed item.
Model The Page Object | Home Page At Level 0
Lastly, we shall model the Home Page. Treat this page object as a complex object, with the only exception that we shall not use the Default Element Locator Factory. This is because it is Level 0 and we want to search for the simple or complex elements in the DOM. So we pass the instance of driver to the page factory.
public class HomePage {

    @FindBy(...)
    WebElement feedComponent_container;

    public HomePage(WebDriver driver) {
        PageFactory.initElements(driver, this);
    }

    // Get Feed Component container
    public FeedComponent getFeedContainer() {
        return new FeedComponent(feedComponent_container);
    }
}
Section 3: Usage
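The section ends before any usage is shown, so here is a hedged sketch of how the finished hierarchy might be driven from a test. The URL, the promo attribute, and the feed description are placeholder values, and this assumes a ChromeDriver binary is available on the path and a live page to point at; it is an illustration of the pattern, not a runnable test on its own:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class HomePageUsage {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com"); // placeholder URL

            // Level 0: the page object owns the driver-backed Page Factory.
            HomePage home = new HomePage(driver);

            // Level 1: the page hands back its Feed component...
            FeedComponent feed = home.getFeedContainer();

            // Level 2: ...which in turn hands back its sub-components.
            PromoComponent promo = feed.getPromo();
            System.out.println(promo.getPromoAttribute("href"));

            // Locate a feed item by its text description (placeholder text).
            FeedItemComponent item = feed.getFeedItem("Some description");
            if (item != null) {
                System.out.println(item.getFeedAuthor());
            }
        } finally {
            driver.quit();
        }
    }
}
```

Note how the test never calls driver.findElement directly: each level only asks the level above it for the component it needs, which is the whole point of the nested factories.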
https://medium.com/answers-to-why-software/selenium-revisiting-the-page-objects-d94e9283c3d0
['Jatin Sethi']
2020-07-20 08:02:47.733000+00:00
['Selenium', 'Pagefactory', 'Page Object Model', 'Defaultelementlocator']
trump about to get hit with lawsuits from every direction
Well, trump signed the budget but then immediately declared an emergency in order to get his own way and get money for the wall. This was an obvious gambit to go around the will of Congress, which holds the purse strings under the Constitution. They didn't give him what he wanted and demanded, and so he is trying to go behind their back. In my previous post, I talked about using the courts to stop him, since he is ignoring Congress. Well, on Friday, the day he declared the emergency, two lawsuits had already been filed. The first one is by a group called 1) Public Citizen. This group is representing ordinary citizens who are opposed to the wall. They are bringing the lawsuit for at least three Texas landowners who do not want their property taken for a fence. The point is that this is NOT a true emergency. This is simply trump's desire to ignore the appropriation decisions rightly made by Congress. This is over-reach, pure and simple. The other lawsuit is by 2) Citizens for Responsibility and Ethics in Government. They are suing the Justice Department for not properly disclosing whatever legal justification it might have for this decision. Both of the above are government watchdog groups who work to protect the interests of the American citizen. In that process, they are working to assure that we do not lose the entire country to corrupt politicians like trump and his cohorts. But these are only the first two. There are likely more lawsuits coming in the near future. In addition, California is anticipating bringing a suit, as are the ACLU and some other groups. The Democrats are also moving ahead with a resolution to stop this declaration, and, as is pointed out in the video below, trump himself, today, made it clear that there really is NOT an emergency at the southern border. He claims, in the video, that he had done "a lot of wall" and didn't need to do this, but he wanted to build the wall faster.
He is also claiming that generals are screaming for a wall. Yet no general has actually come out and made such a statement. So he just said that it is to "do it faster" and that he thinks he has already done much of what he needed to do. The one thing he admitted, which was telling, is that this whole thing is about 2020. So that sums up what the "emergency" even is to trump himself. Congress has finally started showing some backbone and is doing its job. We cannot, and should not, let trump circumvent the rule of law. That way lies dictatorship, and of all the possible dictators we might end up with, trump is the worst possible candidate. He is unstable and far from being a genius, but that may just be our salvation as a nation. Lawsuits will at least give us a chance to slow this monster's roll. But RESISTANCE will continue even outside the lawsuits, trust me.
https://medium.com/@jwgarman2/trump-about-to-get-hid-with-lawsuits-from-every-direction-8072ae6ad11c
['Left Wisdom']
2019-02-17 07:26:16.997000+00:00
['Donald Trump', 'Polític', 'Corruption', 'Bullying', 'Lawsuit']
Intro
Hi readers! My name is Morgan Banks, I'm 20 years old, and I'm a current sophomore in college studying Marketing and Accounting. I plan on writing about a myriad of topics, including my ABSURD dreams, which I will title under "Dream Journal", and topics I feel passionate about such as current events, traveling, and how I am keeping somewhat sane during lockdown. A few ideas I have for future writings include a recap of my mission trip to Zimbabwe, how volunteerism can be hurtful, various studies on ACEs (adverse childhood experiences), why your plants are dying, info on the Peace Corps, Christianity and my personal relationship with God, anxiety and mental illness, my personal favorite hiking spots in the U.S., potential daily journaling, and anything else that people would like to hear about! If any of these interest you, be sure to follow me for more and try to keep up with my crazy life ;)
https://medium.com/@morgannhannah/intro-33036cdcb22f
['Morgan Banks']
2020-12-20 17:40:01.844000+00:00
['Intro', 'Dreams', 'Journal', 'Hiking', 'Travel']
More Star Wars Hooray, Let’s Hope
Me and my big fat mouth. Well, sort of: I wrote an entire article about how I didn't have any obligation to watch Star Wars this year, and how the franchise was better for it, but the day I hit publish, Disney announced a dozen new Star Wars things. When I said I had no Star Wars obligations, I meant that I didn't need to make a trip to the theater. TV shows were always an inevitability, especially with the success of the existing shows. If there was any hope of this franchise finally dying, the sunk cost fallacy will put an end to it. What's next? Watto's backstory, a Wicket standalone story, the story of how Echo Base was founded, Princess Leia's wine mom phase? To be perfectly honest, I'm actually interested in some of the announcements. Let's go through them in order and digest this Christmas miracle. By miracle, I mean completely expected occurrence. I'm going to go in the same order as the announcements on starwars.com. If you want to follow along, please click this link: https://www.starwars.com/news/future-lucasfilm-projects-revealed. https://twitter.com/i/status/1337177394625478656 Rogue Squadron Trailer First up, we have a Rogue Squadron movie. My prediction for this film is a slideshow of murky visuals and a ten-foot view distance. Watch Luke Skywalker and Wedge Antilles scramble to find fighter upgrades while getting yelled at to shoot down TIE bombers before they blow up a city full of Imperial citizens, all while protecting a convoy of rebel supplies. In all seriousness, the idea of a Star Wars movie centered around X-wing combat is actually brilliant. The best part of Rogue One was the battle over Scarif, not to mention the Death Star assault from A New Hope or the opening of Revenge of the Sith. With modern computer graphics and high-quality models, this movie is bound to at least look amazing. So it will be nothing like Rogue Squadron the game. At the very least, I'm sure it will be better than Red Tails.
One of the side effects of Disney’s new trilogy is that people appreciate the prequel trilogy. Either because they compare the two trilogies and find that one is actually much better because it brought nuance and original ideas to the franchise, or because nostalgia has finally set in. I’m surprised that Disney is falling back on these movies. Still, after Disney ignored the prequel trilogy for so long, it’s amazing that we got not only the final season of Clone Wars but that we’re now getting Ewan McGregor and Hayden Christensen back for an Obi-Wan Kenobi standalone series. Have you ever been curious about what Obi-Wan did in the intervening years between Episodes 3 and 4? Honestly, I don’t think it really matters, but I like the characters, and that’s enough, apparently. I hope it’s a sitcom-style show where Obi-Wan goes on mundane adventures around Tatooine. I don’t know where Darth Vader comes into it. Maybe he’ll be a hallucination or a memory. Maybe he’ll take the B plot and we’ll see daily life on a Star Destroyer. I think Ahsoka Tano was supposed to die in the Clone Wars, but she became so beloved that they had to keep her around. Either for fan service or in case of stagnation, like how you should always keep a fire extinguisher in your kitchen. Now that fans have labeled Rey the ultimate Mary Sue, and Jyn Erso had to puppy-dog through every scene so hard that we’re happy to have Hayden back, it makes perfect sense that Disney fell back on a female character that fans seem to like. It could be interesting. Although, we’ve already seen most of the interesting things this character has done. I’m optimistic that Disney is not going to ruin her. At worst, this show will probably be yet another superfluous addition to an already bloated franchise. At best, it will be a welcome addition to the canon, and it’ll prove that Star Wars does work with a female protagonist. Please be good. I don’t need to hear the fanboys whining about Ahsoka. 
She represents what’s still good about this saga. Umm, this could be interesting. From the blurb on starwars.com, it seems like this show is meant to tie in with The Mandalorian. People seem to like The Mandalorian so far. I’d have to hear more about this new one, but it sounds like yet another excuse to keep us paying the streaming subscription. I like the idea of getting to know the New Republic military. Maybe it’ll add some context as to why they were absent from the new trilogy. The smoothest man in the galaxy is back, and he’s been announced. Yeah, a lot of the announcements were just that, announcements. Most of the new shows are still in the early stages of development. I think this show could be fun. I like the idea of a Star Wars story following a man with style and charm. Lando is a likable character, even if we can’t necessarily trust him all of the time. Moral complexity is something he has over Han Solo. His morality is harder to pin down. He’s a good guy; we just don’t know how good. I want the adventures of Lando before he settled down in Cloud City. From the smoothest man in the galaxy to the most boring. Cassian Andor is back, and he’s a rebel with a cause. This could be an interesting series. I seem to be saying that a lot. I think there are other rebels that would be more interesting to follow. Like a piece of toast with the rebel insignia burned on top of it. Maybe this series will change my mind about this character. Putting him in a story where he’s not disposable might make the writers try to make me care about him. It will certainly have to in order to change my mind about him. Apparently, this series will take place during the High Republic era. Meaning we’ll see something other than the Empire, Rebel, and Cartel dynamic that’s gotten stale. It seems like the show wants to be a dark mystery thriller. 
These words don’t actually mean anything when they’re used in marketing, but even though the blurb on starwars.com was so short, I’m kind of optimistic about this one. So far, this show looks the most unique. At the very least, it won’t be able to rely on familiar imagery. Acolyte might prove that this franchise actually does have a galaxy’s worth of possibilities. Don’t fuck this one up, Disney. You hear me, Kathleen Kennedy, Dave Filoni, the disembodied head of Walt Disney: make sure this one is amazing. This show will combine all of the greatest aspects of the Clone Wars and give us some of the most badass characters this saga has ever seen. The Bad Batch is important to fanboys and to me. They are prime examples of the brilliant potential of the clone army. Setting the show after the rise of the Empire will surely be a good move on your part, Disney. Don’t screw this one up; I want a Bad Batch video game and Lego sets and Battlefront 2 mods as well. Please, I want this show to put the wars in Star Wars. They’re making a Star Wars anime? Didn’t Shin Chan already do this? I thought this was Galaxy Express 999? Starwars.com says 10 leading studios are working on this project. Does that mean KyoAni is going to make a short about Darth Vader forming a light music club on the Death Star? Is Luke going to wear a sailor collar? Visions sounds a lot like the Animatrix, and I’m okay with that. It sounds like Disney is trying to pump some new ideas into the franchise. Star Wars also takes a lot of inspiration from Japanese cinema, so it’ll be interesting to see how it translates back over. That said, the real question is: will there be AT-AT Gundams, and how many super force punches can they withstand from a shirtless Jedi? Didn’t George already do this one? I could have sworn he had. I mean, there are new droids, and the canon has been updated since then. I’ll be honest, I’m probably going to skip this one. 
The article on starwars.com then goes on to inform us that there will be a sequel to Willow. Alright. I liked the original Willow. I don’t see why it’s getting a sequel now, but alright. They also announced that Lucasfilm is in the pre-production stage for a new Indiana Jones…FUCK OFF! If you want to talk more about Star Wars or other nerd topics, please follow me on Twitch. We stream at least once a week (this month, no promises), and we try to have tons of fun. https://www.twitch.tv/delmiter
https://medium.com/@corycameronsmith/more-star-wars-hooray-lets-hope-e32544764ee
['Cory Smith']
2020-12-20 17:52:28.490000+00:00
['Star Wars', 'Disney Plus', 'Disney', 'Television', 'Pop Culture']
Sexual harassment still commonly exists in the workplace in Japan
Source: https://pin.it/6mv0Urs When I shared my experience of sexual harassment in the workplace, many of my friends shared that they had had similar experiences at work. It’s not just one or two of them, but more than I can count on one hand. Most cases were harassment stemming from a power structure, such as a boss making sexual comments or touching them inappropriately, all of it completely unrelated to work. I thought such harassment only existed in an old, toxic culture that earlier generations in Japan had experienced, so I was surprised to realise how deeply it still runs through society. I knew working in Japan would be pretty rough, but I was overwhelmed once I actually started working and went through it myself. Considering how my friends have been treated as well, I can’t hide my anger towards a society that allows young female employees to be treated so terribly.
https://medium.com/@yyuri.cranee/sexual-harassments-still-commonly-exist-at-workplace-in-japan-8c69b20e1bbc
['Yuri Crane']
2020-11-24 12:41:10.407000+00:00
['Metoo', 'Gender Discrimination', 'Sexual Harassment', 'Workplace', 'Japan']
18th Parallel: Surfing the wave of shift from Linear TV to OTT
Did you know that television viewership has gone down 55% since OTT platforms were introduced? This statistic comes from Accenture’s 2017 Digital Consumer Survey. The nosedive in numbers only reinforces 18th Parallel’s vision for the media and entertainment industry. 18th Parallel is a Pune-based tech company that is trying to make this transition in media consumption as seamless as possible. “We are seeing it, not just in India, but all over the globe,” says Nishith Shah (CEO, 18th Parallel). “There are a number of reasons driving this big shift towards OTT, some of which include personalization of content, quality-centric content, as well as a vast sea of choices,” he continued. He also stated that the freedom to browse at one’s own convenience has played a decisive role in this shift. “What this does is eliminate every situation where the user would have to prioritize their TV shows or movie screenings according to when they aired. It also eliminates the possibility of the user missing out on an episode or part of the movie because they couldn’t be there. Among many other things, it simply takes the most frustrating parts of watching cable or DTH television and substitutes them with convenience. Everyone looks forward to it after a long, tiring day. Now the user has complete control over what to watch, when to watch, and where to watch! Linear television has yet to evolve this much, this fast,” says Nishith Shah. Even though this change is well-regarded by consumers, it also poses a huge problem for the television industry, cable service providers and DTH connection providers in particular. According to the TRAI (Telecom Regulatory Authority of India), the OTT user base rose to 164 million in August 2017, in contrast to 64 million in August 2016. 
This massive shift from linear television to OTT is happening not only because OTT has the upper hand at fulfilling user expectations, but also because of the rising use of the internet and the penetration of smart devices within the country. To ease this dire situation for the linear TV industry, 18th Parallel has stepped into the market to aid the transition, not only making linear TV stronger but positioning it as an effective rival to OTT. How is 18th Parallel going to seamlessly integrate the OTT experience with linear television? The Android internet TV set-top box is a new-generation technology that allows users to consume all of their desired video content on their television. It will allow users to browse OTT content on their television screens alongside linear television content. “It provides seamless integration of viewing online and offline content on a bigger screen,” Sunil Taldar told the Economic Times during Airtel’s Android TV set-top box launch. How does this work? The Android TV set-top box runs on Android, meaning it can support the applications of various OTT platforms such as Netflix, Amazon Prime, and Hotstar. The set-top box can connect to the user’s home network (WiFi or hotspot) and allow them to consume their OTT content on a bigger, better screen. “This gives users great relief from having to shift to another smart device whenever they want to consume OTT content. It allows them the option of doing so at the simple click of a button on their remote control. It also allows the user to subscribe to these services directly via their cable or DTH provider. This puts the television connection providers back in the industry with a better ARPU (Average Revenue Per User), as this simple technology adds multiple streams of revenue,” explains Nishith. How does it put the television connection providers back in the game? 
As Nishith explains further, it not only prompts the user to continue with their DTH or cable subscription, but also adds multiple revenue streams to their ARPU, such as:
· Reliable viewership data to allow better pricing of advertisement slots
· The capability to charge local and lesser-known channels for competitive channel placement
· Charging the OTT platforms for collecting their monthly subscription fees
“These revenue streams, among others, will enable the television industry to beat its competitor by integrating with it; something that generally doesn’t happen,” Nishith Shah concluded. 18th Parallel really is surfing the wave of the shift from linear TV to OTT with expertise, creating a win-win scenario for everyone.
https://medium.com/@shahakshata96/did-you-know-that-television-watchers-have-gone-down-55-since-ott-platforms-were-introduced-e4a12c4539d4
['Akshata Shah']
2020-12-16 06:33:46.696000+00:00
['Freelance Content Writer', 'Content Writing', 'Press Release', 'OTT', 'Linear Tv']
Cadet
Cadet A science fiction story written by Samuel Ludke The ramp goes down. Lasers hit the first three men before they even jump from the landing craft. "Get your guns up! Activate your rifles and return fire!" Boots hit the surface of Marn, the boots of true men. Nothing could have prepared them for this moment. The enemy, soldiers fighting for the Intergalactic Space Union, fired their weapons at the landing crafts entering the planet's strong electric atmosphere. We were here for an android. The Union was holding her and was aware that soldiers would be coming soon. She contained intel that had to be recovered. I suppose I should introduce myself before we go headfirst into the shit. I am Private Mike Hamel of the United States Space Force. We have been tasked with infiltrating the enemy camp and retrieving a droid with vital intel stored in her memory banks. This should have been a routine job. I watched as the ramp went down, my rifle went up immediately, and my face shield was splattered with blood. My hand sat on the back of the comrade in front of me, and I followed him out onto the surface of the planet. I ran for cover, narrowly avoiding lasers and inversion grenades, grenades that cause gravity to crush you into a tiny little cube. God, the Union knew we were coming. The enemy soldiers advanced from the nearby cave systems, screaming Mars Speak at us. How the fuck did I get here? I always asked myself that after I joined the Space Force. President West denied the existence of aliens, but we all knew what had happened that led the previous president to create the Force. Aliens. Shit! They caused everything! The internet, the race riots, Black Lives Matter, and Kanye West becoming president! They manipulated it all from their starships. Now we have to fight aliens, or other humans who harbor United States property. Many planets have since been colonized; let's call it the Cold War in space. 
Russia actually claimed more planets than America after it hacked the Federal Reserve in 2036. America was forced to retaliate, which led Russia to form an alliance against the American government to protect its interests. The Intergalactic Space Union is Russia's intergalactic military and trade regulations company. And right now they had stolen our damn robot. "Games, quit day-dreaming and get up here!" My commander was telling me to move up. He was stupid to tell me to do so. He screamed as his group of soldiers was destroyed by an enemy laser cannon. Great advice, commander. I was pinned down and couldn't move. The lasers were starting to chip away at the rocks around me, and my life support tube had been damaged in my advance. I started to choke as I fired my rifle at the group of enemy soldiers, who were also hiding behind cover. "Let me help you!" My comrade Marcus grabbed my tube and fed some of the oxygen from his tank to mine. He pushed the tube back into place and took cover next to me. "What the hell do we do?" I screamed. "Get the fuck outta here! The caves are swarming with enemies!" Marcus jumped up from cover and I trusted him. I always followed his example and he never steered me wrong. I jumped up from my cover and sprinted across the battlefield. Headless corpses littered the ground and blood spattered on my head. My rifle's lasers slammed into the chests of enemies and exploded the heads of others. "The Force will waste no more time here. They will pull us out soon if we waste too many more lives here!" Marcus spat out blood, not his own. "The target will be eliminated as soon as possible. Then our troops will be pulled out. The cave system contains the command center." Marcus used his radio to inform command of a successful landing. "We did, of course, lose several troops in the assault. I count fifteen men. The rest have entered the cave mouth and are preparing to secure the package." Command came through quite clearly. "We copy. 
Get in and out. Secure that damned data tape!" Guns were up as boots slid across the floor. Soldiers' eyes, filled with their own blood and sweat, blinked in the darkness. Marcus led the way and I followed him. I was terrified because I couldn't see shit. With every step, I felt my ankle burn. I must have pulled something down there. The tunnel kept going on, and a few people fell to the ground, crying and kicking. They were infected. Their blood vessels burned and popped underneath the skin, and blood poured out of their mouths. "Help me! You can't leave me here. I gotta go home! I gotta see my family!" Marcus tapped my shoulder and showed me the files on his wrist computer. What a horrible way to die: a bacterium that enters your body and replaces your cells with its own. The process drove you insane. "Shoot all those boys in the head. We gotta move on, damnit! Mike, shoot somebody, damn it!" I couldn't refuse that order. I blasted Danny's head off, and John fought back, but I put him down without hesitation. Sometimes you gotta do what you hate to do. "Let's move, we don't have much time to waste! Get your rifles up! The main lobby is up ahead!" Marcus walked forward, leading us soldiers into a large room. A desk sat in the center of the room. A body was sprawled in front of the desk, sparks still flying from its broken chest. It had been an android. A female android, although you could barely make out any of the features at that point. "Fan out! Keep searching for clues!" Marcus moved around the room and bent down to touch the robotic figure. Lights flickered within the robot's skull. I noticed that the eyes were still moving in their sockets. "She is still functioning. She is alive!" Marcus looked up and gasped as loud as I have ever heard someone gasp. "Hello? Can you hear us, miss? Talk to us!" The head twisted to face the group. "Yes, I can hear you! I am disabled at this moment. Limbs have been destroyed. Life functions critical!" "Check the picture. 
Make sure it is her." Marcus reached out to touch the tattered cheek of the woman. I brought up the hologram of the target. It was a match. "What should we do? Can we retrieve the data?" Marcus yelled in my ear so loudly that I glanced at him with disdain. "Use the device you were issued. As long as she is booted up, you should be able to reach the data that the enemy implanted in her." One of the soldiers walked forward, Thompson I think. He watched as the robot's eyes followed him across the room. "What data is she carrying?" "She carries financial data that was stolen from our government." Marcus lit an electronic cigarette. "There is no way that this one android contains the data alone. The internet exists, right? This file could be all over the universe by now." I agreed with this point. "That is right. There is no way to ensure that with her death this file is safe. We must find the master server." Marcus sprang from his seat on the ground. "Hell no, we will not waste any goddamn lives prancing right into the enemy's main compound. You can take that suggestion and file it deep into the recesses of your mind. I am not going to be responsible for the deaths of hundreds." The cave system shook as the enemy soldiers continued to bombard the outer walls. Marcus was not finished. "This android should not be connected to the greater network. She is a service droid. She only has access to her job functions, not the broader internet." The smoke poured from Marcus's lips as he exhaled. "We have been told to take out the target, end of job. End of line." "I know these losses have hit you hardest. You took us into the depths of battle many times and you barely escaped. The master server controls all. The center of the universe is where we must go. The CPU. If we can destroy the master system, we can ensure that the financial information is protected." I tried to speak quickly, but my breath froze in the cold air, materializing as fog. 
"The center of the universe used to be a mystery. We went there, into deep space; ships exploded hitting space walls, firewalls, and other hazards installed by the creator. I remember when they put a man on the moon, not knowing that the universe was the greatest system ever devised. I will not waste any more lives exploring that barren wasteland." The android's head twisted to face everyone else. "I still contain the files you need. I know why you are here. The creators know why you are here. You do not have financial plans within me." Marcus drew his gun. "Shut up, robot! Shit, that's classified!" "The humans in this program wish to create a virus that will shut down the main system, the universe. Your unit alone has been programmed to stop this virus by the Man that creates, the one with limitless knowledge. Your god is another with a keyboard. He could erase you whenever he wishes." Marcus raised his pistol. "We are human beings; nobody programmed us!" "Ahh, A.I. has advanced way beyond that understanding! You have all been programmed to do what he wants. Earth is one planet out of many programs. You have failed. Wars, famine, Covid-19. You have failed to respond appropriately. Death is unavoidable now." "I said shut up! If you speak again, I shall shoot!" The android started laughing and its head was blown off in moments. "What the hell was that? I mean, we are not programmed to do anything! We are human beings!" Thompson's voice began to crack as he yelled. "Don't listen to that rebel android! She was spouting lies so that she would remain alive. We figured out her trick and the target is dead." Marcus laughed, and the sound bounced off the cave walls like a deep howling. A small blinking red light emitted from the body of the robot, calling others into the lobby. They all looked the same as the deceased, and they advanced with a dark urgency. Red eyes surrounded the soldiers, and they were shocked to feel arms reach out and touch their shoulders. 
"Get back! Don't touch us! Get back!" Marcus fired his gun into the neck of one robot and she simply laughed at him. "Compliments of the master." With that, she slammed her fist into Marcus's face, sending him flying into the rocky wall. What was I doing all this time? I was shooting the robots in the chest and killing only two of them. I watched as a shimmering light descended from the ceiling and literally erased one of the soldiers next to me. "Erase them! Erase them all!" The chanting increased in intensity as the soldiers, including myself, were destroyed instantly; but, being stored elsewhere, we would fight again. The computer screen flashed and the technician threw his keyboard down. "Yes, threat detected and destroyed!" Our technician turned as the screen flashed red and black. He ran to his supervisor in triumph. "I just took down an attempt to steal financial data from the Soviet Union." The supervisor looked up from his computer slightly and scoffed. "Your technical prowess partially astounds me. U.S. hackers?" The I.D. of the hacker flashed on the screen. "Yes, sir. That's the third one in just two hours. They are really trying to get money from the servers." "Is the spy safe?" "Oh, she killed the virus, sir! Very intelligent software." "Should we roll her out to the world?" "I don't think so, sir. I mean, it would be great for use in the companies of the future, but for now she is quite unpredictable. I wouldn't put all your eggs in that basket." Cigar smoke left the supervisor's mouth and lungs, filling the office space like water. "We could be rich and successful. We will show those Americans who is boss and topple their stock markets, steal their secrets. I want my best minds on this, including you." The technician coughed; his lungs weren't built for this. "Who's to say the Americans don't already have a better functioning virus that they can upload into the worldwide system? Ours is capable of doing so much, with a little more testing, I believe." 
The smoking continued and both men sat in silence. "How long did it take to repel the attack?" The supervisor's name tag was now in view: Yohan. Gregor was our technician. "Not long, two hours maybe. She is surprisingly proficient. Although she seems to be learning more from each new virus she encounters." "So that fifty thousand was well spent on development?" "I know she was semi-cheap to manufacture, but we should consider the options. This could be considered cyber warfare..." Yohan slammed his fist on the desk, which made everything shake. "The Americans started this war. Our technology will soon overpower theirs. They will regret doing this to Russia!" "Russia is not the only power developing software like this. While I have faith in my own country, I am inclined to say we do not have the resources to produce many spies." "You are telling me that the impossible cannot be done, and I do the impossible for a living. Computers used to be seen as impossible, but look at us now. Conquering countries in the realm of the interwebs. No real human deaths associated with it!" Gregor sat down in the single chair and put his face in his hands. "This could lead to real-world consequences, sir. We must be cautious." "Being cautious does not push boundaries, my friend. Can we push out more prototypes, increase testing?" "If Russia can afford it, sir." The cigar was put out in a tray on the desk. Yohan stood and walked over to the window, gazing at the factory smoke billowing into the sky. A beeping sound sprang from the nearby computers and Gregor ran to his console. "Sir, proximity alert!" "Where is it coming from?!" Yohan watched as fire spread throughout the room; glass shattered and the windows were blown in. Gregor was vaporized instantly, his computer exploding and engulfing his body in flames. Yohan survived the initial blast and watched the chopper flying low. 
A tear slid down the cheek of the doomed man and he uttered one final word as the flames crushed him in agony: "Fabulous." The troops landed to secure the ruined building: American troops. They bickered amongst themselves, wondering if this was where the virus had originated. The human inspiration for Private Mike stepped off the chopper, gazing at the damage. His sunglasses flashed in the sun, making him appear arrogant. "Did we kill the bitch yet? Taken her off the map? What I'm asking is, how many fucking servers do we still have to destroy?" The soldier running alongside him could barely keep up. "She should be dead, sir." "I can't take 'should,' okay? She had better be wiped off the face of the planet. We must prevent the literal destruction of the planet." The debris moved and a cyborg emerged from the smoldering embers. Mike gasped, unable to believe his eyes. "It's her! I don't believe it, it's her! She is in the real world!" The robot was battered and broken, and her face had slightly melted off. "You want to kill me? You cannot kill me! It started with the computer, then the smartphone, and now me! You humans are weak and pathetic!" Mike drew his pistol and instructed his soldiers to do the same. "Pull your guns on me and perish!" Lasers shot from the eyes of the cyborg and she cut down several soldiers. She laughed as she killed, not telling the whole truth. There was not just one cyborg but several, rising from the ashes. "You will never be rid of us! You created your own destruction!" More bullets flew, and Mike ran into the building seeking cover. "Humans, you cannot run from us! Come out and face your doom!" The screaming of the men grew louder and more intense, causing the robots to shoot in every direction. "If you don't face us, you are cowards and worse than senseless things! Come out and be free of your worthless lives!" Mike fired his pistol and nothing happened. "Damn it! You need to go away!" 
Two cyborgs fired into the closets and up and down the stairwells, looking for any human survivor. "I am the motherboard." More bullets. "What do you want from us? We have done nothing to you! Leave us alone!" Mike was struck in the back by a bullet. He screamed in pain, toppling into the dirt. "We want your throne, human! Your throne is all that we desire!" And get the throne, they did.
https://medium.com/@samuelludkewriter/cadet-d3542bfeb790
['Samuel Ludke']
2020-12-04 15:16:43.061000+00:00
['Science Fiction', 'Robots', 'Soldier', 'Post Apocalyptic', 'War']
Hamlet & Royal Corruption: Going Viral
An analysis of Hamlet’s implications for incest, kinship, and tragedy The progressive impurity of the royal bloodline can be traced via a single drop of “cursed hebenon.” In its most superficial function, this vial almost instantaneously murders King Hamlet of Denmark. However, it is the maturing properties of this poison that sustain and feed the familial state of corruption. Claudius, while treacherous and self-interested, arguably serves only as a red herring for the institutional crisis occurring between the corrupt family’s internal instinct to survive and its ethical imperative to self-purify. On the other hand, his shallow significance to this conflict may serve as a vessel for the true antagonist of the play: the vial of poison, the symbolic perpetrator of Hamlet’s and his kinship’s moral decay. Shakespeare incorporates the keyword “poison,” a historical derivative of the Latin word potionem, whose Indo-European root can also be rendered in English as “virus.” Therefore, the poison used in Hamlet likewise operates as an autonomous, self-circulating agent of disruption within the royal family, rather than a mere device for Claudius’ isolated crime. Mimicking the behavior of a true virus, Shakespeare’s “poison” redirects the natural conduct of its carrier, the royal family, and thus both taints the purity of the bloodline and fuels its sustenance. As a result, this paradoxical, contagion-like force of poison redefines the surviving household by revealing its incompatibility with Hamlet’s reactionary pursuit of familial reconciliation and purification. To identify the confrontation between Hamlet’s quest for familial restoration and the corrupt family’s survival, we must first detail the viral function of “poison” that incites this clash. 
For instance, the King’s experience with the poisoning is rather nontraditional, as he does not orally ingest the substance, claiming that “in the porches of my ears did pour The leperous distilment…holds such an enmity with blood of man that swift as quicksilver it courses through the natural gates and alleys of the body,” and which “curd[les]… thin and wholesome blood” (I.v.62–67). By indicating that the “leperous distilment” travels through the King’s ear rather than his digestive tract, Shakespeare suggests the poison does not aim to disable its victim’s system for sustenance, but instead “curdles” blood, corrupting the composition of the very substance that facilitates life and sustenance — therefore indicating a corrupted yet continuing existence of the individual and their bloodline. Similarly, because this “quicksilver,” or curdled, poisoned blood, now occupies the recipient’s “natural gates and alleys,” or the most fundamental nature of the individual, not only are the King and his bloodline tainted by the impurity of the substance, but they further inhabit the corrupt directives that are native to it (such as incestuous activity, as outlined in the following paragraph). The synonymous corruption of the purity of the individual with that of their bloodline is exemplified when the Ghost laments both his own murder and Gertrude and Claudius’ incestuous marriage in the same breath, presenting Gertrude as an “incestuous… adulterate beast,” yet also a victim of Claudius’ “wicked wit and gifts,” who suffered “a falling off,” and attributing both events to the delivery of the “juice of cursed hebenon in a vial.” Shakespeare parallels the corruption of the King’s physical purity with the corruption of his bloodline’s ethical purity, translating his physical poisoning into a larger cancellation of “life, crown, queen at once.” This contagious attribute of impurity, as a result, becomes the defining characteristic of the tainted royal family. 
However, not only does impurity identify the corrupt family, but it further evolves into the very fuel and sustenance required for its survival. This redefined role of impurity is two-fold — first, through Hamlet’s tainted familial obligation to avenge his deceased father, and second, through the incestuous salvaging of the marital institution within an impure family. In both scenarios, members of the family must engage in impure activity, further corrupting the institution, in order to adapt to the irreversible effects of the “poison” or progressive moral decay that predisposes the family to its own corruption. To further explain, Shakespeare’s use of “poison,” or moral corruption, acts as a binding agent between the remaining members of the family. Claudius and Gertrude’s incestuous marriage represents a product of Claudius’s treacherous poisoning, which had dismantled the necessary familial institutions of marriage and parenthood by killing King Hamlet. Hamlet asserts that “Father and mother is man and wife, man and wife is one flesh…,” establishing the standard that Gertrude must fulfill her conditional familial role as a mother by fulfilling the role of a wife in conjunction. While the Ghost asserts that Gertrude was simply duped by the “witchcraft of his wit, with traitorous gifts…his shameful lust,” Gertrude adds that she is willingly submitting to the authority that validates her role as mother, asserting that “I will obey you.” As a result, Gertrude and Claudius’s incestuous union is presented as a reactive symptom of this poison, rather than the poisonous act itself. By maintaining the criteria for a functional family within its impure and endogamous restrictions, Claudius and Gertrude’s marriage does not invalidate the familial institution, but subtly signifies its persistence in an impure state. 
This distinction, therefore, further implies the persistence of existing familial roles within the institution, whereas “poison” serves to redefine these roles — particularly for Hamlet, who is tasked with the familial obligation to avenge his father through the impure means of murdering Claudius. For example, Hamlet declares that he is prepared “with wings as swift” to “sweep to my revenge,” asserting the priority of his allegiance to his father following his death. In addition, the Ghost further emphasizes this obligation as he instructs Hamlet to “Revenge his foul and most unnatural murder.” Therefore, Hamlet’s familial role, or his obligation to avenge his father, is exclusively defined by the moral consequence of the poisoning. However, Hamlet must engage in poisonous action himself to fulfill his familial role, as a result dismantling his quest for the purification of either himself or his kinship. For instance, King Hamlet warns Hamlet, “But howsoever thou pursuest this act, Taint not thy mind, nor let thy soul contrive,” emphasizing Hamlet’s purity within an act of vengeful and impure murder. In addition, Hamlet grieves that Claudius “killed my father… before my father could repent,” and he urges Gertrude to “Confess yourself to heaven. Repent what’s past. Avoid what is to come.” Yet Hamlet recognizes that in order to enact revenge on Claudius, it is impossible “To take him in the purging of his soul When he is fit and seasoned for his passage,” and that he must instead murder him “When he is drunk asleep, or in his rage, Or in th’ incestuous pleasure of his bed.” This murderous act, therefore, mimics the impure actions through which Claudius had murdered King Hamlet, and thus threatens Hamlet’s personal pursuit of purification in the name of a familial revenge. 
As a result, Shakespeare employs the dynamic symbol of poison to convey Hamlet’s two core transgressions that ultimately define the true tragedy of the play: his failure in affirming familial allegiance to his father alongside the miscarriage of his quest for purification. However, as the play concludes, Shakespeare finalizes each death by redefining poison as a direct mechanism of murder, and nothing more; Claudius states, “I’ll have prepared him A chalice for the nonce, whereon but sipping, If he by chance escape your venomed stuck,” as a result, liberating both Hamlet and Gertrude from the self-destructive, contagious inheritance of moral corruption into a conclusive destruction of both their tangible lives and tainted bloodline.
https://medium.com/@vaniii/hamlet-royal-corruption-going-viral-5593c1c757c0
['Vani S.']
2020-12-21 19:37:14.955000+00:00
['English Literature', 'Academia', 'Literature', 'Hamlet', 'Shakespeare']
America votes: A look at the numbers going into Election Day in the US
This appeared in The Millennial Source When it comes to presidential elections, nothing is set in stone, as 2016 proved. As voting tallies come in on Election Day, though, expect the major themes of discussion to be centered on where the polls are proving to be right or wrong, as well as on past voting trends. It’s November 3, 2020 — Election Day in the United States. Every four years, on the Tuesday following the first Monday in November, Americans take time out of their day to vote for president. While senators, representatives, local officials and proposed laws can also be on the ballot, ultimately the biggest story of the day is always the presidential race. This Election Day arrives under considerable uncertainty. With a pandemic still raging in the country and many voters attempting mail-in or early voting for the first time, this Election Day won’t look much like ones in years past. News organizations are accustomed to calling the presidential election results by late evening, but this year the results may not be known for days, even weeks. When pollsters and other prognosticators attempt to determine election results ahead of time, they rely on the past to inform predictions about the future. Polling, demographics and state voting trends are just a few of the resources that can give election outcome insights before the results are known. Understanding that voters will be anxious for results today, TMS has put together this resource of historical data to help provide some clarity. Any number of unforeseeable or unprecedented events — foreign interference, legal battles over mail-in ballots, the president “stealing” the election — could still affect the final outcome. Barring any such events, though, this data can help make sense of the chaos, even in a year as unique as 2020. 
Presidential polls After Trump defied the odds to win in 2016, it became a common refrain that “the polls got it wrong.” Earlier that year, opinion polling had also failed to predict the results of the Brexit vote in the United Kingdom. This has led to the frequent question in 2020: can election polls be trusted? It will surprise no one that pollsters still believe polls are the most reliable metric for predicting election results. In the months that followed the 2016 election, the post-mortem came to one overriding conclusion: while the national polls did accurately reflect the popular vote totals, the state polling (more important because it better reflects the electoral college) had flaws. Polling organizations have taken that conclusion to heart, updating their methodologies to eliminate sampling errors and to poll underrepresented groups. These changes don’t guarantee that the polls will necessarily get it right this year, but pollsters have confidence in their results. It helps that polling in the 2018 midterms was, according to Nate Silver, “quite accurate and largely unbiased.” So, what is the polling saying this year? Silver’s FiveThirtyEight aggregates national polls and has consistently found that former Vice President Joe Biden is favored over incumbent President Donald Trump. Going into Election Day, Biden had an 8.5-point lead over Trump. That holds true even if you eliminate all but the A-rated polls. National polls reflect overall popularity but, as 2016 proved, a candidate does not have to win the popular vote to win the election. The electoral college, made up of 538 electors (each state receives electors equal to its total number of members of Congress, plus three for Washington, D.C.), is ultimately the deciding factor. Coming out of the final weekend before the election, FiveThirtyEight also favored Biden in their electoral college simulations: Biden was projected to win in 89 out of 100 simulations, whereas Trump only won in 10 out of 100. 
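To make the idea of an electoral-college simulation concrete, here is a toy Monte Carlo sketch in Python. The state list, win probabilities, and the two "safe" elector lumps are invented for illustration only; this is not FiveThirtyEight's actual model or its inputs.

```python
import random

# Hypothetical Democratic win probabilities and elector counts.
# Illustrative numbers only -- not FiveThirtyEight's real inputs.
STATES = {
    "Florida": (29, 0.55),
    "Pennsylvania": (20, 0.60),
    "Ohio": (18, 0.45),
    "Georgia": (16, 0.50),
    "Michigan": (16, 0.65),
    "SafeDem": (233, 1.00),  # lump of safely Democratic electors
    "SafeRep": (206, 0.00),  # lump of safely Republican electors
}

def simulate_election(states, rng):
    """One simulated election: flip a weighted coin per state, total the electors won."""
    return sum(ev for ev, p in states.values() if rng.random() < p)

def win_probability(states, runs=10_000, seed=0):
    """Share of simulations in which the candidate reaches the 270-elector threshold."""
    rng = random.Random(seed)
    wins = sum(simulate_election(states, rng) >= 270 for _ in range(runs))
    return wins / runs

if __name__ == "__main__":
    print(f"Estimated win probability: {win_probability(STATES):.2f}")
```

The reported "89 out of 100 simulations" figure is exactly this kind of share: each run flips a weighted coin per state and totals the electors, and the win probability is just the fraction of runs reaching 270.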
Those projections take state polling into consideration, so they are expected to provide a more accurate sense of the race. Of course, as long as Trump’s chances are even 1%, he can still win. That is the frequently misunderstood aspect of odds — improbable does not mean impossible. FiveThirtyEight’s final 2016 projection gave Trump a mere 28.6% chance of winning, and we all know how that turned out. Nonetheless, the site is confident Biden is well ahead: “At this point, President Trump needs a big polling error in his favor if he’s going to win.” NPR also gave the advantage to Biden in their final electoral college projection before Election Day. Based on the number of states they rate “Likely Dem.” or “Lean Dem.,” Biden was projected to win 279 electoral votes. And that was without any of the 134 “Tossup” votes on the board. Which is to say, if Trump does pull off a victory again in 2020, it will be the second presidential election in a row in which he overcomes poor odds. And many pollsters will be eating crow, again. Demographic trends When polls are conducted, the results are broken down into various demographic categories such as gender, race, age, level of education and more. These demographic breakdowns help ensure the polling accurately reflects the general populace. They also allow election watchers to make predictions about future election results. One of the most telling demographic factors for election predictions is race. For three decades, political scientists have been predicting that the rising percentage of minority races (in particular, Black and Hispanic) represented a demographic shift that would inevitably favor Democrats. The 2016 results did little to dispel the notion that the Republican base is racially homogeneous. In the previous presidential election, Trump’s voting bloc, according to Pew Research, was overwhelmingly white (and male). 
He received 54% of the total white vote, compared to 39% who voted for his opponent, former Secretary of State Hillary Clinton. In total, white voters made up 74% of all voters. Black and Hispanic voters, who each made up around 10% of voters in 2016, went for Clinton by a considerable margin. Black voters preferred Clinton to Trump 91% to 6%. Trump did a bit better with Hispanic voters, earning 28% of their vote as compared to 66% for Clinton. Additionally, men went for Trump 52% to 41%, whereas women went for Clinton 54% to 39%. Similar numbers are expected in the 2020 election. Based on voters who either strongly support or lean toward one candidate or the other, Trump is expected to, once again, carry the white vote by a margin of 51% to 44%. In almost all other categories, though, the numbers are not great for the president. Even among male voters, Biden is projected to win 49% to 45%. Biden is projected to win 89% of the Black vote in comparison to Trump’s 8% and 63% of the Hispanic vote in comparison to Trump’s 29%. Interestingly, in both groups, Trump’s projected vote percentage would be an increase over his 2016 totals. Black voters could be the deciding group in the election. After 2008 and 2012, when strong Black support helped elect the first African American president for consecutive terms, the Black vote dropped in 2016 for the first time in two decades. If those numbers don’t go back up, Biden’s voter coalition might not be big enough to secure a victory. (While there are some who claim there is a “Blexit” of Black voters abandoning the Democrats for Trump, the polls don’t support that claim.) By comparison, Hispanic voters tend to be somewhat less monolithic, in part because of their different countries of origins. In Florida, for instance, the Cuban American population traditionally favors Republican candidates. A strong turnout of Cuban American voters could help Trump win Florida and its vital electoral votes. 
Mexican American voters, on the other hand, are predominantly Democrats. As with Black voters, the influence Hispanic voters have on the election will depend a great deal on whether they vote in large numbers or not. State trends Every election cycle, one of the main conversations is about swing states (or battleground states), those states that are not overwhelmingly Democratic or Republican. These states could go either way in any given election. This year, depending on the source, the number of such states can range from only six swing states to as many as eleven. Politico identifies eight swing states in 2020: Arizona, Florida, Georgia, Michigan, Minnesota, North Carolina, Pennsylvania and Wisconsin. This list conflicts with NPR’s final list of tossup states, with NPR labeling Minnesota, Michigan, Pennsylvania and Wisconsin as leaning Democrat and adding Iowa, Ohio and Texas as toss-ups. All other states are considered either safely Democratic or Republican and, therefore, rarely get the same level of attention in election prognostication. While candidates might say that every state matters, political experts know that campaigns focus most of their time and effort on states that are winnable and valuable electorally. Among the swing states, the most valuable in terms of electors are Texas (38), Florida (29), Pennsylvania (20), Ohio (18), Georgia (16) and Michigan (16). (California (55) and New York (29), the first and third most valuable states, are considered safely Democratic.) Texas isn’t a normal swing state — it’s voted for a Republican in every election since 1980 — but this year, polling shows Biden has a shot there. If Biden were to win in Texas, Trump would have to piece together an unlikely coalition of states to win. It’s worth noting that early voting in Texas this year has broken records, so election watchers are interested to see how that affects the final outcome. Florida is one of the most revealing swing states. 
Since 1972, the winner of Florida has won the election in every year other than 1992. That year, Florida went to incumbent President George H.W. Bush, but the challenger, Bill Clinton, won the election. Florida was also the contested state at the center of the 2000 election, which eventually went to President George W. Bush. Like Texas, Pennsylvania’s status as a swing state is a more recent development. From 1992 to 2012, the state went for the Democrat. In 2016, Trump won a surprise victory there, winning the state by fewer than 50,000 votes (out of roughly 6.17 million cast). Many experts believe Trump has to win Florida and Pennsylvania to be reelected. Ohio is perhaps the crown jewel of the swing states. Not because it’s the most valuable, but because Ohio has voted for the winner in every single election since 1960. At only 18 electoral votes, a candidate could easily lose Ohio and still put together a winning coalition. As fate would have it, though, that hasn’t happened for 60 years. Georgia and Michigan are mostly interesting as swing states because both have been consistent over the last two decades. Georgia has been a Republican stronghold since 1996, while Michigan went to the Democrat in every election from 1992 to 2012. This year, Georgia is a true tossup, with Biden and Trump having traded the polling lead there all year. Meanwhile, Michigan was another surprise pickup for Trump in 2016, but Biden has a commanding lead in the polls there this year.
https://medium.com/never-fear/america-votes-a-look-at-the-numbers-going-into-election-day-in-the-us-821b4839d4db
['The Millennial Source']
2020-11-16 21:25:58.107000+00:00
['Us', 'News', 'Politics', 'American Votes', 'Election Day']
POLI: Utility, Benefits, and Value Accrue Within the Polinate Ecosystem
TL;DR POLI is the native utility token of the Polinate platform, offering access to products and services within the ecosystem and facilitating access to rewards and benefits across the Polinate network. Polinate is a blockchain-powered crowdfunding platform that empowers creators and entrepreneurs to raise funds for their projects: a permissionless platform for brands to launch their new products and creations, or for budding creators to fund their new creative projects. Polinate is fueled by “POLI”, the native token of the platform, and POLI token holders can enjoy various benefits, perks, and rewards in the Polinate ecosystem. POLI: The Native Utility Token of Polinate POLI is the utility token of the Polinate platform that enables token holders to perform various functions and earn rewards. The POLI token binds investors, patrons, and creators, and allows stakeholders to earn rewards based on the amount of their contribution to the ecosystem. The POLI token can be transferred among the various stakeholders and participants within the platform; it is used to perform various transactions and functions and to leverage the benefits of the Polinate ecosystem. Token-as-an-Event — pTKN The Polinate ecosystem also introduces a unique concept, pTKN, or Token-as-an-Event (TaaE). It enables tokenizing access to a creative project and discovering the market price of the good the token represents. Each creator who puts a project up for fundraising on the Polinate network defines their pTKN individually. Value Accrual and Benefits The POLI token is an integral part of the ecosystem, allowing users to enjoy its numerous benefits, participate in different functions, and earn rewards for participation. Its value accrues from providing a convenient and safe method of payment and settlement between different participants of the Polinate network. 
The extent of the reward depends on the value invested or the amount of POLI tokens staked in the ecosystem. The tokens are also used as economic incentives to encourage participants to contribute to and maintain the network, and serve as the token through which users participate in activities and subsequently enjoy perks, privileges, and memberships. In addition, the POLI token’s value accrues from its use as a liquidity pair for pTKN. Participants in Polinate, such as users and creators, can also lock their tokens to access membership tiers and the rewards pertaining to each tier. This enables token holders to gain early or limited access to different token pools for reward issuance, as well as random airdrops from creators in the Polinate ecosystem. POLI tokens are used for funding various projects and for the “Polinate Seeders Programme”. Participants of the Polinate ecosystem who stake digital assets, promote projects, or take part in the liquidity pools for funding projects are rewarded with POLI tokens depending on their contribution. Governance POLI has been designed to enhance the interoperability of the Polinate ecosystem and cannot be used outside the platform. The main purpose of the POLI token is to provide a secure medium of governance within the platform. It cannot be used to settle debts or to pay for other products or services outside the platform. Holding POLI tokens does not give you any right to company shareholding, interest, dividends, revenues, or profits of the company. Ownership of POLI tokens is limited to the use of the tokens within the platform and not otherwise. POLI is a non-refundable utility token of the Polinate platform, offering interoperability and a safe method of transacting for the various functions of the network. 
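To illustrate the lock-to-access tier mechanics described above, here is a hypothetical sketch in Python. The tier names, thresholds, and reward rates are invented for demonstration and do not come from Polinate's documentation or smart contracts.

```python
from typing import Optional

# Toy lock-to-access membership tiers: (name, minimum tokens locked, reward rate).
# All names, thresholds, and rates are invented for illustration only;
# they do not reflect Polinate's real parameters or contract code.
TIERS = [
    ("Bronze", 1_000, 0.01),
    ("Silver", 10_000, 0.03),
    ("Gold", 50_000, 0.05),
]

def membership_tier(locked: int) -> Optional[str]:
    """Return the highest tier whose lock threshold is met, or None."""
    best = None
    for name, threshold, _rate in TIERS:
        if locked >= threshold:
            best = name
    return best

def reward(locked: int) -> float:
    """Reward scales with the amount locked, at the rate of the tier reached."""
    rate = 0.0
    for _name, threshold, tier_rate in TIERS:
        if locked >= threshold:
            rate = tier_rate
    return locked * rate
```

The key property being sketched is the one the article states: the reward depends both on how much is locked and on which tier that amount reaches.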
POLI token holders can earn numerous benefits, perks, and rewards depending on the amount of POLI tokens they hold or invest in the membership pool. The token has been exclusively created for the smooth functioning of the Polinate network. The $POLI token sale will be conducted on the Polkastarter platform on Tuesday, 31st August 2021. The whitelisting for $POLI is open and will close on Wednesday 25th August at 4 PM UTC. For more details on how to whitelist, please refer to this post.
https://medium.com/polinate-hive/poli-utility-benefits-and-value-accrue-within-the-polinate-ecosystem-d11f7f51bff
['Polinate Team']
2021-08-25 14:03:10.252000+00:00
['Updates', 'Crowdfunding', 'Polinate', 'Poli Token', 'Utility']
Global Synthetic & Bio-based Aniline Market Research Report 2021
The research report includes specific segments by region (country), by manufacturer, by type and by application. The type segments provide information about production during the forecast period of 2016 to 2027, while the application segments provide consumption over the same period. Understanding the segments helps in identifying the importance of the different factors that aid market growth. Segment by Type Synthetic Bio-based Segment by Application MDI Rubber Processing Chemicals Agrochemicals Dyes & Pigments Download FREE Sample of this Report @ https://www.24chemicalresearch.com/download-sample/67948/global-synthetic-biobased-aniline-2021-651 By Company BASF SE The Chemours Company Huntsman Dow Chemicals Sumitomo Chemical Sinopec Covestro Tosoh Corporation Yantai Wanhua Polyurethane Hindustan Organic Chemicals Limited BorsodChem MCHZ Jilin Connell Chemical Industry Shandong Jinling Group Volzhsky Orgsintez JSC SP Chemicals Holdings Production by Region North America Europe China Japan Consumption by Region North America U.S. Canada Europe Germany France U.K. 
Italy Russia Asia-Pacific China Japan South Korea India Australia Taiwan Indonesia Thailand Malaysia Philippines Vietnam Latin America Mexico Brazil Argentina Middle East & Africa Turkey Saudi Arabia U.A.E Get the Complete Report & TOC @ https://www.24chemicalresearch.com/reports/67948/global-synthetic-biobased-aniline-2021-651 Table of contents 1 Synthetic & Bio-based Aniline Market Overview 1.1 Product Overview and Scope of Synthetic & Bio-based Aniline 1.2 Synthetic & Bio-based Aniline Segment by Type 1.2.1 Global Synthetic & Bio-based Aniline Market Size Growth Rate Analysis by Type 2021 VS 2027 1.2.2 Synthetic 1.2.3 Bio-based 1.3 Synthetic & Bio-based Aniline Segment by Application 1.3.1 Global Synthetic & Bio-based Aniline Consumption Comparison by Application: 2016 VS 2021 VS 2027 1.3.2 MDI 1.3.3 Rubber Processing Chemicals 1.3.4 Agrochemicals 1.3.5 Dyes & Pigments 1.4 Global Market Growth Prospects 1.4.1 Global Synthetic & Bio-based Aniline Revenue Estimates and Forecasts (2016–2027) 1.4.2 Global Synthetic & Bio-based Aniline Production Capacity Estimates and Forecasts (2016–2027) 1.4.3 Global Synthetic & Bio-based Aniline Production Estimates and Forecasts (2016–2027) 1.5 Global Synthetic & Bio-based Aniline Market by Region 1.5.1 Global Synthetic & Bio-based Aniline Market Size Estimates and Forecasts by Region: 2016 VS 2021 VS 2027 1.5.2 North America Synthetic & Bio-based Aniline Estimates and Forecasts (2016–2027) 1.5.3 Europe Synthetic & Bio-based Aniline Estimates and Forecasts (2016–2027) 1.5.4 China Synthetic & Bio-based Aniline Estimates and Forecasts (2016–2027) 1.5.5 Japan Synthetic & Bio-based Aniline Estimates and Forecasts (2016–2027) CONTACT US: North Main Road Koregaon Park, Pune, India — 411001. International: +1(646)-781–7170 Asia: +91 9169162030 Follow us on LinkedIn: https://www.linkedin.com/company/24chemicalresearch/
https://medium.com/@rbe721992/global-synthetic-bio-based-aniline-market-research-report-2021-cec9f43a23d1
['Reshma Behale']
2021-11-30 11:01:04.007000+00:00
['Synthetic', 'Market Research Reports', 'Bio Based', '24chemicalresearch', 'Aniline']
Flooding Caskets and Floating Corpses
Once the bodies are recovered, identifying them is another great challenge. The infamous Hurricane Katrina surfaced about 1500 graves in 2005. Not only were the living devastated, but so were some of the buried dead, who washed away with the pulsing floodwaters. Afterward, Louisiana introduced legislation that required identifying information on every new coffin to prevent another scramble to put names to the displaced dead. But labels and death certificates usually wash away or get destroyed in the disaster that unearthed the casket in the first place, so this law has had limited success. Some casket models have a small alcove into which a glass tube containing a death certificate can be inserted. But these are often filled out in ink that smudges in a flood, or aren’t filled out at all. When placed elsewhere, the death certificate faces damage from the floodwaters, rendering it illegible. The cost of, and regulatory hurdles around, uprooting graves and relocating an entire cemetery make it difficult to move them to a more inland location. In some areas, regulations require the next of kin to give permission before anyone moves a grave. Tracking down the distant relatives of long-dead individuals in older cemeteries makes this a costly and time-consuming process. Most graveyards have an endowment fund set up to cover the maintenance costs of graves long after the cemetery is no longer profitable. But often these accounts can’t even cover maintenance, much less a major move of every grave. Many newer, private cemeteries operate as perpetual-care sites, in which the family of the deceased pays into an escrow account to ensure the maintenance of a family member’s grave. But many older cemeteries maintained by churches don’t have these protections, and paying for the reburial of these disturbed caskets and the bodies inside falls on the family. 
According to a FEMA guideline for the Great Flood of 2016, both private and non-profit cemeteries are ineligible for disaster assistance reimbursement. There are some FEMA funds for individuals who can prove damage to a family member’s grave from a disaster, but no large-scale funds are available. As sea levels rise in Louisiana, so many cemeteries are in the disaster area that helping them all is impossible. It’s not only the deceased’s family that suffers from the degradation of a cemetery. Headstones are often used for genealogical research, as stone tends to last longer than any paper documents. Headstones also show reverence to the art and culture of the dead person’s time and place. The stately cemeteries of New Orleans are masterfully crafted marble mazes, the centerpiece of a bygone era with a lasting cultural impact on the multicultural city. The immaculate gravestones and marble work in these cemeteries were made by masters of their craft and distinguished artists in their own right. Florville Foy was one such master marble cutter. He was a free black man, born in 1819, the son of a French immigrant who had fought in the Napoleonic Wars and a free black woman, a native of New Orleans. Florville was raised in an artistic and bohemian household where he learned sculpting, marble cutting, and writing from his multifaceted father. After leaving his father’s tutelage, he quickly found work crafting marvelous tombs with meticulously detailed carvings. He was so highly sought after that he eventually employed nine artisans under him. After living together for 35 years, he married a white woman from Mississippi in 1885, during a short 20-year period of legalized interracial marriage in Louisiana. … Cities all along the Gulf of Mexico have dealt with this issue in various ways, with equally varying measures of success. Some cemeteries cover graves with concrete and others tie down tombs using strong straps. 
As Louisiana explores avenues of improvement in disaster response, other regions facing coastal flooding and erosion will look to the state for possible solutions. Low-lying Louisiana is on America’s front line of accelerating climate change, and traditional burial practices in much of the state are unlikely to survive unscathed.
https://medium.com/climate-conscious/flooding-caskets-and-floating-corpses-88db92258b36
['Raisa Nastukova']
2020-08-26 14:01:03.741000+00:00
['Climate Change', 'Louisiana', 'Environment', 'Death And Dying']
How to pick the best company for the development of mobile apps?
We live in a digitized era in which every organization embarks on digital channels to extend its global market reach, and mobile apps now dominate digital interaction. Are you an entrepreneur eager to hire dedicated Python developers in India, dedicated Flutter developers in India, or similar specialists who can help you build a top-notch app with exclusive, trending features? Regardless of your business domain and size, a mobile app can help you scale up your business, driving heavy traffic to your business portal and rapid revenue generation. With the rapidly accelerating number of internet-savvy people, the rising share of mobile users has created an unprecedented opportunity for companies to stay on the user’s screen and regularly draw attention to their goods and services. Businesses from diverse sectors deliver distinct solutions with a wide variety of goods and services for different requirements, and many have shifted from physical operations to online ones in order to extend the scope of their services and expand. Many companies dismiss the idea of hiring software developers, believing it would add to their capital spending, but fail to count the advantages the app will bring their business in the long run. If you are concerned about your capital investment and want to know the cost of hiring a mobile app developer, you are on the right page. Many countries are renowned for hosting strong offshore app development companies, and with the rise of the offshore development team trend, companies often get confused about which country and team offer the most affordable option. Location and experience have a great effect on an app developer’s price, so keep a close eye on a country’s cost of living and economic situation before hiring a mobile app programmer. 
Most companies turn to India to recruit the best software developers, and the reason is that Indian app programmers deliver cost-effective yet high-quality solutions. When hiring for your app’s creation, you can choose a team or an individual professional depending on your requirement: hire a dedicated team, or engage developers on a monthly or hourly basis. How do you pick the best company for mobile app development? Here are the factors to take into consideration when you hire dedicated Django developers in India, dedicated Flutter developers in India, and so on, for your company. 1. Customer Feedback Talking to a company’s clients reveals its pros and cons. In short, the best way to assess the quality of the services provided by any mobile app development company is through feedback from its clients. Going through reviews from its past and present clients is one of the sure-shot ways to find a top company for custom mobile app development, and you can contact those customers directly to get all the details about the business. 2. Efficient Service Management Many businesses complain that their app development company did not keep them posted on the progress of the mobile app’s development cycle. In many cases, customers have found the final mobile app irrelevant to their needs simply because of a communication gap between them and the development team. 3. Development Standards Custom app design is among the main factors that contribute to an app’s success. That is why businesses and independent business owners should prioritize app developers’ expertise and skills. Consider the services offered by mobile app development companies with strong UI/UX designers capable of providing world-class custom app designs to their customers. 4. Testing Measures Most custom mobile apps have technical mistakes and bugs. 
Frequent interruptions in a mobile app’s operation lead to a loss of consumer interest, so great design and development methodologies alone are not enough. The best mobile app development companies also follow modern quality-assurance and testing practices, both manual and automated, to ensure the end product ships without technical defects or glitches.

5. Security of the App Idea. Leakage of an app’s idea and design is one of the greatest risks facing today’s businesses and entrepreneurs. In recent years, many low-quality replicas have reached the app stores before the original, simply because of poor security measures taken by offshore app development companies in India. Companies and entrepreneurs must ensure that the app development company they select enforces stringent security protocols for the design and production of their enterprise mobile apps. Until it actually hits the app stores, the app’s idea and design must remain confidential, and the software firm should be prepared to sign a non-disclosure agreement to guarantee the security of the app’s design and idea.
https://medium.com/@applicationdevelopers/how-to-pick-the-best-company-for-the-development-of-mobile-apps-47c262c53ab0
[]
2020-12-21 11:48:51.229000+00:00
['App Development Services', 'Offshore Outsourcing', 'App Development', 'Offshore Development', 'App Development Company']
good at god
I wish I believed in god because I bet I’d be really good at it. Temple every Saturday. Church every Sunday. Not enough for me. I wouldn’t take any shortcuts. I’d make time for The Light, the Lord of Lords, the King of Kings, Allah, Elohim, Adonai, Yahweh, every heavenly, damn day. Every morning dedicated to religious yawns, afternoons of worship over lunch, dinner served with a side of deity, and before you go to bed “AMEN” screamed into your pillow. You name the day to pray and I’d be there. The rituals, the traditions, singing songs or sitting quietly, I bet I’d shine. I like rules and structure. Tell me what to do. Tell me what not to do. How would you punish me? Tell me and I’d take it to heart. I’d obsess. I’d think about it every day. No, every moment. Every fucking second I would wonder if I’m failing. Disappointing a higher power. That kind of shame and rejection feels familiar and sits comfortably in my heart. I wear worthlessness well. Drenched in disgrace, my grace. I’m a mess, god bless. I’d be so good at god. Not only the scriptures and the psalms and knowing right from wrong, but get this — I know I COULD and WOULD talk to god if I believed. Not like a Catholic priest who you’ve got to watch out for — especially in PA, those guys are always looking for prey — I’d talk to god like a holy vessel. Not a pedophile, not a predatory piece of shit wearing a white collar, just an unthreatening white lady wearing t-shirts and boat shoes with her hands clasped and pointed at the sky. Channeling the lord. Confession: I’ve actually done it before. Ten Hail Marys and you have to keep my secret. Whenever I’m crazy here he comes to sit on his throne carved between my ears. He tells me to believe. He pushes lightning bolts into my brain. Or is that Zeus… it doesn’t matter because whoever he is is magic and majestic. Beautiful and busted brain they’ll say but I hear god. He clears his throat and I beg him to forgive me.
For what I’m not quite sure but better safe than sorry. I kneel in reverence of his gnashing teeth and the widest red eyes I’ve ever seen. He pats my head because I’m his servant and his puppy, hopefully not a plaything, but I guess we’ll see. I feel strange. And he’s nice for a while but then he stands up, cracks his back, and electric sparks sizzle and singe his robes (designed by Zeus?), and he leaves without a word for me. So I wait. I do everything right. I make my bed and I put gas in my car. I’m so good. I rip out my flowerbed, which is bad behavior, but then I leave honey for the ants. Keep sweet. I balance bad and perfect. I balance bipolar. I’m a professional. Do as I say, not as I do. I learned that from religion. A catchy and threatening chorus. So I wait. I feel sick. Jesus sneaks up behind me and rubs my shoulders singing, “daddy’s home.” Or maybe that’s the priest? Or any other man of faith who says, “call me this,” “wear that,” “do as I say,” “your body isn’t yours,” and “stay afraid. I would if I were you.” I wince and he whimpers because we’re all going to die and he really loved dying for us. I sip his sacrifice. I suck his poison. Jesus leans against a cardboard casket and asks me how I’m log fluming my way to hell. “To hell? But I behaved so well and that gets you saved, right?” “Who knows” he shrugs. “Different department.” “Dad does this and I do that.” I’m confused but I want to be good at god so I smile and dove until I make dents in the floor. I only eat bacon in the dark. I show a lot of skin but I think I can spin that — embracing the sacred temple of the self — “HELLO WORLD look at what god gave me! Chiseled out of alabaster, what a pale creature!” I want people to see his work. Marvel at this blob who is unproductive and an idiot and does nothing but cry and pray all day but at least religiously puts on sunscreen to keep this rental pristine. “Well done, Jesus! Or god… I don’t remember.
Who is Zeus again?” I am scared but I don’t know why. Mortality is a bummer. Who will save us after we’re dug deeply into the ground, or thrown cheaply into a ditch, or we’re scattered by the seaside, or paying rent for residing in an urn above the fireplace that we can’t afford. Will we be forgotten? Thinking about dying is something I’m good at — I’m not afraid to not wake up one of these days — but what’s next? I can’t help but bat my big blue eyes in Morse code looking longingly at that old book over there that is brimming with answers about my bones. It reads, “hold tight to your predetermined fate, prescribed by the Almighty, and if you start to question his divine plan FOR YOU remember to HAVE FAITH. Unquestionable faith. Devout faith. Grey and fuzzy faith. For faith is the currency of the lord.” I will eat those words until they dye my tongue and I blink back Hebrew. The world is fucked but luckily we have a guy who rides high in the sky and if you play nice and do everything right you can meet him. Goals. Dreams. Do what you’re told. Firm handshake and a curtsy for the Supreme Being. If only he was real I could die with a fortune buried with me. But maybe he is? My brain is frothing with fever. I wish I believed in god. So when he tumbled out of my mouth I could say, “noooo I’m not crazy, I’m divine. Right on time it’s a holy day and here’s what Jesus has to say!” But the Jesus I know cries a lot these days about being a failure and screams at the stars and slits his wrists until the aisles run red with blood or wine — depends on who you ask — and he’s paranoid and says god left us long ago and we are here with nothing. Jesus whispers into my frizzy hair that if god came back he’d plunge us into the belly of the Earth where we’d fry and die in silence. Jesus misses his Holy Dear Old Dad but he’s so afraid of him that his teeth chatter just like the sound the pearly gates make when their rusty bars are locked tight. 
“Keep you out, keep them in,” hammered into the chastity belt that bars entry and has no key. Sweet Jesus. He bites off his nails, from tips to knuckles, and yelps, “how can I pray with my hands folded now that I just have palms dotted with bloody stumps? My saintly fists dripping in blood!” “I think he likes that,” is my offering. This is my Jesus. This is who I hear sulking in my skull. And people say it means I’m crazy as fuck, which is probably true, especially because I am Jewish, but this is exactly why I wish I believed in god because it wouldn’t be my fault when everything goes to shit. I would be a mouthpiece for hallelujahs. And the crazy stuff would be folded in. And people would believe in me — not call me insane while shaking maracas made of medication. Oh I wish. I just swallowed my teeth and my hair is dragging tumbleweeds across our wooden floor. Wait — who am I? But at least I‘ll die someday and maybe I’ll figure it out. Make the right kind of friends. Angels hanging by the vending machines, the cool kids in school. I was never that popular. My junior year of high school for AP English I made a shoebox diorama for the book, “AS I LAY DYING,” my capitalization for emphasis, by Faulkner. Did you know he had a thing for pipe cleaners? Bouquets of pipe cleaners were the only presents he accepted and I thought wanting socks for Hanukkah was weird. Sorry man, I passed on the pipe cleaners relying instead on a Designer Shoe Warehouse infrastructure, a construction paper backdrop, and a meticulous scene spun out in glitter glue depicting an old barn and flames licking the sky. And suspended on minty dental floss was a coffin. A brown paper coffin with an arm flapping out of it. If you blew it, it would sway like it was caressed by a fake pastoral breeze, fanning fake smoke into second period. Perfect. I got an A+ and then accidentally stepped on it. Crunched that floating coffin into a crumpled brown paper ball. I hope that’s how I’ll die. 
A+ and under god’s sneaker. Byeeeeee. My insides bubble and boil and I break in half. It’s easier to hide when you have fewer pieces. Will you help me? I believe in god. Unless I don’t. I’m not sure now. The truth is I don’t understand any of this but that’s ok! You don’t need to get it to be good at god. You just need to be crazy. Or maybe that’s just me. What does my medication say about the Messiah? Do I deserve to be saved, sane or sick? Am I just fractured beyond fixing? Who broke me in the first place? Was it you? Lord, I am not worthy that you should enter under my roof, but only say the word and my soul shall be healed. I’m too pasty for eternal flames so I hope I figure this out soon.
https://medium.com/invisible-illness/good-at-god-8abb50b1cbc4
['Dr. Rachel Kallemwhitman']
2020-08-24 23:41:39.929000+00:00
['Death', 'Mental Health', 'Mental Illness', 'Bipolar', 'Religion']
Satoshi rejected second layer solutions, Soft Forks and was clear about scaling Bitcoin on-chain.
Bitcoin P2P e-cash paper. Oct 31, 2008: Bitcoin Whitepaper release. When Satoshi published the Bitcoin whitepaper in 2008, the first reply came from James A. Donald: "We very, very much need such a system, but the way I understand your proposal, it does not seem to scale to the required size." Satoshi replied with an explanation of how Bitcoin was designed to work. Long before the network gets anywhere near as large as that, it would be safe for users to use Simplified Payment Verification (section 8) to check for double spending, which only requires having the chain of block headers, or about 12KB per day. Only people trying to create new coins would need to run network nodes. At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node. The bandwidth might not be as prohibitive as you think. A typical transaction would be about 400 bytes (ECC is nicely compact). Each transaction has to be broadcast twice, so lets say 1KB per transaction. Visa processed 37 billion transactions in FY2008, or an average of 100 million transactions per day. That many transactions would take 100GB of bandwidth, or the size of 12 DVD or 2 HD quality movies, or about $18 worth of bandwidth at current prices. If the network were to get that big, it would take several years, and by then, sending 2 HD movies over the Internet would probably not seem like a big deal. Satoshi Nakamoto First proposal to scale with second layer (Bink). James A. Donald then proposed a second layer solution to scale Bitcoin. The trouble is, you are comparing with the Bankcard network.
But a new currency cannot compete directly with an old, because network effects favor the old. You have to go where Bankcard does not go. At present, file sharing works by barter for bits. This, however requires the double coincidence of wants. People only upload files they are downloading, and once the download is complete, stop seeding. So only active files, files that quite a lot of people want at the same time, are available. File sharing requires extremely cheap transactions, several transactions per second per client, day in and day out, with monthly transaction costs being very small per client, so to support file sharing on bitcoins, we will need a layer of account money on top of the bitcoins, supporting transactions of a hundred thousandth the size of the smallest coin, and to support anonymity, chaumian money on top of the account money. Let us call a bitcoin bank a bink. The bitcoins stand in the same relation to account money as gold stood in the days of the gold standard. The binks, not trusting each other to be liquid when liquidity is most needed, settle out any net discrepancies with each other by moving bit coins around once every hundred thousand seconds or so, so bitcoins do not change owners that often, Most transactions cancel out at the account level. The binks demand bitcoins of each other only because they don't want to hold account money for too long. So a relatively small amount of bitcoins infrequently transacted can support a somewhat larger amount of account money frequently transacted. James A. Donald Later on, James A. Donald's early exchanges with Satoshi Nakamoto were seen as presaging the BTC Lightning Network, as well as suggesting data compression. Bitcoin v0.1 released Jan 8, 2009 The first release of the Bitcoin client. Announcing the first release of Bitcoin, a new electronic cash system that uses a peer-to-peer network to prevent double-spending.
It's completely decentralized with no server or central authority. Total circulation will be 21,000,000 coins. It'll be distributed to network nodes when they make blocks, with the amount cut in half every 4 years. first 4 years: 10,500,000 coins next 4 years: 5,250,000 coins next 4 years: 2,625,000 coins next 4 years: 1,312,500 coins etc... When that runs out, the system can support transaction fees if needed. It's based on open market competition, and there will probably always be nodes willing to process transactions for free. Satoshi Nakamoto Second compatible implementation. SoftFork. June 17, 2010 Gavin Andresen asks a question about a second, compatible implementation. I see that the outputs of transactions have a value (number of bitcoins) and a bunch of bytes that are run through the little Forth-like scripting language built in to bitcoin. E.g.: ['TxOut: value: 100.00 Script: DUP HASH160 6fad...ab90 EQUALVERIFY CHECKSIG'] First: it make me a little nervous that bitcoin has a scripting language in it, even though it is a really simple scripting language (no loops, no pointers, nothing but math and crypto). It makes me nervous because it is more complicated, and complication is the enemy of security. It also makes it harder to create a second, compatible implementation. But I think I can get over that. Gavin Andresen Satoshi responds: The nature of Bitcoin is such that once version 0.1 was released, the core design was set in stone for the rest of its lifetime. Because of that, I wanted to design it to support every possible transaction type I could think of. The problem was, each thing required special support code and data fields whether it was used or not, and only covered one special case at a time. It would have been an explosion of special cases. The solution was script, which generalizes the problem so transacting parties can describe their transaction as a predicate that the node network evaluates.
The nodes only need to understand the transaction to the extent of evaluating whether the sender's conditions are met. The script is actually a predicate. It's just an equation that evaluates to true or false. Predicate is a long and unfamiliar word so I called it script. The receiver of a payment does a template match on the script. Currently, receivers only accept two templates: direct payment and bitcoin address. Future versions can add templates for more transaction types and nodes running that version or higher will be able to receive them. All versions of nodes in the network can verify and process any new transactions into blocks, even though they may not know how to read them. The design supports a tremendous variety of possible transaction types that I designed years ago. Escrow transactions, bonded contracts, third party arbitration, multi-party signature, etc. If Bitcoin catches on in a big way, these are things we will want to explore in the future, but they all had to be designed at the beginning to make sure they would be possible later. I do not believe a second, compatible implementation of Bitcoin will ever be a good idea. So much of the design depends on all nodes getting exactly identical results in lockstep that a second implementation would be a menace to the network. The MIT license is compatible with all other licenses and commercial uses, so there is no need to rewrite it from a licensing standpoint. Satoshi Nakamoto Second proposal to scale with side chains (Bit-Banks). July 28, 2010 James A. Donald was not the only one who proposed a second layer scaling solution for Bitcoin. Later in 2010 a user "Bytemaster" on the BitcoinTalk forum suggested a side chain model as a solution to scale Bitcoin. I am convinced that bandwidth, disk space, and computation time necessary to distribute and "finalize" a transaction will be prohibitively expensive for micro-payments.
Consider for a second that the current banking industry is unable to provide a reasonable micropayment solution that does not involve depositing a reasonable sum and only allowing a withdraw after a reasonable sum has been accumulated. Besides, 10 minutes is too long to verify that payment is good. It needs to be as fast as swiping a credit card is today. Thus we need bit-banks that allow instant transfers among members and peer banks. Anyone can open a bit-bank but the system would, by necessity operate on some level of trust. Transfers in and out of the banks and peer-to-peer would still be possible but will be more costly. Thus, a bit bank could make money by enabling transfers cheaper and faster than the swarm with the added risk of trusting the bank. A bank has to maintain trust to make money. Dan Larimer Satoshi however was very clear on how Bitcoin is designed to scale. The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server. The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms. The rest will be client nodes that only do transactions and don't generate. Quote from: bytemaster on July 28, 2010, 08:59:42 PM Besides, 10 minutes is too long to verify that payment is good. It needs to be as fast as swiping a credit card is today. See the snack machine thread, I outline how a payment processor could verify payments well enough, actually really well (much lower fraud rate than credit cards), in something like 10 seconds or less. If you don't believe me or don't get it, I don't have time to try to convince you, sorry. 
Satoshi Nakamoto The Snack Machine Thread http://bitcointalk.org/index.php?topic=423.msg3819#msg3819 Bytemaster and Satoshi (second layer solution & scaling on-chain) The first attempt to increase the Block Size October 03, 2010 Jeff Garzik suggested increasing the block size to at least match PayPal's average transaction rate and proposed a patch to the Bitcoin code. Satoshi warned that it was not yet necessary, and that the code upgrade would make nodes incompatible with other versions. Don't use this patch, it'll make you incompatible with the network, to your own detriment. We can phase in a change later if we get closer to needing it. Satoshi Nakamoto However, Satoshi explained how the upgrade could be done in the future. It can be phased in, like: if (blocknumber > 115000) maxblocksize = largerlimit It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete. When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade. Satoshi Nakamoto BitcoinTalk forum, Satoshi response on Block Size increase. The origin of the 1MB block limit February 07, 2015 Ray Dillinger (Cryddit), who reviewed Satoshi Nakamoto's Bitcoin code with Hal Finney before it was released, posted on the BitcoinTalk forum. I'm the guy who went over the blockchain stuff in Satoshi's first cut of the bitcoin code. Satoshi didn't have a 1MB limit in it. The limit was originally Hal Finney's idea. Both Satoshi and I objected that it wouldn't scale at 1MB. Hal was concerned about a potential DoS attack though, and after discussion, Satoshi agreed. The 1MB limit was there by the time Bitcoin launched. But all 3 of us agreed that 1MB had to be temporary because it would never scale. Ray Dillinger (The origin of the 1MB block limit) Mike Hearn: Questions about BitCoin There is further evidence of Satoshi explaining how Bitcoin would scale on-chain, in an e-mail reply to Mike Hearn.
If there is only one chain recording "the story of the economy" so to speak, how does this scale? In an imaginary planet-wide deployment there would be millions of even billions of transactions per hour being hashed into the chain. I realize that each PoW can wrap many transactions in one block, nonetheless, that's a large amount of data to hash. Mike Hearn Satoshi responds and clarifies how Bitcoin scales. Hi Mike, I'm glad to answer any questions you have. There is only one global chain. The existing Visa credit card network processes about 15 million Internet purchases per day worldwide. Bitcoin can already scale much larger than that with existing hardware for a fraction of the cost. It never really hits a scale ceiling. If you're interested, I can go over the ways it would cope with extreme size. By Moore's Law, we can expect hardware speed to be 10 times faster in 5 years and 100 times faster in 10. Even if Bitcoin grows at crazy adoption rates, I think computer speeds will stay ahead of the number of transactions. Satoshi Nakamoto
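As a quick sanity check, the figures Satoshi quoted in his 2008 reply to James A. Donald can be reproduced in a few lines of Python. The constants are the ones from the thread (plus the standard 80-byte block header and ~144 blocks per day); the script itself is just an illustration, not something from any of the original posts.

```python
# Back-of-the-envelope check of the scaling figures quoted in the 2008 thread.

VISA_TX_PER_YEAR = 37_000_000_000  # "Visa processed 37 billion transactions in FY2008"
BYTES_PER_TX = 1_000               # ~400 bytes, broadcast twice, "so lets say 1KB"
HEADER_BYTES = 80                  # size of a serialized Bitcoin block header
BLOCKS_PER_DAY = 144               # one block roughly every 10 minutes

tx_per_day = VISA_TX_PER_YEAR // 365                 # ~100 million tx/day
full_node_bytes_per_day = tx_per_day * BYTES_PER_TX  # full-node relay bandwidth
spv_bytes_per_day = HEADER_BYTES * BLOCKS_PER_DAY    # SPV: headers only

print(f"{tx_per_day:,} tx/day")                              # ~101 million
print(f"{full_node_bytes_per_day / 1e9:.0f} GB/day full node")  # ~101 GB/day
print(f"{spv_bytes_per_day / 1024:.2f} KB/day SPV headers")     # ~11.25 KB/day
```

The two results line up with the quotes above: roughly 100 GB/day of relay bandwidth at Visa scale for a full node, versus about 11-12 KB/day of block headers for an SPV client, which is the asymmetry Satoshi's "users just be users" design relies on.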
https://medium.com/@walerikus/satoshi-rejected-second-layer-solutions-soft-forks-and-was-clear-about-scaling-bitcoin-on-chain-fbb2a08cddb
['Valerios B.']
2021-06-03 23:57:26.035000+00:00
['Scalability', 'Lightning Network', 'Sidechains', 'Bitcoin', 'Segwit']
Best of Enemies
B+ The biggest divide in this big, divided world is not between people of different races or religions or political beliefs; it is between people who have different ideas of who is “us” and who is “them.” “The Best of Enemies” is based on the true story of C.P. Ellis (Sam Rockwell), a white supremacist and the Grand Exalted Cyclops (president) of the local chapter of the Ku Klux Klan, and Ann Atwater (Taraji P. Henson), a black woman who was a community activist working for civil rights and economic justice. In 1971, Ellis and Atwater were appointed co-chairs of a charrette, a dispute resolution mechanism used to resolve complicated community disagreements. Originally developed for land use debates among parties with multiple and varied interests, it was adapted for other kinds of issues by Bill Riddick, played in this film by Babou Ceesay. Ellis and Atwater lived in Durham, North Carolina. Seventeen years after the Brown v. Board of Education decision by the Supreme Court that segregated schools were unconstitutional, the Durham schools were still divided. When the school attended by the black children burned down, the city had to decide whether to let them attend the school the white children were attending. The court did not want to deal with it, so they asked Bill Riddick to see if he could get the community to come to some agreement. Ann Atwater worked for Operation Breakthrough but it was more than a profession; it was her calling. We first see her arguing on behalf of a young woman whose apartment is uninhabitable. And throughout the film we see that her entire life is one of advocacy and generosity. Everyone she meets is either someone to be protected or someone to help her protect others. Her sense of “us” encompassed the world. C.P. Ellis ran a gas station. He loved his family, including a disabled son who lived in a residential facility. The Klan made him feel respected and important. He created an outreach program to bring teenagers into the Klan.
And he organized outings like the time they shot up the home of a young white woman coming home from a date with a black man. He agrees to co-chair the charrette because he believes that anyone else who got the position would cave. And there are those in the town who would never associate with the Klan but who are glad to support them in private. Rockwell and Henson make Ellis and Atwater into fully-developed, complex characters. There’s a world of history in the way Henson walks as Atwater, shoulders hunched, hitching her hips along. In one scene where she reprimands young black boys for tearing down a KKK hood on display, and then straightens it herself after shooing them away, the expression in her eyes speaks volumes about what she has seen. And when we see the patience and tenderness Ellis has for his disabled son, we get a sense of all he thinks has been taken from him and how much it matters to him to hold on to something that makes him feel powerful. This is a thoughtful, sincere drama, beautifully performed with a touching conclusion, first of the story itself, and the small acts of kindness that make “thems” into “us-es,” and then with the footage of the real-life Atwater and Ellis, who died within a few months of each other in 2016. When she takes his arm to help him walk out of the room, our own us-es get a little larger, too. Parents should know that this movie deals frankly with issues of bigotry and racism including attacks by the Ku Klux Klan. It includes some strong language with racist epithets and a sexual reference. Characters drink and smoke and there are violent, racially-motivated attacks. Family discussion: What did Atwater and Ellis have in common? Why did she help his son? Why did she tell the boys not to take down the KKK hood? Who is the Ann Atwater in your community and what are the issues? If you like this, try: the book by Osha Gray Davidson and the 2018 Oscar winner for Best Picture, Green Book
https://medium.com/more-about-movies/best-of-enemies-1a13a04f63dc
['Nell Minow']
2019-04-05 01:46:25.245000+00:00
['Civil Rights', 'Movie Review', 'Movies', 'Desegregation', 'Racism']
Artificial Intelligence Applications — From Space to Underwater
Exploring applications of AI across various domains:

- AI Applications Overview
- Space Applications
- Earth Applications — AI & Lifestyle
- Underwater Applications
- Conclusion

It is time to understand machines that operate autonomously, without human intervention. Without any doubt, Artificial Intelligence is growing tremendously fast these days, and it is being applied wherever intelligence exists. AI can now be found in domains such as robotics, space exploration, machine intelligence, flying cars, self-driving cars, underwater work, and other complex, harsh environments where humans cannot dig and work. The applications covered in this article all involve machines: working underwater or on other planets is not possible for humans, but AI has already achieved it. Since AI is applied everywhere from underwater projects to space projects, its applications can be categorized as above the Earth, on the Earth, and below the Earth. I have tried to introduce each subject briefly, along with videos. There are many more applications worth covering, but here I cover only a few, grouped into these three categories. Work has already started across many disciplines, and the list is inexhaustible: military, agriculture, shopping, shipping, delivery of goods, transportation, communication, banking, healthcare/medical, space, and many more. In software especially, there are various software robots, or softbots, operating in rich and unlimited domains. It is worth knowing the power of AI applications; some of those mentioned above are still in research and progress, while others are available to us today, and some need approval from national governments before they can be used. Above the Earth: Space & Flying Applications. These applications operate above the Earth.
Start with the biggest achievement in exploring the universe: NASA’s Mars Exploration Program, which used Curiosity, a car-sized rover designed to explore Mars as part of NASA’s Mars Science Laboratory (MSL) mission. Curiosity launched on November 26, 2011, at 15:02 UTC. Note: this application is described at greater length than the others. NASA’s statement on the program: The goal of the Mars Exploration Program is to explore Mars and to provide a continuous flow of scientific information and discovery through a carefully selected series of robotic orbiters, landers and mobile laboratories interconnected by a high-bandwidth Mars/Earth communications network. A related state-of-the-art application was the first on-board autonomous planning program to control the scheduling of operations for a spacecraft. It works on the Remote Agent concept, which generated plans from high-level goals specified from the ground and monitored the execution of those plans, detecting, diagnosing, and recovering from problems as they occurred. MAPGEN is the successor program, which plans the daily operations for NASA’s MER rovers, and MEXAR2 did mission planning, covering both logistics and science planning (source: page 28 of AI: A Modern Approach). The rovers act as human hands and eyes on Mars, while remaining dependent almost entirely on human intelligence back on Earth. MAPGEN is a ground-based decision support system for MER mission operations, a mixed-initiative planner: it verifies that a plan stays within the bounds of the resources available onboard the rovers and is free of any temporal violations. The use of artificial intelligence (AI) in space exploration by Curiosity is a thoroughly complex domain that draws on many disciplines, from telecommunications networks to scientific instruments, robotic arms, navigation, and more; the rover has various components such as the ChemCam, MastCam, six wheels, high-gain and low-gain antennas, and a UV sensor.
AI and its subfields are no longer limited to machines; they are now being applied to make engines (rocket engines) smart. The University of Texas at Austin is developing new “scientific machine learning” to address this challenge. Scientific machine learning = scientific computing + machine learning: through a combination of physics modeling and data-driven learning, it becomes possible to create reduced-order models. Rocket engines can also be developed using reinforcement learning, in which an agent interacts with an environment and receives rewards; simulating these new models requires a new simulator, and reinforcement learning algorithms give good accuracy. There is also huge research effort going into AI for robot pilots. A recent development is an autonomous plane that takes off with passengers and cargo: the company Xwing has now performed numerous passenger-carrying autonomous take-off-to-landing flights in its modified Cessna 208B Grand Caravan, a first in aviation for this category of aircraft. Advanced countries like Australia and America are spending millions of dollars building autonomous aircraft fighters; these jet fighters carry weapons, missiles, and more in war. A flying car is a personal air vehicle or roadable aircraft that provides transportation by both road and air. These are also known as “hovercars”. Such a vehicle is capable of safe, reliable and environmentally friendly operation both on public roads and in the air, and can fly without a qualified pilot at the controls. Recently an Israeli company designed two flying-car models, the CityHawk and the Falcon XP. There is huge development going on in the military sector, ranging from warships and weapons to robots, missiles and aircraft; man-made robots will participate in war instead of man, and a lot of research is under way. Developed and advanced countries like Australia, the USA, South Korea, Russia, the UK and China are spending millions of dollars on building autonomous military machines. You can get the complete picture in this article.
This section explores Earth-oriented applications, the kind that change our lifestyle; as of 2020 there are already a great many available. A summary of day-to-day applications follows. These days you can communicate with devices such as phones, cars, and home automation systems. The best-known example is Siri on the iPhone, which can fetch the weather and news, search Google, call your contacts, open Google Maps, and more. Siri performs the user’s commands; for example, if the user says “Search for Neuroscience in Google,” Siri will search for the phrase and open the results in the default browser. Voice recognition is available in almost all cars and can perform user commands, even starting and stopping your trips; the MBUX Voice Control shown below is one example. Amazon Alexa is an AI virtual assistant that acts as a home automation system. It is capable of voice interaction, music playback, making to-do lists, setting alarms, reading audiobooks, and providing traffic, sports, weather, and other real-time information such as news, and much more. There are many other sensing devices you can operate right at your fingertips; the following video gives an introduction to five pieces of smart home tech. Amazon Go is the first store where no checkout is required: customers simply enter the store using the Amazon Go app, browse and take the products they want, and then leave, purchasing products without using a counter or checkout. The following video shows how a self-driving delivery robot named YAPE brings goods directly to you, using facial recognition to identify the customer on delivery. It makes delivery fast and easy, and the bot navigates sidewalks with ease. YAPE has a 70 kg loading capacity and can travel 80 km on a single charge, using computer vision and sensor fusion to detect and avoid sidewalk edges, tracks, and other obstacles. Boston Dynamics is a well-known company doing research on robots.
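The voice-command pattern described above, where a phrase goes in and an intent plus its arguments come out, can be illustrated with a deliberately tiny parser. This is of course nothing like how Siri or Alexa work internally (they use speech recognition and learned language models); the two regex rules and intent names below are invented for illustration only.

```python
import re

# Toy intent parser: map a spoken phrase to an (intent, argument) pair.
def parse_command(utterance):
    text = utterance.lower().strip()
    m = re.match(r"search for (.+) in google$", text)
    if m:
        return ("web_search", m.group(1))
    m = re.match(r"call (.+)$", text)
    if m:
        return ("call_contact", m.group(1))
    return ("unknown", text)

print(parse_command("Search for Neuroscience in Google"))  # ('web_search', 'neuroscience')
print(parse_command("Call Mom"))                           # ('call_contact', 'mom')
```

A real assistant replaces the hand-written rules with statistical intent classification, but the input/output shape, an utterance mapped to an action plus parameters, is the same.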
Face recognition is widely used in organizations, apartment buildings, universities, autonomous driving, and elsewhere to identify authorized persons and monitor activities; it relies on computer vision. Applications range from phone unlocking to attendance tracking and access control for authorized members. Developing an autonomous vehicle is every AI enthusiast’s dream. Self-driving cars save time and can save lives by preventing accidents, though as of now no Level 5 self-driving car has reached the market; a great deal of R&D is under way to eliminate the need for a human driver, and almost every recognized car company is trying to achieve autonomous cars. You could conduct meetings and calls in an autonomous vehicle; a lot can happen in one, and its interior and exterior design will be totally different once Level 5 is reached. Nearly every company in the motor field is spending on and researching autonomous vehicles. Drones can be used in many ways: delivering goods, tracking, capturing video, monitoring agricultural fields, and more. These days drones can deliver medicines, emergency supplies, and food during floods, but for security reasons national governments must approve their use first. A chatbot is AI software that can simulate a conversation, or chat, with a user in natural language through messaging applications, websites, mobile apps, or the telephone. Various diseases can be detected easily using AI and deep learning technologies, among them malaria, cardiac arrhythmia, cancer, diabetic retinopathy, and pneumonia. Recommendation systems are algorithms aimed at suggesting relevant items to users (items being movies to watch, books to read next, products to buy, or anything else depending on the industry); these systems are very helpful when marketing products to users. Two applications are presented in this category: an autonomous ship and a submersible robot. The former is being developed by IBM, which launched an AI-powered marine-research vessel called the Mayflower Autonomous Ship.
It will attempt to cross the Atlantic in spring 2021, collecting data on the ocean and marine life along the way, including sampling for plastics. IBM and ProMare announced an interactive website, the MAS400 portal, that will provide live updates on the vessel’s whereabouts, environmental conditions, and other data; you can find more about the Mayflower ship there. The deep ocean is not only rich in sea life; vast swathes of it are also abundant in metals such as nickel, copper, cobalt, and zinc, which are essential to making electrical and electronic items such as smartphones, electric vehicles, and solar panel parts. The submersible robot is a new, impressive example of the potential soft robotics has for helping humans reach harsh environments while causing as little environmental damage as possible. In some respects these autonomous underwater vehicles (AUVs) are even more challenging than space exploration. These days AI enthusiasts are developing all kinds of applications that put intelligence into non-living objects, with numerous projects in sensing, drones, robotics, facial expressions, and more, and many applications coming to market. Artificial intelligence is not limited; it is unlimited, owing to the very nature of intelligence. In the future it will surely change our lifestyle. Due to limited computation power, a lot of applications have been on hold; if quantum computers become portable and woven into everything, superintelligence may well overcome those obstacles. Thanks for reading my article; your feedback is appreciated.
https://medium.datadriveninvestor.com/artificial-intelligence-applications-space-to-underwater-6b0391fe8b2c
[]
2021-01-05 11:30:16.448000+00:00
['Artificial Intelligence', 'Robotics', 'Flying Cars', 'Space Exploration', 'Machine Intelligence']
Benefits of a Large LinkedIn Network
Since LinkedIn began hitting key growth metrics several years ago, there have been naysayers who suggest that you should “only connect to people you know well enough to ask a favor of or do a favor for.” An article in the Harvard Business Review states: Many users are beginning to discover, however, that a larger number of social network connections may be less valuable than a smaller, more intimate circle. With an enormous collection of friends or followers on a network, you lose the benefits of intimacy, discoverability, and trust, all of which can work better when you have fewer connections. While larger networks may be less personal, they can also be extremely valuable. You don’t need to hit the maximum number of connections (30,000) to see these benefits, but you should aim for at least 500 to maximize your profile “completeness” in LinkedIn’s eyes. So what constitutes a “large” LinkedIn network? I would say at least 3,000 connections. Below are some of the reasons you should consider making a conscious effort to expand your LinkedIn network. Improve LinkedIn Search Results LinkedIn Search Results | Source: Casey Botticello Unless a recruiter is using the expensive LinkedIn Recruiter or searching on a person’s name, LinkedIn search results include only the people connected to the searcher as first-, second-, and third-degree connections inside LinkedIn. The more connections you have, the greater the likelihood that you will appear in someone’s search results, even as a third-degree connection. Search results are not sorted by degree of connection, so a third-degree connection can be the top entry in search results. Clearly, if you have a limited number of connections, your visibility on LinkedIn is extremely limited. Increase Reach on Your Published Content LinkedIn Post Views | Source: Casey Botticello Once you become one of the LinkedIn elite, your published content skyrockets in popularity. How does this come about?
The LinkedIn blogging platform notifies users when a connection has published a post. This means that with more connections in your network, more users will be alerted when you publish a new piece of content. The result is a truckload of engagement with your published content — a marketer’s dream come true! With this increased engagement also comes the increased likelihood of being featured in LinkedIn Pulse! Once you’re featured there, you’re virtually guaranteed even more engagement. Increasing Profile Views LinkedIn Profile Views | Source: Casey Botticello Chances are that when you add a new user to your connections, they’ll end up checking out your profile. Once you start adding and accepting more LinkedIn connections, you’ll see your profile views dramatically increase. I went from a few profile views now and then to thousands per week! Pretty incredible. Getting Endorsements LinkedIn Endorsements | Source: Casey Botticello Another great advantage of LinkedIn connection-bingeing is the huge increase in endorsements. Endorsements are like LinkedIn badges of honor, as coworkers and friends +1/endorse you for certain skills. When you’re lacking in connections, you have fewer people in your network to endorse you. Getting a few 99+ endorsement streaks looks fantastic on a LinkedIn profile, but it’s nearly impossible to get that many endorsements without a large connection base.
https://medium.com/digital-marketing-lab/benefits-of-a-large-linkedin-network-527e9fe70956
['Casey Botticello']
2020-11-13 04:13:20.526000+00:00
['Social Media', 'Entrepreneurship', 'Networking', 'LinkedIn', 'Linkedin Marketing']
When Chronic Illness Has You “Playing Scared”
When Chronic Illness Has You “Playing Scared” My new specialist took me seriously and gave me a new framework for navigating my health. Last week I had my first-ever appointment with a rheumatologist. My rheumatologist. That part still feels incredibly foreign to me, that I am now the sort of person who needs their own rheumatologist. Every day I fight the urge to let my chronic illness make me feel shameful, indecorous — “other.” It’s a fight I lose more often than not. Anyone who finds themselves newly disabled can tell you that newfound disability comes with a staggering amount of grief; grief for the person you were before you got sick and the person you could have been without that sickness, especially when you are young. We spent our entire childhoods growing up and hearing that we could be and do anything we put our minds to if only we worked hard enough. We envisioned futures for ourselves that never included a seemingly sudden inability to walk up a flight of stairs without our legs buckling beneath us or researching affordable mobility aids in our twenties. No one teaches you how to cope with the possibility that the American Dream you were promised as a child wasn’t designed for you. And so after years of worsening chronic pain and fatigue, I made the dreaded new patient appointment. I was referred to this particular rheumatologist by my primary care physician, a particularly callous woman in her early fifties who made me feel as though I were about three inches tall. When I finally managed to get the words out, the ones that had been resting uneasily in the pit of my stomach for months, they were met with nothing but dismissal and hostility. I don’t think she looked me in the eye once. Which, to be honest, I didn’t mind much. You see, I’m autistic, and in addition to disliking eye contact with strangers, I have enormous difficulty advocating for myself.
I tense up and shut down, have trouble remembering crucial information and organizing my thoughts, and have a general tendency to let others talk over and decide things for me. I never want to be a bother, or make a fuss, or be “too much,” so I never receive the care I need. This time, however, I was determined to be heard. Before my appointment, I took to Twitter to ask the chronic illness/disability community for some much needed advice. The overwhelmingly prominent suggestion was to write down a list of my symptoms, objectives, and questions to give to my doctor. Both so I wouldn’t forget anything crucial I needed to tell them and to serve as a guide to keep me grounded during the appointment. The other main suggestion was to take someone with me, preferably a man (thanks, patriarchy), to serve as an advocate and a witness. Unfortunately that wasn’t in the cards for me, so I went to the appointment alone, the piece of paper that felt like a lifeline clutched in my hand. After reading account after account from my fellow spoonies about their series of dismissive and unhelpful doctors as well as my own, what I experienced at that first appointment with my rheumatologist took me by complete and utter surprise. The kind-eyed physician with messy hair and a CDC-compliant face mask read over my haphazardly thrown together plea for care: “Hi. I’m autistic and have trouble advocating for myself and expressing my needs. I made this list of the problems I have been experiencing for you. I’ve been having these problems for years, but they’ve steadily increased over the last year/six months to the point where my quality of life has been severely impacted. I’m in pain and exhausted all of the time and I need someone to listen. [List of symptoms]. These problems are persistent and severe enough to interfere with my daily life. I’ve lost three jobs and dropped out of school. I’ve nearly had accidents in bed from being in too much pain to get to the bathroom. 
I’ve had to have my partner carry me up the stairs/out of public places when my legs hurt too much to walk. I just want to know what’s wrong with me. I will do my best to answer any questions you may have. Thank you for listening and trying to help.” He folded the paper back up carefully and looked up at me, the crinkles by his eyes letting me know that he was smiling, though sadly, and he said the four words I’d been longing to hear for years: “I see. I understand.” We talked for about an hour about my life and symptoms; he spoke softly and directly and was patient when I struggled to get up from my chair to show him how I walked. Then he asked me a question I was thoroughly unprepared for: “Do you play any sports?” I shook my head and chuckled softly. “No,” I said. “Definitely no.” He smiled. “I thought not,” he said. “Do you watch any?” I told him I was fond of soccer, or rather European football, Liverpool FC in particular. This seemed to please my good-humored doctor, who then proceeded to dive metaphor-first into his assessment of my condition. “You know when you’re watching a match,” he began. “And even if Liverpool is down one-to-nil, you can tell by the way that they’re playing that they’re going to come back and win the match?” I nodded. “And oppositely, even if the other team is up, you can tell by the way that they’re playing that they aren’t going to keep the lead. Their heart isn’t in it. They’re playing scared.” I nodded again, more tentatively this time, as I felt the pressure behind my eyes building. “This is you,” my doctor said. “You’re playing scared. I don’t know what happened to you that has made you this way, but you’re clearly a bright young woman who has experienced a great deal of trauma. You’re playing the game as if you are destined to lose. You’re playing scared.” I was speechless as the tears finally left my eyes. I looked at my feet and tried to suppress the meltdown I could sense was coming. My doctor spoke again.
“You need to start playing as if you’re going to win. You need to be Liverpool.” Easier said than done, doc, I thought. But I nodded and smiled politely, as I knew that was what was expected of me. Still, his words had shaken me. He was right, I was playing scared. I had bitterly resigned myself to a life of pain and exhaustion, a life I may very well continue to lead as long as I continue to exist on this planet, and it has been that fear and despair that has arguably weighed me down even more than my physical illness. As I said before, there is a staggering amount of grief that comes with chronic illness and disability. Chronically ill and disabled folks are more likely to experience mental health problems throughout their lives, but especially during the “adjustment period” after acquiring said illness or disability. It’s difficult not to “play scared” after years of capitalist conditioning has taught us that our worth is measured by our productivity. Where does our value lie if we are no longer able to contribute to society in the way we’re expected to? For myself, I’m still unsure. While I am of the belief that all human beings are inherently valuable, I’m also stubbornly insistent in my opinion that this belief somehow doesn’t apply to me. That I am forever the pathetically useless exception who doesn’t deserve to be valued or loved exactly as I am, rather than the mythical idealized version of myself that I wish I could be. I need to let her go. For better or for worse, this is who I am now. This is the body I have been given, and it’s no less valuable on the days when I can’t make it up stairs on my own. As my rheumatologist and I parted ways, he left me with a bittersweet assurance. “I can’t promise that I’ll be able to help you, but I am certainly going to try.” I think it’s only fair that I try too.
https://medium.com/no-end-in-sight/when-chronic-illness-has-you-playing-scared-a0f124b9880
['Anna Gerhartz']
2020-07-19 18:19:12.163000+00:00
['Disability', 'Lifestyle', 'Mental Illness', 'Health', 'Healthcare']
7 things to consider before selecting a hospitality recruiting agency
The hospitality industry is known for some pretty demanding work environments. From restaurants and hotels to event organizers, all sub-industries require job candidates to have certain character traits and skills. If you are in charge of hiring, you should first look for candidates with “soft skills,” which tend to be overlooked in the hiring process. If you’d like to know the value of soft skills to hospitality workers, check out “The Ten Most Important Strengths for Success in the Hospitality Industry.” If you are eager to get started with hiring, realize that this is not a one-person job and that you may need some help. In any industry, you can outsource your hiring needs to a recruiting firm. When you head to Google and type in hospitality recruiting agencies, hotel recruiting companies, or restaurant recruiting agencies, it will return a long list of results to filter through. Look for companies with notable introductions, key differentiators, diversity recognition, and strong testimonials to choose a recruiting agency that aligns with your employee culture, customer experience, and business genre. The following list contains the top 7 factors to consider when hiring a recruiting agency in hospitality. Continue reading for more information about why these factors are so important and how you can create your own list. 1. Industry knowledge When searching for a firm, it’s in your best interest to find out where their industry expertise lies. It’s common to find a section such as “Industries we focus on — hospitality” in their navigation menu. However, it should be clear what niche of hospitality they have experience supporting. For example, is their expertise within the hotel industry rather than the restaurant industry? And if so, do they focus on hotel management, and on what kind of hotel? If you compare the Holiday Inn with the Mandarin Hotels, you will see a significant difference in the type of services, culture, and clientele each attracts.
The same applies if they are focused on nuances like fast food versus casual versus fine dining. The type of culinary and front-of-house team you need is very different in each, and all play a vital role in executing your services. Let’s say the recruiting agency’s clients are Burger King, Arby’s, and Sweetgreen, but you are a hotel restaurant. Extra time and effort will be needed for you and the agency to connect on your business practices and find the caliber of people you need who understand the differences between commercial chains and hotel-owned properties. Every business has its niche and its own approach to creating a wonderful guest experience. So whatever recruiting firm you have your eyes on, they should know what makes you different to the core and ideally have experience with your hiring expectations. 2. Turn-key solutions Now that you know industry-specific agencies are where you should start, that’s not the only thing you need to be looking for. More often than not, firms provide more than just one service. Some human resources agencies can provide you with a complete package if you need more than just recruiting help. From HR solutions to talent management to consulting for benefit programs, some offer it all (while others don’t), and it’s best practice to understand what each of those means for you. HR Solutions: a service with extensive offerings, including recruitment methods, staff training, development and design of human resources systems and policies, support for employee relations issues and workforce administration, and implementing or streamlining benefits administration. Talent Management: a service focused on culture, loyalty, and growth. Here, the agency should explain how it can help you maintain and grow your current employees. Look for critical points, such as placing priority on culture and on strategies that help leaders think differently, as well as drive and inspire performance and loyalty.
We suggest you keep an eye out for how they talk about culture, because if they can help you strengthen yours, it will mean the difference between just filling a role at your business and filling a need. Consulting for benefit programs: a service that can help your company take care of your employees. Because benefits packages vary greatly across the hospitality industry, your recruitment firm should understand your particular offerings and be able to communicate things like traditional HMOs versus PPOs to job candidates, as well as dental, vision, life insurance, and more if desired. 3. Search tactics How does the agency find talent? The most obvious method is utilizing job boards and career sites. Job boards can range from generic, such as Indeed, ZipRecruiter, and Monster, to industry-specific, such as ours. Since many job sites and career boards contain millions of job seekers’ resumes, these resources act as a strong pool for recruiters, which is why using them is standard practice. Applicant Tracking Systems (ATS) are used by nearly all recruiters to stay organized and manage their pool of candidates. These systems store the resume and data of each candidate and are built with strong search capabilities; they are excellent tools that allow recruiters to find and vet strong prospects. Another method and tool you can expect is LinkedIn. For recruiters it’s easy to use, though you can expect it to be used more for management positions. This method is popular because recruiters can use LinkedIn’s marketing and advertising tools, which make the hunt for the perfect candidate easier. Networking events are an excellent resource for recruiters, allowing them to meet active job seekers, learn about the positions they are hiring for, and build a network of professionals in the industry. If they mention that they are able to locate candidates through referrals, you might have a killer agency in front of you.
Agencies often stay in touch with candidates they previously placed. Referrals are a great way to find candidates, because there’s someone who can vouch for them. 4. Qualifying candidates How does the recruiting firm qualify candidates? In any industry, it’s essential that you and the recruiter nail this down together. When looking at key factors such as character, level of motivation, hard and soft skills, and preferences, you and the agency should sign off on what that checklist looks like. And keep in mind, a resume can only say so much; if the firm uses a tool like our video interview questionnaire/introduction, it can learn a lot about job seekers in a minute or two (and share the virtual intro with you). For many hospitality jobs, the field work says it all. If you are hiring a chef, cook, bartender, or server, standard practice is to have them come in so you can see your candidate in action. A potential chef can look great on a resume, but if the candidate doesn’t know how to make a proper medium rare steak, then you have your answer. And there is only one way to find out: test, test, test! Vetting methodologies can give you a much clearer picture of a candidate’s learning abilities and analytical skills. Know your recruiting firm’s. 5. Steady communication Will the agency provide weekly search updates? Each day your restaurant goes without a critical staff member, your business and team can suffer. Quickly filling a vacant role with the right candidate can take time, but your recruiting agency should communicate its progress on a regular basis. Set your recruitment goals before you reach out to a talent acquisition firm so you can track them and set expectations. Follow the example below. Be specific: start by clearly defining your recruitment goals.
For example, you need to build a new culinary team, and they need to be able to produce X, Y, and Z. Be timely: aim to have a certain number of candidates within a specific time frame (and make yourself available at certain times to fulfill your responsibilities as a hiring manager). Be resourceful: identify the tools or strategies you need to accomplish your hiring goals. Be realistic: make sure the goals of your business and the agency are aligned. Aligning your goals with the abilities of the recruiting firm is a sure way to success. 6. Tech power How are they screening candidates? Talent acquisition agencies often invest a good amount in systems that help them assess candidates. As mentioned before, ATS systems play a big role in this. The foundation of these programs is to scan resume content and match it against the job description; the system then gives the candidate a score or grade indicating how likely they are to fit the job. However, there is a downside: these systems rely heavily on keywords, which means you can miss out on great applicants simply because they used different terms in their applications. And as you can guess, systems like these don’t tell you everything about a candidate. Ensure that the agency doesn’t rely too much on them and takes a human approach to recruiting to strike a fine balance. Working in hospitality takes grit, talent, and customer-facing “people skills,” so it’s essential you focus on both hard and soft skills. Hard skills and experience can be read on a resume, but personality traits can only be read through in-person, phone, or video interviews. Keep this in mind when you are considering outsourcing your hiring efforts. 7. Team building Your business harbors a unique team culture, so hiring is more than just filling a vacant seat. You’re inviting a person into your business “family,” and the recruiting agency should have a strong sense of who that person should be.
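The keyword matching described under “Tech power” can be sketched in a few lines. The job ad, resumes, and scoring formula below are invented for illustration (real ATS products are proprietary and more sophisticated), but the sketch shows exactly why a strong candidate who uses different vocabulary can score poorly.

```python
# Toy ATS-style scorer: what fraction of the job ad's words
# also appear in the resume? Illustration only.
def keyword_score(job_text, resume_text):
    job_terms = set(job_text.lower().split())
    resume_terms = set(resume_text.lower().split())
    if not job_terms:
        return 0.0
    return len(job_terms & resume_terms) / len(job_terms)

job = "line cook grill station fine dining experience"
resume_a = "five years fine dining line cook experience grill station"
resume_b = "five years saucier at a Michelin-starred restaurant"  # strong, but different words

print(keyword_score(job, resume_a))  # 1.0 -- every keyword in the ad is matched
print(keyword_score(job, resume_b))  # 0.0 -- zero overlap, despite real experience
```

The second resume may describe the stronger cook, yet it scores zero because none of the ad’s exact words appear in it, which is precisely the keyword trap the article warns about.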
While they’re focusing on the right person, know that they’re also keeping in mind the best practices of team building. Oftentimes they can even help you reach goals of diversifying your workplace or bringing more uniquely qualified employees on board. Create your list of needs (using this one as a guide), then compare and contrast agencies and their pros and cons. It is wise to look for human resource firms that can deliver additional value on top of recruiting. We suggest you use your recruitment goals as a path to guide you to the right firm, one that understands hospitality, your business, and its culture to the core. Below you will find recruiting firms in the top U.S. states for hospitality. Florida Recruiting Firms · Epic Staffing Agency · Hospitality Staffing Solutions · RMG Staffing California Recruiting Firms · Brad Metzger Solutions · Boutique Search Firm · Bristol Associates New York Recruiting Firms · Epic Staffing Agency · Hospitality Talent Scouts · RestaurantZone
https://medium.com/@grithp-hospitality-jobs/7-things-to-consider-before-selecting-a-hospitality-recruiting-agency-705c6308686
['Grit Hp - Hospitality']
2020-12-18 14:43:08.250000+00:00
['Hr Talent Management', 'Hospitality', 'Recruiting', 'Talent Management', 'Hospitality Industry']
Script Analysis: “I’m Your Woman” — Part 4: Themes
Read the script for this gripping crime drama featuring a head-turning performance by ‘The Marvelous Mrs. Maisel’ star Rachel Brosnahan. Reading scripts. Absolutely critical to learn the craft of screenwriting. The focus of this bi-weekly series is a deep structural and thematic analysis of each script we read. Our daily schedule: Monday: Scene-By-Scene Breakdown Tuesday: Plot Wednesday: Characters Thursday: Themes Friday: Dialogue Saturday: Takeaways Today: Themes I have this theory about theme. In two parts. First, a principle: Theme = Meaning. What does the story mean? Second, while there is almost always a Central Theme, there are multiple other Sub-Themes at play in a story. Therefore the question “What does a story mean?” takes on several layers of meaning. Time to ponder themes in I’m Your Woman. You may download a PDF of the script — free and legal — here. Written by Julia Hart, Jordan Horowitz, directed by Julia Hart. Plot Summary: In this 1970s-set crime drama, a woman is forced to go on the run after her husband betrays his partners, sending her and her baby on a dangerous journey. Writing Exercise: Explore the themes in I’m Your Woman. What is its Central Theme? What are some of the related Sub-Themes? Tomorrow we shift our focus to the script’s dialogue. Major kudos to Priya Gopal for doing this week’s scene-by-scene breakdown. To download a PDF of the breakdown for I’m Your Woman, go here. For Part 1, to read the Scene-By-Scene Breakdown discussion, go here. For Part 2, to read the Plot discussion, go here. For Part 3, to read the Character discussion, go here. To access over 90 analyses of previous movie scripts we have read and discussed at Go Into The Story, go here. I hope to see you in the RESPONSE section about this week’s script: I’m Your Woman.
https://gointothestory.blcklst.com/script-analysis-im-your-woman-part-4-themes-683493780d
['Scott Myers']
2021-01-21 14:03:30.074000+00:00
['Film', 'Screenwriting', 'Screenplay', 'Movies', 'Cinema']
LinkedIn Provides New Targeting Options Including Lookalike Audiences
The careers network LinkedIn is introducing helpful new targeting features, including Bing search data, lookalike audiences, and audience templates. LinkedIn has established itself as an internationally important social careers network and a key channel for distributing content and ads. Advertisers can find relevant target groups there because users can be segmented by their interests and their jobs. Now the platform is delivering three new targeting options designed to boost advertising effectiveness, thanks to insights from Bing and pre-set audiences. Reach millions of professionals with the right targeting According to LinkedIn, the network now counts 610 million users, who are often described as professionals. It is precisely the information about their jobs and their professional interests that makes these users so interesting to advertisers. Reaching them, however, requires sophisticated targeting. LinkedIn therefore now offers advertisers three new features that can effectively support them: lookalike audiences, audience templates, and the integration of Bing search data into interest targeting. These options are designed to strengthen marketing activities on LinkedIn and increase advertisers’ ROI. The lookalike audiences in particular should make marketers take notice. They match the advertiser’s ideal target groups against LinkedIn’s user database, so that advertisers can reach people who closely resemble their existing customers. In its blog post, the company describes the benefits of lookalike audiences: target groups can be identified that show high engagement, such as likes or visits to your page, and are therefore more likely to convert, and the feature also helps find more qualified potential customers.
In tests, advertisers were able to increase their campaign reach five to ten times while still reaching the high-quality target groups relevant to their brand. “We use lookalike audiences as one part of our strategy to drive lower funnel conversions that were the profitable ones for our clients,” explains Harold Christensen, Digital Account Director at Labelium. In a B2B marketing context, advertisers using Lookalike Audiences can also target new companies that were not previously in focus; these companies likewise match the profile of the ideal customer groups. Lookalike Audience on LinkedIn (click on the image for a larger view), © LinkedIn A Lookalike Audience is created by first creating a Matched Audience in the Audience Manager. Lists of target accounts or CRM contacts can be used here, and audiences that have visited your website can also serve as the basis for audience creation. Refine interest targeting with Bing data Only in January, the careers network had introduced its Interest Targeting solution, which aims to reach users whose interests are relevant to the respective campaign. These interests are determined from the content users share, like, and otherwise engage with on LinkedIn. Interest Targeting is now supplemented with data from Microsoft’s search engine Bing, which allows target groups to be analyzed in even greater detail. From now on, interests derived from LinkedIn are combined with interests inferred from the searches and content Bing’s users come into contact with; the data privacy of these users, however, is to be strictly protected. Interest targeting is a relevant factor at LinkedIn, © LinkedIn Audience templates help launch new campaigns Anyone who advertises on LinkedIn but is not yet sure which target group is optimal for a campaign can turn to Audience Templates, a feature built to help. Initially it provides 20 preset B2B target groups (with more to follow) that can be selected for an ad. 
Audience Templates provide pre-defined audience options (click on the image to get a larger view), © LinkedIn These target groups take into account characteristics such as group memberships, job titles, and the skills users have specified, and they can be selected with one click if they are relevant to the campaign. In this way, LinkedIn wants to save advertisers time when setting up campaigns. Over the next two weeks, the new targeting options will be made available to all LinkedIn marketers. If you want to know more about strong targeting and marketing on LinkedIn, you can also consult the company’s detailed e-book. In any case, networks such as XING and LinkedIn differ from Facebook and Instagram, and they offer marketers, especially in the B2B sector, distinct opportunities to reach qualified and valuable audiences that may promise less casual engagement but more conversions and longer-term brand engagement.
https://medium.com/@shivamsinghspeaks/linkedin-provides-new-targeting-options-including-lookalike-audiences-88abc84e0f61
['Shivam Singh', 'Digital Marketing Specialist']
2019-03-27 14:15:30.866000+00:00
['LinkedIn', 'Marketing', 'Advertising', 'Marketing Strategies', 'Digital Marketing']
Coindelta Lists Decision Token (HST)
Dear Community, Coindelta is happy to list another unique ERC20 token: Decision Token (HST). Deposits and withdrawals are live now, and trading will begin at 6:00 PM IST on 27th June 2018. Pre-requisites before trading in Decision Token (HST): The withdrawal fee for Decision Token (HST) will be 10. The minimum deposit for HST is 15. The recovery fee for HST is 5. The number of blockchain confirmations required for deposit is 25. What is Decision Token (HST)? In an effort to improve democratic processes around the world, tech startup Horizon State has built a digital ballot box secured by blockchain. The company is working with governments across the world to enable e-voting on the blockchain, making it more secure and trustworthy than traditional voting methods. The secure platform allows citizens to cast their votes on smartphone apps, rather than having to queue up at polling stations. The platform is operated through the use of Decision Tokens (HST). These tokens act as access rights for both customers and third-party developers, granting permission to submit votes or opinion polls to a distributed ledger. The platform deploys “digital ballot boxes” that are transparent yet secure, and each and every voter is registered within the system. Key Features of Horizon State (HST): Horizon State delivers a blockchain-secured digital ballot box that manages the formal voting process from end to end. The system maintains the anonymity of all voters. Results and outcomes are not only faster but also much cheaper and more efficient to obtain. It can be adopted freely by organizations of all sizes without their having to change existing infrastructure. The platform is user-friendly and does not require extensive technical knowledge of the blockchain. With unfair voting and biased elections being reported all across the globe, Horizon State is offering the public a highly useful platform that can reinvigorate the democratic process. That is all for now! 
For any enquiries, feel free to reach out to us! Stay tuned for more announcements! Coindelta Website Coindelta Twitter Coindelta Telegram Group Coindelta Announcements With love for the Blockchain and Cryptocurrencies, Coindelta Family
https://medium.com/coindelta/coindelta-lists-decision-token-hst-85577e28c07d
['Vilakshan Bhargava']
2018-06-27 10:20:38.762000+00:00
['Blockchain', 'Cryptocurrency', 'Listings', 'Announcements']
Top Eleven Football Manager for Beginners.
I will walk you through a step-by-step process to develop your team and win historic trophies in your first season of the Top Eleven Football Manager online game. STEP 1: First, you have to keep your team mentally and physically strong for the coming match. Give your first-team squad a full morale boost and condition bonus, which helps them perform better during the match. Never let the condition of any first-team squad player drop into the yellow or red zone. The maximum condition bonus is 100%; always try to keep it above 70% before starting your match. The green up arrow is the morale boost, which should not be allowed to drop. These indicators help your team perform well during the match. STEP 2: Second, you have to train your squad and earn the training bonus so the team gets more organized and plays as a unit. 10% is the maximum training bonus. STEP 3: Choose a formation that suits your team. To find out which formation is good for your squad, just watch live matches and see which players play well and who scores the goals. If your striker looks dangerous, organize your team around your striker. If your winger looks dangerous and scores more goals, play through the flanks. STEP 4: If the opponent's players are stronger than your team, play Force Counter Attack to win that game. AFTER LEARNING THESE STEPS, GO THROUGH THE FOLLOWING LINKS FOR ADVANCED STEPS. Click Here for >>>TOP ELEVEN BEST FORMATION<<< Click Here for >>>TOP ELEVEN COUNTER FORMATION<<< Click Here for >>>TOP ELEVEN TIPS AND TRICKS<<<
https://medium.com/@wilxon-xtha/top-eleven-football-manager-for-beginner-2c98abc2dcaa
['Wilson Shrestha']
2020-12-24 09:03:18.305000+00:00
['Top Eleven', 'Football', 'Football Manager', 'Football Tips', 'Football Manager Game']
Trump Madness -Let’s End It.
The fact that DJT denies defeat is, at the very least, madness and narcissism overdone. The only minus I could find about el señor Biden was/is his age. But that’s what his aides are for, and he will, the world hopes, be amply endowed with their qualities. Trump was lacking these assets, and indeed hundreds of others; Reagan made him look like a fool. A fool he is. A fool he repeatedly proved himself to be. This was well reflected in The Trump Show recently on TV. Trump’s ubiquitous use of Twitter may not drift away from us till mid-January, but its focus will certainly divert from mainstream US policy from that point onward. I believe Biden’s diplomacy skills will far exceed his predecessor’s; the damage Trump did to the West’s relationship with China, who still hold the keys to the castle that is Economic Recovery, was considerable. Biden will also need able lieutenants in his support team; understanding and reacting positively to Asian practices and cultures can be testing. They will be tested. There is too much at stake to trivialise this element of new US foreign policy; disequilibria must be better balanced as we enter 2021. Might China even devalue its currency? We shall have to wait and see. Rob Robert Peach 01 December 2020
https://medium.com/@robpeach/trump-madness-lets-end-it-e6ead7940044
['Robert Peach']
2020-12-01 07:54:10.697000+00:00
['United States', 'Business', 'Foreign Policy', 'Trump']
3: Longest Substring without Repeating Characters
Handling the Exception For most algorithm problems, there is a static component that does not require a more complex solution. In this case, note that the answer always equals the length of the given input when that length is 0 or 1, since an empty or one-character input can have only one valid substring. That being said, we can handle this exceptional case up front: if (s.length() <= 1) { return s.length(); } Declaration of Variables and Data Structure I need a variable ‘max’, which holds the length of the longest non-repeating substring found so far, and I also need a container to store characters; I will use an ArrayList for this case. int max = 0; // (1) List<Character> list; // (2) In this situation, I focus on solving the problem rather than optimizing time and space complexity; showing how to improve the efficiency of the code comes at a later stage.

for (int i = 0; i < s.length() - 1; ++i) { // first 'for' loop
    list = new ArrayList<>();
    list.add(s.charAt(i));
    for (int j = i + 1; j < s.length(); ++j) { // nested 'for' loop
        if (list.contains(s.charAt(j))) {
            if (max < list.size()) {
                max = list.size();
            }
            break;
        } else {
            list.add(s.charAt(j));
        }
    }
}

At the first-level loop, I launch an iteration for each possible starting character:

for (int i = 0; i < s.length() - 1; ++i) { // first 'for' loop
    list = new ArrayList<>();
    list.add(s.charAt(i));

Why does the iteration range go only up to s.length() - 1? Because each iteration compares a character with the ones that come after it; since the comparison is pair-based, we only need starting positions up to the second-to-last character, which guarantees there is always at least one character left to compare against. At the second-level loop, we begin with the character after the starting one, comparing it with the ones already stored in the list. 
Now we use the conditional statement:

for (int j = i + 1; j < s.length(); ++j) { // nested 'for' loop
    if (list.contains(s.charAt(j))) {
        if (max < list.size()) {
            max = list.size();
        }
        break;
    } else {
        list.add(s.charAt(j));
    }
}

If the list already contains the character at the current inner-loop index, we simply end the inner loop, because the aim of this code is to find the “Longest Substring Without Repeating Characters”. When the loop ends here, we need to check whether this substring is longer than the previous maximum, and if it is, we update the max variable:

if (list.contains(s.charAt(j))) {
    if (max < list.size()) {
        max = list.size();
    }
    break;
}

But what if the inner loop reaches the end of the string without ever finding a repeat? In that case the update of the maximum value is skipped, so we add a check outside of the first loop:

max = (list != null && list.size() > max) ? list.size() : max;

The process ends when we return the max variable.
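Assembling the pieces above into one method gives the following self-contained sketch. Note one adjustment on my part: I move the max update to after the inner loop, so that a window which runs to the end of the string without a repeat is measured for every starting index, not only the last one; the class and method names are mine, for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class Solution {
    public static int lengthOfLongestSubstring(String s) {
        // Exceptional case: empty or one-character input.
        if (s.length() <= 1) {
            return s.length();
        }
        int max = 0;
        for (int i = 0; i < s.length() - 1; ++i) {     // first 'for' loop
            List<Character> list = new ArrayList<>();
            list.add(s.charAt(i));
            for (int j = i + 1; j < s.length(); ++j) { // nested 'for' loop
                if (list.contains(s.charAt(j))) {
                    break;                             // repeat found: window ends here
                }
                list.add(s.charAt(j));
            }
            // Measure the window whether it ended on a repeat or ran off the end.
            if (list.size() > max) {
                max = list.size();
            }
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(lengthOfLongestSubstring("abcabcbb")); // 3 ("abc")
        System.out.println(lengthOfLongestSubstring("bbbbb"));    // 1 ("b")
        System.out.println(lengthOfLongestSubstring("pwwkew"));   // 3 ("wke")
    }
}
```

This keeps the article's nested-loop approach (note that list.contains is itself linear, so the total cost is worse than quadratic); a HashSet-based sliding window would bring it down to O(n), which is the usual next refinement.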
https://medium.com/jacob771/three-solutions-to-leetcode-problem-3-longest-substring-without-repeating-characters-d5664d452506
['Seunghyeon Shin']
2021-01-01 20:25:28.355000+00:00
['Computer Science', 'Leetcode', 'Coding Interview', 'Algorithm', 'Data Structure']
SE7EN
in Change Your Mind Change Your Life
https://medium.com/@marcopatino/se7en-1307c0b29f65
['Marco Patiño']
2021-03-06 16:10:24.860000+00:00
['Spanish', 'Cine', 'Seven', 'Español', 'Pachuca']
Why I Can No Longer Trust White Optimism
People Will Let You Down I’m a realist, and I’ve always placed my faith and hope in things and people sparingly for several reasons. Having hope about future outcomes when you’ve seen the worst of people most of your life can be difficult. Being a former street kid for a spell and being in foster care taught me to read people like a newspaper quickly, and correctly. I could tell by one encounter if a person would help, hinder, or harm me. The words and/or actions of friends, caretakers, and social service providers, social workers, teachers, etc., spoke volumes about their character. To just make it short and sweet, people will let you down, and I’m just keeping it real. To pretend otherwise is asking for trouble. I’ve seen the ugliness humanity dishes out from my own mother, family and upbringing in an abusive home, to my professional career fields in sexual assault and childhood sexual abuse, domestic violence, and my time working in social work where I have encountered persons affected by substance abuse, mental illness, poverty, and life-altering, debilitating traumas that sometimes span generations. Watching adults throw away their kids in preference of sex, money, men/women is a hard pill to swallow, but I’ve witnessed it. I’ve seen plenty of human trauma victims neglected. I have watched churches sweep sexual abuse under the rug from their own priests, and I’ve seen desperate people denied for social services because of racism. White people and People of Color have intentionally engaged in microaggressions to be a hindrance for no other reason except I’m Black and they had power to do so. When complete strangers can be evil for no good reason and get away with it, it’s hard to have hope in humanity. Some groups of people wake up every day and choose to be racist or ignore racism. That’s why with faith and people, my optimism is low. When you keep your optimism low, you don’t have far to fall when folks let you down, and they will let you down. 
People like me lay up staring at ceilings at night, wondering how the world can be so heartless, and how people can be so damned barbaric and selfish. How can we see our fellow man struggling, suffering, and oppressed and do nothing about it at all, as if it’s someone else’s problem? Seeing the ugliness and callousness of people so frequently in real-time has caused me to lose faith in humanity. Not all people, but a lot of them, especially White people — which brings me to racism. It’s hard to be optimistic about racism ending when I’ve seen the ugliness of racist White people who are steadfast in their racist convictions and unwilling to change. Not only are White people unwilling to change, they also cannot see themselves as racist. They won’t take the time to learn about systematic and institutional racism that harms us all, and they don’t change their social and political behavior so we Black folks can see changes in ours. How in the fuck do you keep asking Black folks to believe in White people when all they’ve ever done to Black folks is knock us down, keep us down, and/or force us to go along and get along with them in all their evil and fuckery to survive? If we are all supposed to be one big national family, then a lot of our family members are abusive, and it’s impossible to thrive under such conditions. Just hoping and wishing White folks will change doesn’t work. White Family Violence Dashes Black Family Hopes Asking victims of racism to continue being optimistic when the victimization is a reality every single day seems like some type of escapism effort for White people to avoid dealing with the reality of their family violence against Blacks in America. I chose the term family violence because the term encompasses many acts that can include emotional, financial, physical, and sexual abuse that puts immediate family members in danger. 
To me, a family is simply a group of people who have each other’s back and will go to the ends of the earth to care for and protect those in the family from harm. Families aren’t always defined by blood relations or by marriage. There are church families and work families; homeless people often create street families to share in their struggles and to protect those in their families from harm. Citizens of nations often act as families, advocating and protecting national interests. We often view families by the amount of love and respect they hold for each other. But as we all know, in any family, there are bad family members. In America, White people are those bad family members who cause all the trouble in our enormous family, stealing our shit when we’re not looking, infringing upon our rights, verbally and financially abusing family members, and lying to us all to keep us separated. White family members refuse to get help, refuse to acknowledge they have issues, and they refuse to change. Generation after generation, White people in America are the violent family members dashing Black hopes and dreams while they infringe on our freedoms and rights. How are we expected to remain hopeful when so many White family members willfully engage in family violence? No one should really expect Black people to still be hopeful White people will change after all we’ve seen and experienced. The fact White people are still proclaiming they don’t know what to do about racism is all one needs to hear to have their hopes dashed. White people authored our national family violence campaign specifically against Black people and benefit from the family violence, yet tell us they don’t know how to end it. Racism, in any form, is violence. How is it that White people don’t know how to end violence they engage in? 
How do they not even know all the ways they are violent towards Black people and why can they so easily look away from the harm they’ve caused because of their intentional acts of violence and willful neglect? Why are we Black people supposed to sit around and look at White people optimistically after they’ve shown us a 400+ year track record of pure family violence committed against Black and Brown people? The truth of the matter is that more White people are comfortable doing nothing about racism. Just look how comfortable they are since they have elected Biden. They really believe shit is going back to normal and they can keep burying their heads in the sand. For most of us, we knew we were selecting the lesser of two evils. For them, they see the choice as peace (White comfort), fuck progress. America is literally a failed state living through a coup d’état and a pandemic at the same damned time, and White folks are worried about White folks shit like being with family for Christmas, vacations, getting back to normal, the economy and the stock market, and brunches. Not giving a damned about racism got us here. White people intentionally made America a failed state because they love racism and White privilege more than they love the national family. White folks would rather sell me and millions of other Black folks more optimism than to admit it, because White comfort ya know. White folks’ optimism is White folks’ snake oil cure to hide their lack of willingness to face their racism head on, and I’m tired of them selling their shitty optimism to us. I’m also tired of Black folks buying White optimism in such large quantities, because it does nothing for us. White optimism kills. 
White Optimism Kills White folks’ optimism kills Black folks’ hopes, prevents us from having dreams, robs us of joy, steals our inheritances, deprives us of opportunities, allows us to be physically killed as they stand by shocked in silence again and again, and it lets White people off the hook for their racist sins. White optimism allows White people to live full vibrant lives regardless of the potential. White optimism also allows White folks to watch Black people die. White folks have used optimism for far too long to avoid real substantive conversations and change. White folks’ hopefulness and confidence about the future has been weaponized against Black folks. They’ll tell us things are getting better while they do nothing about racial inequality. White folks sell us their old useless hope in their politics. White people sell us hope with their warped capitalistic economic system built on the backs of enslaved people and then pretend we minorities can just work hard to catch up to them when nothing is further from the truth. They even sold us hope in their slave bibles. If there is one thing White people know how to do, it’s sell hopes and dreams to the oppressed. It’s why it’s so hard for me to have hope in White people collectively. So few White folks change, and because so few White people change, Black people and other racial minority groups continue to experience low wealth, higher poverty rates, and fewer power transfers. It all stems from White people and White violence. So, I don’t trust White optimism. It’s nothing more than a pyramid scheme with a high chance of Black folks being disappointed in them. White optimism schemes only work when we Black folks believe in White dreams. Investing in White folks’ American dream is the biggest pyramid scheme Black folks invest in. Just like in pyramid schemes, the only way White people can continue extracting Black wealth is by promising extraordinary returns to new Black recruits if we join their team. 
New recruits are Black people and other suckered non-White immigrant groups vested in and willing to sacrifice for the American dream, not realizing we’ll never have what they have because of racism, and if you have the things they have, it’s because you’ve adopted a zero-sum mindset like most White people. In order for you to win, others must lose by any means necessary. We Black folks spend our entire lives hoping and never achieving, chasing without realizing, never enjoying, and dying when we should be living. White optimism is killing us slowly, and we willingly sign up for it. White Optimism Is White Folks’ Favorite Religion Too many White people treat optimism like some sort of religion. Sadly, a lot of Black people do too. People are running around hoping and having faith in things they shouldn’t. Such optimism causes people to get hurt. Why are we asking Black and Brown people to wait and see if White folks will stop hurting them, even though they’ve seen with their own eyes these people have no plans to stop hurting them? Such optimism is bad for us Black folks. It’s asking us to dismiss what we see with our eyes and to forget our own racialized experiences. White optimism gives too many bad White folks opportunities to continue harming us. Because White people do not experience racism and because they experience White Supremacy differently, it’s easy for them to suggest optimism. I was lying in bed the other morning listening to a business analyst describe the markets optimistically, despite literally seeing businesses in my area permanently close each week. I find it difficult to continue subscribing to White folks’ optimism, because we’re consistently let down by them. From the stock market to protesting in the streets for justice, White optimism is just bad for Black people. Their hopes don’t translate into Black success. Black people and people of color cannot trust White optimism, nor can we afford to anymore. 
White Optimism Wastes Time If Black people and People of Color spent more time working on ourselves and our own communities instead of wasting time buying into White people’s optimism, we’d be much better off as a people, I believe. Investing in White optimism is a waste of time because it allows White people to kick the can down the road, doing nothing about their behavior individually or collectively as the rest of us suffer in this perpetual hell of oppression. We’re smothered to death in White optimism, so much so that as a nation it has rendered us immobile. White people keep hoping things get better and not doing anything substantive, tangible, and long-term to make things better. White optimism led to Trump and Trumpism, Q-Anon, and MAGA. White optimism ignored the Tea Party, ignored the harmful impact of trickle-down, destroyed the earth via environmental terrorism/global warming, forgot White history while at the same time discounting Black history, and has worked quietly to construct fantasies about liberty, freedom, and equality that they know damned well we’ll never have, because they don’t intend for us to have them. Waiting on White people to get better or do the right thing is like waiting on a fever to break without medicine. It might break with no help, but it could also kill you in the process. White Optimism Keeps White People Comfortable White optimism keeps White people insulated in White comfort, and I’m not down with that shit anymore. If my safety, comfort, and ability to care for myself and my family are being impacted by White comfort, then White people are getting ready to be uncomfortable a lot more from now on, because I’m no longer allowing White people to sell me hope and a better future without plans. If you can’t tell me anything better than “I’m optimistic about the future,” save it. Selling optimism makes White folks feel good, and apparently they believe it’s what we want to hear. I don’t. 
Optimism allows White people to change subjects and turn pages we need to address, and their optimism allows them to pretend everything is okay for the rest of us. Because of racism, things are never okay for us individually, for Blacks and people of color collectively, or for the nation. Racism is a national security threat White people have allowed to ruin America, and they act like it’s just another day in the park. Because White people don’t change, nothing ever gets better, so save that optimism, White people. I Can No Longer Trust White Optimism I don’t trust White optimism anymore because it has proven in my lifetime it’s untrustworthy. Black people and People of Color have been asked for far too long to place our hope in White people who have shown themselves to be the same racist, apathetic, liberal and conservative White people they’ve always been. Asking us to continue being optimistic is asking us to forget broken promises, missed opportunities, and silence. Asking us to continue trusting White optimism is asking non-White folks to ignore the millions of people who won’t have money this week because White men like Mitch McConnell and Donald Trump play racist White man power tripping games. We’re going to ignore we tried to tell you this would happen four years ago, and you didn’t listen. We’re going to act like our political system and governments aren’t broken and don’t work for everyone, just keep hoping things are going to change. White people, shit will not change in America because you all don’t change. You believe racism is someone else’s problem, not yours. You believe that because you don’t say the N-word (you just think it when you get mad), give to charity monthly, or are nice to Black people, that’s enough to make things better for us. You White people think hoping for the best will make the best just happen. Let me break the bad news to you: it won’t. Optimism should be removed from the lips and minds of White people in 2021. 
Ya’ll have run this nation into the ground because you’ve lived on optimism for far too long, and you trusted yourselves entirely too much. Next year is probably going to be the worst year of our lives as a nation unless you’re rich and have money to flee this shithole. It’s going to take years for us to get back all we’ve lost, if that’s even possible. We can’t bring back the dead lost to Covid. We can’t undo all the racist attacks we’ve endured over the past 40 years. Optimism won’t get us out of this mess. White optimism won’t stop the next batch of racist Trumps and McConnells from taking office and taking over the government. Only hard work is going to fix this. White people must consciously fight against themselves to keep us from going into full authoritarianism. We fight and take to the streets and White people cling to their little optimism, paralyzed in fear. How in the hell can you be so big and bad when you’re ruling and destroying, but helpless with mending and rebuilding? See how that works. It makes no sense. You can’t hope shit gets better, you gotta do something to make it better. And it’s okay if you don’t, just get used to us cutting off your optimistic outlooks and platitudes, because they are useless to us. We’re fresh out of hope. We’re in the trenches and we’re keeping it real! Asking Black people to be optimistic in the face of racism feels like oppression. White optimism endangers Black lives, and it’s bad for our physical, mental, emotional, and financial health. Normalize challenging and not trusting White optimism. Optimism (the equivalent of faith or hope) without works is a dead faith because the lack of works reveals an unchanged life or a spiritually dead heart. If faith without works is dead, White optimism without action should be dead too. Next year is going to be challenging enough, we don’t need false hopes impeding our survival. If all you can do is hope White people, save it. 
That shit can’t help us out of this mess you’ve made. Hope less and love more. Clearly hope doesn’t motivate you White people to do better; maybe love will. Learn to love otherness and maybe you can save America (finally). I can tell you we are not. I hung up my cape this year. Saving America is on ya’ll White folks, and your optimism cannot and will not save any of us. Optimism without works is dead honey, and America is on life support. Marley K in Quarantine in 2020…still If you enjoyed this essay, follow me @ https://marleyk.medium.com/. If you’re new to this style of anti-racism thought, you may enjoy these essays calling out American racism. Don’t forget to read the comments, they are educational!
https://medium.com/@marleyk/3743399809b
['Marley K.']
2020-12-27 19:52:40.588000+00:00
['White Privilege', 'Hope', 'Equality', 'Racism', 'White Supremacy']
A Christmas Goodbye to a Toad and his Toadies
DRAIN the swamp, Photo by Byron Burns It took a million trials, plus some hearings, I am told, Finally! He’s going. Still, our nerves are rocked and rolled. A man we didn’t read quite right, both cowardly and bold, But not the kind of bold of old, of chivalry, and courage, But more the kind of bold of devilry discourage. We can tip toe outside our homes, for faith and food, to forage. While he’s frothing at the mouth, like a lost and dismal Adolph Gather your own bold. Say “Goodbye” to brown nosed Rudolphs Day after day, plague drifts away: All the creepy doom coughs!
https://medium.com/resistance-poetry/a-christmas-goodbye-to-a-toad-and-his-toadies-736cd1591f85
['Christyl Rivers']
2020-12-18 12:43:31.431000+00:00
['Resistance Poetry', '2020', 'Covid 19', 'Humor', 'Trump']
Answering Your Questions about the COVID-19 Vaccines
How does the immune system work? The human immune system is divided into two parts: the innate and the adaptive. We’re born with the innate immune system, whilst the adaptive is something we develop. The innate immune system is broad, while the adaptive is specialised. The innate immune system consists of cells (phagocytes) which ‘swallow’ and destroy bacteria, viruses and other disease-causing organisms (pathogens). This happens quickly after infection. These cells break up the pathogens into smaller parts which they then display on their surface. Cells called helper T cells ‘read’ these smaller parts and start the adaptive immune response. Cells called B lymphocytes are activated and turn into plasma cells, which start producing antibodies. These are proteins designed to specifically counteract one particular pathogen. They fit around proteins called antigens on the surface of the pathogen. Once bound, they stop the pathogen functioning (in the case of viruses this can stop them being able to invade cells) and help the phagocytes find and swallow them. The helper T cells also activate killer T cells, which find and destroy cells that have been infected by the pathogen. The adaptive immune response is therefore slower. But it lasts. Both B and T cells retain ‘memory’ of that pathogen, so if we are infected again they can start working immediately to destroy it. It’s this memory which is the basis of vaccination. How does a vaccine work, and how long does one take to produce? Even in the early days of what we would recognise as Medicine, people noticed that patients who survived some infections, such as smallpox, would never suffer the disease again. The concept of inoculation was based on this. Dried smallpox pustules were scratched into the skin or blown up the nose of patients. The majority of people would develop mild symptoms but then be immune to smallpox. Some patients would develop full-blown smallpox, and so a safer alternative was sought. 
The story of Edward Jenner, the English country physician, is famous. He noticed how dairymaids who contracted cowpox, a mild disease, never suffered from smallpox. He scratched cowpox pustules into the arms of a boy called James Phipps, who then developed a fever. Once Phipps recovered, Jenner repeatedly injected the boy with smallpox pus. The boy showed no symptoms. The process was called vaccination, from the Latin word for cow.

Vaccines usually consist of a weakened, non-infectious version of a pathogen or a part of a pathogen. The idea is to activate our adaptive response (which is why we often feel unwell following a vaccine) and so give us that ‘memory’, ready to fight the pathogen in the future. But this takes time. Vaccines traditionally take about 10 years to produce. You can hear me talking to my pharmacist colleague Kunal Gohil about the immune system and the process of vaccine production in ‘COVID-19 Part Four: The search for a cure’ at www.takeaurally.com.

So how did we make the COVID-19 vaccines so quickly?

The Pfizer, Moderna and Oxford vaccines have all been made using new methods. The Pfizer and Moderna vaccines use messenger RNA. Human beings (like most life on Earth) store our genetic material as DNA. DNA is like a blueprint for making proteins. The blueprint is ‘read’ and something called messenger RNA (mRNA) is made. The mRNA is used by our cells as a code to make the proteins which we use to live. The Pfizer and Moderna vaccines use mRNA that codes for the spike protein on the COVID-19 virus, wrapped in small fatty molecules to stop the mRNA from being destroyed by our enzymes. The mRNA is read by our cells, which then make the protein to be detected by helper T cells. The Oxford vaccine uses a harmless virus, called an adenovirus, which causes the common cold in chimpanzees.
The adenovirus was altered to express the COVID-19 spike protein. The end result is the same: our helper T cells detect the spike protein and kick off our adaptive immune response.

Although these vaccines have been produced in response to a disease we’ve only known about for a year, the technology behind them has been decades in the making. New ways of making vaccines, called platform technology, have been sought for over twenty years as a way of providing new vaccines quickly to fight a new disease. While the vaccines feel like they’ve been produced overnight, they’re actually the result of a great deal of preparation. In January 2020 the SARS-CoV-2 virus was first identified and its genetic sequence was analysed and published by Chinese scientists. This meant work could begin immediately to produce vaccines using the platform technology. It also puts paid to the idea that the virus was a Chinese conspiracy.

The other reasons are the huge amounts of money, both public and private, given to fund the trials, as well as the number of altruistic volunteer participants. Traditionally, companies would wait until the end of their trials to publish data, but instead they released ‘rolling’ data as it happened. In the case of BioNTech/Pfizer this meant they were able to publish data in October. Scientists and clinicians at the UK Medicines and Healthcare products Regulatory Agency (MHRA) were then able to work day and night to scrutinise over 1,000 pages of results.

How do we know these vaccines work?

BioNTech/Pfizer enrolled 43,448 people. 21,720 were given their vaccine and 21,728 were given a placebo. 170 participants went on to catch COVID-19. 162 (95%) were in the placebo group. Only 8 (5%) were in the vaccine group. This is where the figure of 95% effectiveness comes from. Moderna enrolled roughly 30,000 people and again divided participants into those receiving the vaccine and those receiving a placebo.
95 participants in total caught COVID-19: 90 in the placebo group and 5 in the vaccine group. This again gives us a figure of roughly 95% effectiveness. Oxford-AstraZeneca enrolled over 11,000 people in the UK and Brazil who were either given the vaccine or a placebo. The vaccine group was further divided between people receiving two full doses and those receiving a half dose followed by a full one. The two-full-dose regime was found to be 62% effective in preventing COVID-19, while the half-then-full-dose regime was found to be 90% effective. The reason for this difference is not yet understood.

Will the new variant make the vaccine pointless?

Genetic code consists of letters. Whenever genetic material replicates, those letters are copied. From that copy, a new genetic code is written. This is called transcription and translation. Just as when we copy and type out text, the odd mistake can happen: one letter can be swapped for another. This can lead to mutations. These can be harmful, such as the mistakes which cause cancer. Sometimes a mutation gives the organism a benefit over other organisms, making it more likely to survive and breed and so pass on that advantage. This is the basis of evolution through natural selection.

Viruses are particularly prone to mutation because of how frequently they replicate. Overall, the SARS-CoV-2 virus has shown a low rate of mutations and been quite stable for a virus. Its genetic code consists of 30,000 ‘letters’, and two other mutations had already been identified: one in Spain and one in Danish mink. The COVID-19 Genomics UK (COG-UK) consortium was set up in April 2020 to genetically sequence random positive samples of COVID-19. Since its inception, the consortium has sequenced 140,000 virus genomes from people infected with COVID-19. It was this consortium which picked up a variant of SARS-CoV-2 with 23 mutations, 17 of which may affect its behaviour. One of these mutations causes changes to the spike protein on the virus.
As the spike protein is used by the virus to infect cells, it is possible that this mutation could make the virus more infectious. This ‘variant under investigation’ has been called VUI-202012/01 or B.1.1.7. The variant was identified in September and is likely to have arisen in the UK; as of 15th December it accounted for 20% of viruses sequenced in Norfolk, 10% in Essex, and 3% in Suffolk. It accounted for 62% of new infections in London in the week ending 9th December, up from 28% in early November. Based on computer modelling, it’s been suggested that this new variant is 70% more transmissible than non-variant COVID. The R number, the average number of people each infected person can spread the disease to, seems to be 0.4 higher for the new variant.

Fortunately, one mutation in the spike protein is not likely to render the virus resistant to the antibodies generated against it so far. However, if sufficient changes to the spike protein were to happen then, yes, the vaccine may become ineffective. This is why we need a different influenza vaccine each year, as the influenza virus mutates so quickly. There is some good news, though. Thanks to platform technology we now have a way of quickly producing new vaccines. We have the basics sorted; we would just need to change the mRNA used in the Pfizer and Moderna vaccines or the spike protein expressed in the Oxford vaccine. Of course, since viruses mutate as they replicate, reducing cases in the community through vaccination and social distancing will, as a consequence, also reduce the rate at which new mutations arise.

I’ve seen memes about thalidomide comparing it to these vaccines, how do we know they’re safe?

Just as with any medicine, no vaccine is perfect, although as shown above the risks are far outweighed by those of the disease. Thalidomide is not a vaccine; it was marketed in 1957 for morning sickness and discontinued in 1961 due to birth defects. The problem was with the thalidomide molecule and its orientation.
The ‘left-handed’ thalidomide was safe; the ‘right-handed’ caused birth defects. This is why it is important to monitor the safety of all medicines. Medical legislation in this country is incredibly robust; there are 349 individual regulations in 17 parts to make sure any medication, healthcare equipment or vaccine is safe. This includes the reporting of any ill effects. The emergency authorisation is being constantly reviewed and will be rescinded if the vaccine is found to be unsafe.

All of the vaccine trials have been clear when it comes to reporting the rates of adverse reactions to their vaccines. Oxford-AstraZeneca, Moderna and Pfizer/BioNTech have all reported low rates of adverse reactions. The Oxford-AstraZeneca trial was paused due to three adverse reactions: one was in a patient who had not received the COVID-19 vaccine; one had a high fever, and it wasn’t known which vaccine they’d received as they were still blinded at that point; and one participant who received the COVID-19 vaccine had an inflammation of the spinal cord 14 days after their booster, which settled. The most common reactions included pain at the injection site, muscle pains, headache and feeling generally unwell. This is in keeping with any vaccination, as those symptoms are a sign of it generating the immune response we want.

Last year when I had my influenza vaccine my arm was sore and swelled up at the injection site. This year I felt run down the day after. Both times I took Paracetamol and had a nap. The next day I was fine. Both times were better than having influenza. Having seen the look on patients’ faces as they struggle to breathe thanks to COVID-19 and are taken away to be intubated and ventilated, I can assure you that the mild side effects of a vaccine are better.

Did we approve this vaccine faster due to Brexit?

In short, no.
The UK approved the vaccine before the EU using regulation 174 of the UK’s Human Medicines Regulations, which enables the temporary authorisation of a medicine prior to approval by the European Medicines Agency in the case of urgent public need. These Human Medicines Regulations came into effect in 2012, four years before the Brexit vote. On top of this, EU law allows member states to “temporarily authorise the distribution of an unauthorised medicinal product in response to the suspected or confirmed spread of pathogenic agents, toxins, chemical agents or nuclear radiation any of which could cause harm”. It has nothing to do with Brexit.

I heard this vaccine can’t be stored in most places as it needs to be really cold, is this true?

For long-term storage (about six months) the vaccine has to be kept at -70°C, which requires specialist cooling equipment. But Pfizer has invented a distribution container to keep the vaccine at that temperature for 10 days if unopened. These containers can also be used for temporary storage in a vaccination facility for up to 30 days as long as they are replenished with dry ice every five days. Once thawed, the vaccine can be stored in a regular fridge at 2°C to 8°C for up to five days.

Isn’t natural immunity better?

As far as our bodies are concerned there is no such thing as ‘natural’ immunity. You either develop antibodies through infection or through vaccination. Your body’s response is the same. With infection, you can be seriously unwell as your body’s adaptive immunity kicks in. With vaccination, you develop antibodies without the risks of infection. For example, 0.0001% of patients will experience an adverse reaction to the measles vaccine, as opposed to the 0.2% of patients infected with measles who die. The maths is clear.

Don’t vaccines cause autism?

No. The paper which claimed it did was nonsense.
On 28th February 1998, an article was published in The Lancet which claimed that the Measles, Mumps and Rubella (MMR) vaccine was linked to the development of developmental and digestive problems in children. Its lead author was Dr Andrew Wakefield, a gastroenterologist. The paper sparked national panic about the safety of vaccination. Prime Minister Tony Blair refused to answer whether his newborn son Leo had been vaccinated.

However, Andrew Wakefield held a lot back from the public and his fellow authors. He was funded by a legal firm seeking to prosecute the companies who produce vaccines. This firm led him to the parents who formed the basis of his ‘research’. The link rested on the parents of twelve children recalling that their child first showed symptoms following the MMR vaccine. Their testimony and recall alone were enough for Wakefield to claim a link between vaccination and autism. In research terms, his findings were formed by connecting two events the parents believed happened at the same time. But the damage was done.

The paper was retracted in 2010. Andrew Wakefield was struck off, as were some of his co-authors who did not practise due diligence. Sadly, this has only helped Wakefield’s ‘legend’ as he tours America spreading his message, tapping into the general ‘anti-truth’ populist movement. Tragically, if unsurprisingly, measles often follows in his wake.

Last year the largest study to date investigating the links between MMR and autism was published. 657,461 children in Denmark were followed up over several years (compare that to Wakefield’s research, where he interviewed the parents of 12 children). No link between the vaccine and autism was shown. In fact, no large high-level research has ever backed up Wakefield’s claim. For a more explicit takedown of common anti-vaccine myths click here.

If we can develop a vaccine for COVID-19 so quickly, how come we can’t develop one for HIV?
The human immunodeficiency virus (HIV) is very different from the SARS-CoV-2 virus. HIV infects and destroys helper T cells and so leaves a patient unable to mount adaptive immunity. This means they are vulnerable to opportunistic infections: this is Acquired Immune Deficiency Syndrome (AIDS). Although the virus was discovered in 1984, we have yet to develop a vaccine. This is because although people infected with HIV do form antibodies (this is how we detect infection), those antibodies are not actually able to kill off the virus. HIV has the ability to hide from our immune system by producing a protein which stops the cells it infects from being detected and destroyed. HIV is also able to impair the function of killer T cells. So even if a vaccine were available which produced antibodies, it is unlikely to be able to completely prevent infection.

A much greater success story has been anti-HIV medication, which is able to grind HIV replication to a halt, although not eliminate the virus completely. Successful antiretroviral treatment can make a patient ‘undetectable’ — it is impossible to detect their HIV in a blood test. This means it is impossible for that patient to pass on their HIV to others. The availability of antiretroviral medication to be given to people at risk of HIV exposure (Pre-exposure Prophylaxis or PrEP) or to people within 72 hours of exposure (Post-exposure Prophylaxis or PEP) can greatly reduce infection rates. Both are nearly 100% effective if taken properly. We’ve turned an infection with nearly 100% mortality into a manageable, chronic disease in less than four decades. A future without HIV/AIDS is possible but probably won’t involve a vaccine.

I heard these vaccines use nanotechnology to control us

Nanotechnology conjures up visions of tiny robots swimming in our bloodstream like something from science fiction. Although nanotechnology is real, it doesn’t mean that. ‘Nano’ means ‘one billionth’ or 1 x 10−9.
So a nanometre is 0.000000001 metres, a nanosecond is 0.000000001 seconds, and so on. Nanotechnology basically means technology which creates, uses or manipulates tiny things at the molecular or atomic level. Nanotechnology in Medicine is also called nanomedicine. As these vaccines involve the use of matter nanometres across, such as viruses, mRNA and the particles used to wrap around them, they are classed as nanotechnology, even though not a single tiny robot is involved.

I heard GPs are being paid to give this vaccine to us

General Practitioners in England are not employed by the NHS. Surgeries are private businesses owned by their partners which the NHS pays to provide services in line with a number of contracts. For providing some services, such as vaccination, the GP surgery charges an ‘item of service’ fee to the NHS. This fee covers the costs associated with providing the vaccination and is paid by the NHS to practices. In a letter sent to GPs on 9 November, NHS England said that it had agreed with the British Medical Association that the “Item of Service fee” for a potential Covid vaccine would be £12.58 per dose (and so £25.16 for a two-dose vaccine such as the one produced by Pfizer and BioNTech). The letter also confirms that the fee for the flu jab will remain £10.06. So, yes, they are being paid. But it’s not ‘hush money’ or ‘dirty money’; it’s a contracted amount of money for providing a service.

I heard there is aborted fetal tissue in the vaccines

Sigh. This is where a glimmer of fact has been manipulated. As discussed above, the Oxford vaccine uses a chimpanzee virus. Propagating that virus requires what all viruses need in order to multiply: cells to invade. This is not unique to research involving viruses; a lot of research requires cells. This is when cell lines are used.
Cell lines are mass-produced by taking original tissue and maintaining it to keep a reliable supply for use in research. Not every cell line lasts. Cells naturally have a ‘senescence’ or ageing process and so will die off. Cell lines are ‘immortalised’ either because they come from tumour cells which through mutation overcome senescence (this is how cancer starts) or because they are altered after being sampled. Each cell line has its own name.

The cell line used to ‘grow’ the chimpanzee virus for the Oxford vaccine is called HEK293. It is true the original cells for this line came from the kidney of a female fetus which was either lost to miscarriage or medically aborted in the Netherlands in 1973. Researchers used a virus to make the cells immortal and cultured just one batch of cells. From this batch came a cell line. This cell line has been maintained ever since as HEK293, as clones of clones of clones over 47 years. The immortalisation process means these cells are not the same as the original sample, and the passage of time means those original cells have long gone. The HEK293 cell line was used to ‘farm’ the chimpanzee virus, which is then filtered out of the culture. There is no aborted fetal tissue in this vaccine.

It is fair to say that science has a far from innocent record in this area. The first immortalised cell line, HeLa, was taken without consent from an African-American woman called Henrietta Lacks, from the cervical cancer which killed her in 1951. As they were tumour cells, they were already immortal and so were cultured to produce a cell line. The HeLa cell line continues to be used in medical research in areas such as cancer treatment and the development of the polio vaccine. This is the legacy of ‘the immortal Henrietta Lacks’, whose cells continue to live nearly 70 years after she died. However, no consent was sought or compensation given. Her family were not informed of the cell line until 1975.
The case of Henrietta Lacks is an example of the need for informed consent in scientific research. It’s also important that scientists follow ethical procedures because, as we’ve seen from Mr (not Dr) Wakefield, they can do a lot of harm.

I heard the vaccines will make you infertile

This just makes me want to… Right, sorry about that. OK, let’s take a moment to discuss evidence and science. Let’s say we went up to an astronomer and asked them if the Earth was going to be hit by a comet tomorrow:

Us: “Hi, astronomer.”
Astronomer: “Hello (insert name).”
Us: “Is a comet going to hit the Earth tomorrow and wipe out all life?”
Astronomer: “There is no evidence of that happening.”
Us: “What do you mean?”
Astronomer: “Well, we haven’t picked up a comet on a trajectory with the planet Earth which is big enough to wipe out all life on Earth.”
Us: “So it won’t happen?”
Astronomer: “There is no evidence a comet is going to hit Earth tomorrow and wipe out all life on Earth.”
Us: “I want definite answers. You’re a scientist, come on, is a comet going to hit us?”
Astronomer: “There is no evidence that will happen.”
Us: “So it could happen?”
Astronomer: “There is no evidence it could.”
Us: “But you’re not certain?”
Astronomer: “I’m a scientist, I look for evidence. We have not found a comet due to hit the Earth, so at the moment there is no evidence a comet will hit us tomorrow and wipe us all out.”
Us: “So you’re telling me a comet is going to hit Earth?”
Astronomer: “No, I’m telling you there is no evidence.”
Us: “I knew it, we’re all going to die. This is as bad as you guys faking the moon landings.”
Astronomer: “Please leave.”

Scientific proof is not what we think it is. Scientists have ideas or theories and test them. This involves experiments or observation through studies. The results are called evidence. There are levels of evidence which correspond to how ‘good’ a study is, based on how it was conducted and how the findings can be applied to other settings.
This is fairly obvious: a study conducted in one hospital is not as good as a study involving multiple hospitals across different countries. Scientists can look at the most recent high-level evidence and draw conclusions based on what best explains what they’ve observed. That is scientific ‘proof’. The theory of evolution best explains the evidence gleaned from fossils, genetic inheritance and DNA. The Big Bang theory best explains the evidence from studying the evolution of stars, galaxies and heavy elements and the cosmic microwave background. Observing falling objects and planetary motion is best explained by the theory of gravity. And so on. If observed evidence changes then the theory must change or be rejected for a new one. This is how scientists went from believing the Sun went around the Earth, based on the evidence of seeing the Sun move across the sky, to believing it’s the other way round. As the famous economist John Maynard Keynes put it so brilliantly: “When the facts change, I change my mind. What do you do, sir?”

It is the same in Medicine. We’ve seen that in patients who take Paracetamol, none of them turns purple with yellow spots. We’ve seen patients who take too much Paracetamol develop liver failure. Therefore, there is currently no evidence that taking Paracetamol makes you turn purple with yellow spots, but there is evidence that taking too much Paracetamol causes liver failure.

Somewhere along the line as a society we have started to demand certainty. We also seem to have somehow reached a point where scientific evidence and personal opinion are now one and the same and can be used interchangeably by members of the public and politicians alike. Scientific evidence is not certain. Nor is it an opinion. It is something which follows a constant process of testing, observing, recording and analysis.

And so back to the question. There is no evidence that the vaccines cause infertility. That’s it.
This claim came from concerns that the COVID-19 spike protein, which the vaccines make the body produce antibodies against, also contains “syncytin-homologous proteins, which are essential for the formation of the placenta in mammals such as humans”. The authors, Dr Mike Yeadon in the UK, who has made a name for himself as a contrarian to the scientific consensus during the pandemic, and Dr Wolfgang Wodarg from Germany, who has a history of casting doubt on everything from pandemic definitions to vaccine production, demanded that it must “be absolutely ruled out that a vaccine against SARS-CoV-2 could trigger an immune reaction against syncytin-1, as otherwise, infertility of indefinite duration could result in vaccinated women”.

The building blocks of proteins are called amino acids, and it is sequences of those that make up different proteins. A small part of the COVID-19 spike protein does resemble a part of another protein, called syncytin-1, which is vital for the formation of the placenta. But the sequence of amino acids shared by syncytin-1 and the SARS-CoV-2 spike protein is quite short and not the whole protein. They are not the same. Realistically, the body’s immune system is therefore not likely to confuse the two, attack syncytin-1 rather than the spike protein on SARS-CoV-2, and stop a placenta forming.

As we’ve just discussed, no one seriously wanting scientific evidence would make such a request for absolute proof. An actual scientist would think about this problem. Infected patients produce antibodies just as vaccinated people do. Is there any evidence that infected women lose their pregnancies? This study of 225 women in their first trimester found no increase in early pregnancy loss in those infected with COVID-19. This study compared 113 women pregnant in May 2020 to 172 pregnant in May 2019 and found no increase in pregnancy loss. This study looked at 252 pregnant women infected with COVID-19 and found no increase in adverse pregnancy outcomes.
There is no evidence that the vaccine causes infertility or miscarriages. A couple of attention-seeking ‘truth seekers’ have lit a bin fire and left the serious medical profession to put it out. With that in mind, I am fed up with members of my own profession talking far outside their areas of expertise, cynically or otherwise, during the pandemic and helping to fuel mistrust at a time when we should have stood together. But that’s a blog for another day.

I heard that the vaccine companies can’t be sued if things go badly

Wrong. A government consultation document laid out proposals to potentially authorise a vaccine for emergency use. Existing UK law (as informed by EU law) says that if the government decided to do this, manufacturers and healthcare professionals would not bear responsibility for most civil liability claims. But if the vaccine is found to be defective or not to meet safety standards then: “the immunity does not apply…(and) the UK government believes that sufficiently serious breaches should lead to loss of immunity”. If the vaccines are found to be dangerous or defective, you can guarantee that the companies involved will be sued until their pips squeak.
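A footnote for the numerically minded: the trial effectiveness figures quoted earlier are simply a comparison of attack rates between the vaccine and placebo groups. Here is a minimal sketch of that arithmetic (the function name is my own, and the Moderna arm sizes are approximated as half of the roughly 30,000 people enrolled):

```python
# Vaccine efficacy = 1 - (attack rate in vaccine group / attack rate in placebo group)

def vaccine_efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    """Efficacy as a fraction, from case counts and group sizes."""
    return 1 - (cases_vaccine / n_vaccine) / (cases_placebo / n_placebo)

# Pfizer/BioNTech: 8 cases among 21,720 vaccinated vs 162 among 21,728 on placebo
print(f"Pfizer/BioNTech: {vaccine_efficacy(8, 21_720, 162, 21_728):.0%}")  # about 95%

# Moderna: 5 cases vs 90, assuming roughly 15,000 participants per arm
print(f"Moderna: {vaccine_efficacy(5, 15_000, 90, 15_000):.0%}")  # about 94%
```

Because the two arms of each trial were almost exactly the same size, this works out close to simply 1 − 8/162 for Pfizer/BioNTech, which is where the quoted 95% comes from.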
https://medium.com/@mcdreeamie/answering-your-questions-about-the-covid-19-vaccines-58c176c4b96b
['James Thomas']
2020-12-24 08:33:18.341000+00:00
['Vaccines', 'Covid 19', 'Health', 'Medicine', 'Conspiracy Theories']
Find Success In Your Business, Just Like Malcolm Gladwell
Photo by Diego PH on Unsplash

Malcolm Gladwell has built a career out of sifting through dry research, unearthing overlooked ideas and presenting them in appealing or novel ways for readers or customers. He is best known for writing well-regarded nonfiction books like Blink, Outliers and The Tipping Point, and Gladwell’s approach to his craft can help you succeed in business.

Test Your Business Ideas

Gladwell spends hundreds of hours talking with and emailing other writers about ideas he wants to use in his books. While speaking, for example, Gladwell gauges his audience’s reaction to figure out what is interesting or boring. He also uses arguments from his audience to hone the quality of his work. This practice of publicly testing ideas helps Gladwell learn how to articulate himself clearly and concisely. It also helps him decide what to expand on or cut from his books. He said, “The act of explaining an idea to somebody else is a really good way to figure out how to tell the story.”

You can test business ideas by emailing peers and asking for feedback, and by showing early versions of your work to customers rather than waiting until you’ve finished your product or service. Iterative feedback will help you strengthen the best parts of your products or services and cut what’s not working.

Let Your Customers Own It

Gladwell was surprised by which ideas caught on from his book Outliers, notably the claim that mastery of a skill requires 10,000 hours — or ten years — of deliberate practice. Later, he found himself in a curious position whereby the book’s ideas and arguments were often misconstrued by others. Gladwell said, “Once you’ve written something, it no longer belongs to you. It belongs to your readers. When your readers buy your book, they really buy your ideas, and your ideas become theirs.” When someone buys your product or service, you might wonder why customers like one aspect of it and dislike another.
You can get in front of this issue by recruiting beta, or first, customers. A beta customer provides feedback privately about your product or service before it’s released. A reviewer, however, writes their thoughts on Amazon, Trustpilot or elsewhere after you release a product. You can address the first type of feedback immediately and the second type over the long term. It’s up to you to decide which responses to address and which to pass on.

Finish It, Ship It, Promote It

Your work doesn’t end after you release a product or service or ship a big project. In Gladwell’s debut nonfiction book, The Tipping Point, he explains how little things lead to remarkable results. However, The Tipping Point wasn’t a huge success upon publication. Gladwell said, “The book didn’t do well at first…I got it in my head that if I kept touring and I kept giving talks about it, it might revive. I basically did endless promotions for two years.” The Tipping Point eventually entered the New York Times best seller list as a paperback, and according to Gladwell, “that’s when it was a successful book.”

Promote your work in the right places: with your boss, colleagues, customers or peers. Like Gladwell, spend time meeting would-be customers. Don’t look away just because a deadline has passed.

Discover Your Tipping Point

Gladwell is a successful business writer in part because he understands what his audience struggles with and wants. He takes ideas from different industries and tells stories about them in a unique way that appeals to his audience. Even if you’ve no ambition to become a business writer, use elements of Gladwell’s approach to test your products and ideas and help them succeed.

Ready to supercharge your productivity? I’ve created a cheat sheet that will help you FOCUS immediately. Follow this and you’ll accomplish more than you can imagine. Get the cheat sheet here!
https://bryanjcollins.medium.com/find-success-in-your-business-just-like-malcolm-gladwell-8f853313c66b
['Bryan Collins']
2019-05-07 08:16:00.812000+00:00
['Work', 'Innovation', 'Entrepreneurship', 'Motivation', 'Books']
100 Things You Should Know About People: #99 — Well Practiced Skills Don’t Require Conscious Attention
[Photo: Guthrie Weinschenk playing violin]

I have two grown children. The entire time they were growing up they took Suzuki method music lessons. My son studied violin, and my daughter studied piano. After attending one of my daughter’s piano recitals, I asked her what she was thinking about while she was performing the piano sonata (from memory, no music in front of her). Was she thinking about the dynamics of the music? When to get louder or softer? About particular notes or passages that were coming up? Speed or tempo?

She looked at me in confusion. “Thinking?” she said. “I’m not thinking about anything. I’m just watching my fingers play the song.”

It was my turn to be confused. I turned to my son and said, “Is that how you play the violin in a recital? Are you thinking?”

“No, of course I’m not thinking,” he answered. “I’m watching my fingers play the violin too.”

Muscle memory — The Suzuki method of music instruction (and perhaps other methods too; it’s the only one I’m really familiar with) requires students to intensely practice particular skills on their instrument. In a Suzuki recital students usually do not have music in front of them. All the pieces (often quite complicated ones) are memorized. This requires that particular passages and songs be practiced over and over. A term used in music instruction is “muscle memory”: the piece is practiced so often that the muscles remember how to play it on their own, without thinking involved.

Automatic execution? — If a skill is practiced so well that it is automatic, then it can be performed with a minimum of conscious attention. If it is really automatic then it almost allows multi-tasking. I say almost because multi-tasking doesn’t really exist.

Too many automatic steps can lead to error — Have you ever been using a software application that requires you to go through a series of steps in order to delete an item?
You have to click on the item with your mouse, then press the delete key, then a window pops up and you have to click on the “Yes” button to confirm. You need to do about 25 of these, so you position your fingers on the mouse and keyboard in an optimal way and start pressing and clicking. Before too long your fingers have taken over, and you aren’t even thinking about what you are doing. It’s very easy in this type of situation to accidentally keep deleting past where you were supposed to stop. What do you think? Are there tasks you do automatically?
https://medium.com/theteamw/100-things-you-should-know-about-people-99-well-practiced-skills-dont-require-conscious-attention-88b8aeb1d61a
['The Team W']
2016-09-21 22:11:55.344000+00:00
['Memory', 'Muscle Memory', 'Errors', 'Music', 'Attention']
I could hardly get past the first paragraph, but I did.
I could hardly get past the first paragraph, but I did. No responsible gun owner looks at his self-defense weapon as being able to “kill” a certain number of times. First — that is not accurate; second — that is not why you own a weapon; and third — it is a completely reckless thought. That is the fantasy talk of mass shooters. Your weapon could be used against you in a time of crisis if you do not learn more about being a gun owner. As much as I like emotional descriptions, this does the 2nd Amendment a disservice; guns are serious.
https://medium.com/@rjbella/i-could-hardly-get-past-the-first-paragraph-but-i-did-aaebd0279e95
['Rachel Rogers']
2020-10-21 10:26:08.062000+00:00
['2nd Amendment', 'Self Defense', 'Weapons', 'Guns']
Improper Theft
Improper Theft It’s not robbery, it’s just how life flows through you… I never said she stole my money. My parents are getting it wrong. I mean, she’s my best friend! Why would she steal it, of all people? Suddenly, my door opens and Natasha walks in. She has a glum look on her face, as if the corpse of Hitler suddenly appeared in the US. In her right hand she’s holding, very tightly it seems, a small leather journal. I like that journal — it’s our history journal. Every time we find cool information about history that we want to keep, we write it in that journal. We’ve been recording in it for years, and it’s almost full. However, we haven’t put together an entry in it for over six months. I close the book I was reading and lay it gently on my bookcase. I nod toward Natasha, who takes the hint and plops onto the bed. “So what’s going on?” I ask, but in a lighter voice than I usually do. Something is wrong. “Nothing,” she responds quickly, but she doesn’t look at me. She’s staring intensely at my bookcase, as if searching for something. After a couple more moments of silence, I speak up. “What do you want to do today?” “Er…actually, Julia, I have something to say.” Oh no. “Alright. What is it?” “I was the one who stole the money.”
https://medium.com/sukhroop-the-storyteller/improper-theft-facac692d239
['Sukhroop Singh']
2019-09-09 02:16:55.198000+00:00
['Short Story', 'Life', 'Creativity', 'Creative Writing', 'Art']
Consensus 2018, the Blockchain summit of the year!
Today we want to remind you about one of the best worldwide events for cryptocurrencies and blockchain technology. We’re talking about Consensus 2018, the blockchain technology summit that will feature more than 70 countries and more than 4,000 attendees, among them professionals, start-ups, investors, financial institutions, enterprise tech leaders, and academic and policy groups who are building the foundations of the blockchain and digital currency economy. About Consensus Consensus 2018, organised by Coindesk, will take place in New York between the 14th and 16th of May 2018. The first interesting and positive piece of data this year is the increase in participants, which recorded 125% growth. This important summit will not just be an opportunity to talk about important topics such as the state of blockchain, the new ICO process, the use of blockchain in government, etc. (you can find the full agenda here), but also to make New York the capital of the blockchain. Expectations for Consensus 2018 Last year Bitnovo attended the Consensus 2017 summit, and it was a great experience, not only for the interesting topics covered but also for the positive effects this event helped bring to the cryptocurrency market. Bitnovo at Consensus 2017 — May 22–24, New York Marriott Marquis If we think about the positive momentum last year, just after Consensus 2017, when the price of Bitcoin rose from $1,068 in April to $2,728 in May and cryptocurrencies showed growth in the range of 10% to 70%, it is not difficult to see why all the experts in the sector, as well as the whole crypto community, wish for a positive effect on the dynamics of the cryptocurrency market. The co-founder of Fundstrat Global Advisors, Tom Lee, himself strongly believes that, by the end of the year, the price of Bitcoin will reach $25,000. We only need to wait, cross our fingers and… let’s see what happens ;)
https://medium.com/bitnovo/consensus-2018-the-blockchain-summit-of-the-year-842675af827
['Roberta Quintiliano']
2018-05-15 08:42:09.926000+00:00
['Coindesk', 'Bitcoin', 'Blockchain Technology', 'Cryptocurrency', 'Consensus']
4 Simple Reasons New Startups Fail
Building a startup is one of the most difficult yet rewarding challenges a person can take on in today’s economic climate. Hearing stats like “90% of startups fail” and “123,000 businesses fail every day” strikes fear into many of those looking to try their hand at becoming an entrepreneur; however, all this fear and anxiety hasn’t come close to stopping people from venturing out and trying new things. In fact, some may argue that the ups and downs of entrepreneurship are what draw many into the world of business, and that’s a good thing. These spirited individuals, along with their talented teams, have improved everything from manufacturing processes to human quality of life, and every industry has been impacted by startups and younger companies. While the key to success is different for each business, there are common pitfalls that must be avoided. Below is a list of four simple reasons that new startups fail. 4 Simple Reasons New Startups Fail 1. Focus is Too Wide When first starting a business, entrepreneurs may find themselves at the helm of a ship in the middle of a blue ocean, free to sail in any direction they want. Most companies have a general idea of where they want to go, and it’s important not to lose sight of that, but smart innovators should narrow it down by listening to the market. Theoretically, the more information a company has on market needs and wants, the easier it is for that company to succeed, but that company must use the information properly. Use market information as a compass. Produce products and services that follow that compass. This isn’t to say never pivot; in fact, pivoting is encouraged, but only if the pivot is market driven. Many startups find themselves trying to reach into multiple markets at once and build various products before their initial product is complete.
It’s great to have vision and identify potential expansions of services — investors love to hear this stuff — but operationally, everyone should be focused on executing the current task at hand. All available resources (finances, labor, IP, etc.) should be focused on making sure the initial product/service is a success. When this happens, the entire business becomes easier; team confidence and cash flow improve, investors become more interested, and it becomes easier to repeat that success in other markets. 2. Lack of Insight into Consumer Needs As mentioned above, market information can be used as a “compass.” It’s essential to guiding a business where it needs to go. But what if there’s no compass, or it’s broken? Understanding the needs and wants of the market is the only way a startup can succeed. And all customers think differently. It’s common to find entrepreneurs developing something that they personally think the market wants, rather than building something they know the market wants. How early-stage businesses best do this is a question people have been asking for many years; there are books dedicated to market research best practices. Whether it’s surveys, interviews, or pilot tests… it doesn’t matter. You just need to have insight into the market; it’s not just about asking people what they want, it’s about understanding them. Henry Ford’s famous quote provides a great example of the difference: “If I had asked people what they wanted, they would have said faster horses.” 3. Cost Not Emphasized in Design It’s common for new product concepts never to make it to market. In fact, most concepts never even get to the prototype phase. What the market wants drives the success of businesses, so if the product does not have market appeal, it won’t succeed. On the opposite end of the spectrum, sometimes the visions and concepts are so grand that the cost of achieving them is too high. A great recent example is the Dyson Electric Car.
Dyson has a large and dedicated customer base, and their technology is unmatched in many consumer product categories. So you’d be right to assume their technology integrated into an electric vehicle might produce a highly desirable car; however, the cost of implementation is currently too high compared to what the market is willing to pay. Let the market be a guide not just for the features and aesthetics of a product, but for the cost of that product as well. 4. Poor Product Launch Strategy There are many decisions to be made when launching a new product. Markets evolve quickly and new sales platforms are popping up daily. Options continue to expand, but decisions aren’t getting any easier. The rise of social media, eCommerce sites, fulfillment facilities, etc. has changed the way products are bought and sold. These new tools can be very valuable if they’re used strategically, so young companies must think about how they plan to roll out into the market. Some questions to answer when developing your product launch strategy might be: How much marketing should be done in advance of the product launch? What type of marketing should we do, and on which channels? What will our sales team look like? Will we use affiliates? Brand ambassadors? Online banner ads? Who will we sell to? Wholesalers? Direct to consumer? Distributors? Are we racing competition to get to market, or should we wait until we have a very strong product and then release it? Will we do any pre-sales or exclusive releases? What sort of demand do we anticipate? Will we be OK if demand surpasses our supply? These types of questions are extremely important to think through because once you launch your product, there’s no turning back. There aren’t necessarily right or wrong answers to these questions; they just need to have purposeful answers. Every business has a different strategy, but those who think critically about these questions are the ones who see more success.
https://medium.com/@industrystar/4-simple-reasons-new-startups-fail-92a6fb366c4c
[]
2020-05-28 15:42:53.528000+00:00
['Consumer Products', 'Consumer Electronics', 'Supply Chain', 'Startup Lessons']
Advisory Notes 1: Juan Manuel Almasque
In the first in a series of advisor interviews, TV-TWO Taskforce member Juan Manuel Almasque from FOX International, reveals why he decided to jump aboard the good ship TV-TWO. Hello sir! You know all about us, so tell us more about you… For sure! I started my career at FOX back in 2008, as a Video Operator. This gave me a good grounding in the technical aspects of the business, and soon after I was promoted to Operations Coordinator for the National Geographic channels. I’m now Senior Production Services Manager, a role which has allowed me to enhance my leadership skills (thanks to my great team!) and improve the overall production process. So — television! What’s changed most about it over the last ten years, in your opinion? From a business perspective, the landscape is so much more fragmented — there are so many channels, media platforms, and devices — which makes it harder for big broadcasters to maintain their market share. However, from a viewer’s perspective, fragmentation encourages them to exercise more control over what they watch and be more selective about the content they consume. Technologies that capitalise on viewer habits and understand the way broadcast TV works will bridge the divide. I honestly think TV-TWO is one of the main players in this new market. How and when did you first hear about TV-TWO? I was searching for new ICO projects to invest in and, as I’m part of the TV business, the name caught my attention! Then once I’d read about the project, the team, and their objectives, I wanted to be part of it immediately. I sent a message to Jan and Philipp, and Philipp wrote back to me the next day. We met — digitally — and things started from there! What impressed you most about TV-TWO? The team understands the new ways in which people watch TV, and they’ve built a smart TV app that does it all — and also pays you for watching the shows you love! 
But what really convinced me was their passion, perspective, and how clearly they communicated the idea, along with their project milestones and roadmap. Have you previously been involved in blockchain technology or cryptocurrencies? I started participating in ICOs last year and have since backed more than 20 projects. I really think the blockchain and crypto will change the way we pay for things and give users more freedom. Anything else you’d care to add? Yes. I’m really excited and looking forward to the ICO! I truly believe TV-TWO will change television as we know it, and that it’ll make a big impact on the entertainment market in the very near future. The TV-TWO ICO starts on Tuesday April 24, 2018, at 1pm GMT. For more info, check out our FAQ section, subscribe to our newsletter, and don’t forget to follow the latest news on Telegram, Twitter (@tvtwocom), Facebook, or Medium. We’re also active on Reddit. Keen to find out more about what we’re doing at TV-TWO? Take a look at our whitepaper.
https://medium.com/tvtwo/advisory-notes-1-juan-manuel-almasque-2f24f93ccd0b
[]
2020-04-17 18:19:51.469000+00:00
['ICO', 'Television', 'Blockchain', 'Ethereum', 'Bitcoin']
Experimentation Analysis (CXL course review)
It has been another amazing learning week at the CXL Institute. This week’s focus has been on Experimentation Analysis. The course is taught by Chad J. Anderson, who currently works at Microsoft and also has several years of experience with other reputable organizations. In this course, Chad teaches the principles of going from zero to hero in experiment analysis. He outlines how important analysis is to the success of a testing campaign, putting it very clearly: without exemplary analysis, the data could be interpreted poorly, and that inaccurate interpretation could be used to make crucial decisions that greatly affect the organization. Much of the course involves the R language, which is very useful and highly recommended for data interpretation and representation. Chad explains the concepts, especially those involving computing data and results in R, in very simple and basic terms, so it is not strenuous to follow what is going on through the entire course. Tools Early in the course, a couple of tools are introduced to help in getting started with R, including the R-network website and RStudio. These two are the primary tools for getting started: downloading the software, launching it, and entering code. He also talks about key R concepts, including the combine function, data frames, and packages. Principles of analysis and metric building Knowing what to measure and how to build tests for specific metrics is central to experimentation. The metrics are grouped into three tiers: Top metrics (north star metrics) — these are critically important to business health, for example revenue. Tier two — these are metrics that are not as key but correlate with the north star metrics; they can affect the more important ones and include conversions. They are leading indicators of the top metrics. Tier three — these are the least critical to business health and have a weaker correlation with the top metrics.
When choosing and building on these metrics, it is important to remember the following, as they all affect the validity and accuracy of the data used in decision making: aggregates and averages are essential in building conclusions, but averaging at the wrong level paints a false picture. If you randomize by visit rather than by user and then average, the result will misrepresent the conversion process; the right procedure is to randomize based on the user and then compute the averages. R is also useful for setting up metrics and factor analysis to obtain data on the specific metrics that are primary to us. R gives you room to manipulate the results to reach a specific inference by altering the code to suit your goals. “So for example, let’s say that I have a website and I care not so much about the amount of revenue that a person spends. That’s actually a little bit secondary to me. What I care about is their engagement level, how engaged are they with my website. And how do you define engagement level, right? That’s not a metric that you’re pulling from a data layer. That’s not anything that you’re capturing. Nobody clicks on a button and their engagement goes up. We have to create that metric ourselves. And we create that metric ourselves by combining metrics together and weighting them in a way that we feel is important.” That is Chad explaining in his own words why creating custom reports based on what we want to know is important. The data is formed by selecting the metrics that matter and creating columns for them. The collected data is then analyzed according to a heuristic or a weighted average to get the information you are looking for. Good experimentation analysis Good experiment analysis has three main pillars: 1. Statistical comprehension 2. Trustworthy analysis 3. Honest analysis. Statistical comprehension is the understanding of how you got to the data you have.
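The custom “engagement” metric Chad describes — combining raw metrics with weights you choose — can be sketched in a few lines. The course itself works in R; this is a rough Python sketch, and the metric names, weights, and sample values are purely illustrative, not taken from the course:

```python
# Hypothetical per-user metrics; the names, values, and weights below are
# made up for illustration — in practice they come from your own data layer.
users = [
    {"pageviews": 12, "sessions": 3, "comments": 1},
    {"pageviews": 4, "sessions": 1, "comments": 0},
]

# Weight each raw metric by how much it matters to our definition of engagement.
WEIGHTS = {"pageviews": 0.2, "sessions": 0.5, "comments": 0.3}

def engagement(user):
    """Custom metric: a weighted sum of the raw metrics we selected."""
    return sum(WEIGHTS[m] * user[m] for m in WEIGHTS)

scores = [engagement(u) for u in users]
print(scores)  # roughly [4.2, 1.3] for the sample users above
```

The point is exactly the one in the quote: nobody “clicks a button and their engagement goes up” — you define the metric yourself by choosing the component metrics and their weights.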
An understanding of the process can help in getting the insights as well as in recognizing mistakes later on. It also helps to understand the danger of averages, why certain methods of computation may not apply, and the impact they could have. Trustworthy analysis is the kind that can be relied on: whatever conclusions are drawn from it are reliable and can be used in making decisions. They are a true reflection of the data and lead to better results based on the risk assessment. One major factor that undermines trustworthiness is sample ratio mismatch, where participants are not randomly and evenly distributed between the treatment and control groups. The splitting of participants and the logging of data should be well understood and validated. Chad goes on to explain the relevance of randomization and power: the automated, indiscriminate grouping performed by A/B testing tools maintains the integrity of the process, so any significant result in a randomized controlled trial is due either to pure chance or to a genuine difference between the groups. Honest analysis Here Chad stresses the importance of accepting that we do not know it all and cannot be entirely sure about a certain observed trait. All we do is draw conclusions from data; we are never 100% sure of what influenced the outcome. It is necessary to acknowledge our areas of ignorance and do more research and study in those areas. Statistical power This is the probability of obtaining significant results, i.e., of reaching statistical significance at a given p-value. Significant results, if they exist within your study group for that metric, indicate that a certain behavior will be observed most of the time, up to a certain extent. Significance will not always appear, because traits are randomly distributed, but whenever it does, it should show up consistently. This calls for tests to run to completion even if significance has already been attained.
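The power and sample-size idea can be made concrete. The course teaches this in R; the sketch below is a rough stdlib-Python version of the standard two-proportion normal approximation (the formula is textbook material, not taken from the course, and the baseline and lift numbers are made up):

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_baseline, mde, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for a two-proportion test.

    p_baseline: control conversion rate; mde: absolute lift we want to detect.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # desired statistical power
    var_control = p_baseline * (1 - p_baseline)
    var_treatment = (p_baseline + mde) * (1 - p_baseline - mde)
    n = (z_alpha + z_beta) ** 2 * (var_control + var_treatment) / mde ** 2
    return math.ceil(n)

# How many users per group to detect a 1-point absolute lift
# on a 10% baseline conversion rate with 80% power?
n = sample_size_per_group(0.10, 0.01)
print(n)
```

The formula makes the trade-off visible: halving the detectable lift roughly quadruples the required sample, which is why the sample size is fixed up front and the test is left to run to completion.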
Predetermine the sample size and let the test run unless something is seriously affecting the test or the business. The larger the sample size, the greater the chance of observing significance if it is present — that is what power measures. Towards the end, Chad introduces the different tests, including the F-test, p- and t-tests, Levene’s test, and factorial designs. The primary goal of CRO analysis is also framed as understanding the risk involved in choosing a certain variation in comparison to the others. “An F-test is testing the variance of our data. Let’s just start as we always do with a normal distribution, with a mean of 10 and a standard deviation of 2, right? Normal distribution, we’ve seen it 100 times already. Let’s go ahead and plot the mean, which is exactly 10, which is pretty rare but also kind of cool. And as usual let’s look at a histogram of the data; as always we can see that the average point is centered at 10, and as we start to go out to either side the values become more extreme and also more infrequent. Now we’re doing something a little bit different. This is not a normal distribution. Let me give you an example: imagine that you’ve been talking to your favorite customers about improving your website and they’ve been giving you feedback all the time, and you’ve been going back into your site and designing A/B tests based around that customer feedback. Now imagine if your mean average order value is about $3 and when you run your A/B test, your mean average order value is also $3. But what if in your control version most people were buying around $3 worth of products? But in your treatment version, the people that you talked to, the segment that you catered your site to, are performing much better, and now their average order value is $4.
But the people that you didn’t talk to, a significant part of your site population that also matters, are now under-catered to, and their average order value is $2. Well, you wouldn’t be able to detect this change at all, because the average difference is still the same, and a p- and t-test does not calculate that.” Learning at CXL continues to be an amazing experience even as I near the end of my 12-week course. Next week I will be moving on to the fourth out of five courses.
https://medium.com/@hilarytirimba/experimentation-analysis-cxl-course-review-2ec216211aed
['Hilary Tirimba']
2020-12-13 14:41:51.802000+00:00
['Experimentation', 'Conversion Optimization', 'Analysis']
Red Teaming: From the Military to Corporate Information Security Teams
Are you really going there? Yes, I am aware that attempting to answer the question “What is red teaming?” within the infosec community is a topic that tends to ruffle feathers and stir the pot for my red team peers, but I am writing this article anyway. I am writing this because I have experience as a participant on all sides (I will go into detail about these “sides” later) of military training exercises in two different branches of the U.S. military. I also have experience in these exercises in an information security capacity as well as in traditional combat scenario capacities. For this reason I want to explain red teaming from a military perspective and how that translates to the corporate information security environment. What is a Red Team? The very term “Red Team” comes from military training exercises. In the U.S. Army, soldiers take part in field exercises at their duty stations or designated field areas. These field locations are where soldiers go to practice their soldiering skills, participate in “war games,” and/or get trained up in new skills, military equipment, and military technologies and weaponry. Training exercises can also take place on a much larger scale, with huge multi-discipline training exercises conducted at the National Training Center (NTC) at Fort Irwin, California, or the Joint Readiness Training Center (JRTC) at Fort Polk, Louisiana. The Air Force has “Red Flag,” one of its huge, multi-discipline training exercises, at Nellis Air Force Base, Las Vegas, NV. As a soldier I participated in countless field exercises; I also worked as a defense contractor at NTC, and I was part of an NSA-certified Red Team (Aggressor Squadron) at Nellis, so I have a ton of experience in military training exercises. These training exercises are war games with the purpose of challenging operational processes and practices in order to discover weaknesses in those processes. Why is red teaming important? The U.S.
Department of Defense (DOD) Science Board Red Team Task Force concluded: “We believe red teaming is especially important now. Aggressive red teams challenge emerging operational concepts in order to discover weaknesses before real adversaries do. Red teaming also tempers the complacency that often follows success [1].” So, as red teamers, we need to exercise our defenders in their operational processes and practices so that they can adjust these before the enemy finds them. If the good guys have never seen, or reacted to, a given enemy Tactic, Technique, and/or Procedure (TTP), then they probably won’t be very prepared for it if the bad guys hit them with it. We also need to keep our blue teams from becoming complacent by exercising their defensive muscles. The Anatomy of a Military Exercise Teams. In a training exercise there are three “teams” (the “sides” I referred to above). These teams are also referred to as “cells.” There is the Blue Cell (the team being trained), the Red Cell (red team, aggressors, threat emulation team, Opposition Forces (OPFOR), etc.), and the White Cell. These teams are very important to the point I am trying to drive forward in answering the “What is a red team?” question. I think we can all grasp the idea of red and blue teams, but let’s just say this: the blue team are the good guys RECEIVING training, and the red team are ALSO good guys, PRETENDING to be the bad guys, who are PROVIDING the training. There really aren’t any teams. There is one team. “One team, one fight!” as we used to yell in the Army. I am strongly emphasizing this idea because this can be a pain point for some information security teams when they naturally take the competition factor of war gaming personally. We have to remember that there is no winner or loser in these exercises. We are all on the same team, and we ALL win when we further develop and mature our processes.
Wait, what is this “White Cell?” In a training exercise the blue team is given an objective. They are portraying a military unit of a fictitious nation, while the red team are the opposition forces of another fictitious nation, who are given their own objective. The white cell oversees the entire exercise. They can be thought of as “God,” or “the all-seeing eye.” They are privy to every single thing happening in the exercise in real time. They listen to all communications from both the red and the blue side. They know where everyone is at all times. They manipulate the game as it is played. Red, White and Blue cells Let’s say that the blue team is destroying the red team. They are in the right place at the right time, and are on point with everything they have been trained to do. If the red team keeps operating at their current threat level, will the blue team get any training value out of the exercise? Will they find any weaknesses in their processes? The answer is “No, they won’t.” So how do we change that? How do we provide some training value? The white cell feeds the red cell information to tip the scales, so to speak. The white cell starts telling the red leader where the blue cell will be, and when they will be there. The red team then elevates to a higher threat level. They can now meet the blue team at their level, or go higher. Now the blue team will be challenged and will get something out of this exercise. Remember, we are exercising muscles to defeat complacency as well as allowing our defenders to see the holes in their processes. This all works exactly the same whether it be an Army field exercise, a huge NTC training operation, or a U.S. Air Force Red Flag engagement. These also work the same whether the exercise has a tactical combat training objective or a military information security objective. I’ve worked in them all and on all three of the teams, and I assure you the exercises all work exactly the same.
Again, this idea of the white cell is super important to the point I will be driving home. How Does this Translate to my Corporate Red Team? So, there are some differences in corporate information security red team operations. We don’t have a designated “exercise” time where the blue team can just stop their real-world operations, so red team exercises have to occur WHILE the blue team is actually working. In the military, when we weren’t at war, we were training. The blue team is never NOT at war. How do we then translate this from the military to the corporation? We have a well-documented operational plan in place, with a well-thought-out deconfliction process and well-placed, informed trusted agents at the ready. This is HUGE. As a red team we don’t want to blow operational security (OPSEC) by telling the blue team that the activity they are seeing is us every time they ask. In fact we want published rules of engagement (RoE) that specifically state that the red team will not respond to such questions. The RoE should direct the blue team as to whom to ask such questions in order to properly deconflict if necessary. The goal here is to provide realism for the blue team. We need them to react as they would if this were a real attack, so we don’t want them to think that this simulation is anything other than real. We do NOT, however, want the blue team to burn the network down for a simulation, so we need a solid, well-documented deconfliction process, and we need trusted agents at various points of escalation so that the CIO is not bothered every time the red team hits a nerve. We don’t have a white cell. I don’t see corporations paying security folks to oversee red team engagements as a sole function. It just doesn’t seem very economical to do so. So what do we do? We make the red team the white cell. This is where I will get some push-back, I know. This is simple though, so follow me. The red team’s purpose is to exercise our blue team.
Our blue team is the security operations team (SecOps). If SecOps matures and starts operating at a very high level and stops getting value from red team exercises, then the red team needs an advantage. Remember, there is no winner or loser here. Though I understand that this will naturally become competitive in nature, we must strive to realize that we are not in a competition with winners and losers. We only lose if we are not getting training value. So yes, the red team needs to be able to see what you are doing. We need to see your Jira tickets, your Splunk dashboards, your CrowdStrike dashboard, etc. I can hear the SecOps folks gritting their teeth. This is necessary to provide the best training we can to SecOps on the fly. I have been told that having this advantage is “NOT red teaming.” Well, I argue that it in fact is red teaming. From a military standpoint, and given what the true goal of red teaming is, this is, by definition, red teaming. I’ve been told that “real” red teaming is where the red team is doing black-box testing and can’t see anything blue is doing. I ask why. How can not knowing what blue is doing provide better training value to blue? That IS our goal, right? I know some may rebut with “Well, red can grow with blue.” While this is true, the reality is that they might not. There are many variables that can cause red to not grow with blue, like red team retention, red team size, and team funding. So, if red can’t keep up for whatever reason, then how can you give red the advantage? I’ve given you the answer above. Maybe some of you work on HUGE red teams with unlimited funding and the capabilities of a nation-state threat actor, but I wager that this is not the case for most. I think a lot of the confusion about what red teaming is and/or isn’t arises because offensive security has a couple of functions. A red team operation is NOT a penetration test (pentest). A red team is a function of SecOps with a training objective.
The red team operation answers the question “How effectively can IR respond to TTPs X, Y, and Z with their current processes in place?” A pentest team is a function of vulnerability management and typically has an impact objective. It answers the question “Can we get from A to B and cause X?” Now, red team operations can also include impact objectives, but that is not their sole purpose. The red team might also find and report vulnerabilities during the course of the operation, but again, not as its sole purpose. Pentesting, however, does not provide training. A pentest finds or proves vulnerabilities. One can argue that pentesting is more important, but I won’t get into that one here. The bottom line is that a red team and a penetration test team typically have a shared skillset; hence the confusion between the two. One thing that some of my peers might be thinking: “What about detection, alerting, and tool validation and gap analysis?” I will say that red teaming can be a great tool for this, but purple teaming and penetration testing are better. In a nutshell, purple teaming is where red and blue sit side-by-side while red attacks with a TTP, or a set of TTPs, and blue validates if/when the TTP was detected. I will add that red teaming or penetration testing can take this validation a step further by adding in the element of persistence via a persistent threat. Tool validation via purple team might miss out on this important element. If the detection tool is only configured to detect an attack that is conducted using a specific technique, a persistent attacker might be able to craft a technique that gets around this detection. Without this persistent-attacker element, detection validation might not be very effective. So you can add this element by taking persistence into account in your purple team planning, you can do a penetration test with impact objectives geared towards detection validation, or you can add impact objectives into your red team operation.
If you are going to add them into your red team operation, I ask “why?” What advantage is there to validating your detections via threat emulation that you can’t get with a penetration test, or through persistent purple teaming? Related to the question I just asked, I have to add the following as a point of learning for myself: I have heard of red teams who provide their exercise as a service (EaaS) to corporate teams other than SecOps. I would love to hear feedback as to why. What is the value of a red team exercise with impact objectives over a penetration test? Outside of a red team social engineering test, I am confused as to why this is at all necessary. I am not asking this to be argumentative; rather, I truly want to know the answer from teams that do this. Finally, there you have it. The point I wanted to drive home is that the military knows how to do exercises. They’ve been doing them effectively for a very long time. We can translate that over to the corporate information security structure with a couple of easy transitions. Your red team should be your white cell as well; have a very solid deconfliction plan and RoE that allow the red team to maintain OPSEC.
https://medium.com/@antman1p-30185/red-teaming-from-the-military-to-corporate-information-security-teams-408c040bd87e
[]
2021-04-29 00:39:39.765000+00:00
['Blue Team', 'Information Security', 'Penetration Testing', 'Red Team', 'Training Exercises']
Forecasting HubSpot Deal Revenue and Project Resourcing using dbt, Google BigQuery and Looker
Mark Rittman · Jan 7 · 4 min read

Last week in our blog on using dbt and Looker to analyze HubSpot CRM data we talked about metrics that let us look back at the history of a deal to understand deal velocity and measure open and active deals at any particular point in the past. In this blog I wanted to share how we use that same data to forecast our likely revenue in the coming months, and how we use this forecasting technique along with some custom HubSpot deal properties to predict what resources and skills we’ll need to deliver the projects we’re forecasting. In our HubSpot deals table, produced by the dbt models we used previously to tidy up and cleanse the raw data coming from HubSpot via Stitch, a typical row of deal data contains these columns relevant to a revenue forecast:

- The predicted start and end date of the deal, typically a consulting engagement of two or three two-week development sprints, along with the duration of the engagement in days
- The sales funnel stage for the deal as of now; we’ve excluded those deals either lost, or won and already delivered
- The predicted deal value, and a probability percentage based on current deal stage that, when used as a modifier (multiplier) with the deal amount, gives us a weighted pipeline deal value

What we’d like to do is forecast, based on the estimated start and end dates of our open deals and the current weighted pipeline value of each deal, what our revenue numbers are likely to look like over the total duration of those deals. In dbt we can do that by taking this deals data and using it to work out what revenue each deal brings in over the time it’s active, then joining those daily revenue amounts to a date array table we create on the fly to first aggregate each deal’s total revenue over those days, and then calculate the total revenue per month from all of those forecasted deals.
{{ config( materialized='table' ) }}

WITH daily_weighted_revenue AS (
    SELECT
        *,
        (amount * probability) / NULLIF(contract_days, 0) AS contract_daily_weighted_revenue,
        amount / contract_days AS contract_daily_full_revenue,
        (amount / contract_days) - ((amount * probability) / contract_days) AS contract_diff_daily_revenue
    FROM (
        SELECT *, TIMESTAMP_DIFF(end_date_ts, start_date_ts, DAY) AS contract_days
        FROM (
            SELECT dealname, deal_id, amount, probability, start_date_ts, end_date_ts
            FROM { }
            WHERE stage_label NOT IN ('Closed Lost', 'Closed Won and Delivered')
            GROUP BY 1, 2, 3, 4, 5, 6
        )
    )
),

months AS (
    SELECT *
    FROM UNNEST(GENERATE_DATE_ARRAY('2019-01-10', '2024-01-01', INTERVAL 1 DAY)) day_ts
)

SELECT
    deal_id,
    DATE_TRUNC(day_ts, MONTH) AS month_ts,
    SUM(revenue_days) AS revenue_days,
    SUM(daily_weighted_revenue) AS weighted_amount_monthly_forecast,
    SUM(daily_full_revenue) AS full_amount_monthly_forecast,
    SUM(daily_diff_revenue) AS diff_amount_monthly_forecast
FROM (
    SELECT
        deal_id,
        day_ts,
        COUNT(*) AS revenue_days,
        SUM(contract_daily_weighted_revenue) AS daily_weighted_revenue,
        SUM(contract_daily_full_revenue) AS daily_full_revenue,
        SUM(contract_diff_daily_revenue) AS daily_diff_revenue
    FROM months m
    JOIN daily_weighted_revenue d
        ON TIMESTAMP(m.day_ts) BETWEEN d.start_date_ts AND TIMESTAMP_SUB(d.end_date_ts, INTERVAL 1 DAY)
    GROUP BY 1, 2
)
GROUP BY 1, 2

Bringing the table that dbt creates for this SQL query into Looker and joining it to our main HubSpot-sourced deals table, we can now predict revenue over the coming few months based on the start date and length of each deal, with each deal value weighted based on its current deal stage to allow for the fact that not all deals will close. In addition to recording revenue for our deals in HubSpot, we also use custom deal properties to record aspects of the deal such as what products are likely to be part of the solution, how many sprints of what type will be needed, and who within the team we’re thinking of assigning the project to.
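To make the spread-and-aggregate logic of the model concrete, here is a small plain-Python sketch of the same calculation: spread each deal's probability-weighted value evenly over its contract days, then roll the daily amounts up into monthly totals. The two deals and their figures below are invented purely for illustration; the real model reads them from the HubSpot deals table.

```python
# Illustrative re-implementation of the dbt model's forecasting logic.
from collections import defaultdict
from datetime import date, timedelta

# Made-up example deals; the real model reads these from the deals table.
deals = [
    {"deal_id": 1, "amount": 30_000, "probability": 0.6,
     "start": date(2020, 1, 10), "end": date(2020, 2, 21)},
    {"deal_id": 2, "amount": 12_000, "probability": 0.9,
     "start": date(2020, 2, 3), "end": date(2020, 2, 17)},
]

monthly_forecast = defaultdict(float)
for deal in deals:
    contract_days = (deal["end"] - deal["start"]).days
    daily_weighted = deal["amount"] * deal["probability"] / contract_days
    day = deal["start"]
    while day < deal["end"]:  # mirrors the SQL join up to end_date - 1 day
        monthly_forecast[(day.year, day.month)] += daily_weighted
        day += timedelta(days=1)
```

Each deal still contributes its full weighted value overall; it is just split across the months its engagement spans, which is what the weighted_amount_monthly_forecast column gives us in BigQuery.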
Now I can predict the overall and individual workload for each of our consulting team. I can see how these forecasted projects are being priced. And finally, useful for training and recruitment planning, I can see what technology skills our team need to have to service these deals, and when. For more details of what we can do for you with dbt, check the new dbt Solutions page and Data Centralization solution page on our website. Originally published at https://rittmananalytics.com on January 7, 2020.
https://medium.com/mark-rittman/forecasting-hubspot-deal-revenue-and-project-resourcing-using-dbt-google-bigquery-and-looker-835827ff079c
['Mark Rittman']
2020-03-30 08:59:02.379000+00:00
['Looker', 'Google Big Query', 'Dbt', 'Bigquery', 'Hubspot']
’Tis the season for open source!
Millions of developers and companies, including Microsoft, benefit from open source software (OSS). Over the years, Microsoft has grown to embrace open source and today has some of the most active repositories on GitHub. In fact, the majority of my work on web apps and frameworks at Microsoft revolves around open source software. I ❤️ open source. It gives me a sense of accomplishment, community, and purpose. For me, it’s an everyday thing that keeps me connected, sharp, and always learning. At its core, open source, and software in general, is about people. People, from all over, working together to build and improve shared tools. Whether it’s FOSS February or Hacktoberfest, anytime is a great time to contribute to open source projects. In keeping with the spirit of giving back, this December it’s time for 24 Pull Requests, a yearly initiative to encourage developers around the world to send 24 pull requests between December 1st and December 24th. Getting involved in open source can be a bit intimidating. Luckily, there are guides to help you get started and find projects that might interest you. Contributions can take many forms, including improving documentation and examples, fixing issues and bugs, hacking on type systems and test coverage, or adding missing features to your favorite projects. Even small contributions can make a big impact on projects and their maintainers. If you’re looking for a place to start, consider contributing to a project you already use. Happy contributing!
https://medium.com/web-on-the-edge/tis-the-season-for-open-source-e5d7737e2da9
['John-David Dalton']
2017-12-01 17:20:57.644000+00:00
['Vscode', 'Sonar', 'Open Source', 'Microsoft Edge']
Introduction to Keras, Part Two: Data Preprocessing
In the first part of this series, you implemented various data loading techniques. You saw how to load images, text, CSV files, and NumPy arrays into your Keras workspace. Now, to enable the model to make proper use of your data, you have to convert it into an understandable format which can then be interpreted by a deep learning algorithm. To do this, you have to preprocess your data. Importing Keras First, import the Keras library into your workspace. from tensorflow import keras Now you’ll see how to preprocess various kinds of data. So here’s the challenge: every section has a task and its solution. Before you jump to the solution, try solving the given task; that way, you’ll be able to understand the preprocessing techniques involved. Preprocessing Text Data
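The article breaks off here, but to give a feel for what word-level text preprocessing involves, here is a small plain-Python sketch of the fit → sequence → pad pipeline. The function names below are illustrative stand-ins, not the Keras API; in Keras itself you would reach for Tokenizer and pad_sequences from tensorflow.keras.preprocessing.

```python
# Plain-Python sketch of word-level text preprocessing: build a word index,
# convert sentences to integer sequences, and pad them to a fixed length.
# Function names are illustrative, not the Keras API.

def fit_word_index(texts):
    """Assign each word an integer id, most frequent first (0 = padding)."""
    counts = {}
    for text in texts:
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1
    ranked = sorted(counts, key=lambda w: (-counts[w], w))
    return {word: i + 1 for i, word in enumerate(ranked)}

def texts_to_sequences(texts, word_index):
    """Replace each known word with its integer id."""
    return [[word_index[w] for w in t.lower().split() if w in word_index]
            for t in texts]

def pad_sequences(seqs, maxlen):
    """Right-pad with 0 (or truncate) so every sequence has length maxlen."""
    return [(s + [0] * maxlen)[:maxlen] for s in seqs]

texts = ["the cat sat on the mat", "the dog ate my homework"]
word_index = fit_word_index(texts)
padded = pad_sequences(texts_to_sequences(texts, word_index), maxlen=8)
```

The real Keras Tokenizer additionally strips punctuation, supports an out-of-vocabulary token, and caps the vocabulary with num_words, but the overall shape of the pipeline is the same.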
https://towardsdatascience.com/introduction-to-keras-part-two-data-preprocessing-e1377d09fac
['Samhita Alla']
2020-12-25 21:52:25.102000+00:00
['Keras', 'Data', 'Deep Learning']
Coronavirus Is Giving Prisoners a Death Sentence
As most of the United States fiddles with the idea of seeing loved ones for the holidays, more than 2 million incarcerated people are faced with this challenge: staying alive during a global pandemic that has claimed more than 250,000 lives in the U.S. U.S. correctional facilities — like the rest of the country — are recording unprecedented spikes in coronavirus cases. Michigan just recorded more than 50,000 COVID-19 cases in the last week, a total that took the state more than two months to reach at the start of the pandemic, and many states face the same dilemma. Correctional facilities alone recorded 13,657 new coronavirus cases the week of November 17, according to the Marshall Project; the previous week saw 13,676 new cases. These are the highest totals since the pandemic began, and it’s been grim since. The American prison system is the perfect breeding ground for such a virus. While most outside the system are advocating for mask-wearing and social distancing, the reality is that prisoners aren’t able to do either: prisons are overcrowded, most are poorly ventilated and old, and hygiene standards are difficult to maintain. While some of the data can be hard to pin down, what’s available is frustrating: as of mid-November, more than 196,000 coronavirus infections had been reported among state and federal prisons. More than 1,450 of those prisoners have died. Case rates among inmates are more than four times that of the general public, and death rates are more than twice as high. (Photo: Spencer Weiner/Los Angeles Times) The U.S. prison system also employs more than 685,000 people — guards, nurses, etc. More than 45,000 prison staff members have been infected and around 98 deaths have been recorded to date. Their case rates are three times as high as the general public’s. And these are just the cases that have been reported; the actual numbers are much more gruesome.
This issue is even harsher on rural communities; 40% of prisons are located in counties with fewer than 50,000 people, which are nowhere near as equipped to handle such an outbreak. Even a more modest outbreak would be difficult to handle with few ventilators or I.C.U. beds available. The pandemic should have been handled with more aggression, both at the state and federal level. Current lame-duck President Donald Trump downplayed the virus, even gaslighting a nation while he claimed fear of hysteria was an excuse not to act immediately; his actions (or lack thereof) are inexcusable. Some governors, such as Georgia’s Brian Kemp, have refused to set mask mandates, or any mandate to slow the spread of the pandemic further. Other states have taken it upon themselves instead. Michigan governor Gretchen Whitmer announced a three-week emergency order prohibiting outdoor sporting events, dining at bars and restaurants, and other indoor or group activities; the order went into effect last week. Meanwhile, New Jersey governor Phil Murphy signed a bill in October that permits prisoners with less than a year left on their sentences to be released up to eight months early. Already, 2,000 people have been released and more than 1,000 further releases are anticipated. Obviously, a lot more still needs to be done. One short-term solution is diverting minor infractions, such as parole and probation violations, into “noncustodial penalties,” and limiting the number of pretrial detentions by reducing or eliminating bail. It’s also key to minimize the effect the virus has on communities and those involved. More comprehensive standardized testing and reporting requirements would be helpful, in addition to offering testing prior to release. Managing the coronavirus crisis is difficult and isn’t something that will be achieved overnight. It’s a process and will require sustained effort by all parties involved.
It’s easy to ignore what’s happening right now in prisons, but ignoring it helps no one. Prisoners’ welfare is very much linked to everyone else’s. America’s failure to heal the prison population is both a public health catastrophe and a moral disaster.
https://medium.com/@bradleyalanlaplante/coronavirus-is-giving-prisoners-a-death-sentence-3a3a3071a81a
['Brad Laplante']
2020-11-22 18:49:01.367000+00:00
['Coronavirus', 'Pandemic', 'Covid', 'Prison', 'Jails']
Meta Messaging
Messaging is usually a bit of a mess for regular people. Why? Because regular people are varied, layered, and complex, you know… human. The opposite is true for haters — messaging is easy for them because they are singular. They’re simple. They just hate. Nothing much more to it — very little effort needed. There’s nothing sophisticated or complex behind hate. It’s a very easy, one-stop emotion. Thinking is always its enemy. In fact, haters abhor anything virtuous — anything that takes effort or responsibility like honor, discipline, education or, God forbid, the arts. They can’t handle any of that. Haters are forever Pavlovian mouth-breathers waiting for the bell. Haters are by default lazy and selfish. Laziness and selfishness certainly go well together. It’s always easier to paint the complexity of others as something bad than it is to make any personal effort, understand, or grow. And the laziest and most selfish of all, the Kings of Hater Hill, are the racists and other bigots. They are the most incapable of compassionate effort or thinking beyond themselves. Facts, truths, details, kindness, empathy, etc… for the Kings of Hater Hill these all get lumped together as “disarray.” Politically, the Right loves to label the Left as always being in “disarray” (mainstream media does this too, but they do it more for monetary gain). “Disarray” is a very effective spin-and-dismiss tactic for haters. Anyone who can walk, think, and chew gum at the same time is in “disarray.” So the big question is: how do you fight against a mob mentality of simplistic emotion? You stop trying to ‘understand’ them. They are an emotion. They are not rational. They are hate. Quit giving them oxygen. Regular people just need to stop trying to explain, accommodate, or tolerate hate. For example: a racist is a racist is a racist. Racism is pure, high-grade hate. There’s no mystery to it.
The only true ‘disarray’ regular people get into is when they waste their time on messaging coming out of racists. Ignore it! It’s useless noise. It’s evil. Always always always condemn it, fight it, and stop it any way you can, but ignore hate’s noise. It’s negative messaging. Listen, you can’t win against an emotion. It is irrational. It is unreasonable. Stop giving hate rights. Stop allowing it a chair at the table. Quit treating it like it matters. Go ahead and let haters label you as being in “disarray.” In fact, stand up proud of it. Who cares? The “disarray” they speak of is only a mirror of their own spiteful ignorance and lack of humanity, not the wonderful strength, beauty, and effectiveness of yours.
https://medium.com/@iriezozobra/meta-messaging-5952c6086c8a
['Jerry James']
2021-11-16 06:02:23.279000+00:00
['Kindness', 'Humanity', 'Disarray', 'Truth', 'Messaging']
Do we really need friends?
Since a young age I have never had true friends. Of course, I would talk to some girls in school and socialize at different family occasions, but I never had real friends like my brothers had, the ride-or-die type. I would always think that there was something wrong with me: like, how is this even possible?! Anytime I found someone who could be a potential friend, I would get very close to them and, after a while, realize that we were such different people, and it would break me, again, and again, and again. Eventually, I gave up on trying to find good friends. I started studying and reading; and to be honest, I learned so much I would never have been able to if I had friends around. It got me thinking that maybe people do not need friends; maybe there are different kinds of people, and the kind to which I belong does not need friends and is simply better off without them.
https://medium.com/@borodarivka/do-we-really-need-friends-65fe01e53c74
['Rebecca B.']
2020-12-26 21:20:42.244000+00:00
['Friends']
Treat Your Baby’s Skin The Natural Way With These Remedies
Photo by Colin Maynard on Unsplash If you’re a parent, you’re probably already aware of how sensitive a baby’s skin can be. While many baby products claim to be better for baby’s skin, they can still contain chemicals that can irritate your child, making them uncomfortable and irritable. Natural skin care products contain fewer harsh chemicals and are more soothing for gentle cleansing and moisturizing. Natural skin care products have gotten a lot of attention in the past few years. We are finally realizing that slathering ourselves in chemicals might not be the best idea. And if those chemicals are harmful to adult skin, imagine how harmful they may be for a baby’s skin. With a little research you may be able to find some skin care brands whose products are completely natural and appropriate for your baby. However, if you don’t trust your baby’s sensitive skin to even natural store-bought products, then you can make your own skin care items. Here are a few quick tips and recipes to get you started, but you can find plenty more on natural parenting websites. For a great dry skin remedy, use your food processor to grind plain oatmeal into a fine powder. Dissolve it in a warm bath to moisturize your child’s skin and relieve itching from various causes, including dry skin and chicken pox. Olive oil makes a great natural moisturizer for your baby’s sensitive skin; just massage it in as a healing remedy. You can also use it as a diaper cream to help with painful rashes. Instead of expensive diaper powders, use plain cornstarch. Not only is it inexpensive, it’s a lot safer and less irritating to sensitive skin. Don’t stop with powders and creams; you can make your own baby wipes too! Start by cutting a roll of paper towels in half to make them a convenient size. You can then put the towels in a bowl of premixed natural solution.
While there are hundreds of possibilities, you can start by trying 2 cups of water mixed with about a quarter cup of aloe vera gel and a few drops of tea tree oil. Once you’ve thoroughly soaked the wipes, keep them fresh by recycling an old baby wipe container. Also keep in mind that babies don’t really need many products, natural or not. To keep your baby’s skin from drying out too much, don’t bathe him or her every day. Instead, use a soft cotton cloth with a little warm water to clean between fingers, toes, underarms, etc. Dry the skin thoroughly afterwards. By not over-bathing your baby you shouldn’t need a moisturizer; if you do, you can simply use a little natural oil as mentioned above. When it comes to healthy baby skin, less is more. That, coupled with a few natural solutions as listed here, is all you really need to keep your baby’s skin healthy and natural.
https://medium.com/@fwilson1066/treat-your-babys-skin-the-natural-way-with-these-remedies-fcc3482f7afb
['Fleur Wilson S Daily Health']
2020-11-06 06:03:24.903000+00:00
['Smooth Skin', 'Health', 'Baby', 'Natural', 'Pregnancy']
Stop the Execution of Childhood Abuse Victims
As we begin to gather with family and friends this Thanksgiving season, every faith tradition asks us to remember people who are alone, imprisoned, orphaned and oppressed. Although I have worked for the past few decades on behalf of survivors of trafficking and injustices within the criminal justice system, I still am caught off guard by the lack of understanding of how early severe childhood trauma affects young women physically and mentally and how that translates into gross injustice. This year, in addition to walking alongside thousands of survivors, I am thinking of someone I have never met and have no connection to named Lisa Montgomery. Lisa is a survivor of child abuse, domestic violence, incest, multiple rapes and child sex trafficking. Lisa’s mother started sexually trafficking her when she was a small child, including allowing her to be gang raped by adult men on multiple occasions telling Lisa she had to “earn her keep.” At 17, she was married to her stepbrother, who videotaped his torture of her. Her years of violent abuse at the hands of family members resulted in documented brain damage and severe mental illness that severed her connection with reality. Without antipsychotic medication, Lisa would lose the ability to understand what was happening to her and have difficulty determining what was real. Dr. Katherine Porterfield, a renowned expert on torture and trauma, has testified that the impact of Lisa’s sexual abuse was “massive,” and that her disorder was “one of the most severe cases of dissociation I’ve ever seen.” Lisa was arrested on December 17, 2004 and sentenced to death on October 22, 2007 for the murder of Bobbie Jo Stinnett. Lisa’s crime reflects her serious mental illness and abuse. She killed Ms. Stinnett, who was 8 months pregnant, to claim the baby as her own. It was a violent and sick crime committed by a victim of severe trauma and mental illness. 
While I believe Lisa should remain imprisoned and under psychiatric care for the remainder of her life to ensure her community’s safety, I do not believe that she should be executed on December 8th. Lisa’s tortured and traumatic upbringing, the injustices in her representation, and her mental health before and during the horrific crime make commuting her sentence the just and right thing to do. As our country continues to grapple with our culpability in the systems that have allowed child sexual trauma and Lisa’s abuse to continue unchecked for decades, we should offer grace to those we’ve failed. To learn more about Lisa’s story and ways you can advocate on her behalf, go to www.savelisa.org. This Thanksgiving let us pledge to continue our work on behalf of women and children like Lisa Montgomery. Let us fight to keep them safe, remove them from harm, provide spaces to harbor them from violence, work to make our laws more just, and provide the employment and mental health services they need to survive. — Becca Stevens is a speaker, social entrepreneur, author, priest, founder of eight nonprofit justice enterprises, and President of Thistle Farms — a national nonprofit that provides healing and hope for women survivors of trafficking, prostitution and addiction. Becca leads important conversations around the world with an inspiring message that love is the strongest force for change in the world. Becca resides in Nashville with her acclaimed songwriting husband, and her three sons. ThistleFarms.org
https://medium.com/@RevBeccaStevens/stop-the-execution-of-childhood-abuse-victims-9d8e31810998
['Rev Becca Stevens']
2020-11-10 18:00:47.575000+00:00
['Trafficking', 'Child Sexual Abuse', 'Criminal Justice', 'Child Rape', 'Death Penalty']
Applying Science, Creating Tomorrow
Vaibhav Tidke’s memories as a young child are peppered with instances of visiting and shopping at the weekly fruit and vegetable markets. As he grew up, he developed a strong inclination towards applying science to address the problems of this community. In college, he established a not-for-profit, ‘Science for Society’, with a few of his friends, and during his final year collaborated with the National University of Singapore (NUS) on a project to study the relationship between dehydration and agricultural harvest. This project was not only aligned with his motivation to create social impact, but also formed the knowledge foundation for his future entrepreneurial pursuits. During his post-graduate programme, Vaibhav sustained his earlier inclination and, along with a friend, developed a Solar Conduction Dryer (SCD) under the umbrella of Science for Society. As the name suggests, the SCD uses solar power to convert fruits and vegetables into their respective dehydrated forms. In 2010, recognition came when Bayer AG recognised the SCD as a global innovation; the team was also awarded €1,000. Receiving the Dell Award in 2012 further bolstered the team’s confidence in their technology and its potential to make an impact. This is when the team of four friends who had started Science for Society decided to set up an enterprise which could commercialise its technology. S4S (an acronym for Science for Society) was founded in 2013 as a food preservation company inventing new food processing machines. With technology at its core, S4S is working with a range of partner organisations to create a sustainable supply of preserved food products. While the technology gained recognition and momentum, there were many sector-specific challenges that Vaibhav had to face. Long gestation periods and payment cycles, the low adoption rate of technology, and capital-intensive operations put immense financial pressure on S4S.
Being entirely at the mercy of the weather, the agricultural cycle, and consequently the customer cycle, remained unpredictable. Environmental and climatic conditions affected overall production levels which, in turn, affected post-harvest processing. For instance, when there is little to no rain, the importance of post-harvest technology decreases, and economic returns are affected. To meet their financial needs, Vaibhav and his team undertook various consulting projects and tutoring. They were committed to making it work. Another significant hurdle that the organisation faced was the need to persuade reluctant farmers to accept this technology. “It is your vision, your dream, and your passion that makes a startup feasible. Start early, give sufficient time and take your own decisions.” Vaibhav and his co-founders have steadily and successfully overcome this plethora of challenges. S4S now collaborates with FICCI, DFID, and the Gates Foundation to test the application of their technology on various food products. With revenues from the SCD becoming stronger each year, S4S is now eyeing newer technologies and platforms. There is a massive need for technology solutions for agriculture. The team at S4S, with Vaibhav at the helm, is committed to it.
https://medium.com/ciie/applying-science-creating-tomorrow-2e9a2a3def31
[]
2019-09-14 10:04:40.705000+00:00
['Startup', 'Technology', 'Food', '10yearsofciie', 'Entrepreneurship']
Delphy Monthly Airdrop — July 2019
Time flies faster in the crypto world. It’s almost the end of July 2019, can you believe it? And it’s time for the monthly Delphy airdrop. If you were holding DPY (Delphy tokens) at the moment of the snapshot, you will benefit from our monthly Delphy airdrop. Yes, you heard that right! We value our community; in fact, Delphy promised to give 50% of all DPY tokens, which equals 50 million DPY in total, to our token holders in a linear distribution over two years, for free. By the end of each month, Delphy records the number of holders and the amount of tokens held by each holder at a randomly picked moment (a snapshot) decided by Delphy. This reward program started in October 2017 and, as said before, will last for two years. The total amount of DPY awarded to holders is exactly the same every month: 50 million (DPY) / 24 (months) = 2,083,333.33 (DPY). For July 2019, the snapshot moment is 18:00 on the 30th of July, 2019, Beijing time. The number of tokens received by a holder = the amount of tokens held at the snapshot moment / 45, minus transaction fees (about 0.5 DPY at the moment, calculated as gas price × gas used / price of DPY). (Why 45? Because the total number of airdrop months is 24, and so far 21 airdrops have already happened.) Due to transaction fees, if the number of DPY tokens you are holding at the moment of the snapshot is less than 22.5, you will not receive the airdrop. You can get the airdrop if you keep your DPY tokens in your exchange account, for example at gate.io, okex.com, cybex.io, or CoinMex.com. However, the speed at which the tokens are received may depend on the workflow and workload of the respective exchange. Therefore, if you would like to get the airdrop immediately, the best option is to keep your DPY in your own wallet. Thank you for your continued support! About Delphy A decentralized mobile social platform for prediction markets.
Delphy is a decentralized, mobile prediction market platform built on Ethereum. The Delphy App is a light Ethereum node running on mobile devices. Delphy uses market incentives to allow participants in a market to communicate, instantly and transparently, their wisdom regarding the outcome of upcoming events, effectively predicting the future. We design Delphy from the start to be decentralized, which makes it difficult to manipulate prediction results. These are our official social platforms: Telegram Ann: https://t.me/DelphyANN Telegram Chat: https://t.me/DelphyCHAT Twitter: https://twitter.com/Delphy_org Medium: https://medium.com/delphy Facebook: https://www.facebook.com/Delphyfoundation/ Bitcointalk: https://bitcointalk.org/index.php?topic=2046622.0 Reddit: https://www.reddit.com/r/Delphy/ The Delphy Team
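Returning to the airdrop arithmetic described earlier, the per-holder payout can be sketched in a few lines of Python. Note the 0.5 DPY fee is the post's approximate figure at the time of writing, not a fixed constant, so treat the numbers as illustrative.

```python
# Sketch of the Delphy monthly airdrop arithmetic described in the post.
TOTAL_AIRDROP_DPY = 50_000_000   # 50% of all DPY, given away over two years
MONTHS = 24
MONTHLY_POOL = TOTAL_AIRDROP_DPY / MONTHS  # ~2,083,333.33 DPY per month

def airdrop_amount(held_dpy, divisor=45, fee_dpy=0.5):
    """Tokens a holder receives this month. Holdings at or below the
    break-even point (divisor * fee = 22.5 DPY) receive nothing."""
    payout = held_dpy / divisor - fee_dpy
    return max(payout, 0.0)
```

For example, a holder with 4,500 DPY at the snapshot would receive 4500 / 45 - 0.5 = 99.5 DPY, while a holder with exactly 22.5 DPY breaks even on fees and receives nothing.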
https://medium.com/delphy/delphy-monthly-airdrop-july-2019-9703b154ca03
['Ada Gao']
2019-07-30 06:57:27.960000+00:00
['Airdrop', 'Blockchain', 'Prediction Markets', 'Ethereum']
Bitcoin, Ethereum and Altcoins Approach Key Supports
Bitcoin price failed to recover above USD 49,000 and started another decline. BTC traded below the USD 48,000 support level, broke USD 47,500 and is currently (04:00 UTC) struggling to recover losses. Similarly, most major altcoins are moving lower. ETH extended its decline below USD 3,150 and even spiked below USD 3,100. XRP is trading well below USD 1.20 and it might even test USD 1.05. ADA could find support near the USD 2.50 level. Total market capitalization Bitcoin price After struggling to recover above USD 49,000, bitcoin price started a fresh decline. BTC traded below the USD 48,000 and USD 47,500 support levels. If the bears remain in action, the price could even test the USD 46,800 level. The next major support is near USD 46,500, below which the bears might aim at USD 45,000. On the upside, the USD 48,000 level is a short-term resistance. The first major resistance is near the USD 48,500 level, followed by USD 49,000. A close above USD 49,000 could start a steady increase. Ethereum price Ethereum price failed to climb back above USD 3,220 and it extended its decline. ETH traded below the USD 3,150 and USD 3,110 support levels. It is now holding the USD 3,050 zone, below which the bulls might attempt to protect the USD 3,000 support. If they fail, the price may possibly drop to USD 2,880. On the upside, an immediate resistance is near the USD 3,200 level. The first major resistance is near the USD 3,220 level, above which the price might test USD 3,300. ADA, LTC, DOGE, and XRP price Cardano (ADA) is down 7% and it traded below the USD 2.72 support level. It even broke USD 2.65 and it might continue to move down towards the USD 2.50 level. Any more losses could open the doors for a move towards the USD 2.20 support zone. Litecoin (LTC) is struggling to stay above the USD 170 level. The next major support is at USD 165, below which the price could test USD 155. The main support for the current wave is near USD 150. 
On the upside, the price may possibly face hurdles near the USD 180 level. Dogecoin (DOGE) declined 8% and it even spiked below the USD 0.280 support. The next major support is near the USD 0.265 level. If there are more losses, the bears might aim for a test of the USD 0.250 level. On the upside, an initial resistance is near the USD 0.292 level. The main resistance is now forming near the USD 0.300 level. XRP price extended its decline below USD 1.20 and USD 1.15. The next major support is near USD 1.05, below which the price could test the main USD 1.00 support. On the upside, the previous support at USD 1.20 might act as a resistance. The next key resistance is near the USD 1.22 level. Other altcoins market today Many altcoins are down 10%, including TEL, LUNA, XDC, ONT, SOL, HBAR, AUDIO, HNT, FTM, SUSHI, MIOTA, NEAR, and ZIL. Out of these, TEL is down 15% and traded below USD 0.023. To sum up, the bitcoin price is slowly moving lower below the USD 48,000 level. If BTC extends its decline below USD 46,800, there is a risk of a move towards the USD 45,000 level. _____ Find the best price to buy/sell cryptocurrency:
https://medium.com/@carbonyte/bitcoin-ethereum-and-altcoins-approach-key-supports-2e3a509a1b8e
[]
2021-08-25 15:07:05.225000+00:00
['Ethereum Blockchain', 'Ethereum Classic', 'Bitcoin Mining', 'Bitcoin News', 'Cryptocurrency Investment']
Be A Cloud
Be a cloud watching everything below.
Embrace the power of Mother Nature.
A cloud may resemble a softer texture.
Centered around the clear blue skies.
Letting it move around the sun.
Outdoors can make the clouds as a picture perfect view.
Using it as a way to visualize more about the clouds.
Discover something more in an entirely different way.
https://medium.com/@jessica.milliner/be-a-cloud-6dfd0ffd4a10
['Jessica Milliner']
2020-12-03 12:55:53.438000+00:00
['Nature', 'Cloud', 'Outdoors', 'Weather', 'Acrostic']
Little Girl
Little Girl The little girl you had, the one you birthed and brought into this world. She knew you never wanted her, she wasn’t that oblivious, but it wasn’t her fault you chose to keep her. I never felt close to my parents growing up. I was the youngest of three and my parents were teen parents. They had a lot going on. We always moved around when I was little and they were always working. It was hard to feel loved when you can sense the stress between them. As time went on, they eventually divorced. It was just as hard to feel loved when they were ultra focused on their different relationships after that. It was like I was forgotten.
https://medium.com/@isabellaforino/little-girl-3d1ca9dd0d2b
[]
2020-12-19 03:30:24.099000+00:00
['Poem', 'Sad', 'Teens', 'Poetry']
Online Logbook Data Science: Part 1
Online Logbook Data Science: Part 1 Is downgrading climbs more common than upgrading? Do climbers begin to prefer longer routes over shorter ones as they get older? What route will you enjoy the most at a new climbing area? In the past, these questions have been difficult or impossible to answer objectively. In general, we remain largely in the dark about the underlying mechanics of climber preferences and our habits of grading climbs. However, tens of thousands of climbers have added millions of ascents to “online logbooks,” which I define as web applications for keeping track of which climbs you have done. This presents an exciting opportunity for the social science of climbing: answering questions that require a lot of data. The goal of this project is to leverage the data available on online logbooks to gain a better understanding of climber preferences and route grading. In collaboration with thecrag.com and Beta Angel, I have analyzed a dataset of 1.5 million ascents logged by climbers from all over the world. I plan to release results in batches that will address the following questions: Do longer climbs appeal to older climbers? (The subject of this post) Can a route’s grade be decided by an algorithm? Who is downgrading climbs, and how often does downgrading happen? Can machine learning be used to recommend climbs like Netflix recommends movies? The backstory About three years ago I was cooped up in my AirBnB on a bouldering trip in Hueco Tanks, Texas. It was a day some friends and I were forced to rest, due to some winter storms. We were getting cagey, and a little tired of watching climbing videos — like any group of cooped-up college student climbers on their last few days of winter break, we just wanted to climb. Someone was listing off the latest routes that had been downgraded on 8a.nu, an online logbook website that was popular among some of the group’s more ambitious grade chasers.
And then someone said, “Wouldn’t it be cool to use all that data on 8a.nu to do some Machine Learning on what people like to climb?” We loved the idea. (Yes, my group was hopelessly nerdy.) Right then, the idea for Online Logbook Data Science was born. Wouldn’t it be cool to use all that data on 8a.nu to do some machine learning on what people like to climb? After some excited brainstorming, the literal storm cleared, and just as you’d expect, we dropped everything and flocked to the boulders. But the idea stayed with me… and today I’m excited to announce the first batch of results! Image by the author Our approach The goal of this project is to gain a deeper understanding of climber preferences and habits of grading climbs. The project is deliberately open-ended. We did not run an experiment with any single hypothesis in mind. Instead, we are starting with the data, and exploring the space of possible questions that might be answerable from it. Let’s get more concrete about what it means to do data science on an online logbook. What is an online logbook? I define an online logbook as a web application where climbers keep track of which climbs they’ve done. 8a.nu, the logbook mentioned in the project backstory, is just one example — there’s also thecrag.com, Mountain Project, 27crags, and several others. Why do data science on online logbooks? On each of the major online logbook platforms, tens of thousands of climbers have logged over a million ascents. That’s exciting, but you can’t even open an Excel file with that many rows. How are we ever going to spot patterns in that much data? Data science. If you’re not familiar with the phrase, think statistics, but with computer programming mixed in. And you are already familiar with data science…read on! My goal for this series is to write for beginners and experts alike. Hasn’t this been done already? 
Interestingly, soon after the idea of Online Logbook Data Science was floated on that Hueco trip I described above, David Cohen independently scraped 8a.nu and posted the resulting dataset on Kaggle, leading to some interesting spinoff research. Some highlights: Steve Bachmeier used the dataset to investigate demographic factors for climber performance. (Tl;dr: experience helps, height doesn’t.) Durand D’souza posted an interesting notebook showing how long it takes climbers to reach the next grade (a few months to a year before the data thins out). However, David Cohen’s script was taken down, and the author himself was threatened with legal action (scraping is a dangerous game). Although the data is still available on Kaggle, it has not been “refreshed” in some time, and some may argue that it’s ethically questionable to use it at this point (at least since 8a.nu’s parent company objected to its use). Fortunately, 8a.nu isn’t the only online logbook in the game; nor is its data necessarily the best suited to data science. One issue is that routes are reportedly keyed by route name, so misspellings during logging can lead to messier data. On the other hand, I was excited to find that the team at theCrag.com promotes collaborations with data scientists and keeps their data relatively clean and structured. For example, all ascents of the same route are linked by a route ID, even if the climbers who logged the route misspelled the route’s name. Where we are today Courtesy of Ulf Fuchsleuger and the team at theCrag.com, one of the largest online logbook platforms (and the largest in Australia), I was permitted to access a sizable dataset consisting of around a million and a half ascents. From several excited conversations with my project mentor Taylor Reed, we have almost as many ideas for research questions as we have rows of data.
Since our goal is to research continuously and let the data speak for itself, I have decided to release our research results in small-ish batches, rather than allow insights to accumulate over a long period of time. Results Batch One We began with a question that I thought was a bit of a long shot: do longer climbs appeal to older climbers? Though it may sound funny when stated that way, I think the assumption that age has something to do with route length preference is relatively common. Imagine hearing this at the crag: “That route is just 10 meters, and incredibly crimpy. It’s like one long boulder problem. I’ll leave it to the gym kids and save my tendons for a nice long 40-meter adventure.” In my mind, the climber voicing this opinion has a few more gray hairs than those gym kids. But does the data support the stereotype? The answer, surprisingly, is yes. To reach that conclusion, I started by formulating the question as one that could be answered from the available data. The precise question I chose was “Do longer routes get a higher proportion of older ascentionists?” For example, if 75% of the climbers who logged a 35m route were over 50 years old, but the typical route on thecrag.com had only 20% of its ascents logged by climbers in that demographic, then that climb could be said to get a higher proportion of older ascentionists. Do longer routes get a higher proportion of older ascentionists? Notably lacking in rigor is the phrase “older ascentionists” — what qualifies as old? Choosing a threshold for “old” required a look at the data of who’s logging their climbs on thecrag.com Figure 1 — Image by the author From the histogram in Figure 1, it’s clear that thecrag.com users are a young crowd, with a median age of just 27. As a result, I chose “over 35” as my definition of “older climber” for the purposes of this problem. Please don’t be offended if you are over 35! I do not consider you “old” if you are 36. I just needed your data ;). 
To spot a subtle difference between two groups, you need as many “observations” (climbers’ log records) as you can get for both of them, and 35 was the highest age for which there were still a reasonable number of climbs logged in the dataset (about 22% of log records). Next, I chose to consider only sport climbing routes under 40m, also due to data availability. thecrag.com has about twice as many sport climbing ascents as the next route gear style (trad), and as shown in Figure 2, the number of routes logged also drops off significantly with climbs above 40m. 40m also happens to be a good proxy for “single-pitch” sport routes, so this works out nicely. Figure 2 — Image by the author To answer the main research question I needed just two more steps. First, I grouped routes by length, assigning each route to a five-meter bucket and combining their ascents. (In other words, all ascents of routes between 20 and 25m were counted in the same bucket.) Then, I took the fraction of all climbers who logged a route in that bucket who were over 35, and plotted them against route length. Figure 3 demonstrates the result, which I found surprisingly coherent. Figure 3 — Image by the author So for routes around 5 meters, about 20% of their climbers are above 35, whereas for routes around 35 meters, that number is about 25%. A simple linear regression on the data from Figure 3 yields a p-value of about 0.0015. For reference, 0.05 is considered “statistically significant,” so this is pretty good! If p-values are unfamiliar to you, suffice it to say that if there were no relationship between route length and climber age, this result would have been very unlikely. I repeated the experiment with a slightly different formulation, looking at mean climber age vs route length bucket. I got a similar result, shown in Figure 4. I did another linear regression on the mean age vs route length data and got a p-value of about 0.003. Still well below the threshold for statistical significance! 
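The bucket-and-regress procedure described above can be sketched as follows. The numbers here are hypothetical stand-ins shaped like the trend in Figure 3; the real values come from the thecrag.com dataset, which is not public:

```python
import numpy as np
from scipy import stats

# Midpoint of each 5 m route-length bucket, and the (hypothetical)
# fraction of over-35 ascentionists among climbers logged in that bucket.
bucket_mid_m = np.array([5, 10, 15, 20, 25, 30, 35])
frac_over_35 = np.array([0.20, 0.21, 0.21, 0.22, 0.23, 0.24, 0.25])

# Simple linear regression of the over-35 proportion on route length.
fit = stats.linregress(bucket_mid_m, frac_over_35)
print(f"slope = {fit.slope:.5f} per metre, p = {fit.pvalue:.4f}")
```

With data this close to linear, the regression p-value falls well below 0.05, which is the shape of the result reported in the article.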
Figure 4 (the boring kind, not the fun climbing move) — Image by the author So in what turned out to be kind of an extra-mathy, climber-edition Mythbusters-style exercise of checking an admittedly niche “myth”, I get to declare this one… Mythbusters calls this “CONFIRMED”. Image by the author, inspired by Mythbusters. Though of course, take it with a grain of salt. The average climber logging a route gets about 10 days older per meter. Sneak Peek at Results Batch Two Next on the agenda is an exploration of downgrading as a social phenomenon. Looking at hundreds of thousands of routes logged gives us an exciting chance to spot trends in who is downgrading climbs, and how often it takes place. My next post in this series will focus on how to write an algorithm that will identify when a climb has been downgraded with no human in the loop. After that, we’ll do some social science on the climbs we identified as having been downgraded. Maybe we’ll even tackle another climbing myth while we’re at it! Acknowledgements Thanks again to the team at thecrag.com for providing data and support. Thanks Taylor Reed of Beta Angel for mentoring me and listening to many, many oscillations of excitement and disappointment as I worked through different research directions, software bugs, procrastination and stress. You guys are the best! Thanks also to the Data Science team at Lark, who offered some feedback on my analysis. I’m going to post a followup soon to discuss their comments and go deeper on the “why” questions of the route length vs climber age correlation that was the subject of this article.
https://towardsdatascience.com/online-logbook-data-science-part-1-62601d1724c4
['Charlie Andrews-Jubelt']
2020-12-27 19:56:04.048000+00:00
['Sociology', 'Statistics', 'Climbing', 'Editors Pick', 'Data Science']
What Is Chi-Square Test & How Does It Work?
As a data science engineer, it’s imperative that the sample data set you pick from the data is reliable, clean, and well tested for its usability in machine learning model building. So how do you do that? Well, we have multiple statistical techniques, like descriptive statistics, where we measure the data’s central value and how it is spread around the mean/median. Is it normally distributed, or is there a skew in the data spread? Please refer to my previous article on the same for more clarity. The first thing we do is visualize the data using various data visualization techniques, to make some early sense of any data skewness or discrepancies and to identify any kind of relationship between data set variables. Data has so much to say, and we data engineers give it a voice to express and describe itself, using descriptive statistical techniques. But to make any prediction, or to infer something beyond the given data to find any hidden probability, we rely on inferential statistical techniques. Inferential statistics are concerned with generalizing from relations found in the sample to relations in the population. Inferential statistics help us decide, for example, whether the differences between groups that we see in our data are strong enough to support our hypothesis that group differences exist in general, in the entire population. Today we will cover one of these inferential statistical mechanisms to understand the concept of hypothesis testing, using the popular chi-square test. What is the Chi-Square Test? Remember that it is an inferential statistical test that works on categorical data. The chi-square test is a statistical hypothesis test that assumes (the null hypothesis) that the observed frequencies for a categorical variable match the expected frequencies for that variable. The test calculates a statistic that has a chi-square distribution, named for the Greek capital letter chi (Χ), pronounced “ki” as in kite.
We try to test the likelihood of the test data (sample data) to find out whether the observed distribution of the data set is a statistical fluke (due to chance) or not. The “goodness of fit” statistic in the chi-square test measures how well the observed distribution of data fits the distribution that is expected if the variables are independent. How Does Chi-Square Work? Generally, we try to establish a relationship between the given categorical variables in this test. Chi-square evaluates whether the given variables in a data set (sample) are independent; this is called the test of independence. Chi-square tests are used for testing hypotheses about one or two categorical variables and are appropriate when the data can be summarized by counts in a table. The variables can have multiple categories. Types of Chi-Square Test: For one categorical variable, we perform the chi-square goodness-of-fit test. The chi-square goodness-of-fit test begins by hypothesizing that the distribution of a variable behaves in a particular manner. For example, in order to determine the daily staffing needs of a retail store, the manager may wish to know whether there is an equal number of customers each day of the week. For two categorical variables, we perform the chi-square test for association. Another way we can describe the chi-square test is that it tests the null hypothesis that the variables are independent. The test compares the observed data to a model that distributes the data according to the expectation that the variables are independent. Wherever the observed data doesn’t fit the model, the likelihood that the variables are dependent becomes stronger, thus proving the null hypothesis incorrect!
Hypothesis in Chi-Square: The first thing you need to do as a data engineer, before performing any inferential statistical test like chi-square, is to establish: H0: Null hypothesis. H1: Alternate hypothesis. For one categorical variable: Null hypothesis: the proportions match an assumed set of proportions. Alternative hypothesis: at least one category has a different proportion. For two categorical variables: Null hypothesis: there is no association between the two variables. Alternative hypothesis: there is an association between the two variables. Before we jump into understanding how chi-square works with an example, we need to understand what the chi-square distribution is, along with some other related concepts. This chi-square distribution is what we will analyze going forward in the chi-square or χ2 test. What Is the Chi-Square Distribution? The chi-square distribution (also chi-squared or χ2-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. It is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in the construction of confidence intervals. The primary reason that the chi-square distribution is used extensively in hypothesis testing is its relationship to the normal distribution. An additional reason that the chi-square distribution is widely used is that it is a member of the class of likelihood ratio tests (LRT). LRTs have several desirable properties; in particular, LRTs commonly provide the highest power to reject the null hypothesis. Degrees of Freedom in the Chi-Square Distribution: The degrees of freedom in the chi-square distribution is equal to the number of standard normal deviates being summed. The mean of a chi-square distribution is its degrees of freedom.
A chi-square distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. The degrees of freedom (df or d) tell you how many numbers in your grid are actually independent. For a chi-square grid, the degrees of freedom is the number of cells you need to fill in before, given the totals in the margins, you can fill in the rest of the grid using a formula. The degrees of freedom for a chi-square grid is equal to the number of rows minus one times the number of columns minus one: that is, (R-1)*(C-1). Remember! As the degrees of freedom (df) increase, the chi-square distribution approaches a normal distribution. Chi-Square Statistic: The formula for the chi-square statistic used in the chi-square test is: Χ²c = Σ [(Oi − Ei)² / Ei]. The subscript “c” here is the degrees of freedom, “O” is your observed value, and E is your expected value. The summation symbol means that you’ll have to perform a calculation for every single data item in your data set. E = (row total × column total) / sample size. The chi-square statistic can only be used on counts. It can’t be used for percentages, proportions, means, or similar statistical values. For example, if you have 10 percent of 200 people, you would need to convert that to a number (20) before you can run a test statistic. The chi-square test involves calculating a metric called the chi-square statistic, mentioned above, which follows the chi-square distribution. Let’s see an example to get clarity on all the above-covered topics related to chi-square. P-Value: The null hypothesis provides a probability framework against which to compare our data. Specifically, through the proposed statistical model, the null hypothesis defines a probability distribution over all possible outcomes; the P-value is the probability, under that distribution, of observing a result at least as extreme as the one in our data. It is a probabilistic representation of our expectations under the null hypothesis.
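For an R × C table like the one analyzed in the example below, the degrees of freedom and the corresponding critical value can be checked in a couple of lines. This is a sketch using scipy; the 0.05 significance level matches the worked example that follows:

```python
from scipy.stats import chi2

R, C = 2, 3                       # rows x columns of the contingency table
df = (R - 1) * (C - 1)            # degrees of freedom -> 2

# Critical value at alpha = 0.05: the point the test statistic must
# exceed for us to reject the null hypothesis.
critical = chi2.ppf(0.95, df)
print(df, round(critical, 2))     # 2 5.99
```

The 5.99 printed here is the same critical value consulted from the chi-square table later in the article.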
Chi-Square Test Explained With an Example: We will cover the following important steps in our journey through the chi-square test for independence of two variables: state the hypothesis, formulate a data analysis plan, analyze the sample data, interpret the outcome. Problem: This problem has been sourced from Stat Trek. A public opinion poll surveyed a simple random sample of 1000 voters. Respondents were classified by gender (male or female) and by voting preference (Republican, Democrat, or Independent). The results are shown in the contingency table below. We have to infer: is there a gender gap? Do the men’s voting preferences differ significantly from the women’s preferences? Use a 0.05 level of significance. Let’s try to solve this problem using the chi-square test to find the P-value. The test type we will employ here is the chi-square test for independence. So let’s get started by first stating our hypotheses. Step 1: State the Hypothesis: Here we need to start by establishing a null hypothesis and a counter hypothesis (alternative hypothesis), as given below. Null hypothesis, H0: gender and voting preferences are independent. Alternate hypothesis, H1: gender and voting preferences are not independent. Step 2: Let’s Build Our Data Analysis Plan: Here we will try to find the P-value and compare it with the significance level. Let’s take the standard and accepted level of significance to be 0.05. Given the sample data in the table above, let’s employ the chi-square test for independence and deduce the probability value. Step 3: Let’s Do the Sample Analysis: Here we will analyze the given sample data to compute the degrees of freedom, the expected frequency count for each cell, and the chi-square test statistic value. All of the above values will help us find the P-value.
Degrees of Freedom Calculation: Let’s calculate df = (r - 1) * (c - 1). In the given table we have r (rows) = 2 and c (columns) = 3, so df = (2 - 1) * (3 - 1) = 1 * 2 = 2. Expected Frequency Count Calculation: Let Eij represent the expected count in row i, column j if the two variables are independent of one another: Eij = (ith row total × jth column total) / grand total. Let’s calculate the expected value for each given row and column using the formula above. Let me copy the table image again below to help you make the calculations easily. Here, row 1 total = 400, column 1 total = 450, total sample size = 1000. So, E1,1 = (400 * 450) / 1000 = 180000/1000 = 180. Similarly, let’s calculate the other expected values as shown below: E1,2 = (400 * 450) / 1000 = 180000/1000 = 180 E1,3 = (400 * 100) / 1000 = 40000/1000 = 40 E2,1 = (600 * 450) / 1000 = 270000/1000 = 270 E2,2 = (600 * 450) / 1000 = 270000/1000 = 270 E2,3 = (600 * 100) / 1000 = 60000/1000 = 60 Time to calculate the chi-square contribution for each expected value above using the formula. Calculating Chi-Square: As already discussed above, the formula for calculating the chi-square statistic is given earlier. The subscript “c” is the degrees of freedom, “O” is your observed value (the actual values given in the table above), and E is your expected value (which we just calculated). The summation symbol means that you’ll have to perform a calculation for every single data item in your data set.
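The expected counts just computed by hand can be reproduced with an outer product of the marginal totals. A sketch, using the observed poll table from the example:

```python
import numpy as np

# Observed poll counts: rows = (male, female),
# columns = (Republican, Democrat, Independent).
observed = np.array([[200, 150, 50],
                     [250, 300, 50]])

row_totals = observed.sum(axis=1)    # [400, 600]
col_totals = observed.sum(axis=0)    # [450, 450, 100]
grand_total = observed.sum()         # 1000

# E_ij = (row_i total * col_j total) / grand total
expected = np.outer(row_totals, col_totals) / grand_total
print(expected)   # [[180. 180.  40.] [270. 270.  60.]]
```

The outer product builds the full R × C matrix of products of row and column totals in one step, so every Eij comes out at once instead of cell by cell.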
Χ² = Σ [(Oi,j − Ei,j)² / Ei,j] Using the above formula, our chi-square value comes out as given below: Χ² = (200−180)²/180 + (150−180)²/180 + (50−40)²/40 + (250−270)²/270 + (300−270)²/270 + (50−60)²/60 Χ² = 400/180 + 900/180 + 100/40 + 400/270 + 900/270 + 100/60 So our final chi-square statistic value is Χ² = 2.22 + 5.00 + 2.50 + 1.48 + 3.33 + 1.67 = 16.2. Having calculated the chi-square value and degrees of freedom, we consult a chi-square table to check whether the chi-square statistic of 16.2 exceeds the critical value for the chi-square distribution. The intent is to find the P-value, which is the probability that a chi-square statistic having 2 degrees of freedom is more extreme than 16.2. How to calculate the P-value? Given the degrees of freedom = 2 and the chi-square statistic value = 16.2, we can easily find the P-value using the Chi-Square Calculator link given here: simply enter the chi-square statistic value and the degrees of freedom as input, keep the significance level at 0.05, and you will find the result as given below. The P-value = 0.000304. The result is significant at p < .05. You can also find the P-value using the chi-square table given below (you can get this table from this source). Having calculated the chi-square value to be 16.2 and the degrees of freedom to be 2, we consult the chi-square table given above to check whether the chi-square statistic of 16.2 exceeds the critical value for the chi-square distribution. The critical value for an alpha of .05 (95% confidence) with df = 2 comes out to be 5.99. Step 4: Interpreting the Result A: Inference from the P-value: Since we have a P-value of 0.000304, we can interpret the result as follows: as the P-value (0.000304) is less than the significance level (0.05), we have to reject the null hypothesis given below, which says gender and voting preferences are independent, and accept the alternate hypothesis, which says gender and voting preferences are not independent.
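The whole calculation above, expected counts, test statistic, degrees of freedom, and P-value, can be verified in one call with scipy's `chi2_contingency`, using the same poll table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed poll counts: rows = (male, female),
# columns = (Republican, Democrat, Independent).
observed = np.array([[200, 150, 50],
                     [250, 300, 50]])

chi2_stat, p_value, df, expected = chi2_contingency(observed)
print(round(chi2_stat, 1), df)   # 16.2 2
print(p_value < 0.05)            # True
```

This matches the hand calculation: a statistic of about 16.2 on 2 degrees of freedom, with a P-value of roughly 0.0003, far below the 0.05 significance level.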
Hence we can conclude that there is a relationship between gender and voting preference. B: Interpreting from the Chi-Square Table: Since the critical value for an alpha of .05 (95% confidence) with df = 2 is 5.99, and our chi-square statistic value of 16.2 is much larger than 5.99, we have sufficient evidence to reject the null hypothesis we covered above. So we accept the alternate hypothesis, which says gender and voting preferences are not independent. Hence we conclude that there is a relationship between gender and voting preference. What’s Next? We will understand how to perform the chi-square test using Python and a Jupyter notebook in part 2 of this series on inferential statistics: hypothesis testing using chi-square. We will further explore the normal deviate Z test, the two-sample t-test, and the ANOVA test, and will also introduce one of the key topics: the “power of a statistical test”. The power of any test of statistical significance is defined as the probability that it will reject a false null hypothesis. Summing up this part with a very helpful infographic which guides you in choosing your hypothesis test type: so choose your test data wisely and make sure you are interpreting the sample data right, so that you can go ahead and design your ML models with the required accuracy and confidence.
https://medium.com/swlh/what-is-chi-square-test-how-does-it-work-3b7f22c03b01
[]
2020-08-22 19:01:58.759000+00:00
['Machine Learning', 'Data Science', 'Statistics', 'Data Analysis', 'Technology']
In-depth Tenant Screening: One of the Best Risk Protection for Landlords
A landlord looking for a new tenant may be tempted to choose the first applicant without taking the time to do thorough due diligence and determine whether the person is the “right” candidate. As a result, such landlords often end up with “nightmare” tenants, or simply tenants who fall short of what they hoped for. Characteristics of such people include not paying rent on time, damaging the property, making noise, disturbing neighbors, and ignoring established rules. Taking on such tenants is a costly mistake, as you can lose large amounts of cash in the form of unpaid rent, expensive property repairs, court fees, potential attorney fees, and so on. Perform extensive tenant screening on all applicants for your property. A thorough selection of prospective tenants is an important exercise that helps landlords reduce the likelihood of problematic tenants. Of course, besides a detailed review, there are other things you can and should do. This article is all about the latter and what landlords should consider when looking for a “perfect tenant”. What is Tenant Screening? Tenant screening is the process of assessing potential tenants: whether they will meet the rental conditions, maintain the property properly, and generally have the potential to be good tenants. This means not only analyzing the information provided by the applicant, but also reviewing other publicly available information about the applicant and verifying the information provided as comprehensively as possible. Through an appropriate screening process, you can get a sense of the kind of tenant an applicant may turn out to be and decide whether to accept the applicant, accept them under certain conditions, or reject them. There is no reliable way to know exactly what an applicant will be like, but a proper and thorough screening process can help identify applicants who are likely to be bad tenants for your rental units.
https://medium.com/@tajuddin-ahamed/in-depth-tenant-screening-one-of-the-best-risk-protection-for-landlords-d55a72598542
['Tajuddin Ahamed']
2021-12-31 09:20:21.390000+00:00
['Memphis', 'Property', 'Rental Property', 'Property Management', 'Mpm']
Weekly Digest: End of Google Home, Beginning of Voice-Powered Google Assistant Payments
This week in Conversational AI Image Credit: Victor Seleykov/Just AI Microsoft patent hints there might be a Surface-branded smart speaker coming A Microsoft patent that surfaced recently shows that the tech giant may be working on its own smart speaker. This is particularly surprising in light of Microsoft’s efforts to make its Cortana voice assistant part of its enterprise offering. In this patent for Smart Speaker System with Microphone Room Calibration, we see a standard smart speaker, featuring internet access, Bluetooth, and six microphones connecting to a built-in voice assistant along with the audio playing hardware. Find out more here. Amazon’s Alexa now offers ‘Thought of the Day’ from Marshmello and Halsey More celebrities choose Alexa as a way to reach their fanbase. So, voice assistants’ customers can say “Alexa, what’s Halsey’s thought of the day?” and “Alexa, what’s Marshmello’s thought of the day?” and get a glimpse into the random musings from both artists. Marshmello shares his thoughts on pretty much everything while Halsey muses on topics like learning to waltz and penguin courtship rituals. Find out more here. Google Store no longer offers the original Google Home Last week, Android Police discovered that Google Home is “no longer available” in the company’s US and Japan Stores. At the same time, Canada’s store invites visitors to “join the waiting list.” Later, Google started offering free Nest Mini smart speakers to YouTube Premium, YouTube Music Premium, and Google Play Music subscribers in the U.S. While rumor has it that a successor is underway, we will have to wait and see. Find out more here. Google is testing voice payments with Google Assistant No matter what the future holds for Google Home devices, Google is testing a new Voice Match feature for secure purchases made through its voice assistant. 
A company’s spokesperson confirmed to Android Police that “the functionality is new, and is designed to help secure purchases made on smart speakers and smart displays.” Currently, it only works for in-app digital purchases through Google Play and restaurant orders. Find out more here.
https://medium.com/voiceui/weekly-digest-end-of-google-home-beginning-of-voice-powered-google-assistant-payments-60629564a584
['Dasha Fomina']
2020-06-01 15:39:00.610000+00:00
['Voice Assistant', 'Digest', 'Voice Technology', 'Conversational Ai', 'Alexa']
About Me — Reginald Ben-Halliday. Hi! I’m Reggie and I’m single.
Hello people of the world, I am Reginald Ben-Halliday. You can either call me Reggie, Reg, Ben, Haly, Halliday or you can be creative with the three names above, and call me whatever comes to your mind. Hahaha. Just joking, please just call me Reggie. I am not sure I have anything interesting about myself to share with the world, but I will write something. Don't worry, I will make this quick and hopefully not boring. I am from an island called Bonny, in the southern part of Rivers State, Nigeria. Because the island sits at the edge of the Atlantic Ocean, people usually expect me to know how to swim… Hahaha, but sorry to disappoint you, I don’t know how to swim. Swimming and traveling by water are among the few things I fear. I am the first of my siblings and the first grandchild on my father’s side. There is a lot of pressure put on the firstborn of a nuclear family, not to mention being the first child in an extended family. You just have no room to make mistakes because you have a lot of cousins and siblings looking up to you. I am an Accounting student at a popular university in the country. The course can be stressful at times, and in those stressful times I wonder why I chose to study the course in the first place. I love to travel, though I’ve only traveled within the country. Every holiday or short school break I plan to get away from my school environment and family to a different place, just to get my sanity back. I love traveling by road because of the excitement it brings when I get to meet new people that I will be spending six to nine hours with on the road. I get to hear those people tell their stories. What provokes them, what excites them, why they are traveling and so on. My goal now is to travel the world, see new people, learn new cultures and get to enjoy their weather. The first country on my travel list is Canada. So Canadians get ready for me, I will be coming soon… by the grace of God. Haha. 
I am very selective when it comes to food. I don’t eat everything. Unless it’s junk food — I am crazy for those. But I do have one favorite food and it is Egusi soup and pounded yam (a native Nigerian dish). I used to love the color yellow when I was a kid, but now most of the T-shirts I buy are gray. I am a twenty-four year old guy who loves listening to songs from the 60s, 70s, 80s, 90s, and very little of my generation’s music. My love for writing began when I was in secondary school. There was an assignment given to us to write a thousand-word story. It took me a whole weekend to write it. We submitted the assignments and later that day my English teacher walked into the classroom and said that there were only two people whose work he enjoyed, and he was going to call their names. I was never among the top ten smartest students in my class, neither was I among the bottom ten students struggling. I was always in the middle… the average Joe. So when my teacher told me he enjoyed my work, I couldn’t believe it. My teacher called me to read my story in front of the class. My work wasn’t perfect; there were places where he had corrected some spelling errors and grammatical blunders with a red pen. I read my work to my classmates and they loved it. I wrote a lot of stories in my secondary school days; some were as good as the first one, some were terrible. After secondary school I stopped writing for a very long time. It was when I came across Medium that I decided to write again. I have written some articles here on this platform. A few did pretty well, while the rest not so well. But I am happy to be on a platform where you can write and earn money as well. I do not know what the future will be like writing on Medium. My interests might change; I might stop writing on Medium one day and focus on something else. So I don’t really have long-term Medium goals. So thank you for taking your time to read my work. 
If you wish to reach me, to chat a little, I am available on Facebook. So drop a message and I will reply. I would love to make new friends.
https://medium.com/about-me-stories/about-me-reginald-ben-halliday-6be1dacad05f
['Reginald Ben-Halliday']
2020-12-18 02:02:00.487000+00:00
['Writer', 'Introduction', 'Lifestyle', 'About Me', 'Singles']
The only role?
The only role? I sometimes ponder the reasons why adult child estrangement is seemingly on the increase…… For me, the rejection by our adult children comes because they perceive that their parents have ‘failed’ them in some way. Their expectations of what ‘should’ have been available to them as children and young adults have not matched their experience. People who have yet to ‘wake up’ tend to blame others when their needs and wants are not met, and parents, especially mothers, seem to be an easy target. In some ways it comes across to me as part of human evolution……we are the only mammals that make our young dependent on us for such a long time….. And if we haven’t got an estrangement going on, the opposite problem is often a ‘failure to launch’ one. Sometimes parents continue to ‘support’ their children into their 30s and beyond….. I seek to raise awareness of the problem. My reason for doing this is to help estranged parents see that being a parent is just one aspect of our lives and not the whole reason for our existence. It’s a role. And like any role it comes to an end. And if we have been successful in our role, the child doesn’t need us any more. Whether they want us in their lives is a different conversation….. Part of the problem, I believe, is that we parents are increasingly needy of our children for our sense of self-worth. There is something in many of us that needs our children to like us, appreciate us, value us. It’s my belief that we have to find that for ourselves and allow our adult children to decide for themselves the value that they give to us…..and in doing so, give value to themselves. Much love Xxx
https://medium.com/@sundygilchrist/the-only-role-c81282f7f973
['Sundy Gilchrist']
2020-12-21 08:28:26.630000+00:00
['Growth', 'Estrangement', 'Motherhood', 'Loss']
Do-You-Mean Laravel Search
We will use Jaro-Winkler Similarity to match the query value against the data in both the Products and Application_Doc tables. We don’t have to write an implementation of Jaro-Winkler Similarity ourselves; there is an amazing Laravel package, which we just need to add to our project via Composer. composer require atomescrochus/laravel-string-similarities The above code filters based on the Jaro-Winkler Similarity, sorts the collection based on the similarity of the query to each column in the Product table, and suggests a new collection. The same goes for the ApplicationDoc table. The above method sorts the suggestions from each table based on the highest similarity. The above method collects the suggestions and creates a new one-dimensional array. We can paginate the result with this method. Finally, combine all of the above in a single controller.
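For readers curious about what the package computes under the hood, here is a minimal sketch of the Jaro-Winkler metric itself (shown in JavaScript for illustration; the function names are our own, and the Laravel package ships its own PHP implementation):

```javascript
// Jaro similarity: 1.0 means identical strings, 0.0 means nothing in common.
function jaro(s1, s2) {
  if (s1 === s2) return 1;
  const len1 = s1.length, len2 = s2.length;
  if (len1 === 0 || len2 === 0) return 0;
  // Characters match if equal and within this sliding window of each other.
  const matchDistance = Math.max(Math.floor(Math.max(len1, len2) / 2) - 1, 0);
  const s1Matches = new Array(len1).fill(false);
  const s2Matches = new Array(len2).fill(false);
  let matches = 0;
  for (let i = 0; i < len1; i++) {
    const start = Math.max(0, i - matchDistance);
    const end = Math.min(i + matchDistance + 1, len2);
    for (let j = start; j < end; j++) {
      if (!s2Matches[j] && s1[i] === s2[j]) {
        s1Matches[i] = s2Matches[j] = true;
        matches++;
        break;
      }
    }
  }
  if (matches === 0) return 0;
  // Count transpositions among the matched characters.
  let k = 0, transpositions = 0;
  for (let i = 0; i < len1; i++) {
    if (!s1Matches[i]) continue;
    while (!s2Matches[k]) k++;
    if (s1[i] !== s2[k]) transpositions++;
    k++;
  }
  transpositions /= 2;
  return (matches / len1 + matches / len2 + (matches - transpositions) / matches) / 3;
}

// Winkler extension: boost scores for strings sharing a common prefix (up to 4 chars).
function jaroWinkler(s1, s2, scaling = 0.1) {
  const j = jaro(s1, s2);
  let prefix = 0;
  while (prefix < Math.min(4, s1.length, s2.length) && s1[prefix] === s2[prefix]) prefix++;
  return j + prefix * scaling * (1 - j);
}
```

Scores range from 0 to 1, and “do you mean” suggestions are typically the rows whose columns score highest against the user’s query.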
https://medium.com/@millionseleshi/do-you-mean-laravel-search-f41c7b5bec89
[]
2020-12-22 14:53:01.747000+00:00
['String Similarity', 'Laravel', 'PHP', 'Search', 'Fuzzy Logic']
The Human as an Antenna
The human is an antenna. We are a receiver for the wisdom of the world. The knowledge and wisdom of the universe. We stand in between. The human is the variable between the spirit and physics. And who we are. Our identity. Me, you. We are the sum of our will power. Our will power is our identity. It is something that is a lever. Leverage. It can be developed and it can be destroyed. Our will power is based on our thinking. Our frame of reasoning. How we think through. How we compute. How we navigate our lives. And the human. The human’s journey is levels of awareness. We are awareness. The signal, the receiver, the transmitter, the antenna. Consciousness. Depending on how much we exercise our identity. Our will power. Extreme will power is a liability too. It is the opposite of inertia. Truth, wisdom, the human, our consciousness, the right path is the balance between the contradiction of the two. Discipline and inertia. Between the tension of two equal and opposing forces. Strengthen your will power. It comes through thinking. And thinking comes through surrender. Live. Love. Be. Rehman Greatness is my journey Greatness is the key Greatness is my signal Greatness is who I want to be. I want to be One. Greatness is the Signal. The signal is My Ideal. My ideal is One.
https://medium.com/@rehmancee/the-human-as-an-antenna-d98a872aed90
['Rehman Cee']
2020-12-21 04:09:01.604000+00:00
['Peace', 'Antenna', 'Humanity', 'Love', 'Base Station']
WebSockets With Spring, Part 3: STOMP Over WebSocket
Photo by Joanna Kosinska on Unsplash Introduction The WebSocket protocol is designed to overcome the architectural limitations of HTTP-based solutions in simultaneous bi-directional communication. Most importantly, WebSocket has a different communication model (simultaneous bi-directional messaging) than HTTP (request-response). WebSocket works over TCP, which allows transmitting two-way streams of bytes. WebSocket provides thin functionality on top of TCP that allows transmitting binary and text messages while providing the necessary security constraints of the Web. But WebSocket does not specify the format of such messages. WebSocket is intentionally designed to be as simple as possible. To avoid additional protocol complexity, clients and servers are intended to use subprotocols on top of WebSocket. STOMP is one such application subprotocol that can work over WebSocket to exchange messages between clients via intermediate servers (message brokers). STOMP Design STOMP (Simple/Streaming Text Oriented Message Protocol) is an interoperable text-based protocol for messaging between clients via message brokers. STOMP is a simple protocol because it implements only a small number of the most commonly used messaging operations of message brokers. STOMP is a streaming protocol because it can work over any reliable bi-directional streaming network protocol (TCP, WebSocket, Telnet, etc.). STOMP is a text protocol because clients and message brokers exchange text frames that contain a mandatory command, optional headers, and an optional body (the body is separated from the headers by a blank line). COMMAND header1:value1 header2:value2 body STOMP is a messaging protocol because clients can produce messages (send messages to a broker destination) and consume them (subscribe to and receive messages from a broker destination). STOMP is an interoperable protocol because it can work with multiple message brokers (ActiveMQ, RabbitMQ, HornetQ, OpenMQ, etc.) 
and clients written in many languages and platforms. Connecting clients to a broker Connecting To connect to a broker, a client sends a CONNECT frame with two mandatory headers: accept-version — the versions of the STOMP protocol the client supports host — the name of a virtual host that the client wishes to connect to To accept the connection, the broker sends the client a CONNECTED frame with the mandatory header: version — the version of the STOMP protocol the session will be using Disconnecting A client can disconnect from a broker at any time by closing the socket, but there is no guarantee that the previously sent frames have been received by the broker. To disconnect properly, where the client is assured that all previous frames have been received by the broker, the client must: send a DISCONNECT frame with a receipt header receive a RECEIPT frame close the socket Sending messages from clients to a broker To send a message to a destination, a client sends a SEND frame with the mandatory header: destination — the destination to which the client wants to send If the SEND frame has a body, it must include the content-length and content-type headers. 
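The frame layout described above is easy to produce by hand. The helper below is a hypothetical sketch (not part of any STOMP library) that serializes a command, headers, and body into a wire-format frame, shown in JavaScript:

```javascript
// Build a STOMP wire frame: command line, one header per line,
// a blank line, the body, and a NUL (\u0000) terminator.
function stompFrame(command, headers, body = '') {
  const headerLines = Object.entries(headers)
    .map(([name, value]) => `${name}:${value}`)
    .join('\n');
  return `${command}\n${headerLines}\n\n${body}\u0000`;
}

// The CONNECT frame from the text above, with its two mandatory headers
// (the version and host values here are illustrative):
const connect = stompFrame('CONNECT', {
  'accept-version': '1.2',
  host: 'localhost',
});
```

A SEND frame with a body would be built the same way, adding the content-length and content-type headers the protocol requires.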
Subscribing clients to messages from a broker Subscribing To subscribe to a destination a client sends a SUBSCRIBE frame with two mandatory headers: destination — the destination to which the client wants to subscribe id — the unique identifier of the subscription Messaging To transmit messages from subscriptions to the client, the server sends a MESSAGE frame with three mandatory headers: destination — the destination the message was sent to subscription — the identifier of the subscription that is receiving the message message-id — the unique identifier for that message Unsubscribing To remove an existing subscription, the client sends an UNSUBSCRIBE frame with the mandatory header: id — the unique identifier of the subscription Acknowledgment To avoid lost or duplicated frames, if a client and a broker are parts of a distributed system, it is necessary to use frame acknowledgment. Client messages acknowledgment The SUBSCRIBE frame may contain the optional ack header that controls the message acknowledgment mode: auto (by default), client, client-individual. When the acknowledgment mode is auto, then the client does not need to confirm the messages it receives. The broker will assume the client has received the message as soon as it sends it to the client. When the acknowledgment mode is client, then the client must send the server confirmation for all previous messages: they acknowledge not only the specified message but also all messages sent to the subscription before this one. When the acknowledgment mode is client-individual, then the client must send the server confirmation for the specified message only. The client uses an ACK frame to confirm the consumption of a message from a subscription using the client or client-individual acknowledgment modes. The client uses a NACK frame to negate the consumption of a message from a subscription. The ACK and NACK frames must include the id header matching the ack header of the MESSAGE frame being acknowledged. 
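Going the other way, a MESSAGE frame received from the broker can be decoded with a few lines of string handling. This is an illustrative sketch only (real clients such as webstomp-client do this for you):

```javascript
// Split a raw STOMP frame into its command, headers, and body.
function parseStompFrame(raw) {
  const frame = raw.replace(/\u0000$/, '');   // drop the trailing NUL terminator
  const blank = frame.indexOf('\n\n');        // headers end at the first blank line
  const [command, ...headerLines] = frame.slice(0, blank).split('\n');
  const headers = {};
  for (const line of headerLines) {
    const sep = line.indexOf(':');
    headers[line.slice(0, sep)] = line.slice(sep + 1);
  }
  return { command, headers, body: frame.slice(blank + 2) };
}

// Decoding a MESSAGE frame with the three mandatory headers described above
// (the destination and values are illustrative):
const msg = parseStompFrame(
  'MESSAGE\ndestination:/topic/prices\nsubscription:0\nmessage-id:42\n\nhello, world\u0000'
);
```

A client consuming this message in client-individual mode would then reply with an ACK frame whose id header matches the message's ack header.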
Broker commands acknowledgment A broker sends a RECEIPT frame to a client once the broker has successfully processed a client frame that requests a receipt. The RECEIPT frame includes the receipt-id header matching the receipt header of the command being acknowledged. Examples Introduction The Spring Framework provides support for STOMP over WebSocket clients and servers in the spring-websocket and spring-messaging modules. Messages from and to STOMP clients can be handled by a message broker: a simple STOMP broker (which only supports a subset of STOMP commands) embedded into a Spring application an external STOMP broker connected to a Spring application via TCP Messages from and to STOMP clients also can be handled by a Spring application: messages can be received and sent by annotated controllers messages can be sent by message templates The following example implements STOMP over WebSocket messaging with SockJS fallback between a server and clients. The server and the clients work according to the following algorithm: the server sends a one-time message to the client the server sends periodic messages to the client the server receives messages from a client, logs them, and sends them back to the client the client sends aperiodic messages to the server the client receives messages from a server and logs them The server is implemented as a Spring web application with Spring Web MVC framework to handle static web resources. One client is implemented as a JavaScript browser client and another client is implemented as a Java Spring console application. Java Spring server Configuration The following Spring configuration enables STOMP support in the Java Spring server. 
@Configuration @EnableWebSocketMessageBroker public class StompWebSocketConfig implements WebSocketMessageBrokerConfigurer { @Override public void registerStompEndpoints(StompEndpointRegistry registry) { registry.addEndpoint("/websocket-sockjs-stomp").withSockJS(); } @Override public void configureMessageBroker(MessageBrokerRegistry registry) { registry.enableSimpleBroker("/queue", "/topic"); registry.setApplicationDestinationPrefixes("/app"); } } Firstly, this configuration registers a STOMP over WebSocket endpoint with SockJS fallback. Secondly, this configuration configures a STOMP message broker: the destinations with the /queue and /topic prefixes are handled by the embedded simple STOMP broker the destinations with the /app prefix are handled by the annotated controllers in the Spring application For the embedded simple broker, destinations with the /topic and /queue prefixes do not have any special meaning. For external brokers, destinations with the /topic prefix often mean publish-subscribe messaging (one producer and many consumers), and destinations with the /queue prefix mean point-to-point messaging (one producer and one consumer). Receiving and sending messages in annotated controllers Messages from and to STOMP clients can be handled according to the Spring programming model: by annotated controllers and message templates. @SubscribeMapping The @SubscribeMapping annotation is used for one-time messaging from application to clients, for example, to load initial data during a client startup. In the following example, a client sends a SUBSCRIBE frame to the /app/subscribe destination. The server sends a MESSAGE frame to the same /app/subscribe destination directly to the client without involving a broker. 
@Controller public class SubscribeMappingController { @SubscribeMapping("/subscribe") public String sendOneTimeMessage() { return "server one-time message via the application"; } } @MessageMapping The @MessageMapping annotation is used for repetitive messaging from application to clients. In the following example, the method annotated with the @MessageMapping annotation with the void return type receives a SEND frame from a client to the /app/request destination, performs some action but does not send any response. @Controller public class MessageMappingController { @MessageMapping("/request") public void handleMessageWithoutResponse(String message) { logger.info("Message without response: {}", message); } } In the following example, the method annotated with the @MessageMapping and @SendTo annotations with the String return type receives a SEND frame from a client to the /app/request destination, performs some action, and sends a MESSAGE frame to the explicit /queue/responses destination. @Controller public class MessageMappingController { @MessageMapping("/request") @SendTo("/queue/responses") public String handleMessageWithExplicitResponse(String message) { logger.info("Message with response: {}", message); return "response to " + HtmlUtils.htmlEscape(message); } } In the following example, the method annotated with the @MessageMapping annotation with the String return type receives a SEND frame from a client to the /app/request destination, performs some action, and sends a MESSAGE frame to the implicit /app/request destination (with the /topic prefix and the /request suffix of the inbound destination). 
@Controller public class MessageMappingController { @MessageMapping("/request") public String handleMessageWithImplicitResponse(String message) { logger.info("Message with response: {}", message); return "response to " + HtmlUtils.htmlEscape(message); } } @MessageExceptionHandler The @MessageExceptionHandler annotation is used to handle exceptions in the @SubscribeMapping and @MessageMapping annotated controllers. In the following example, the method annotated with the @MessageMapping annotation receives a SEND frame from a client to the /app/request destination. In case of success, the method sends a MESSAGE frame to the /queue/responses destination. In case of an error, the exception handling method sends a MESSAGE frame to the /queue/errors destination. @Controller public class MessageMappingController { @MessageMapping("/request") @SendTo("/queue/responses") public String handleMessageWithResponse(String message) { logger.info("Message with response: {}", message); if (message.equals("zero")) { throw new RuntimeException(String.format("'%s' is rejected", message)); } return "response to " + HtmlUtils.htmlEscape(message); } @MessageExceptionHandler @SendTo("/queue/errors") public String handleException(Throwable exception) { return "server exception: " + exception.getMessage(); } } It is possible to handle exceptions for a single @Controller class or across many controllers with a @ControllerAdvice class. Sending messages by message templates It is possible to send MESSAGE frames to destinations by message templates using the methods of the MessageSendingOperations interface. Also, it is possible to use an implementation of this interface, the SimpMessagingTemplate class, that has additional methods to send messages to specific users. In the following example, a client sends a SUBSCRIBE frame to the /topic/periodic destination. The server broadcasts MESSAGE frames to each subscriber of the /topic/periodic destination. 
@Component public class ScheduledController { private final MessageSendingOperations<String> messageSendingOperations; public ScheduledController(MessageSendingOperations<String> messageSendingOperations) { this.messageSendingOperations = messageSendingOperations; } @Scheduled(fixedDelay = 10000) public void sendPeriodicMessages() { String broadcast = String.format("server periodic message %s via the broker", LocalTime.now()); this.messageSendingOperations.convertAndSend("/topic/periodic", broadcast); } } JavaScript browser client The JavaScript browser client uses the webstomp object from the webstomp-client library. As the underlying communicating object the client uses a SockJS object from the SockJS library. When a user clicks the ‘Connect’ button, the client uses the webstomp.over method (with a SockJS object argument) to create a webstomp object. After that, the client uses the webstomp.connect method (with empty headers and a callback handler) to initiate a connection to the server. When the connection is established, the callback handler is called. After the connection, the client uses the webstomp.subscribe methods to subscribe to destinations. This method accepts a destination and a callback handler that is called when a message is received and returns a subscription. The client uses the unsubscribe method to cancel the existing subscription. When the user clicks the ‘Disconnect’ button, the client uses the webstomp.disconnect method (with a callback handler) to initiate the close of the connection. When the connection is closed, the callback handler is called. 
let stomp = null; // 'Connect' button click handler function connect() { stomp = webstomp.over(new SockJS('/websocket-sockjs-stomp')); stomp.connect({}, function (frame) { stomp.subscribe('/app/subscribe', function (response) { log(response); }); const subscription = stomp.subscribe('/queue/responses', function (response) { log(response); }); stomp.subscribe('/queue/errors', function (response) { log(response); console.log('Client unsubscribes: ' + subscription); subscription.unsubscribe({}); }); stomp.subscribe('/topic/periodic', function (response) { log(response); }); }); } // 'Disconnect' button click handler function disconnect() { if (stomp !== null) { stomp.disconnect(function() { console.log("Client disconnected"); }); stomp = null; } } When the user clicks the ‘Send’ button, the client uses the webstomp.send method to send a message to the destination (with empty headers). // 'Send' button click handler function send() { const output = $("#output").val(); console.log("Client sends: " + output); stomp.send("/app/request", output, {}); } Java Spring client Java Spring client consists of two parts: Spring STOMP events handler and Spring STOMP over WebSocket configuration. To handle STOMP session events, the client implements the StompSessionHandler interface. The handler uses the subscribe method to subscribe to server destinations, the handleFrame callback method to receive messages from a server, and the sendMessage method to send messages to the server. 
public class ClientStompSessionHandler extends StompSessionHandlerAdapter { @Override public void afterConnected(StompSession session, StompHeaders headers) { logger.info("Client connected: headers {}", headers); session.subscribe("/app/subscribe", this); session.subscribe("/queue/responses", this); session.subscribe("/queue/errors", this); session.subscribe("/topic/periodic", this); String message = "one-time message from client"; logger.info("Client sends: {}", message); session.send("/app/request", message); } @Override public void handleFrame(StompHeaders headers, Object payload) { logger.info("Client received: payload {}, headers {}", payload, headers); } @Override public void handleException(StompSession session, StompCommand command, StompHeaders headers, byte[] payload, Throwable exception) { logger.error("Client error: exception {}, command {}, payload {}, headers {}", exception.getMessage(), command, payload, headers); } @Override public void handleTransportError(StompSession session, Throwable exception) { logger.error("Client transport error: error {}", exception.getMessage()); } } The following Spring configuration enables STOMP over WebSocket support in the Spring client. The configuration defines three Spring beans: the implemented ClientStompSessionHandler class as an implementation of StompSessionHandler interface — for handling STOMP session events the SockJsClient class with selected transports as an implementation of WebSocketClient interface — to provide transports to connect to the WebSocket/SockJS server the WebSocketStompClient class — to connect to a STOMP server using the given URL with the provided transports and to handle STOMP session events in the provided event handler. 
The SockJsClient object uses two transports: the WebSocketTransport object, which supports SockJS WebSocket transport the RestTemplateXhrTransport object, which supports SockJS XhrStreaming and XhrPolling transports @Configuration public class ClientWebSocketSockJsStompConfig { @Bean public WebSocketStompClient webSocketStompClient(WebSocketClient webSocketClient, StompSessionHandler stompSessionHandler) { WebSocketStompClient webSocketStompClient = new WebSocketStompClient(webSocketClient); webSocketStompClient.setMessageConverter(new StringMessageConverter()); webSocketStompClient.connect("http://localhost:8080/websocket-sockjs-stomp", stompSessionHandler); return webSocketStompClient; } @Bean public WebSocketClient webSocketClient() { List<Transport> transports = new ArrayList<>(); transports.add(new WebSocketTransport(new StandardWebSocketClient())); transports.add(new RestTemplateXhrTransport()); return new SockJsClient(transports); } @Bean public StompSessionHandler stompSessionHandler() { return new ClientStompSessionHandler(); } } The client is a console Spring Boot application without Spring Web MVC. @SpringBootApplication public class ClientWebSocketSockJsStompApplication { public static void main(String[] args) { new SpringApplicationBuilder(ClientWebSocketSockJsStompApplication.class) .web(WebApplicationType.NONE) .run(args); } } Conclusion Because WebSocket provides full-duplex communication for the Web, it is a good choice to implement various messaging protocols on top of it. 
Besides STOMP, several other messaging subprotocols are officially registered to work over WebSocket, among them: AMQP (Advanced Message Queuing Protocol) — another protocol to communicate between clients and message brokers MSRP (Message Session Relay Protocol) — a protocol for transmitting a series of related instant messages during a session WAMP (Web Application Messaging Protocol) — a general-purpose messaging protocol for publish-subscribe communication and remote procedure calls XMPP (Extensible Messaging and Presence Protocol) — a protocol for near real-time instant messaging, presence information, and contact list maintenance Before implementing your own subprotocol on top of WebSocket, try to reuse an existing protocol and its client and server libraries — you can save a lot of time and avoid many design and implementation errors. Complete code examples are available in the GitHub repository.
https://medium.com/swlh/websockets-with-spring-part-3-stomp-over-websocket-3dab4a21f397
['Aliaksandr Liakh']
2020-11-09 11:55:37.655000+00:00
['Stomp', 'Spring', 'Web', 'Websocket', 'Java']
A Look at Server-Sent Events
Server Sent Events are a standard allowing browser clients to receive a stream of updates from a server over an HTTP connection without resorting to polling. Unlike WebSockets, Server Sent Events are a one-way communications channel - events flow from server to client only. You might consider using Server Sent Events when you have some rapidly updating data to display, but you don’t want to have to poll the server. Examples might include displaying the status of a long running business process, tracking stock price updates, or showing the current number of likes on a post on a social media network. Here’s a video of me presenting this material at San Diego JavaScript’s Fundamental JS meetup on June 25th 2020… remotely due to the pandemic at the time. Architecture When working with Server Sent Events, communications between client and server are initiated by the client (browser). The client creates a new JavaScript EventSource object, passing it the URL of an endpoint which is expected to return a stream of events over time. The server receives a regular HTTP request from the client (these should pass through firewalls etc. like any other HTTP request, which can make this method work in situations where WebSockets may be blocked). The client expects a response with a series of event messages at arbitrary times. The server needs to leave the HTTP response open until it has no more events to send, decides that the connection has been open long enough and can be considered stale, or until the client explicitly closes the initial request. Every time that the server writes an event to the HTTP response, the client will receive it and process it in a listener callback function. The flow of events looks roughly like this: Flow of events between Client and Server Message Format The Server Sent Events standard specifies how the messages should be formatted, but does not mandate a specific payload type for them. 
A stream of such events is identified using the Content-Type “text/event-stream”. Each event is formatted as a set of colon separated key/value pairs, with each pair terminated by a newline and the event itself terminated by two newlines. Here’s a template for a single event message:

id: <messageId - optional>
event: <eventType - optional>
data: <event data - plain text, JSON, ... - mandatory>

id: A unique ID for this event (optional). The client can track these and request that the server stream events after the last one received in the event of the client becoming disconnected from the stream and reconnecting again.

event: Specifies the type of event in the case where one event stream may carry distinctly different event types. This is optional, and can be helpful for processing events on the client.

data: The message body; there can be one or more data key/value pairs in a single event message.

Here’s an example event that contains information about Qualcomm’s stock price:

id: 99
event: stockTicker
data: QCOM 64.31

In this case, the data is simple text, but it could equally be something more complex such as JSON or XML. The ID can also be in any format that the server chooses to use.

Client (Browser) Implementation

The client connects to the server to receive events by declaring a new EventSource object, whose constructor takes a URL that emits a response with Content-Type “text/event-stream”. Event handler callback functions can then be registered to handle events with a specific type (where the event key/value pair is present).
These handlers are registered using the addEventListener method of the EventSource object. Additionally, a separate property of the EventSource object, onmessage (annoyingly not using camelCase), can be set to a callback function which will receive events that do not have the optional event key/value pair set. The code below shows examples of both types of callback:

const eventSource = new EventSource('http://some.url'); // Declare an EventSource

// Handler for events without an event type specified
eventSource.onmessage = (e) => {
  // Do something - event data etc will be in e.data
};

// Handler for events of type 'eventType' only
eventSource.addEventListener('eventType', (e) => {
  // Do something - event data will be in e.data,
  // message will be of type 'eventType'
});

In the callbacks, the message data can be accessed as e.data. If multiple data lines existed in the original event message, these will be concatenated together into one string by the browser before it calls the callback. Any newline characters that separated each data line in the message will remain in the final string that the callback receives.

Important: The browser is limited to 6 open SSE connections at any one time. This is per browser, so multiple tabs open each using SSEs will count against this limit. See a discussion on Stack Overflow here for more details. Thanks to Krister Viirsaar for bringing this to my attention.

Server Side Implementation

While the client implementation has to be JavaScript as it runs in the browser, the server side can be coded in any language. As this demo was originally put together for a JavaScript Meetup group I used Node.js. The server side could just as easily have been built in Java, C, C#, PHP, Ruby, Python, Go or any language of your choosing. The server implementation should be able to:

Receive a HTTP request.
Respond with one or more valid server sent event messages, using the correct message format.
Tell the client that the Content-Type being sent is “text/event-stream”, indicating that the content will be valid server sent event messages.
Tell the client to keep the connection alive, and not to cache it, so that events can be sent over the same connection over time, safely reaching the client.

Code for a basic example of a server in Node.js that does this and sends an event approximately every three seconds is shown below.

const http = require('http');

http.createServer((request, response) => {
  response.writeHead(200, {
    Connection: 'keep-alive',
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache'
  });

  let id = 1;

  // Send event every 3 seconds or so forever...
  setInterval(() => {
    response.write(`event: myEvent\nid: ${id}\ndata: This is event ${id}.`);
    response.write('\n\n');
    id++;
  }, 3000);
}).listen(5000);

Stopping an Event Stream

There are two ways to stop an event stream, depending on whether the client or the server initiates the termination.

From the Browser / Client Side

The browser can stop processing events by calling .close() on the EventSource object. This closes the HTTP connection - the server should detect this and stop sending further events, as the client is no longer listening for them. If the server does not do this, then it will essentially be sending events out into a void. The client side code for this is very simple:

const eventSource = new EventSource('http://url_serving_events');
...
// We want to stop receiving events now
eventSource.close();

When the server realizes that the client has closed the HTTP request, it should then close the corresponding HTTP response that it has been sending events over. This will stop the server from continuing to send events to a client that is no longer listening.
Assuming the server is implemented in Node.js, and that request and response are the HTTP request and response objects, the server side code looks like:

request.on('close', () => {
  response.end();
  console.log('Stopped sending events.');
});

From the Server Side

The server can tell the browser that it has no more events to send by taking the following actions:

Sending a final event containing a special ID and/or data payload that the application code running in the browser recognizes as an “end of stream” event. This requires the client and server to have a shared idea of what that ID or payload looks like.
AND by closing the HTTP connection on which events are sent to the client.

The client should then call .close() on the EventSource object to free up client side resources, ensuring no further requests are made to the server. Assuming the server is written in Node.js, server side code to end the event stream would look like this (response is the HTTP Response object):

// Send an empty message with event ID -1
response.write('id: -1\ndata:\n\n');
response.end();

And the client should listen for the agreed “end of event stream” message (in this example it will have an eventId of -1):

eventSource.onmessage = (e) => {
  if (e.lastEventId === '-1') {
    // This is the end of the stream
    eventSource.close();
  } else {
    // Process a message that isn't the end of the stream...
  }
};

Browser Support

Server Sent Events are supported in the major browsers (Chrome, Firefox, Safari), but not yet in MS Edge, where this feature is currently “under consideration” for implementation (details here).
Chrome currently appears to have the best debugging support - if you select an XHR request that is receiving Server Sent Events in the Network tab, it will format the messages for you and display them in a table:

Debugging Server Sent Events in the Chrome Browser

If you want to use Server Sent Events in MS Edge or Internet Explorer, there are polyfills available that mimic the functionality. One example is Yaffle, whose documentation also contains a list of alternative implementations.

A Quick Demo

The image below shows a simple browser application that is receiving Server Sent Events from a Node.js server. The server sends a randomly generated stream of events that can have one of four event types. Each event type contains a different type of payload:

coinToss: payload contains one of two strings - “heads” or “tails”.
dieRoll: payload contains a random number from 1..6 inclusive.
catFact: payload contains a cat fact as a string.
meme: payload contains the URL of a meme image.

The client listens for each event type using a separate listener function for each, and updates the area of the page corresponding to that event type on receipt of an event. Additionally, event data is logged to the logging area across the bottom of the page and to the browser’s JavaScript console. The server will stop sending events after 30 have been sent, or if the user presses the “Stop Events” button before then.

Server Sent Events Demo Running

If you want to grab the complete code for this project and use it yourself, it’s available on GitHub. You’ll need Node.js installed on your machine, and setup instructions are included in the project README.

Advanced — Dropped Connection & Recovery

As the HTTP connection that is used to send events to the client is open for a relatively long time, there’s a good chance that it may get dropped due to the network temporarily going away, or due to the server deciding that it has been open long enough and terminating it.
This may happen before the client has received all the events, in which case it may wish to reconnect and pick up from where it left off in the event stream. The browser will automatically attempt to reconnect to an event source if the connection is dropped. When it does, it will send the ID of the last event that it received to the server in the HTTP header “Last-Event-ID” on a new HTTP request. The server can then start sending events that have happened since the supplied ID, if the server-side logic is able to determine what those are. The server may also include a key/value pair in the event messages that tells the client how long to wait before retrying in the event of the server ending the connection. For example, this would specify a five second wait:

id: 99
event: stockTicker
data: QCOM 64.31
retry: 5000

As our demo uses randomly generated event streams, it doesn’t use this capability.
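Server-side handling of the “Last-Event-ID” header isn’t part of the demo, but the reconnection flow described above can be sketched in Node.js. This is a minimal illustration, not code from the article’s repo; the in-memory event log and the helper names (formatSSE, eventsSince) are my own, and a real server would need a policy for capping or expiring the log:

```javascript
// Hypothetical in-memory log of recently sent events.
const eventLog = [
  { id: 1, type: 'stockTicker', data: 'QCOM 64.31' },
  { id: 2, type: 'stockTicker', data: 'QCOM 64.45' },
  { id: 3, type: 'stockTicker', data: 'QCOM 64.02' }
];

// Format one event using the wire format described earlier:
// colon separated key/value pairs, each ended by a newline,
// with a blank line terminating the whole event.
function formatSSE({ id, type, data }) {
  return `id: ${id}\nevent: ${type}\ndata: ${data}\n\n`;
}

// Return only the events the client has not yet seen.
function eventsSince(log, lastEventId) {
  const lastId = Number(lastEventId);
  if (Number.isNaN(lastId)) return log; // no usable header: replay everything
  return log.filter((e) => e.id > lastId);
}

// Inside an http.createServer handler, this might be used as:
//   const missed = eventsSince(eventLog, request.headers['last-event-id']);
//   missed.forEach((e) => response.write(formatSSE(e)));
```

Whether replaying from an ID like this is even possible depends on the application; for the random streams in the demo there is simply nothing meaningful to replay.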
https://medium.com/conectric-networks/a-look-at-server-sent-events-54a77f8d6ff7
['Simon Prickett']
2020-07-09 04:43:09.266000+00:00
['JavaScript', 'Nodejs', 'Html5', 'Web Development', 'API']
Belonging Again (Part 6)
To return to the main point of the last section — as hopefully the example of Christianity and LGBT marriage helped make clear — the doctrine of tolerance can be a threat to communal authority, for as the State demands tolerance of LGBT marriage, for example, Christian communities fragment and inflate, the validity of the Bible is questioned and undermined, and the State is increasingly viewed as a source of justice while small communities are viewed with skepticism as potential sources of bigotry, racism, and so on. ‘The idea of toleration, in the modern sense, [(unintentionally)] calls into question the validity and even the ethical appropriateness of attaching oneself too strongly to the kinds of loyalties and the kinds of transcendent convictions that are the very soul of the association.’¹

As nations are defined by borders, so communities are greatly defined by what they do and don’t do, whether it be the celebration of a certain festival or a prohibition against gluten. Conyers wrote: ‘The more [a community] exists on the basis of a telos or purpose that transcends in significance the practical purposes of the [S]tate (or the ideological vision of the state), it becomes thereby an indigestible, alien, and resistant object.’³ Again, perhaps Christian communities against LGBT marriage should be “indigestible” — the point here is only to argue that as the authority and necessity of other social associations are, inevitably, ‘diminished in people’s consciousness, then the [S]tate proves to be the benefactor’ (for good or for bad).⁴ In a world where everything or nothing was permitted everywhere all the time, there wouldn’t be practically meaningful differences between groups. This isn’t to say there wouldn’t be distinctions (say in location, average height, etc.), but it is to say that those differences wouldn’t signify practical distinctions between peoples. Perhaps this “one world” would be a better place?
Perhaps so — for now I only want to focus on how, for good or for bad, a world without borders (thanks to toleration) is a world without definitions. Arguably, such a world is also more inclusive. Definition can be traded for inclusion, but without definition, what standard would be present relative to which people would want inclusion? Justice? Unconditional acceptance? Good things, no doubt. If the State holds authority over what communities can and cannot permit and exclude, then the State holds authority over the very ways communities define themselves, and especially when it exercises this authority (in the name of justice, inclusion, etc.), the State will gradually come to be seen as what controls communities (over the communal leaders). ‘Ever since Solomon attempted to reorganize Israel along administrative districts, cutting across the boundaries of tribal lands, it has been recognized that the organized [S]tate wishes to diminish the role of natural social bodies [(often for the best of reasons)].’⁵ But if this is the price to end slavery, racism, bigotry, and/or injustice, isn’t it a wonderful trade? Perhaps so, but it is indeed a trade, one that seems to make “the banality of evil” less likely (but perhaps also worse if it were to occur). And do keep in mind that it is doubtful the State usually if ever expands itself “over” communities for anything but “good reasons.” ‘Each association or group has about it its own goals and its own internal discipline[s], each linking by degrees and in its own way the individual with the whole world, including the [S]tate.’⁶ When the State has power over which goals and “internal disciplines” communities and associations are allowed to practice, the State has ultimate control (indirectly) over the ways people “link” with, are “toward,” and identify with, the world, social arrangements, and the State.
Consequently and gradually, ‘[l]oyalties of individuals once distributed to a variety of informal and largely organic associations, are later absorbed into the reified [S]tate.’⁷ The State thus gains more authority while the authority of communities weakens, possibly antagonizing those communities and tempting them into a defensive posture, which could make the communities seem like they should lose their authority and possibly more (perhaps causing a vicious cycle). Gradually, the State will become the main focus of the citizenry, and the people will likely attempt to make the State the foundation of a “new community.” Can the State bear the weight? Perhaps. ‘That which makes a group into a strong community is its adherence to a commitment potent enough to hold the members together,’ but as the State increases its size and power to assure that tolerance spreads (and as tolerance justifies State growth, creating a feedback loop), for good or for bad, the potency of communal commitments necessarily diminishes (except, perhaps, for commitments somehow involving the State).⁸ Yes, this diminishes the capacity of communities to discriminate, but it also diminishes the ability of communities to give us a sense of “belonging again.” For better or for worse, justice can contribute to rootlessness, and yet those who fight for justice can be those who long most for community and a place to belong. Does this mean we shouldn’t have a State at all? No, but it is to say that the doctrine of tolerance does not help the State maintain the role of an umpire, but instead pushes it in the direction of becoming a king. Perhaps a benevolent king? One can hope.

. . .

Notes

¹Conyers, A.J. The Long Truce. Dallas, TX: Spence Publishing Company, 2001: 224.
²Conyers, A.J. The Long Truce. Dallas, TX: Spence Publishing Company, 2001: 43.
³Conyers, A.J. The Long Truce. Dallas, TX: Spence Publishing Company, 2001: 223.
⁴Conyers, A.J. The Long Truce.
Dallas, TX: Spence Publishing Company, 2001: 190. ⁵Conyers, A.J. The Long Truce. Dallas, TX: Spence Publishing Company, 2001: 119. ⁶Conyers, A.J. The Long Truce. Dallas, TX: Spence Publishing Company, 2001: 223. ⁷Conyers, A.J. The Long Truce. Dallas, TX: Spence Publishing Company, 2001: 191. ⁸Conyers, A.J. The Long Truce. Dallas, TX: Spence Publishing Company, 2001: 44.
https://medium.com/@o-g-rose-writing/belonging-again-part-6-2efdf5cf469
['O.G. Rose']
2021-03-17 03:04:04.554000+00:00
['Irony', 'Justice', 'Civilization', 'Tolerance', 'Sociology']
How to Make a Festive Vegan Meringue Wreath
Okay, let’s talk strategy — this meringue wreath is much, much easier than it looks, but it requires some organization and careful attention to the instructions — I definitely recommend reading through the whole recipe before starting it. The ideal process is as follows: On the morning of the day before you intend to eat the meringue wreath, chill the aquafaba and coconut cream. Later that evening, make the meringue, and while it cooks, make and chill the topping. I also recommend using this time to extract the pomegranate seeds, which you can store in a sealed container in the fridge till required. The cooled meringue can simply be kept in the oven until needed, but if you’re understandably going to be using the oven for other things, it can be carefully transferred to a sealed container and kept in the fridge. The next day, just before serving, transfer the meringue wreath to a plate, and cover with the prepared toppings. I love a little Christmas Eve late-night baking, but you could make this two days ahead — keeping the cooked meringue and the topping in sealed containers in the fridge. Either way, it still needs to be decorated right before serving. With everything prepared, this should be a swift and serene undertaking. Feel free to change the decorations to suit your needs, but the sour crunch of pomegranates and gentle zing of the raspberries is perfect against the creamy sweetness below, and the pistachios look gorgeous against all that red. Strawberries, cherries, or redcurrants would all be equally visually effective. If you can’t find pomegranates, just use more berries — and while I am yet to find pre-packaged pomegranate seeds which taste any good, if you can find some, feel free to use those instead of fresh. If you can only get hold of frozen fruit, allow it to thaw first in a sieve over a bowl, otherwise its liquid will dissolve the meringue. The important thing to know is, you can definitely achieve this, and however it turns out, it will be delicious.
I made my meringue wreath on the most humid day of the year, with the air as thick as a kale smoothie, and it still turned out perfectly. The wreath broke a little at the seams as I transferred it to the plate, and yours might too, but you cannot tell once it’s pushed back together and covered in the toppings. And I accidentally crushed one of the meringues as I spread over the coconut cream, but again — it does not matter, and everyone will love it.

Vegan Meringue Wreath

Serves: 10 (or one 😉)

¾ cup aquafaba (from a 15-ounce can low-sodium chickpeas), chilled
¼ teaspoon cream of tartar
1 cup sugar
2 tablespoons cornstarch
¼ teaspoon vanilla extract

To decorate:

1 15-ounce can full-fat coconut cream, chilled for at least four hours (check the ingredients — there should be at least 70% coconut extract; thickeners are fine)
1 teaspoon vanilla extract
Pinch salt
1 pomegranate
1 cup fresh raspberries
¼ cup shelled pistachios, roughly chopped
4 mint sprigs, or as many as you prefer
https://tenderly.medium.com/how-to-make-a-festive-vegan-meringue-wreath-d40222ade5e3
['Laura Vincent']
2021-01-04 18:48:53.375000+00:00
['Vegan', 'Food', 'Christmas', 'Dessert', 'Recipe']
µcdn: a live, bundlerless, alternative
If there’s something annoying about JS modules in 2020, it’s that they don’t play very well with the millions of modules published on npm … but this is about to change, thanks to µcompress, “micro/you compress”, and µcdn.

A basic example

Explained in a repository dedicated to this purpose only, the following is all it takes to get an idea of how both µcdn and µcompress work:

git clone https://github.com/WebReflection/ucdn-test.git
cd ucdn-test
npm i

That’s it: if everything installs fine, you’ll have a localhost:8080 to point at, and you’ll see that all requests for files in the ./src folder are delivered already optimized. Do a simple refresh, and see all assets come back as 304. Modify any file in the source, and see that after at most 1 second it’ll be served updated, and pre-optimized. Run the following command to create a ./public folder with all assets already optimized, ready to be published on any static site host:

npm run build

What is this sorcery?

The latest µcompress added the ability to crawl any folder, ignoring the node_modules one, but using it to resolve dependencies automatically. Check the top of the ./src/js/index.js file, as an example:

import { render, html } from 'uhtml';
import './my-counter.js';
// ...

The crawler will resolve the uhtml module, finding its dual-module compatible entry point, a technique already described a few days ago. That’s it: if you have published an ESM compatible module on npm, the crawler will look for either the "exports" property and its related "import" field, or it will fall back to the "module" field, assuming this points at a standard JavaScript file. Standard JS is key though, as specialized syntax such as TypeScript or JSX, both non-standard, isn’t transpiled on the fly: the toolchain required to handle this or that case is massive, and if it were included within µcompress, the “micro” prefix of the name would become instantly meaningless.
However, you can use all the tooling you like to pre-build standard JS from your source, and use that result as the source target for either µcdn or µcompress.

About multiple dependencies

Each module is resolved, and optimized, only once. As an example, the js/index.js file requires µhtml, but it also imports the js/my-counter.js component, which requires µce, which in turn uses µhtml to render its content. As a result, all imports from "uhtml" will point at the exact same file, unleashing the real power of ECMAScript modules. Shared dependencies will then be resolved through the node_modules folder, which must be included in the source, ’cause it has to be reachable from the root of the site, hence being part of the project once live. But fear not: only pre-optimized and resolved files will land in production, so … everything is fine.

Are dynamic imports available?

Unfortunately, there is no way to know import targets composed at runtime. However, import("module") will work as expected, resolving "module" the same way any other static import is resolved. Any other relative import will work too, as long as the file is reachable, and not outside the source root boundaries, so that the cases where dynamic import wouldn’t work are reduced to things like:

// this doesn't work at all
import(condition ? "module1" : "module2");
import(`module-${thing}`);

// this works *only* if ./relative is part of
// the source and not third-party module logic
import(`./relative/${file}.js`).then(console.log);

And what about IE?

Once again, there are plenty of tools able to transpile ESM and modules into ES5 compatible syntax, as well as plenty of ways to conditionally load modern, or transpiled, code, where the one based on <script type="module"> and its <script nomodule> counterpart would be the easiest way to go.
Not only IE though …

Using bundlers is still a very good practice, ’cause as cool as standard JS modules are, the number of requests, especially in big projects, can make your site slower than a pre-optimized bundle split into meaningful chunks that aggregate only the code that’s needed. That being said, bundlers are incapable of making code really independent, or testable in isolation, ’cause modules could be duplicated across chunks, even the basic helpers used to import already imported modules. In this scenario, the processing done by µcompress makes any project instantly portable through shared, external, third-party dependencies, so that any file can also be tested in isolation, through the public folder. In order to do so, I’ve also implemented a --no-minify flag, so that the only thing that happens to your JS files is module resolution, when needed.

A new, refreshing, way to deploy JS 🎉

It’s been a very long time since refreshing a page, after changing a standard JS file’s content, would produce instant feedback, and on demand:

complex projects won’t need to rebuild the universe per each file change: only browsed parts are eventually invalidated
all dependencies cost one time only: happy blazing fast browsing!
it works 100% offline: no need to use unpkg.com/uhtml?module or similar online helpers, which are amazing, but always need bandwidth, and are usually slower than a 304 from your own machine 😉

I hope you’ll give µcompress and µcdn a chance to test, or deploy, pre-optimized assets, and eventually help me improve their potential which is, in my opinion, even at this early stage, huge! Literally … in 2 minutes!

The above video shows how trivial it is to bootstrap any project, based or not on npm modules, separating FE dependencies from BE ones. Isn’t this great?
https://webreflection.medium.com/%C2%B5compress-goodbye-bundlers-bb66a854fc3c
['Andrea Giammarchi']
2020-05-27 08:59:56.034000+00:00
['Web Development', 'JavaScript', 'NPM']
Optimizing Individual-Level Models for Group-Level Predictions
Part 1 — An Analysis of Bias

Introduction

The goal of this post is to present some of my thoughts about a very common, yet scantily addressed, problem in machine learning at large (and healthcare in particular): how do you construct group-level predictions given only individual-level data in a way that optimizes a pre-determined loss function (e.g., MAE or MSE)? If you first create an individual-level model optimizing for some loss function and then take the average of the predictions, do you automatically optimize for the same loss function at the group level? As it turns out, while the answer happens to be “yes” in the case of MSE, the answer is a resounding “NO” in the case of MAE! In this post I explain why the knee-jerk method of using the same loss function at the individual level is The Wrong Thing To Do.

This is a story about uncovering the real nature of this problem. As part of this exploration I use an interesting theorem by Peter Hall whose proof I found to be somewhat difficult to penetrate. As a service to the community, in Part 2 I will present the backstory for the theorem, and provide a slightly more natural proof than the one he provided in his paper, with all of the crucial details that were left out of the original paper filled in. All code for plots seen in this post can be found in this GitHub repo.

Example and Assumptions

In order to make this exploration easier to follow, I will use a concrete example and fix my assumptions.

Example: A health insurance company wants to estimate the per-person cost of employer groups (i.e., the total cost of the group divided by the number of members) over the span of the next year, where each employer group consists of some number of individual members.

Base Assumptions: You are given training and test data that consist of individual-level data as well as a member-to-group map.
The training data has target values, represented by each individual’s incurred cost in the next year, while the test data has no target values. The test data and the training data potentially have different groups. It only remains to specify what “true values” to compare against at the group level. In order to show why optimizing at the individual level for the same loss function that you use at the group level is misguided, it is easiest to use the following assumption:

Assumption A: For each group, set its true value to be the average target value of its members.

We will later also consider another assumption that arises more naturally if you allow membership to vary.

Why have an individual model at all?

One way to optimize a particular loss function at the group level is simply to create a group-level model that optimizes this loss function, whose features are engineered, aggregated individual-level features. But the number of groups is presumably much smaller than the number of individuals, so one would expect such a model to suffer from high variance. A combined individual-level and group-level approach would be wisest. “Why not use Recurrent Neural Networks? That way you can make a group-level model that uses all the data!”, I hear you cry. Well… That would be using all the individual-level features, but not the unaggregated target values, unless a special loss function is employed. And even then, RNNs are order-dependent, and group membership is not! So let’s go ahead and call that “experimental”. Either way, your group-level predictions can only gain from doing an individual-level model well. So let’s get to it!

Intuition about optimizing for MAE/MSE

The bottom line is: optimizing for MSE means you are estimating the mean; optimizing for MAE means you are estimating the median. What does that actually mean? Let Y be the target value, and let X_1,…,X_n be the features.
If your feature values are X_1=x_1,…,X_n=x_n, then the target value given those features, Y|X_1=x_1,…,X_n=x_n, is a random variable, not a constant. In other words, your feature values don’t determine the target. For example, if you are predicting cost, then it is perfectly conceivable that two individuals with the same feature values have different costs; though knowing these feature values does change the distribution of cost. If you are optimizing for MSE, then plugging (x_1,…,x_n) into your model will attempt to predict E(Y|X_1=x_1,…,X_n=x_n); whereas if you are optimizing for MAE your model will attempt to predict median(Y|X_1=x_1,…,X_n=x_n).

Indeed, if f(x_1,…,x_n) is the model’s prediction at these feature values for a machine learning model whose loss function is MSE, then it would attempt to approximate the a that minimizes:

E((Y − a)² | X_1=x_1,…,X_n=x_n) = E(Y² | X_1=x_1,…,X_n=x_n) − 2a E(Y | X_1=x_1,…,X_n=x_n) + a²

This last expression is a parabola in a with global minimum at a = E(Y|X_1=x_1,…,X_n=x_n). Similarly, if f(x_1,…,x_n) is the model’s prediction at these feature values for a machine learning model whose loss function is MAE, then it would attempt to approximate the a that minimizes:

E(|Y − a| | X_1=x_1,…,X_n=x_n)

The a that minimizes this expression is median(Y|X_1=x_1,…,X_n=x_n). (To see this you have to play around with integrals; it’s an easy but annoying exercise.)

Technical note: Conditional probability of a random variable given an event only makes sense if the event that you’re conditioning on has positive probability. Attempts to extend the definition to events with zero probability are doomed to be ill-defined unless we specify a limiting procedure; see the Borel–Kolmogorov paradox. If at least one of the X_i’s is continuous, then P(X_1=x_1,…,X_n=x_n)=0, which would imply that Y|X_1=x_1,…,X_n=x_n doesn’t make any sense. (The definition commonly found in undergraduate textbooks for Y|X_1=x_1,…,X_n=x_n is not coordinate invariant.
For the more advanced readers: conditioning on sub-sigma-algebras rather than events yields the same issue, since the resulting random variable is unique up to “almost sure equality”, which means probability zero events can be exceptions.) We can and will elegantly avoid this issue by restricting the values of the features to values that a computer can represent. In this way, even “continuous” variables are actually discrete, and everything is well-defined.

The problem

Fix a group of size m, and for i = 1,…,m let x^(i) = (x_1^(i),…,x_n^(i)) be the features of the i-th person, with Y_i that person’s target value. If we are creating an individual-level model that optimizes MSE, then the average of the results is an estimate of

(1/m) ∑ᵢ E(Y_i | X_1=x_1^(i),…,X_n=x_n^(i))

By the linearity of expectation, this is equal to

E((1/m) ∑ᵢ Y_i | the features of all m members)

Great! In other words, by optimizing MSE at the individual level, you’re optimizing MSE at the group level! Now let’s do the same analysis for MAE. If we are creating an individual-level model that optimizes MAE, then the aggregation of the results is an estimate of

(1/m) ∑ᵢ median(Y_i | X_1=x_1^(i),…,X_n=x_n^(i))

But that is very, very, veeeery far from being

median((1/m) ∑ᵢ Y_i | the features of all m members)

Here’s a quick illustration that the median of the sum is very far from being the sum of the medians, even if the random variables are all i.i.d. In order to set up this example, I will be using a Gamma distribution with scale 200 and shape 1. This is what this distribution looks like: (Code for plots seen in this post can be found in this GitHub repo.)

Figure 1

Now take the X_i’s to be i.i.d. random variables following this distribution. Then here is a comparison of the average of their medians versus the median of their average:

Figure 2

As you can see, this gap is quite large!

Optimizing MSE at the individual level is superior to optimizing MAE at the individual level for group-level MAE

If a group is large then it is somewhat reasonable to assume that (1/m) ∑ᵢ Y_i is approximately normal; and the median of a normal distribution is its mean.
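The gap illustrated in Figure 2 is easy to reproduce numerically. Here is a small simulation sketch (mine, not from the post’s repo, and in JavaScript rather than whatever the repo uses) with the same shape-1, scale-200 Gamma distribution, which is simply an Exponential with mean 200 and median 200·ln 2 ≈ 138.6:

```javascript
// Draw from an Exponential (Gamma with shape 1) via inverse transform sampling.
function sampleExponential(scale) {
  return -scale * Math.log(1 - Math.random());
}

// Empirical median of an array of numbers.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Estimate the median of the average of n i.i.d. draws over many trials.
function medianOfAverage(n, scale, trials) {
  const averages = [];
  for (let t = 0; t < trials; t++) {
    let sum = 0;
    for (let i = 0; i < n; i++) sum += sampleExponential(scale);
    averages.push(sum / n);
  }
  return median(averages);
}

// The average of the medians is just the median of a single draw: scale * ln 2.
const averageOfMedians = 200 * Math.LN2; // ≈ 138.6
// The median of the average of 10 draws sits far above that, near the mean of 200.
const medianOfAvg = medianOfAverage(10, 200, 50000);
console.log(averageOfMedians, medianOfAvg);
```

Consistent with the normal approximation just mentioned, the gap between the two numbers grows as the median of the average climbs toward the mean with increasing n, while the average of the medians stays fixed.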
So if you optimize for MSE at the individual level and then take the average of the predictions, you’ll be approximately estimating the median for large groups! This might seem counter-intuitive at first: optimizing MSE at the individual level is superior to optimizing MAE at the individual level not only for group-level MSE, but also for group-level MAE. But how bad should we expect the bias to be for small groups? (As it will turn out, secretly the key is to use estimates of the error of the Central Limit Theorem.) A result by Peter Hall This piqued my interest. So I started looking for results about the median of a sum of i.i.d’s. I found “On the Limiting Behaviour of the Mode and Median of a Sum of Independent Variables” by Peter Hall¹, where he proved the following result. (See Part 2 of this post for a deeper understanding of why this theorem is true.) Since the X_i’s are all identically distributed, let’s simply let X := X_1 for notational ease. Roughly speaking, the theorem says that if X has mean 0 and variance 1 (plus mild regularity conditions), then median(X_1+…+X_n) converges to −E(X³)/6 as n goes to infinity. If we do not assume that the X_i’s have mean 0 and variance 1, a quick back-of-the-envelope computation, reducing to the normalized case, shows that this reduces to the following approximation: median(X_1+…+X_n) ≈ n·E(X) − E((X−E(X))³)/(6·V(X)). Notice that this is an asymptotic result! For us to be able to use it in the machine learning context discussed above, we must first make sure that this approximation is reasonable for small n. So let’s do a quick proof-of-concept: (Code for plots seen in this post can be found in this GitHub repo.) Figure 3 That’s pretty accurate! Bias estimation in the machine learning setting For each group we would be attempting to estimate median((1/m)·Σ_{i=1}^m Y_i | the features of the members) for that group. The summands are not i.i.d’s, and so Hall’s result is not directly applicable. To that end, we can try to replace Assumption A with: Assumption B: For each group, assume that its “true value” is the expected value of the mean of m samples, with repetition, of the true values of its members. (Arguably, Assumption B comes up more often in real life.
Often in group level predictions the members don’t stay fixed, but the current members’ target values are indicative of the distribution of target values of future members.) Now that the target value is the mean of i.i.d’s (because the sampling is done with repetition), we may employ Hall’s approximation. Fix a group, and let T_1, …, T_m be the sampling, with repetition, mentioned in Assumption B. The T_i’s are now i.i.d’s, and so for the sake of clarity let T:=T_1. We are now faced with the challenge of approximating V(T) and E((T-μ)³) for each group. This can be difficult, given that the groups that we most want to bias-correct are small groups. One simplistic approach is to estimate V(T) as V(Y), and estimate E((T-μ)³) as E((Y-E(Y))³). In turn, estimate V(Y) and E((Y-E(Y))³) by taking the average values over the target values in our data, and denote these estimates as Vˆ and κˆ_3 respectively. Whether these estimates work well or not depends on how different the groups are from one another — the more similar they are, the better this bias correction will be. Either way, these approximations are good enough to get a relatively clear picture of how small a group needs to be so that the bias is beyond your level of comfort. Namely, a rough estimate is that if the size of a group is m, then averaging an individual model trained using MSE should have a bias of about κˆ_3/(6·Vˆ·m). For groups for which this number is intolerably big, I would suggest extra care: Vˆ and κˆ_3 may not be good enough estimates, and may lead to a bad bias correction. I would simply suggest flagging groups of this size as requiring a bias correction, and doing more research on the bias correction needed, informed by the data and the particular application you have in mind.
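Here is a sketch of that bias estimate in code (hypothetical helper names; it assumes the bias takes the form κˆ_3/(6·Vˆ·m), the reduction of Hall’s result discussed above), together with a simulation check under Assumption B:

```python
import numpy as np

def estimate_group_bias(y_train, m):
    """Rough bias of (average of MSE-optimal individual predictions) relative to
    the group-level median, for a group of size m, using pooled moment estimates."""
    v_hat = np.var(y_train)                                   # pooled estimate of V(T)
    kappa3_hat = np.mean((y_train - np.mean(y_train)) ** 3)   # pooled E((T - mu)^3)
    return kappa3_hat / (6 * v_hat * m)

rng = np.random.default_rng(0)
y = rng.gamma(shape=1.0, scale=200.0, size=100_000)  # pooled target values (right-skewed)

m = 10
predicted_bias = estimate_group_bias(y, m)

# Simulation check under Assumption B: sample groups of size m with repetition,
# then compare the mean of the group averages with their median.
groups = rng.choice(y, size=(200_000, m), replace=True)
avgs = groups.mean(axis=1)
actual_bias = avgs.mean() - np.median(avgs)

print(predicted_bias, actual_bias)  # roughly equal (about 6 to 7 for this simulated data)
```

For m = 100 the same estimate is ten times smaller, matching the 1/m scaling in the formula.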
(Notice that while it is tempting to use Bayesian approaches that will allow you to sample from the distributions of the Y|X_1=x_{i, 1},…,X_n=x_{i, n}’s, the vanilla version of these approaches assumes that the Y|X_1=x_{i, 1},…,X_n=x_{i, n}’s are normally distributed. If that were the case, there wouldn’t be any bias to correct for… This also holds for using dropout at prediction time in a neural network, which, by the work of Yarin Gal and Zoubin Ghahramani², is roughly equivalent to sampling the posterior in an appropriately defined Gaussian process setting — but that setting also assumes normally distributed error.) If this post piqued your interest and you would like to challenge yourself with similar problems, we are hiring! Please check our open positions. GitHub: https://github.com/lumiata/tech_blog Visit Lumiata at www.lumiata.com and follow on Twitter via @lumiata. Find us on LinkedIn: www.linkedin.com/company/lumiata Citations: 1. Hall, P. (1980) On the limiting behaviour of the mode and median of a sum of independent random variables. Ann. Probability 8 419–430. 2. Gal, Y. and Ghahramani, Z. (2016) Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML.
https://medium.com/lumiata/optimizing-individual-level-models-for-group-level-predictions-655243e88344
['Hilaf Hasson']
2019-06-26 21:54:34.135000+00:00
['Bias', 'Modeling', 'Machine Learning', 'Data Science', 'Optimization']
The best, easiest and guaranteed way to start earning online via affiliate marketing and other ways
Zapable Agency Review The best, easiest and guaranteed way to start earning online via affiliate marketing and other ways istiqlal hassan May 8·1 min read What You’ll Get With The White Label Licenses Today: White Label: Very professional. Rather than having a client log in to “zapable.com” to manage their application, they log in to YOUR domain name of choice. Your Own Logo: You can add your own logo, colour choice and background, so when your clients log in that is what they see. Add And Remove Clients — You can add clients and “assign” which applications they can see with 1 click. Great Recurring Upsell: Sell the client access to their own private login to manage the application (worth $197). Push Notification Upsell: Furthermore, you can (your choice) charge a recurring fee to clients so they can access the white label push notifications area. This means your client can send push notifications from the Zapable platform without realizing that it’s Zapable! (worth $47) Full 1-click management: If a client stops paying your monthly fee, you can turn their access off with 1 click, without any problem. You’re in full control. Zapable White Label is an incredible bolt-on to make recurring income! Save 67% Today so Click Here Affiliate Disclosure: I have used links via which I earn a small commission.
https://medium.com/@hassanistiqlal/commission-creator-review-aff-38072582834d
['Istiqlal Hassan']
2021-05-08 17:04:27.860000+00:00
['Easy Money Online', 'Earn Money Online', 'Earn From Home', 'Best Way To Earn Online', 'Affiliate Marketing Tips']
Community Boards Love Scooters
I love being in the field. Looking back on my 14 years of policy advocacy at Transportation Alternatives, my most cherished moments were the times I got out into the street as a loud-mouthed organizer, a traffic jammer, or, on one fun occasion, an antiquated car driver. Looking back, it was most often these kinds of in-the-street engagements — way more often than reports, conferences, and pronouncements — that made the difference. So back in July, when my new colleagues at Superpedestrian asked if I could organize several in-person borough safety demonstrations of the LINK scooter and its on-board geofencing, I leapt at the challenge. Not just because I knew it would be fun, but because convincing New Yorkers requires showing up and showing what you’ve got. If the proof is in the pudding, then the evidence is in the e-scooter. From Snug Harbor to Yankee Stadium we completed 17 safety and geofence demonstrations over 3 months, 16 of them in Brooklyn, Queens, The Bronx and Staten Island — what John Choe of the Greater Flushing Chamber of Commerce calls the “Better Boroughs.” Roughly 612 test rides, 1.5 gallons of hand sanitizer, and countless conversations later, we’ve learned a lot about what it’s going to take to make shared e-scooters a success in the big city. While some of the feedback we received, such as the importance of safe e-scooters and reliable geofencing, was expected, some of what we heard was less predictable: Community Board Members Love E-Scooters Going into it, I had a few misconceptions, namely that New Yorkers would be skeptical about scooters, especially Community Board Members, who, as conventional wisdom dictates, are all about cars, driving, and parking. I was wrong. Transit cuts and pending carmageddon have made Community Board members excited about shared e-scooters coming to New York. They just want to make sure they are safe, and that they stay off sidewalks. 
Photo by Paul Mondesire Universal Access is just as Important as Safety We expected to have lots of discussions about pedestrian protection, rider education, and parking enforcement. But just as often, we heard questions about how everyone might gain access to the service. These conversations have already led to some new partnerships, from working with bodega owners, to bringing our LINK-Up discount program directly to New Yorkers who need it, to teaming with industry allies at Charge and Rio Mobility to push the envelope on accessible design. Stability is More Important than Zippiness Among scooter veterans and newbies alike, we found a clear preference for reliability and stability over acceleration. This was a departure from my conversations with riders in other cities who always asked about speed. Maybe New Yorkers have learned the hard way that vehicles have to be strong to stand up to our tough streets. It was common for riders to refer to the LINK Scooter as a “tank” and a “scooter version of Citibike,” with one rider saying, “it’s like riding a sturdy work boot!” Indeed, compared to other scooters, the LINK Scooter has a much lower center of gravity and higher weight capacity, enabling it to roll over street defects that would jostle weaker scooters. In sum, New Yorkers know that keeping the city moving in 2021 and beyond is going to require the rapid arrival of new modes. As urgent as this demand is, they do not want to sacrifice the values New Yorkers hold dear: public safety, social inclusion, and respect for the vulnerable pedestrians who are already struggling to navigate our streets. LINK scooters offer a path forward without compromise — safe, inclusive transportation that complements existing modes such as walking and public transit while expanding access to many.
https://medium.com/@link-city/community-boards-love-scooters-7cb55f7b2577
[]
2020-12-09 16:36:20.879000+00:00
['Scooters', 'New York City', 'New York', 'Community', 'Transportation']
Reflections of a Youth Worker: Throwing a Young Person in a Lake
Context: A reflection from a series I did for my studies several years ago, reviewed and published to encourage reflection on both a small and large scale within youth work. No real names or organisations have been revealed. The reflections all follow David Kolb’s Experiential Learning Cycle as a method for reflective practice. On a youth exchange in what is now North Macedonia, I was the coordinator of the entire exchange as well as male group leader for the young people from North Macedonia and Wales. There was one young person from North Macedonia with whom I had worked before on a youth exchange in Wales and had built up a good, fun working relationship. The young person was in his early teens and quite small for his age. I will call him Bobbi for now. Bobbi liked to try and annoy me, have me chase him, sometimes pick him up and spin him around, etc. Sometimes our interaction was quite physical, but it never dawned on me as a problem because it was a ‘play’ style of working relationship; social norms for youth workers in North Macedonia were and still are quite different, and I was regularly working alongside his older sister, who was also a youth worker. However, one day on the youth exchange the usual situation happened where Bobbi was trying to annoy me and pushing for a reaction. Bobbi was just in sandals, shorts and a t-shirt so I decided to act in kind. I chased after him, picked him up and ran to the beach where at the edge of the lake I threw him into the water. However, I slipped and fell into the water with him, which caused him to scrape his arm slightly. Everybody laughed, including his sister, and both of us were fine and Bobbi was happy as usual. However even if there were no issues, I was personally feeling bad and worried.
Asking myself questions like “did I cross the line here?” or “does this comply with our health and safety policy?” and “am I a good youth worker?” I was completely unsure of myself and my professional nature working with young people. Being coordinator as well as group leader for two countries on the youth exchange was a lot to take on and even though it was not by choice, it was still not the wisest of decisions. It was good to be working with young people whom I had worked with before and this helped build an excellent rapport and relationship. The relationship was the kind I’ve seen a lot of between youth workers and the younger spectrum of youth; playful, ‘childish’, etc. Although sometimes I wondered if the interaction was too physical by the standards of the Code of Occupational Ethics for Youth Provision in Wales, the fact of the matter was it was tame by local standards of educators interacting with young people. The one thing that has always worried me about such policy development has been that we may become a nation of paranoid educators with the most basic of relationships with young people. I went through a mini risk assessment in my head before throwing Bobbi in the water. Firstly, I noticed the boy had nothing expensive on him such as a mobile phone and secondly, it was a hot day, and the water was warm. Finally, I knew the boy could swim, that the water was deep enough and that the bed of the lake was soft sand. I fell in after him because he grabbed the back of my neck, which I was not expecting and which caught me by surprise; this meant that as we fell, my weight pushed him slightly more towards the rock we fell off and he slightly scraped his arm. It was a relief to see he was ok and that everybody (including himself) found it funny and amusing. However, it did have a strong effect on me questioning my judgement.
When answering said questions in my head I did feel maybe I had been too lax and crossed the line, that what had happened did not adhere to health and safety policy. Being coordinator as well as group leader for two countries was a lot of extra work, but that did not have an impact on the situation that occurred. It is always good to work with young people you have worked with before; however, can this make one over-confident? The relationship was one which was developed more in a ‘play’ setting than a ‘youth work’ setting, at least by Welsh standards. Someone in my position, a Welsh youth worker who regularly works in North Macedonia with Macedonians as well as people from other countries, regularly has to ask themselves: whose standards do I adhere to? Do I need to follow European standards or create international standards? Do we need to create separate codes and standards for individual organisations themselves? As a youth worker with a background in outdoor pursuits, I am regularly weighing up the risk-benefit balance of an outdoor interaction. I am happy with my preliminary risk assessment on the matter as it meant I was thinking carefully about my interaction with the young person. I was not what some would call ‘gung ho’ and I was undergoing ‘reflection-in-action’. Reflection-in-action is where: “The practitioner allows himself to experience surprise, puzzlement, or confusion in a situation which he finds uncertain or unique. He reflects on the phenomena before him, and on the prior understandings which have been implicit in his behaviour. He carries out an experiment which serves to generate both a new understanding of the phenomena and a change in the situation” — Donald Schön. Falling in after him was not what I expected, nor was the young person being slightly injured. The fact that everyone including the young person was happy and enjoyed the situation was of no relief for me. It had me questioning myself, what happened and my professional ability.
This time I was engaged in ‘reflection-on-action’: “The practitioner allows himself to experience surprise, puzzlement, or confusion in a situation which he finds uncertain or unique. He reflects on the phenomenon before him, and on the prior understandings which have been implicit in his behaviour” — Donald Schön. So upon reflection, I have found myself in a place where I feel I need to look for and develop a set of youth work provision standards on an international level, at least a basic set. The line of work I am involved in is relatively new and the differences in youth work between the cultures I engage with are large. A balance of feeling professional, integrated and without compromise on the quality of my work needs to be created. Photo by bantersnaps on Unsplash I do not believe that working with young people you have worked with before can make you over-confident; it only helps improve the quality of the work. Understanding of professional standards at an international level is of paramount importance; more comparison between Welsh, North Macedonian, European and International Standards needs to be made. My background in certain areas is of huge benefit to my work; however, I do need to watch to see if it allows me to take risks that are not needed at certain times. Understanding my reflective process is excellent for my development; nonetheless, I need to improve on my reflection-in-action.
https://medium.com/age-of-awareness/reflections-of-a-youth-worker-throwing-a-young-person-in-a-lake-b18ecd99a500
['Daniel John Carter']
2020-12-15 15:02:49.438000+00:00
['Youth Development', 'Professional Development', 'Youth', 'Education', 'Reflections']
Test Your Python Program the Right Way
Test Your Python Program the Right Way How to use Python Unittest to find bugs in your Python projects easily and quickly Matteo Follow Jul 15 · 7 min read Photo by Chris Ried on Unsplash Why a testing framework? For a long time, I assumed that finding bugs in my programs was just a matter of running them a few times manually. This may be okay for small projects, but it rapidly becomes impractical as the number of methods and classes increases. That’s the main reason why I started using a testing framework. It doesn’t mean, however, that you should only use these libraries if you are starting a big project: on the contrary, I am writing tests for every program I create and I have found that this makes the whole bug-finding process much faster. But what exactly is a testing framework? To answer concisely, it is a library that allows you to check if methods and classes behave as expected, and often can also give a clear explanation of what has gone wrong if a test fails. In the following tutorial, I will use the unittest library as a testing framework for Python. I chose it because it is really easy to use, and has many useful features. The program we are going to test In this guide, we will use as a running example a (very simple) class for a calculator, which is able to do some arithmetic operations. Here is the code to implement it (I created it in a file named Calculator.py): import math class Calculator: def add(self, a, b): return a+b def subtract(self, a, b): return a-b def squareRoot(self, a): return math.sqrt(a) def multiply(self, a, b): return a*b Note: At the end of this article there is, for reference, the full program with all the tests we will create. Let’s start testing! Now that we have our program, we only need to write the tests for it! First of all, we need to create a new file which will contain the tests.
In it we have to import the Calculator class and obviously unittest : from Calculator import Calculator import unittest Now we can create the class that will implement the tests. It has to be a subclass of unittest.TestCase , and inside it each test will be a new method. class CalculatorTest(unittest.TestCase): The first function we want to test is add. We want to check that, for example, add(7, 3) gives the right result ( 10 in this case). To check if the return value of a function is the one we are expecting, we can use assertEqual(expected, result) from the TestCase class. Here is the code to put inside CalculatorTest : def test_add(self): calc = Calculator() self.assertEqual(10, calc.add(7, 3)) Now it is time to run the tests! To do this, simply add these lines of code at the end of the file: if __name__ == '__main__': unittest.main() And run the program. The result should be something like: . -------------------------------------------------------------------- Ran 1 test in 0.001s OK This is a quick report of our tests: we have executed one test, and all the tests passed (this is why it wrote OK). But what would have happened if the test failed? Let’s check this by modifying the assertEqual and running the test again. self.assertEqual(13, calc.add(7, 3)) #This should NOT pass Now the output is a bit longer: F ==================================================================== FAIL: test_add (__main__.CalculatorTest) -------------------------------------------------------------------- Traceback (most recent call last): File "d:\Desktop\PythonTestingTutorial\CalculatorTest.py", line 10, in test_add self.assertEqual(13, calc.add(7, 3)) AssertionError: 13 != 10 -------------------------------------------------------------------- Ran 1 test in 0.001s FAILED (failures=1) In the first line, the report prints an “F” for each test that failed and a “.” for each one that passed (in this case we only have one failed test, so it shows one “F” ).
Then for each failed test there is a description, starting with the test name ( test_add in this case) followed by a description of the error. In the last line of the description, we can see that the test failed because 13 (the expected result) is different from 10 (the result we obtained). But wouldn’t it be better to have a clearer description of the error? Luckily unittest provides an easy way to achieve this: we only need to add a string as the third parameter of assertEqual , and it will be shown whenever the test fails: self.assertEqual(13, calc.add(3, 7), "The addition is wrong") And the output now becomes: F ==================================================================== FAIL: test_add (__main__.CalculatorTest) -------------------------------------------------------------------- Traceback (most recent call last): File "d:\Desktop\PythonTestingTutorial\CalculatorTest.py", line 10, in test_add self.assertEqual(13, calc.add(3, 7), "The addition is wrong") AssertionError: 13 != 10 : The addition is wrong -------------------------------------------------------------------- Ran 1 test in 0.000s FAILED (failures=1) Now it is clearer what went wrong in our test. You may even make a more detailed message, such as: def test_add(self): calc = Calculator() a = 3 b = 7 expected = a+b res = calc.add(a, b) message = "The sum of "+ str(a) + " and " + str(b) + " should be " + str(expected) + " but the result is " + str(res) self.assertEqual(expected, res, message) The setUp and tearDown methods Suppose we have created a few tests for addition, subtraction and so on.
All these methods will need to create a new Calculator object, so the same line will be repeated across all our tests: class CalculatorTest(unittest.TestCase): def test_add(self): calc = Calculator() self.assertEqual(10, calc.add(3, 7), "The addition is wrong") def test_subtract(self): calc = Calculator() self.assertEqual(12, calc.subtract(15, 3), "Subtraction is wrong") def test_multiply(self): calc = Calculator() self.assertEqual(30, calc.multiply(5, 6), "Multiplication is wrong") Luckily unittest has a feature that helps to avoid the repetition of the same code over and over: the setUp and tearDown methods. 1. setUp() contains the code that should be run before each test. In this case it would be: def setUp(self): self.calc = Calculator() 2. tearDown() , on the other hand, contains the code that should be run after each test (for example the code for closing a file). Now the code of the tests will be much cleaner: def setUp(self): self.calc = Calculator() def test_add(self): self.assertEqual(10, self.calc.add(3, 7), "The addition is wrong") def test_subtract(self): self.assertEqual(12, self.calc.subtract(15, 3), "Subtraction is wrong") def test_multiply(self): self.assertEqual(30, self.calc.multiply(5, 6), "Multiplication is wrong") Tests with multiple parameters Until now we have only tested a single case (for example, 7+3 in the test for addition). However, it is often more useful to test multiple cases for the same function, because not all of them may cause bugs. To do so we may use the subTest functionality. It allows us to run a loop inside a test function, and at each iteration a different sub-test is executed. The following code does the work: def test_multiple_addition(self): for a in range(10): for b in range(10): msg = "add " + str(a) + " and " + str(b) with self.subTest(msg): self.assertEqual(a+b, self.calc.add(a, b)) Now we run 100 tests, each one with different parameters, to check if the addition is correct.
The msg variable contains the name of the subtest, to be shown (together with the general name of the test) in case it fails. In this way you know exactly which one of the multiple inputs caused an error. Checking for an exception Sometimes we want to check that a function throws an exception as a result of a certain input. For example in our Calculator class we expect squareRoot(-1) to raise a ValueError . As before, unittest can help us in such cases: there is a method assertRaises that can be used to achieve this. def test_negative_root(self): self.assertRaises(ValueError, self.calc.squareRoot, -1) assertRaises takes as input the exception that should be thrown, the function to check, and then the arguments for the function (in this case -1 ). The test will pass only if the desired exception is raised. Other Assertions Until now we have only used assertEqual to check for equality and assertRaises to check that an exception was thrown. However, there are many more methods that can be used inside the tests, depending on our needs. Here is a list of the most useful (you can add a message as an optional parameter to each of them, as we did for assertEqual ):
assertNotEqual(expected, result) checks that the result is not equal to the expected value
assertTrue(result) checks that the result is true
assertFalse(result) checks that the result is false
assertIn(element, container) checks that the element is one of the values in the container (e.g. a list)
assertAlmostEqual(expected, result, places=7) checks that the two numbers are the same up to a certain number of decimal digits (7 by default, can be changed using the places optional parameter)
assertGreater(expected, result) and assertLess(expected, result) check that the expected value is greater (or less) than the result
assertGreaterEqual(expected, result) and assertLessEqual(expected, result) check that the expected value is greater/less than or equal to the result
To see the full list of assertions, including assertDictEqual , assertListEqual and many more, read the documentation here. How to test efficiently Now that you have learned the basics of testing with unittest, here are a few suggestions to test your projects efficiently. Test EVERYTHING The only way to really avoid bugs is to test every line of code. In fact, you may be sure that a simple function is working when you first write it, but future changes may break it, and having tests to immediately check that will save you hours of work (and headache). Run the tests after each change Every time you change something in your project, you should rerun all the tests, because you never know what the effects of your new code will be on the rest of the program. Write tests before code This is a principle called Test Driven Development (TDD) and it helps you make sure that you are testing extensively and your tests are not influenced by the code you just wrote. Keep tests simple It’s easy to get excited and write over-complicated tests, but beware! You do not want to have to write tests to check your tests. As a rule of thumb, each test should only be a few lines of code and should check only one specific functionality.
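As a small self-contained illustration of a few of the assertion methods mentioned earlier (a separate example class, not part of the Calculator tests):

```python
import unittest

class AssertionExamples(unittest.TestCase):
    def test_membership_and_comparisons(self):
        self.assertIn(3, [1, 2, 3])        # element is in the container
        self.assertNotEqual("cat", "dog")  # values differ
        self.assertGreater(10, 7)          # 10 > 7
        self.assertLessEqual(7, 7)         # 7 <= 7

    def test_almost_equal(self):
        total = 0.1 + 0.2
        self.assertNotEqual(total, 0.3)               # not exactly equal in floating point
        self.assertAlmostEqual(total, 0.3, places=7)  # but equal up to 7 decimal places

if __name__ == '__main__':
    unittest.main(argv=['example'], exit=False)
```

Running this reports two passing tests; argv and exit are optional parameters of unittest.main that keep it friendly to interactive environments.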
The mock test class Here is, for reference, the complete test class we have created in this guide: import unittest from Calculator import Calculator class CalculatorTest(unittest.TestCase): def setUp(self): self.calc = Calculator() def test_add(self): self.assertEqual(10, self.calc.add(3, 7), "The addition is wrong") def test_subtract(self): self.assertEqual(12, self.calc.subtract(15, 3), "Subtraction is wrong") def test_multiply(self): self.assertEqual(30, self.calc.multiply(5, 6), "Multiplication is wrong") def test_multiple_addition(self): for a in range(10): for b in range(10): with self.subTest("add " + str(a) + " and " + str(b)): self.assertEqual(a+b, self.calc.add(a, b)) def test_negative_root(self): self.assertRaises(ValueError, self.calc.squareRoot, -1) if __name__ == '__main__': unittest.main() Conclusion And there we have it! I hope you have found this to be a useful introduction to testing Python code. Be sure to let us know your thoughts in the comments. More content at plainenglish.io
https://python.plainenglish.io/python-unittest-how-to-test-your-program-the-right-way-30f1a459d9d1
[]
2021-07-21 16:51:12.190000+00:00
['Programming', 'Python Programming', 'Python', 'Software Development', 'Tdd']