How to Process Pain in an Era of Trigger Warnings
I’ve studied trauma from a psychological, neurobiological, and philosophical perspective for 10 years. I’ve published papers and, most recently, a book on the topic. By all accounts, I’m a trauma expert. When I started this work, no one talked about the concept of “triggers” — neither those in the health community nor the public at large. Today, we hear about “triggers” all the time — the feasibility and necessity of “trigger warnings” is debated at universities and news publications. Triggers are real. They’re everywhere. Many of us can identify triggers in our own behavior. We also see them in the stories around us — in film, TV, and the behavior of those closest to us. But here’s the problem: Often, when we discuss triggers, we don’t actually know what we’re talking about.

Triggers are real

Let’s start from the beginning: Our brains process events in different ways. In our day-to-day lives, events occur and they basically make sense. We go to work, have conversations, trip over stuff, tell jokes, get frustrated, and get un-frustrated. Our brain encodes all of it, allowing us to organize, relate to, and access memories of our behaviors later in life. If someone tells a joke at work, for example, you may come home at the end of the day and retell it. Or you might forget about it that night and tell the story three days later, or you may do both. In both cases, you’ll retain full cognitive control over the memory.

When you experience something overwhelming, though, the recording mechanisms in your brain go a little haywire — and for good reason. Your brain recognizes a threat and reprioritizes its normal processes to help you prepare for that threat in the future. Have you ever heard a crash in the middle of the night and shot up in bed? Did you pause to notice your heart racing and the fact that, all of a sudden, you were really, really awake? That’s because stress hormones are coursing through your body. Your brain recognizes danger and wants to keep you alive. It’s very important to note: This is biological proof that the trauma response is borne of strength, not weakness. Not how we typically think about triggers, is it?

But there’s a somewhat inconvenient upshot of the recording mechanism going “offline”: You don’t get a neat and tidy memory to organize, file away, and then resurface whenever you want with full cognitive control. Instead, you get tiny little fragments — sounds, colors, smells, phrases, tastes — that are filed away in disorganized ways. Your brain will remember those things at all costs because they represent threat or danger. Again, this is a strong and positive survival tactic. The only hitch? Your memories are fragmented and shattered.

Here’s a quick example. Let’s say you get mugged by someone wearing a shirt that is a deep maroon color. Your brain records that color and saves it — but it’s just sort of thrown into a cabinet, not organized and filed away. If you’re at work and someone walks by wearing a shirt of that exact same color, you may feel all the adrenaline rushing back. But because the signal is so subtle, you may not consciously connect it with your trauma. Instead of thinking, “Oh, Fred is wearing the same color shirt as the person who mugged me, that reminds me of how scary that was,” you’ll probably think, “Oh jeez, I’m panicking for no reason. Why am I panicking? I’m at work and I’m totally safe. Am I losing my mind?” Nope! You’re not losing your mind, you’re just triggered.
Your brain doesn’t stop to rationalize — it sees a maroon shirt and shoots stress hormones through your body to help you better adapt to the threat. Of course, the threat doesn’t exist, and now you’re panicking. You’re also probably blaming yourself.

There are all sorts of different kinds of triggers. They get in our way because they alter our behavior. They make us panic, dissociate, or freeze. Our brain is trying to protect us, and it’s inconvenient. I’ll give you a personal example. I’m bad at conflict, especially in close relationships. If things get too intense, and if someone is really angry, I check right out — I completely dissociate. I would not be able to tell you what we were arguing about if my life depended on it. Checking out can be an amazing, life-saving coping mechanism (and it has been), but our brain’s survival-at-all-costs bias has significant downsides when it comes to, you know, everyday living.

Triggers are opportunities — not excuses

Today’s societal narrative says that, if you have a trigger, you should cope by simply avoiding it whenever possible. Most of the time, that won’t work. You can’t avoid the color maroon, and I can’t avoid conflict for the rest of my life (though, in all honesty, I do sometimes hide). More importantly, avoidance is not great for your brain. Think back to the filing cabinet. Most of your memories are neatly organized in a metaphorical cabinet, and you’re able to take them out whenever you want. Traumatic memories are like little pieces of ripped paper thrown all over the cabinet. When you go to pull out the funny memory from work, you’ll need to push aside all of these random papers. Sometimes, they may fall out of the cabinet. They may get lost and reappear years later, like an old keepsake you can never quite track down. Your goal, obviously, is to have a cabinet that is as orderly as possible (within reason). Trauma interferes with that.

Today’s culture of trigger avoidance worries me. I notice people hiding behind their triggers. They make excuses for themselves, e.g., “I treated you badly because you triggered me,” or “I can’t do that even though I want to because it will trigger me.” This kind of avoidance is unproductive for everyone. Here are three sentences that are all true: Triggers are real. Triggers don’t have to run your life or ruin your relationships. You don’t get to hide behind your triggers.

We have an opportunity here to reframe our triggers. To face them, instead of running away. In my case, for example, my intense response to anger is a sign. It signals that I’ve had some scary shit go down in the past. There’s some trauma there. And it’s an opportunity for me to both deal with that past trauma and get better at dealing with conflict overall. How do you deal with trauma triggers to start making progress? You must reorganize your filing cabinet.

Reorganize your file cabinet

You know the feeling of getting ready to leave the office at 4:45 p.m. on Friday when, suddenly, three people drop new projects or problems on your desk? If you just want to get the hell out of there, what do you do? You open a drawer (or a new browser tab), toss the disorganized folders in, close the drawer, and get the hell out of dodge. The problem? These files won’t get magically organized over the weekend.
Come Monday, they’ll still be there, papers spilling out of them, blocking your access to the organized folders beneath them. Those disorganized folders are the traumatic memories. The trigger is Monday — you wander into the office relaxed from your weekend activities, open your drawer to grab a file, and BAM. You can’t go back and erase the traumatic event, just like you can’t change the fact that someone in your office dropped a bunch of work on your desk late Friday afternoon. But you can reorganize the file. Our memories — like our file cabinets — are somewhat flexible. What we have to file might not be entirely up to us, but how we file is something we have some say over.

Here’s where the analogy breaks down (you knew it would break down somewhere). At work you can just do the work, organize the file, shut the drawer. But how do you do that with your memories?

1. Narrate

Trauma researchers widely agree (and have since the 1800s) that narrating a traumatic event is a critical part of healing from it (see Freud and Breuer’s Studies on Hysteria, 1895). Now that we know more about the brain, we can surmise that this is because telling a story is a way of organizing it. In order to tell a story about something, you have to render it into story form. This requires giving it a beginning, a middle, and an end. It requires that you put it into language (written or spoken). Most importantly, it requires that you see it from an external perspective. In telling the story, you should also be able to tell yourself that it happened in the past. In this way, you’re able to speak to your triggers — reminding them over and over that what happened is not currently happening. You’ll begin to recognize the difference between the past and the present. Your triggers may never completely go away, but they’ll stop being so persistent.

2. Reframe

When you tell the story — and only when you tell it — you have the opportunity to reframe it. A reframe happens when you see the story from a new perspective and can then assign new meaning to it. Other people can be enormously helpful with this. If you open up to someone else about your trauma, they might say, “You know, I see it a little bit differently.” A whole world may open up that you weren’t capable of seeing. You can’t change what happened to you, but you absolutely can change what it means to you.

3. Return

Your memories, good and bad, don’t go away. It’s important to remember that we are who we are because of our memories, not in spite of them. Your traumatic memories will return. Sometimes inconveniently. Sometimes in the form of nightmares that wake you up in a panicked sweat. Sometimes, they’ll get in the way. That’s okay. Just return to them. Ask them what they need. And then put them back in the past. Each time you do that, your ability to exercise control over your triggers will get stronger. If you can be kind to yourself in the process — instead of judging yourself for having memories — your ability to have compassion for yourself will get stronger too. And that makes everything better. So, let’s work on our triggers together, and be kind to ourselves in the process.
https://medium.com/s/story/reframing-trauma-triggers-844637e2aece
['Emsey']
2018-09-14 21:40:54.989000+00:00
['Trigger', 'Trauma', 'Healing', 'Mental Health', 'Psychology']
Language Learning — A Few Useful Tips from a Polyglot
It can be difficult to learn a foreign language. Some people learn a foreign language through their work. Others just want to learn something new. Whatever your reason for learning a language, a few good tips will help. Matthew Youlden is a polyglot who shares excellent tips for learning a language quickly, and this article applies them to learning Portuguese. Matthew is fluent in nine languages and understands more than a dozen. He can help you learn foreign languages if you are having trouble.

Find motivation

Matthew believes that motivation is key to learning a language. Once you have decided to learn a foreign tongue, it is important to stick to your guns.

Find a partner

Matthew was assisted by his twin brother. Both are motivated to learn, and they encourage each other to learn multiple languages. If you are looking to learn Portuguese, find a trustworthy partner. Your partner should also be a language enthusiast, someone who will push you to achieve your goals.

Talk to yourself and practice speaking the new language

This tip might sound strange, but it can help you improve your skills. Matthew Youlden says this is a great way to practice even if you can’t use the new language all the time. This activity will help you retain foreign words and phrases in your brain, and you will also feel more confident.

Learn like a child

Children learn faster than adults. Why? They are more curious and humble. They are eager to learn, and they are willing to make mistakes. Making mistakes helps you learn. To learn Portuguese well, you must admit that you don’t know everything. This is the key to freedom and growth. You can learn new languages if you approach them like a child.
https://medium.com/@Francis_MostUsedWords/language-learning-a-few-useful-tips-from-a-polyglot-72e6c0be97fe
['Francis']
2021-12-30 09:02:55.591000+00:00
['Portuguese', 'Language', 'Portuguese Language', 'Language Learning']
PSA: Do NOT Harass Veterans Today
It’s Veteran’s Day — Today, no matter who you are or where you come from, it’s not okay to fart in a veteran’s mailbox. It’s not okay to put a veteran in a headlock and give them a wet willy. It’s not okay to tickle a veteran who has repeatedly told you they don’t like to be tickled. It’s not okay to bet a veteran $20 that he won’t go to Starbucks in a dress, and then after he does it, insist that you said “twenty doll-hairs.” It’s not okay to hide behind a veteran’s compost bin, and when he comes out to throw biodegradable waste in the heap, jump out and scare him. Veterans are often old, and should be praised for their use of alternative waste disposal techniques. It’s not okay to heckle a veteran on a power-walk by yelling “shouldn’t you be running?” Power-walking is a fun and healthy form of exercise that our brave men and women are free to enjoy. It’s not okay to tell a veteran that the Iwo Jima memorial looks “gay.” It’s not okay to ask a veteran how many people he killed. They didn’t go to war to kill people, they went to war because Nixon really hated Communists. It’s not okay to give a veteran a cup of soup when he asked for a bowl of soup, just because you are running low on soup. It’s not okay to chase away dirty pigeons while a veteran is feeding them.
https://medium.com/the-aesthetyka/psa-do-not-harass-veterans-today-287c24075676
[]
2015-11-11 23:41:25.429000+00:00
['Veterans', 'Military']
Memories: Dreaming in Color for the First Time
The night I turned “flirty 30,” I dreamt in color for the first time. I remember this so vividly because, up until then, I had only seen black, grey, and white in all my dreams. Before this unexpected surprise, I didn’t even know it was possible to dream in color; my family and friends don’t talk about dreams regularly.

I love dreams and have spent a lot of time reading interpretations. Dreams about rushing water are usually a direct link to the dreamer feeling out of control in a relationship, while a blazing fire can signify anger or the rebirth of something significant in the dreamer’s life. Many moons ago, I read that the average person has 100 dreams a night and is lucky enough to recall one or two when they wake up. And when a dream is remembered, the focus is on what occurred and sometimes what we felt — emotionally and physically — not the colors seen or sounds heard.

As a child, my dad and I discussed the toilet dream… You know, the one where you think you’ve made it to the bathroom and relieve yourself, but you actually are still in bed. So you feel really warm for a bit and then wake up to the horrid truth, hoping you can either hide the evidence or that your parents won’t be upset that you didn’t make it to the porcelain god. While I am reminiscing on dreams, I also recall those terrible bad report card/progress report dreams. This happened twice, where I was so afraid to show my parents a failing or low grade that my anxiety leaked into my dreams, where I would give them the bad news and they would respond in an understanding manner, not freak out, whip me with a belt, or punish me by taking away my lifeline, aka my boombox (radio, for those born after 1999). I would become relieved that the truth had set me free — until the morning, when I realized ‘ahh sh*t, it was just a dream…still gotta find the right moment to present the doom’…dunh dunh dunh…

Back to my epiphanous “dreams can be in color” moment… In addition to dreaming without colors, I never heard anything in my dreams. If people spoke, I just knew what they were saying, as if it were telepathic communication. Sometimes I felt things, like… a penis, in the few sex dreams I have had. I often saw blurred faces along with their silhouettes. I always just knew who they were supposed to be — usually a close friend or family member. And sometimes I wouldn’t even see a blurred image of a body part, but I would know that a certain individual was present beside me or in the room. This night was so different though…

Just before my birthday, I started counseling with a duo of older Black women who provided a co-therapy service like no other. I truly believe my first encounter with them helped awaken these colors, awakened a part of me that had never been, or had ceased to be, activated. The way color was presented in this dream reminded me of the 1998 movie Pleasantville — whenever new emotions were experienced, an object or body part turned a beautiful bright color. (If you haven’t seen it, I recommend it, and not just to understand this reference.) I remember waking up thinking ‘OMG, this is the greatest thing that has ever happened to me in my dreams!’ I will never forget that in the summer of 2014 my chakras were starting to align. The dream was a mainly blurred montage of grey and white, as if a moving train were in slow motion, and I observed the moving objects in purple, green, cream, and blue. I cannot recall the objects, but I can see what I jotted down in my journal as if it were last night.
I have not been the same since that miracle transpired. It’s as if I shed a layer and entered a new section of the unfinished book that is my life. Every birthday is a new chapter for me, but this marked a new section entirely. Now, my dreams are long movie productions that I could easily use as inspiration to compose a first draft for a mini-series or film. I try to talk or journal about them as soon as I wake up, because I do forget dreams as the day progresses. By the end of the day I might only be able to grasp snippets from the night before. So here I am, rambling on about my dreams, in the hopes that more people will be inspired to share their literal and unrealized dreams with me. Mood: Everything I Wanted — JP Cooper. Do you dream in color or hear sounds in your dreams? Is there a dream you can never forget? Do you record your dreams or look up interpretations?
https://medium.com/@kadewo/memories-dreaming-in-color-for-the-first-time-a480ac828a5d
['Ka De Wo']
2020-11-25 05:11:23.420000+00:00
['First', 'Dreaming', 'Remember', 'Colors', 'Dreams And Visions']
Hiring Your First Employee
HR & Employment Law for Entrepreneurs Series

By Adrienne B. Haynes, Esq.
Managing Partner, SEED Law

Congratulations! You’ve made the decision to bring on a new employee or to review and update your employee onboarding practices. Bringing on an employee is an important step in company growth, and each relationship should be carefully developed from the beginning. This requires regular review of HR and employment processes so that your department’s design remains compliant.

Once the decision is made to hire an employee, the following checklist should help your company remain in compliance with employer responsibilities:

· Update the position description to make sure it clearly and specifically explains the position’s duties, responsibilities, and the qualifications necessary to complete the work
· Set up a system to manage, track, and report employee payroll and tax withholdings
· Calendar necessary due dates for local and state filing and communication compliance
· Calendar necessary due dates for IRS filing and communication compliance
· Prepare any employee benefit documentation for review and onboarding
· Prepare the employment offer and agreement
· Schedule an orientation to review and collect signed employee documentation, communicate expectations, and onboard through training or orientation
· Prepare and file state and federal required documentation for employers
· Schedule regular formal and informal reviews with the employee(s) to share and receive feedback
· Work with an external financial professional to review your record keeping and documentation and make any necessary updates
· Calendar an annual review of your department’s documentation and recordkeeping and of any changes in the law that may impact the way you do business

This article is an overview of legal considerations and does not cover every legal right or obligation, consideration, exception, or restriction. Every business decision should be well researched and discussed with a professional before being made. To schedule a consultation with a SEED Law attorney, you can give us a call at (816) 945–4249 or schedule your consultation here.

Additional Resources:

Employment Taxes, https://www.irs.gov/businesses/small-businesses-self-employed/employment-taxes (last visited April 14, 2020)
Hire and Manage Employees, https://www.sba.gov/business-guide/manage-your-business/hire-manage-employees#section-header-6 (last visited April 14, 2020)
IRS Video Portal, https://www.irsvideos.gov/SmallBusinessTaxpayer/Employers (last visited April 14, 2020)
Present Law and Background Relating to Worker Classification for Federal Tax Purposes, https://www.irs.gov/pub/irs-utl/x-26-07.pdf (last visited April 14, 2020)
Publication 15 (2020), (Circular E), Employer’s Tax Guide, https://www.irs.gov/publications/p15 (last visited April 14, 2020)
Small Business Taxes: The Virtual Workshop, https://www.irsvideos.gov/SmallBusinessTaxpayer/virtualworkshop (last visited April 14, 2020)
State of Missouri Employer’s Tax Guide, https://dor.mo.gov/forms/4282_2019.pdf (last visited April 14, 2020)
https://medium.com/seed-law-column/hiring-your-first-employee-a8e347c41142
['Adrienne B. Haynes']
2020-04-29 22:56:21.694000+00:00
['Employment Law', 'Entrepreneurship', 'Documentation', 'Hiring Strategy', 'Growth']
14 Instagram Accounts That Will Feature You For Massive Exposure
Instagram is for sure the most effective visual advertising channel right now. With their impressive growth in the last few months, they got every marketer’s attention. The platform itself is built to drive more engagement than any other social platform. One of the most effective ways to get the attention of as many users as possible is to get featured on one of the popular Instagram feature accounts.

WHAT IS AN INSTAGRAM “FEATURE” ACCOUNT?

The “feature accounts” are on a mission to discover and share unique photography in niche communities. These curators search for hidden gems through hundreds of photos each day. There’s no doubt that their purpose is to find the best of the best and inspire their audience to create more similar art. Everybody can join these communities. You just have to use their hashtags, tag their account, or send in your creations, depending on the guidelines. Even if your picture is not featured, by using their specific hashtags, your photos will be discovered by thousands of people interested in content just like yours. The trick is to use the hashtags that are relevant to your content and to be perseverant.

BUT HOW DO YOU MAKE SURE YOUR PHOTOS GET FEATURED?

1. HAVE A CONSISTENT FEED

For a clean and consistent look, try to stick to the same filter for your photos. It will take a lot of trial and error and creative experimentation, but when you finally get it, stick with it. To find out which filter works best, you can use tools like Iconosquare to see what type of posts get more engagement, then do more of that. Another way to ensure consistency is to find the niche or topic that best suits you and post the majority of your content in that niche. Don’t spread yourself too thin.

2. IMPROVE YOUR PHOTOGRAPHY SKILLS

Invest in a good editing app. You’ll be impressed how much you can improve your photos with a simple filter. While you’re at it, learn a few editing tricks and improve your technique.

3. STUDY OTHER INSTAGRAMMERS WHO GET FEATURED

See what type of photos they post, the colors they use, their filters and composition. Engage with them and try to learn from the best. Also, study the curation accounts in your niche. See what their favorite types of photos are and what they prefer to share with their audience. They want to keep their feed consistent, so if your photos are not in line with what they already share, you will not be featured.

4. POST YOUR CONTENT AT THE RIGHT TIME

When you’re competing with hundreds of talented Instagram users for a spot on a feature account, the time of your posting is extremely important. First of all, don’t try to submit old photos, because Instagram displays tagged photos in chronological order, so your photos will not show up when people search for a specific hashtag. Then, use a monitoring tool like Iconosquare to figure out the best times to share your photos. Iconosquare tells you what your posting habits are and when you should post based on the engagement you receive for each post. Once you know when to post, it’s easy to create scheduled campaigns using tools like Mass Planner and share your photos at the perfect times. (For a rough idea of how to derive your best posting hours from your own data, see the sketch at the end of this article.)

Now, let’s see some of the most popular accounts that can get you featured and drive your own account to explosive growth.

The Instagram team shares the most inspiring pictures from the community each day. On top of that, they have weekly challenges that they announce each Friday.
On Monday morning, the team chooses nine favorite pictures tagged with the Weekend Challenge hashtag and shares them with an audience of over 102 million users. That’s impressive exposure for the lucky few. So, keep an eye on the challenges and submit your best work.

To submit a photo to be reviewed and featured by the Instagood team, you need to use the hashtag #instagoodmyphoto. The team behind the Instagood account is in a continuous search for creative, conceptual art, unique composition, and use of colors. Each month they select a team tasked with searching for and sharing the best pictures from the community.

2instagood is a second account managed by the same team behind the Instagood account. They have pretty much the same rules: a specific hashtag, #2instagoodportraitlove, to use for submitted works. Just like for Instagood, they choose a team of curators from the community (you can be one of them!). The difference is that this account focuses on portraits and fashion photography. If your content is in this category, go ahead and tag your best work!

The JustGoShoot team is on a mission to provide exposure to talented, underrated photographers and to develop a growing community. To join the community, you need to use the #justgoshoot hashtag.

Thepeoplescreatives is another feature account that shares people’s creative pictures each day. Like they say in their own description: “You create. We curate.” Use the hashtag #peoplescreatives and you get the chance to be featured and get tons of exposure from their community of engaged creators.

The Visuals Collective team tries to bring the concept of visual storytelling through unique photographs shared by adventurous travelers. Let your imagination run free, come up with the most impressive visual stories, and tag them with #exploretocreate to be featured.

As you could easily guess, this is a community of travelers who love to share their memories and inspire other people to travel too. Their feed is full of unique places to visit and experience. Tag your own travel memories with #passionpassport and your pictures will get in front of a large travel community.

This is another Instagram account that focuses on travel photography. To share your photos, you need to tag them or contact the team via e-mail. The advantage of there being so many travel feature accounts is that you can submit the same photos to several of them and increase your chances.

The Outbound Collective is dedicated to outdoor activities and adventures. You simply need to tag your favorite adventures with #theoutbound and get featured by the Outbound team.

Another account for your authentic travel memories: tag your creative visual stories with #livefolk and get featured.

Live Folk is similar to the Folk Magazine account. They share unique, authentic stories from around the world. To submit your photos, you need to use the #lifeofadventure hashtag or tag their account.

While you’re sharing travel photos, here’s another account that might feature your creations. Capture new places, or old places from a different, unusual angle, and tag them with #BestVacations.

World Travel Book is the last travel-photography-oriented Instagram account on our list. Their hashtag is #worldtravelbook, and for more exposure you can use more related hashtags. The account’s authors surely won’t mind as long as you submit original photos.

Travel, landscape, and colorful photography are certainly the most popular categories on Instagram right now.
But here are a few brave artists who want to express their stories in black and white. It’s difficult not to rely on colors when creating impressive compositions, but it’s not impossible. If black and white photography is your thing, then you should follow the Monoart account and try to get featured. Use the #monoart_ hashtag and you will find other photographers passionate about monochromatic photography. Read more about How to Get Featured on Instagram and What Hashtags to Use to Get Noticed on Instagram.
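On the posting-time tip above: if you export your own post history from an analytics tool, you can approximate the same “best hour” analysis yourself. Here is a minimal sketch in Python; the sample data and its layout are made up for illustration, not taken from any real export format.

from collections import defaultdict
from datetime import datetime

# Hypothetical export of (timestamp, likes) pairs for past posts
posts = [
    ("2015-10-01 18:05", 240),
    ("2015-10-02 09:30", 95),
    ("2015-10-03 18:45", 310),
    ("2015-10-04 12:10", 150),
]

# Group like counts by the hour of day each post went out
likes_by_hour = defaultdict(list)
for stamp, likes in posts:
    hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
    likes_by_hour[hour].append(likes)

# Rank posting hours by average engagement, best first
averages = {h: sum(v) / len(v) for h, v in likes_by_hour.items()}
for hour, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{hour:02d}:00 average likes: {avg:.0f}")

With the sample data above, 18:00 would come out on top, matching the intuition that evening posts earn more engagement.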
https://medium.com/@zoesummers/14-instagram-accounts-that-will-feature-you-for-massive-exposure-7e0a2d66b02
['Zoe Summers']
2015-10-16 13:12:55.717000+00:00
['Instagram', 'Instagram Marketing', 'Social Media']
SEO Company in Coimbatore
SEO Company in Coimbatore offers search engine optimization assistance to businesses to help them improve their visibility online. Search engine optimization (SEO) is the technique of increasing the quality and quantity of website traffic by increasing the visibility of a website or a web page to users of a web search engine. SEO is also the practice of making modifications to your website design and content to make your site more attractive on the search engine results page (SERP). The better optimized your site is for search engines like Google, Yahoo, and Bing, the more likely your site will be to rank on the first page of the search engine results. SEO Company in Coimbatore provides outstanding SEO services in Coimbatore, Tamil Nadu. An SEO company in Coimbatore can help your business improve its SEO and enhance the quality of your website traffic. In addition to helping your business’s site rank higher on the search engines, an SEO company can also help you increase the quality of organic traffic reaching your site.
https://medium.com/@vetriseoanalyst/seo-company-in-coimbatore-offers-search-engine-optimization-assistance-to-industries-to-help-them-1ed31299bcaa
['Vetri Seo Analyst']
2021-12-24 09:45:57.004000+00:00
['Seo Services', 'Digital Marketing', 'SEO', 'Seo Training']
Writer Of The Week: Robyn Powell
‘Writing means empowerment.’ Though people often view objective reported journalism as the pinnacle of respectable media work, I’d argue that the personal essay can be, in its own way, just as integral to creating change in society. As a medium explicitly devoted to bridging the ever-widening empathy gap, essay-writing can push people to consider brand-new perspectives or reconsider existing ones, shifting entire ideologies while helping to engender equality. And it is actually subjectivity — the sharing of a definitively personal experience — that most powerfully makes this happen. Robyn Powell provides a particularly compelling example of these forces in action. Her candid, nuanced essays addressing disability rights through the lens of her own lived experience have no doubt helped countless people question their ideas and biases. At the same time, her writing also deftly weaves in legal context (she’s an attorney) and research-based reporting to provide a multifaceted approach to journalism. It is through weaving together all these elements—the personal, the contextual, the factual — that Robyn is able to so convincingly argue that, for example, disabled mothers have historically faced grave injustices, or that Trump, Sessions, and Bannon represent an unholy trinity of anti-disability-rights ideology. When asked what writing means to her, Robyn replied that she finds it empowering. And through her richly layered writing, Robyn empowers us all. Below, Robyn shares her thoughts on Cyndi Lauper, ice cream, and which Sex and the City character she is. The TV character I most identify with is Miranda from Sex and the City. I think “paying writers in exposure” is exploitative and devaluing. My most listened to song of all time is “Girls Just Want to Have Fun” by Cyndi Lauper. My 18-year-old self would feel surprised but content about where I am today. I like writing for The Establishment because it is women-run. If I could only have one type of food for the rest of my life it would be ice cream. The story I’m working on now is about sexual assault among students with disabilities. The story I want to write next is about reproductive justice for women with disabilities. If I could share one of my stories by yelling it into a megaphone in the middle of Times Square, it would be “As A Disabled Person, I Implore You Not To Vote For Donald Trump.” This was written pre-November 2016 — if only more people had heeded this advice. Writing means this to me: Writing allows me to express myself in ways that my day job does not. Now more than ever, we need the stories of those from marginalized communities front and center, and writing enables me to do this. Writing also provides the opportunity to give exposure to the issues facing people with disabilities — something that is far too often overlooked. In sum, for me, writing means empowerment. If I could summarize writing in a series of three GIFs, it would be: GIFs are usually inaccessible to people with disabilities.
https://medium.com/the-establishment/writer-of-the-week-robyn-powell-3143225e0940
['The Establishment']
2017-10-30 15:06:33.844000+00:00
['Publishing', 'Disability', 'Arts Creators', 'Writing', 'Writer Of The Week']
On Facing Death while Being Alive
It has always pleased me to believe that I was born twice. I was born in February 1979 in a Parisian suburb, through a painful delivery, pulled out from my mother’s womb. I thought I was born a second time in April 2012, in a Buddhist monastery located on the hills of Kathmandu, in the salty sweetness of my tears. In reality, my life has been a series of simultaneous deaths and births. I died a thousand times. I died, crying, curled up on the lap of a multimillionaire in a car in Sri Lanka, knowing that the business I had created in Singapore was over — all right, this is bad bragging, but let me just say that the glamorous circumstances didn’t make the experience any less painful. I died in pain each time I had to leave places or people behind, not knowing if I would ever meet them again. I died when I had to let go of everything I had worked for to become the person I wanted to be. I died each time I shifted my views, dropping beliefs and perspectives that no longer served me. I died sometimes on my yoga mat, too, in shavasana, while letting the weight of my body sink and melt into the ground beneath me, releasing blockages and healing wounds within me. I died looking for meaning and certainty when there was no meaning and certainty. And I died in that Buddhist monastery when I took the firm decision to clear out the old to make way for the new, in the process embracing the ephemerality of life.

My life has given me the opportunity to give up the things I cherished most. It has introduced me to the world of non-attachment and holding on to nothing. It has been a slow and long succession of experiments with my diet, my relationships to others, and myself. We have all been taught that, if nothing else, death is the end of the end. The end [horror gasp]. Damn! We fear it. Birth and death, the coming and the going of things, are the most dramatic scenarios of our existence. On a personal level, I have come to know that death is not an ending. I have found more peace in all those little deaths than in any temptation of the material world.

The little death. La petite mort. An apology to the dirty-minded, but I’m not actually referring to anything sexual here. I’m talking about this brief momentary release (still not sexual) from our minds and egos, when for an instant the world vanishes and we become open to an ecstatic union with something beyond ourselves. Those little deaths are not like the ones where people report their lives flashing before their eyes or seeing a bright white light before they crossed the river to the other side. Nope. Those little endings are a doorway, a transition into a different inner world. Mind you, the time when one part of us must die before another part comes to life can be very confusing, disorienting, and unsettling, especially when relating to people. As a human in the midst of reinventing yourself, no question produces more boredom and angst than the typical conventional: “So, what do you do?” I struggle to come up with a punchy one-liner reflecting my current state of nothingness: “I’m in transition,” when the real answer could have been “I’m sort of in this weird kind of combo twilight zone of the last bits of my previous career, although it belongs to my past.” Erm, pause. Rewind. “I’m in transition.” Those little resets represent some sort of emptiness where your ego is confronted with blackness, swimming in the abyss of non-existence.
As a person who walked in the darkness for a substantial amount of time and survived, I know that the world does not fade out. It continues in all sorts of ways, including the persistence of conflicting emotions and personal doubts — I am afraid and I’m not afraid. It matters and it doesn’t matter. I know and I don’t know. It’s real and it’s not real. I am nothing and I am everything. I am this and I am that. I am not this and I am not that. Despite all these mixed thoughts and feelings, there hasn’t been anything greater than simultaneously facing the beauty of life and the reality of the death of my old self. I look at my past self as a completely different person. Ah! I was so much older then. Now, I’m growing younger year by year! My habits have changed; my dispositions and emotional reactions to things are different. After letting go of almost everything the ego holds dear: personal pride, social status, financial wealth, individual accomplishments… and after stripping away almost anything that was inauthentic and false in my identity, now I just want to BE.

Marcus Aurelius, one of the most brilliant and influential philosophers I live by, tells us in his personal writings, Meditations: “Soon you’ll be ashes or bones. A mere name at most — and even that is just a sound, an echo. The things we want in life are empty, stale, trivial (…) Think of yourself as dead. You have lived your life. Now take what’s left and live it properly.” What Marcus Aurelius is imploring himself, and by extension us, to do is to view ourselves as dead, because it is a powerful tool to improve our lives in the present. So, following his advice, I just want to BE and have the courage to LIVE this one-time offer (aka life) in my true essence.

This winter, I’m shedding another little part of myself, dying a little more. I have been brought back to life and given a second chance. Second chances are rare. If you found out that you had an opportunity to live again, what would you do? Would you do the same as before, or would you approach life in a different manner? For me, I want to maximize every aspect of my life. I believe the end of the year represents one of the most important moments of the year: it’s a time for reflection and going inwards. It’s a great time to assimilate the past, recharge ourselves, and look ahead. Twelve months have gone by — too fast or too slow? No matter what side of the fence we sit on, it’s likely all of us will agree that 2020 has definitely been a challenging and tumultuous year. The chaos might have been pivotal for those of you seeking change. There is no better time to ponder the impermanence of life — everything is change and nothing can be held onto — and this eternal saying: “Live like it’s the last day of your life!” Because you can only die well if you understand that your disappearance is part of the natural process of life. And so, friends, with the upcoming New Year, let’s all die well! Thanks for reading. If you liked this piece, please help me out by clicking the clap button below ❤
https://medium.com/@anouchkablessed/on-facing-death-while-being-alive-1bc67d4b313c
['Anouchka Blessed']
2021-01-24 08:42:30.021000+00:00
['Life', 'Yoga', 'Coaching', 'Death', 'Change']
Saturday~A Losing Battle
“A free republic! How a myth will maintain itself, how it will continue to deceive, to dupe, and blind even the comparatively intelligent to its monstrous absurdities. A free republic! And in little more than thirty years a small band of parasites have successfully robbed the American people, and trampled upon the fundamental principles, laid down by the fathers of the country, guaranteeing every man, woman, and child ‘life, liberty, and the pursuit of happiness’.” Emma Goldman, The Psychology of Political Violence, 1910

18 Years and Counting

It’s been a little over 18 years since the attacks of September 11, 2001 changed the complexion of American life, as well as the world’s, laying the foundations for the continued theft of capital from the average worker’s meager savings, to be sucked up by the vampires of corporate enterprise. Yes, a big statement. The attacks ushered in the modern dystopia that a large portion of Americans face daily, ya know, the ones never mentioned on corporate (MSM) media as they paint pictures of Disneyland — one network, ABC, is owned by Disney — a wonderland of endless, lush meadows of unicorns and faeries. Wow! What a place to live, if it actually existed. But it doesn’t. Not by a long shot. And if we go by Emma’s essay, fast-forwarded to the 21st century, we have 22 more years to go before the bloodletting is complete. Maybe that’s when the impending collapse happens; maybe sooner, I’m no prophet; maybe not at all, if luck falls from outer space as a deus ex machina, a giant skyhook.

A Few Examples

Well, for one, the Brits voted for Boris Johnson as Prime Minister. The smears against Labour’s Jeremy Corbyn worked their evil magic, and the UK’s version of Donald Trump was elected again as their leader. Apparently he used some of the techniques from the same playbook that worked for Trump. He’s predicting being able to work with Trump, as in blood brothers, besties, after Trump wins in 2020. Yikes! The eviscerating of the average person will continue with a new fervor if this happens, and with the way the DNC is loaded against Bernie Sanders, this nasty picture of dystopia might come true. It worked over there against their version of Sanders, so it could well happen here too in 2020. One of their analysts has written an excellent postmortem of their election.

Over here, the spectacle of the impeachment hearings has reached a consensus of sorts, and two articles have been laid out. The 1st: abuse of power, for his phone call to Ukraine and his asking their president for an investigation of Biden & Son’s corruption. His mistake: he asked the wrong dude. Biden definitely needs investigating, but it needs to be separate from the race of 2020, as they think Biden has a chance to win. How about sending an off-the-books, non-government agent to investigate malfeasance? The 2nd: obstructing the investigation of said first article. WT Bloody F? How about the concentration camps south of where I live in New Mexico? How about the children that have been torn away from their parents, some of whom still haven’t been located? How about the children that died from lack of medical care, or care in general? How about supporting the Saudis’ continued genocide in Yemen? No, just a damn phone call that amounted to nothing in the end. Ukraine still got weapons to fight that rascal Putin’s “aggression” at their border. Some of these hearings were direct propaganda against Russia. What a fucking shitshow.

Something a bit different is the continued poisoning of Earth, our only home.
Adding to the growing concern that toxins may overtake the climate crisis are all the mining scars, tailings, disposal of animal waste, coal ash, nuclear waste, etc. I see both concurrently, on a somewhat equal footing, but with toxins a greater threat on the shorter-term timeline. This is dystopia on steroids. There’s a review of a new Hollywood film that leaves half of the problem out of its message. After all, the military allows Hollywood to make military propaganda pieces by supplying expensive weaponry as a promotional write-off so war pictures look authentic. The film “Top Gun” was one of these par excellence. So we can’t have Hollywood saying anything negative about our vaunted military machine now, can we?

Days

I could spend days writing about all the negative problems affecting the citizens of Earth. We are all really one big family. So why do certain groups want to kill other groups? They make enemies so we will support their wars for making profit. That’s all wars are ever for, fucking profit; every last damn one of them. It’s been that way since civilization arose, when certain leaders realized they could get others to do the heavy lifting of growing crops by supplying security, so the chiefdoms could grow. For sure some of these early chiefs were psychopaths and thought nothing of slaughtering other chiefdoms for their food, stock, or valuable goods. That ethos needs to be put in the past, and the world as a whole, all the peoples, need to rise and grow up and learn that a crisis of unspeakable horror awaits if things don’t rapidly change. So how about today? Change yourself, get a new attitude that we are all really connected, and live that way. Then join in solidarity and overthrow the leaders that will destroy the world.

One Last Item~Nukes

Yes, nukes. The US recently pulled out of the INF treaty signed in 1987 by President Reagan of the US and Soviet General Secretary Mikhail Gorbachev. I blame John Bolton and his band of evil, wicked neocon psychopaths. So guess what? The House Democrats allowed a massive spending bill through just to keep the govt afloat (with no shutdown) without stripping the language about developing new nukes.

Fin

I see a losing battle unfolding in slow motion, in real time, happening all around, without much of a chance of turning things around in time to avoid catastrophe. How bad it gets depends on now. Protests won’t do. Maybe nothing can be done. I already grieved the loss a few years ago when I saw the extent of what’s been brought forth by industrial capitalism, and the destruction of things I thought would be around for millennia to come. The 6th Mass Extinction Event is real and happening now. Time to find a way to mitigate the wounds and start healing as soon as possible, because it is ultimately up to all of us, humankind. Peace, The Ol’ Hippy
https://medium.com/@jrallen1200/saturday-a-losing-battle-cf18ade92896
['John Allen']
2019-12-14 19:14:45.707000+00:00
['Social Commentary', 'Media', 'Politics', 'Propaganda', 'Dystopia']
10 Useful Things To Know About Ransomware
Critical in the fight against ransomware is raising awareness. Ransomware is a particularly nasty form of cyberattack that can hit many types of businesses and people. These attacks can potentially disrupt businesses or cause significant headaches due to data loss, and they can be challenging to recover from. In cases of actually following through with ransom demands, it can also be frightfully expensive. Here are ten things you should know.

It’s not a new thing

While the profile of ransomware has risen due to greater awareness, it’s far from being a new problem. In the early 80s, enterprising hackers infiltrated early systems, encrypted data, and held it hostage for ransom. In 1989, the crime took the next step toward automation with the AIDS Trojan ransomware attack, which was distributed on floppy disks handed out at the World Health Organization’s AIDS conference.

Who is vulnerable

Pretty much anyone. While the most popular victims are businesses or entities with access to a large amount of funds and a reputation to protect, even small companies and individuals have fallen victim to ransomware attacks. Large-payout perpetrators of ransomware tend to be more targeted for investigation and prosecution. Smaller attacks do not always get the same attention due to limited resources on the part of law enforcement. Irrespective of the size of your business, ransomware can be a major problem.

How ransom payment is arranged

Bank transactions can be traced, and cash drop-offs carry too many physical risks for cybercriminals. Bitcoin and other virtual currencies are the preferred ransom payment methods, as they’re more challenging to trace.

Dealing with ransomware emails

Have a proper sense of caution with emails. Don’t click on links in emails from unknown or suspicious sources. Also, avoid opening any email attachments from senders you don’t know or are not sure about. In particular, attachments that ask you to enable macros should be disposed of immediately. This is a common method for spreading ransomware.

Attacks often sneak past typical security measures

Ransomware enters many computer systems via links shared through legitimate-looking emails. Many of these emails are skillfully crafted to look like they come from trusted companies and vendors, or they’re worded in such a way as to present a tempting offer. Users who lack basic cybersecurity sense can believe that these emails are from legitimate sources and will provide their information or accidentally install malicious files. When that happens, ransomware gets a free pass into the network.

When ransomware attacks

What happens when a piece of ransomware infects your computer? There are a couple of possibilities. Often, it encrypts all of your data, making it worthless without a key to undo the encryption. Or it could lock down your system and block all your access to your data. Or both. To get access back, the person or persons behind the attack will demand a payment of some kind. Usually, if a business or individual is unwilling or unable to meet the ransom demand, those files are lost forever.

Ransomware relies mostly on human error

The emails attackers most often use as an avenue for distributing ransomware look very legitimate. Some even make persuasive pitches that the email recipient needs to take some action (such as updating a password to an account via a link).
The success of these harmful emails relies on people being convinced of their legitimacy. In fairness, some can be very difficult for the average user to detect.

Even if you pay, there is no guarantee you’ll get your files back

Perhaps you’ve heard the phrase, “There is no honor among thieves.” Just because you’ve fulfilled the attacker’s demands doesn’t mean they will always hold up their end of the agreement. Some may demand more money. Some may release a portion of the data and hold the rest for more ransom. Some might just disappear without releasing any data at all.

Everyone is a target

Large-scale ransomware attacks get the most news, but small-to-medium-sized businesses fall prey to these attacks as well, as do individuals sitting in their own homes. Some attackers prefer large numbers of low-payout ransoms as opposed to large-scale attacks. Computers make it easy for them to run thousands of scams all at once. So don’t think it could not happen to you.

Steps you can take to protect your data

Here are a few things you can do to keep from falling victim to ransomware (for a taste of automating the attachment-screening step, see the sketch at the end of this article):

Keep regular backups, with at least one saved on an external device not attached to the rest of your system.
Update your firewalls and email filters.
Train employees on smart email use and how to recognize a phishing email or potential ransomware threat.
Contract with a reputable third-party IT security service provider.

Dealing with a significant threat

Ransomware can cripple your business and cause severe disruption of operations, and ransom demands can be expensive. That doesn’t mean you can’t take steps to guard against such attacks and raise awareness of the threat. Make sure everyone who has access to your computer or system receives some basic cybersecurity training and knows to avoid suspicious emails. Early prevention is often your best defense.

Thank you for reading. I’d love to share more with you via my Bi-Weekly Word Roundup newsletter sent to subscribers every other Sunday. It will feature news, productivity tips, life hacks, and links to top stories making the rounds on the Internet. You can unsubscribe at any time.
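As a small illustration of the attachment-screening advice above, the Python sketch below flags attachment types that commonly carry macros or executable payloads. It is a toy example under stated assumptions (a message saved as a .eml file, and an extension list chosen for demonstration), not a substitute for a real mail gateway.

import email
from email import policy
from pathlib import Path

# File extensions commonly used for macro-enabled or executable payloads
# (an illustrative list, not an exhaustive one)
RISKY_EXTENSIONS = {".docm", ".xlsm", ".pptm", ".js", ".vbs", ".exe", ".scr"}

def flag_risky_attachments(eml_path):
    """Return filenames of attachments in a saved .eml message whose extension looks risky."""
    msg = email.message_from_bytes(Path(eml_path).read_bytes(), policy=policy.default)
    flagged = []
    for part in msg.iter_attachments():
        name = part.get_filename() or ""
        if Path(name).suffix.lower() in RISKY_EXTENSIONS:
            flagged.append(name)
    return flagged

# Hypothetical usage: inspect a message exported from your mail client
print(flag_risky_attachments("suspicious_message.eml"))

A real filter would quarantine rather than just report, and would inspect file contents, not just names; the point here is simply that the "dispose of macro attachments" rule is mechanical enough to automate.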
https://medium.com/technology-hits/10-useful-things-to-know-about-ransomware-2d82c7ffa5f5
['John Teehan']
2020-12-18 06:13:16.866000+00:00
['Technology', 'Tech', 'Business', 'Cybersecurity', 'Security']
From space truths to basement sleuths, debunking theories about MH370
How a West Australian startup dispatched a supposed tech sleuth in the time it takes to Instagram a selfie.

239. That’s the number of people lost without a trace on March 8th, 2014. Four years later, the world is still clueless as to what caused MH370 to disappear. The forensic search has been long and exhaustive, and yet still without a conclusion. This void of truth has given space for unfounded theories. While useful as clickbait, these theories prey upon the grieving survivors and discount the immediately available hard science, itself waiting to tell the truth.

In the cruisey outer suburbs of Perth, Western Australia, a cadre of blockchain programmers and geospatial experts has cracked the timeless mystery of using aerial images and maps to tell a true story. Digital images (photographs) are imbued with a rich tapestry of pixel-based information once thought impossible to fake. There are now analytical programs capable of detecting altered or anomalous images, but there was no program able to tell if a validated, historically accurate, and corroborated aerial image had been used to tell a fake story, until now.

With these three qualifiers, Chris Lowe, head developer and WRX-driving nerd at Soar, quashed notions that MH370 was lying unobserved in Cambodia’s jungle. Chris’ sleuthing took about the same amount of time it takes to scope out an urban legend on Snopes.com.

Step One: Chris Lowe utilised Soar to locate and identify the conspicuous image.

Hindsight isn’t 2020, it’s 2018 and before

Knowing that he had a comprehensive satellite photo database for this location in Soar, Chris quickly gathered satellite images from after March 8, 2014 to check for evidence of MH370. Coming up empty for any evidence of the wreckage was Chris’ first indication that this theory was thinly supported. But like any fake news, the idea that MH370 had finally been found after four years was highly emotive, or at least ‘click-baitive’. According to Chris, “knowing that I had a collection of images from the same location for various dates after 2014, it was a no-brainer, I could easily prove that the plane never crashed here. Because satellite photos are effectively time lapse photos, I can go back in time and tell you what the earth says”.

Step Two: Soar uncloaks erroneous MH370 discovery claims using satellite imagery from September 2018.

One picture tells a thousand stories, but a thousand pictures tell a true story

It’s easy to control the narrative when you only give a singular perspective. Let’s try that. An immigrant committed a crime. So because someone else is an immigrant, they’re capable of committing the same crime. Therefore, if we ban immigrants, they can’t commit crime. Here’s another example: I have a grainy photo of an aeroplane in an inaccessible area where people don’t have the internet, and because they never found MH370, this must be it!

Speaking to the first analogy, consider a fuller conversation: one immigrant committed a crime; another immigrant came to Australia to learn English, where he met his wife and started a family; another came here for the surf and now writes content for a startup; another came here and now leads a team of developers; a fifth immigrant came here, grew up, and founded a geospatial company. Four of these five immigrants now work for Soar. All summed up, this paints a truer, more highly faceted story. People need a tool to quickly truth-test egregious claims such as this MH370 fumble-in-the-jungle story. We hope you now know what it is.
The MH370 in Cambodia myth, debunked on Soar.

Deep into our development, Soar already delivers timely satellite information and, as it grows, will include more drone and manned-aircraft images. Just envision the forthcoming ability news outlets will have: high-resolution drone images for airplane crash debris scatter analysis, aerial images for detecting crash scars on jungle landscapes, and access to satellite images useful for assessing the terrain and weather conditions associated with a crash site. "The cumulative analysis from three types of aerial platforms will be incontrovertible," says Amir Farhand, CEO and founder of Soar.

There's no controlling how news is disseminated. You know it as well as I do: it takes longer to disprove an assertion than it does to make one. Fake news, because it's first, often stays in people's minds as the real news, because nobody wants to be unconvinced. Being wrong is a humbling experience.

Using the blockchain, we can soar above the rumblings and grumblings choking our Twitter feeds. Soar proposes that the validation and retention of metadata (descriptive information) is crucial to delivering truthful aerial photos in the news. When unaltered, image metadata gives the full picture of how, when, where, and with what settings a photo was taken. When a transaction occurs on the blockchain, this information is permanent and visible for the world to see. Thus, if at any point an image gets manipulated (its location, date, or other parameters are changed), the transaction anomaly raises its own red flag for the entire blockchain to see. "You can think of Soar as a dodginess-detector for aerial images," says Farhand.

143,400 people. Take 239 people and multiply that by 600 (the average number of people known by a person). 143,400 people are potentially disheartened each time a flagrant MH370 claim gets pimped on the internet. Breeders and feeders of untruths need to be aware: the blockchain is watching. "The truth is always the strongest argument." (Sophocles)
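To make the metadata idea concrete, here is a minimal, hypothetical sketch of hash-based tamper detection. The field names and functions are invented for illustration; this is the general technique, not Soar's actual implementation:

import hashlib
import json

def fingerprint(image_bytes: bytes, metadata: dict) -> str:
    # hash the image content together with its capture metadata
    canonical = json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(image_bytes + canonical).hexdigest()

# at capture time, the fingerprint is recorded on an immutable ledger
original_meta = {"lat": 13.41, "lon": 103.87, "taken": "2018-09-01"}
recorded = fingerprint(b"<raw image bytes>", original_meta)

# later, anyone can recompute the fingerprint from the published image
altered_meta = dict(original_meta, taken="2014-03-08")  # date changed
check = fingerprint(b"<raw image bytes>", altered_meta)

print("metadata intact" if check == recorded else "red flag: metadata altered")

Because the recorded fingerprint cannot be rewritten, any edit to the image or its metadata produces a mismatch that anyone can verify.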
https://medium.com/soar-earth/from-space-truths-to-basement-sleuths-debunking-theories-about-mh370-1201b1859898
['Darren Smith']
2018-09-11 06:19:48.868000+00:00
['Satellite Imagery', 'Mh370', 'Cryptocurrency', 'Blockchain', 'Plane Crashes']
SQL Server on Docker
The ease of having a containerized SQL Server for isolated development.

Programming teams often share a single SQL Server for their coding work. This limits how freely a developer can experiment with creative solutions to the problem at hand. What if we could isolate a copy specifically for the developer and maintain the integrity of the SQL Server for the rest of the team? What if we did not need a SQL Server instance installed and running on the local computer? SQL Server can be run on any platform by leveraging container technology. Let's look at how to set up a SQL Server database inside of a Docker container.

Prerequisites: Docker Desktop and PowerShell. Install Docker Desktop first.

Let's get started: with only two Docker commands, we can have a localhost instance of SQL Server to connect to. The first command pulls the mssql image (if it is not already present) and starts a container named my-local-db; the second lists your containers so you can confirm it is running. Then you can connect to the database using SQL Server Management Studio.

docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Str0ngPwd" --name "my-local-db" -p 1433:1433 mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-16.04
docker ps -a

As you can see, you have a container hosting your SQL DB, locally, and you can connect to localhost. To view your containers, Visual Studio provides a Containers window; to connect, use SQL Server Management Studio. Now you can use this as a regular SQL Server and execute CRUD operations. One thing to remember, though: if the container is removed, the data is lost (see the note on volumes below).

If you wish to use this as a connection string in your applications:

"DbContext": "Data Source=tcp:localhost,1433;Initial Catalog=my-local-db;Persist Security Info=True;User ID=SA;Password=Str0ngPwd"

In conclusion: SQL Server can be run on any platform by utilizing container technology. We can drop and recreate tables, databases, functions, and stored procedures. This is especially useful for debugging Docker DB-related issues on pipelines: we can load the exact script used in pipelines for unit tests into the local DB and identify the issues with unit-test tasks. For more SQL image versions and tags on Docker: click here
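One caveat in the setup above is worth addressing: because the data lives inside the container, removing the container deletes your databases. A common remedy, sketched here under the assumption that the image stores its data in the documented /var/opt/mssql directory, is to mount a named Docker volume (the name sqlvolume below is arbitrary):

docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Str0ngPwd" --name "my-local-db" -p 1433:1433 -v sqlvolume:/var/opt/mssql mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-16.04

If this container is later removed, starting a new one with the same -v flag picks up the existing databases. You can also sanity-check the instance from the shell, assuming the image ships the mssql-tools package:

docker exec -it my-local-db /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Str0ngPwd" -Q "SELECT @@VERSION"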
https://medium.com/@indira-raghavan/sql-server-on-docker-7fa3fec44472
['Indira Raghavan']
2020-12-22 18:46:01.352000+00:00
['Docker', 'Containerization', 'Sql Server', 'Linux']
4 Ways to Start a Business with No Money
Starting a business is a goal for many people, but many fail to follow through for the same reason: lack of money. What many people don't know is that starting a business depends on resources beyond funding. If money is the only barrier between you and becoming a successful entrepreneur, here are four ways to start a business with no money.

The Most Valuable Resource. While money may seem like the most valuable resource when starting a business, great ideas actually contribute more to the bottom line. If you don't have a clear idea of how to build a successful business, look at what other companies are doing and find ways to do it better. This may include higher-quality customer service, offering similar products at lower prices, or tapping into markets that are overlooked.

Make a Plan. When you are ready to transform your idea into reality, start simple and stay within the confines of your current resources. Make a clear plan of the steps that you will take, including benchmarks and milestones. The plan should be in writing so you can refer back to it as needed. Include details such as a description of your brand, a basic budget, action steps, and expected results. If the results don't come to fruition, rather than putting more money into the approach, be creative and find new ways to adapt.

Look for New Funding. Your pocketbook does not have to be the only financial resource for your new business. Start researching other avenues, such as crowdfunding, grants, angel investors, and small business loans. Many of these rely only on a great idea, knowledge of the market, and a solid business plan. A little thought and research now will make your business concept more attractive to those who are able to help you launch your new company.

Sound Financial Management. Look into the future and see yourself making incredible profits from your business. Set yourself up for success by introducing sound financial-management practices now. Keep fixed costs to a minimum, avoid unnecessary expenses, and minimize necessary ones. The tools and techniques that you learn now will ensure that you are ready for the transition from starting a business with no money to running a profitable company.
https://medium.com/@stuartfrost/4-ways-to-start-a-business-with-no-money-db232bebe182
['Stuart Frost']
2020-12-08 21:11:56.136000+00:00
['Laguna Niguel', 'Leadership', 'Busines', 'Stuart Frost', 'California']
“And Here’s To You, Mrs. Robinson”
The next tenant in the Dragon House was recommended to us by a casual acquaintance. Janet was a single mom with a good rental history. You bet we checked her references carefully. Her current landlord told us she was always on time with her rent, and they had positive things to say about her at the convenience store where she worked. That seemed good, so we accepted her application.

She ended up being our worst nightmare. The den mother to two teenaged boys, she wanted to be their best friend instead of their parent. As far as we could see, there were no rules. The cool mom, she wanted to hang around with their friends, to the point where it was seriously inappropriate. It reminded me of the movie The Graduate, where a young college graduate has an affair with an older married woman. Only in this case, the young men were in high school. *Spoiler alert: this trailer summarizes the entire movie. YouTube Trailer — The Graduate

Janet's parenting style, if you could call it one, was to open her doors to all the kids in town and let them do what they wanted. Our beautiful little house turned into a party house. The neighbors would call to tell us that there were kids on the roof in the middle of the night, drinking and throwing bottles. The police became very familiar with the family's antics. So did we.

Each time there was a complaint, we would take our official clipboard and visit Janet. After a review of the rental rules, we would give her an ultimatum: she had to stop causing problems with the neighbors or we would evict her. It was an empty threat. The Landlord-Tenant Act in our Province is very lenient toward tenants. We couldn't serve her notice because of noise complaints from the neighbors; the complaint had to come from another tenant in a building we owned. The town didn't have a noise bylaw, so we were unable to use that as a way to get her out. The only way we could remove her was non-payment of rent, or if she did something illegal.
https://medium.com/illumination/and-heres-to-you-mrs-robinson-d203d1550b89
['Tree Langdon']
2020-11-14 18:59:40.297000+00:00
['Self Improvement', 'Business', 'Series', 'Fiction', 'Landlord']
How to Help Kids Manage Anger
Some kids just seem to have a short fuse, but that doesn't mean they can't learn how to manage their anger. Here are a few tips for teaching your child healthy anger management in everyday life. We are sure they will help your little ones. 😊

Tip 1: Create an anger thermometer on a piece of paper and ask your kid to point to the level that matches their anger. This way, they become aware of how they are behaving.

Tip 2: If you're unsuccessful with tip 1, don't lose hope, because we have some more ideas in our basket. There are certain situations in which kids have to control their anger, whether at a social gathering, a family meeting, school, or the playground. So, instead of scolding them or blaming their behavior, you can ask them to try these activities.

1. Listen to Music: It's a proven fact that listening to music can help kids control their anger. It distracts them from their sadness or anger, and instead they start tapping along to the beat. In one study (PDF) done at the University of Gothenburg in Sweden, participants who listened to music after a stressful episode in their everyday lives reported decreased levels of stress compared to individuals who didn't.

2. Count to 10: There is an old saying: if you are angry, count to 10; if you are very angry, count to 100. But have you ever wondered why it works? The logic behind it is simple. "The familiar childhood admonition of 'counting to 10' before taking action works because it emphasizes the two key elements of anger management — time and distraction," says Johnston, PhD, an assistant professor of psychiatry and behavioral science at Mercer University School of Medicine in Macon, Ga.

3. Take a Breath: We all know that when we are angry, our breathing gets quicker and shallower. If that continues for long, the symptoms can get worse. The best way to calm your body is to slow and deepen your breathing.

4. Do Yoga: This is where yoga comes into the picture. Try breathing slowly in through your nose and out through your mouth. Breathe deeply from your belly rather than your chest. It will help you remain calm and shut out the stress around you. Try It: Corpse Pose (Savasana). Lie down with your limbs gently stretched out, away from the body, with your palms facing up. Try to clear your mind while breathing deeply. You can hold this pose for 5 to 15 minutes.

If they practice these tips on a regular basis, we are sure they will be able to control their anger soon. Do let us know if you have any other ideas to control anger. ❤️ Note: If you wish to get a downloadable PDF of the Anger Thermometer, kindly click here:
https://medium.com/@themollycoddle/how-to-help-kids-manage-anger-bbe723c9de7c
['The Molly Coddle']
2021-03-05 18:04:36.799000+00:00
['Anger Management', 'Parenthood', 'Parenting Advice', 'Kids', 'Parenting']
Selenium Tutorial For Beginners Step by Step With Examples | Testbytes
Everybody knows about the impeccable Selenium, the ultimate tool for testing web applications! To help you learn in detail how to carry out automation testing, we have written an extensive Selenium tutorial just for you. This blog comprises three parts:

1. Selenium Tutorial For Beginners
2. Selenium Intermediate Level Tutorial
3. Selenium Advanced Level Tutorial

Selenium Tutorial For Beginners

What makes Selenium better? With Selenium IDE you don't need to code anything, so any beginner is able to record and play back the simplest web application scripts. Selenium RC needs a server to be up and running for sending commands to the browser; it is used for cross-browser testing, and you can write the code in any language. Selenium WebDriver is a better version of IDE and RC: it sends commands directly to the browser without the need for a server to be up and running. Different languages can be used for coding the scripts, like Java, C#, PHP, Python, Perl, and Ruby. Selenium Grid is used for parallel testing in multiple browsers and environments. It uses the hub-and-node concept, where the hub acts as the source of Selenium commands and each node is connected to it.

Here we will discuss Selenium WebDriver: how a beginner can start learning it and how to excel at it. First, let's look at the steps to download Selenium WebDriver to your machine.

Ways to download and install Selenium WebDriver

You should have Java installed on your machine; this is the prerequisite for Selenium WebDriver to work. You can visit the page http://seleniumhq.org/download/ and download the client drivers and language bindings. You have to select the binding for Java. This download will be named selenium-2.25.0.zip. Now you can import all the JARs in Eclipse: right-click on the project, click on the Libraries tab, and then click on "Add External JARs" to select all the downloaded JAR files.

Now let's look at the first Selenium WebDriver script

Let's take an example of a first Selenium script created using basic Selenium methods. In this script, we will do the following test steps: go to the home page of the test application, verify the title of the page, do a comparison of the result, and close the browser after the script is done.

package projectSelenium;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class seleniumTest {
  public static void main(String[] args) {
    System.setProperty("webdriver.chrome.driver", "C:\\chromeDriver.exe");
    WebDriver driver = new ChromeDriver();
    String baseUrl = "https://google.com";
    String expectedTitle = "Google";
    String actualTitle = "";
    // launch Chrome and direct it to the base URL
    driver.get(baseUrl);
    // get the actual value of the title
    actualTitle = driver.getTitle();
    /*
     * compare the actual title of the page with the expected one and print
     * the result as "Passed" or "Failed"
     */
    if (actualTitle.contentEquals(expectedTitle)) {
      System.out.println("Test Passed!");
    } else {
      System.out.println("Test Failed");
    }
    driver.close();
  }
}

Things to note in the above code: in the first two lines, we have imported two packages, org.openqa.selenium.WebDriver and org.openqa.selenium.chrome.ChromeDriver. The most important step is to instantiate the browser. This is done by the line:

WebDriver driver = new ChromeDriver(); // this invokes a Chrome browser
You can invoke a Firefox browser with the following line of code:

WebDriver driver = new FirefoxDriver();

You can invoke an IE browser with the following line of code:

WebDriver driver = new InternetExplorerDriver();

Also, while invoking a browser you have to pass the path of its executable file. You can do it with the following lines of code:

System.setProperty("webdriver.chrome.driver", "path of chrome driver");
System.setProperty("webdriver.ie.driver", "path of ie driver");

The get() method is used to enter a URL in a browser. The getTitle() method of Selenium WebDriver is used to fetch the title of a web page. Now, we have to compare the expected title with the actual title:

if (expectedTitle.equals(actualTitle)) {
  System.out.println("TEST PASSED");
}

For terminating the browser, the close() method is used. driver.close() closes the active browser window. If you want to close all the browser windows opened by Selenium WebDriver, use driver.quit(). You can run this test by right-clicking on the program and then selecting "Run As" > "Java Application".

The next thing which is of utmost importance while writing a test script is identifying web elements, which is explained in detail in the section below.

Locating Web Elements

Locating web elements is very easy; various selectors are available for that purpose. findElement() is one such method of Selenium WebDriver, used for locating a web element so that you can perform an action on it.

Know More: Selenium Automation Testing With Cucumber Integration

Let's see some of the locators by which you can identify a web element on a web page.

className — locates web elements based on the class attribute. Eg: By.className("abc");
cssSelector — locates web elements using the CSS selector engine. Eg: By.cssSelector("#abc");
id — if a web element has an id attribute, you can directly identify it using the id. Eg: By.id("abc");
linkText — finds a link element by the exact text mentioned in the test script. Eg: By.linkText("Login");
name — if a web element has a name attribute, you can identify it using the name. Eg: By.name("name");
partialLinkText — finds a link element whose text contains the text mentioned in the test script. Eg: By.partialLinkText("abc");
tagName — locates all elements which have the given tag.
xpath — the most used locator in a Selenium test script. It identifies the element using its HTML path, which can be relative or absolute: an absolute xpath traverses the path of the web element from the root, while a relative xpath takes a reference from some web element and then traverses to the specified web element. It is better to refer to an element by relative xpath rather than absolute xpath.

Basic actions on a web element

You can click on a web element by using the click() method of Selenium WebDriver.
You can locate a web element and then perform an action on it. Eg:

driver.findElement(By.xpath("")).click();

Also, you can send keys to a particular web element by using the sendKeys() method of Selenium WebDriver: locate a web element and then enter some text in it. Eg:

driver.findElement(By.xpath("")).sendKeys("name");

There are other actions which you can perform on a web element by using the Actions class:

WebElement wb = driver.findElement(By.xpath(""));
Actions actions = new Actions(driver);
actions.moveToElement(wb).build().perform();

You can even switch to alert boxes which come up when you click on some web element. You can do it with the switchTo().alert() method. Eg:

WebElement wb = driver.findElement(By.xpath(""));
wb.click();
driver.switchTo().alert();

Now you will be able to access the alert box. You can retrieve the message displayed in the alert box by getting its text:

String alertMessage = driver.switchTo().alert().getText();

Also, you can accept the alert box with the accept() function. See the sample code below:

driver.switchTo().alert().accept();

You can also run conditional checks on a web element: check whether a web element is enabled (if it is, you can do some operation on it), check whether a web element is displayed, and, in the case of radio buttons, check whether the radio button is selected. You can do these checks with the isEnabled(), isSelected(), and isDisplayed() methods.

Waits in Selenium WebDriver

If you want one step to be completed before any other step, you have to wait for the prior step to finish. In manual testing this is very easy to achieve, but in automation testing it is a bit tedious: you have to wait for the previous step to complete or for a condition to be fulfilled before moving onwards to the next step. This can be achieved by adding waits in between. There are two types of wait: explicit and implicit. If you are expecting a particular condition to be fulfilled before moving to the next step, use an explicit wait; if you just want a universal wait, use an implicit wait. The implicit wait sets the default timeout for the whole script. A solid automation script uses both types of waits; you have to apply them judiciously to make an efficient test case.

Know More: Top 50 Selenium Interview Questions and Answers

Explicit Wait

Syntax of an explicit wait:

WebDriverWait wait = new WebDriverWait(webDriverReference, timeOutInSeconds);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("")));

ExpectedConditions can be used with many conditions, some of which are:

alertIsPresent()
elementSelectionStateToBe()
elementToBeClickable()
elementToBeSelected()
frameToBeAvailableAndSwitchToIt()
invisibilityOfElementLocated()
invisibilityOfElementWithText()
presenceOfAllElementsLocatedBy()
presenceOfElementLocated()
textToBePresentInElement()
textToBePresentInElementLocated()
textToBePresentInElementValue()
titleIs()
titleContains()
visibilityOf()
visibilityOfAllElements()
visibilityOfAllElementsLocatedBy()
visibilityOfElementLocated()

Implicit Wait

Syntax of an implicit wait:

driver.manage().timeouts().implicitlyWait(timeOut, TimeUnit.SECONDS);

For this, you have to import a package into your code.
The package name is java.util.concurrent.TimeUnit.

Selenium Intermediate Level Tutorial

Through the section Selenium Tutorial for Beginners, we gave you the basic information you need to know about the tool. Now let's go further and learn much more about this impeccable web app testing tool.

How to upload a file in Selenium test cases

To upload a file, you first have to identify the element to which the file should be uploaded. There you can directly use the sendKeys() function of Selenium WebDriver, passing the path of the file. In this way, you will be able to upload a file using Selenium WebDriver.

public static void main(String[] args) {
  System.setProperty("webdriver.gecko.driver", "path of gecko driver");
  String baseUrl = "http://www.google.com/upload/";
  WebDriver driver = new FirefoxDriver();
  driver.get(baseUrl);
  WebElement uploadElement = driver.findElement(By.id("id of element"));
  // pass the path of the file to be uploaded
  uploadElement.sendKeys("C:\\newhtml.html");
  // then you can click the upload file link
  driver.findElement(By.xpath("")).click();
}

How to use a web table in a Selenium script

You have to access a web table and the elements present in it, and you can get to them by building an xpath. Suppose you have a table with four cells. The first thing to do is find the XPath of the target web element in this web table. Let's say you want to get to the third element. The structure of the table is as follows: first there is a table tag, and inside it a tbody. In that tbody there are two rows, each with two cells. The first row has two cells, First and Second; the second row has two cells, Third and Fourth. Our goal is to reach the third cell. The XPath will be //table/tbody/tr[2]/td[1]: the table is the parent node from which we iterate, from there we go to the tbody section, then to the second row, and from there we take the first column. Let's write a script to get the text out of it.

public static void main(String[] args) {
  String url = "http://testsite.com/test/write-xpath-table.html";
  WebDriver driver = new FirefoxDriver();
  driver.get(url);
  String txtWebElement = driver.findElement(By.xpath("//table/tbody/tr[2]/td[1]")).getText();
  System.out.println(txtWebElement);
  driver.close();
}

Let's take an example of a nested web table. You have to analyze it carefully and derive its XPath. If you want to access the web element having the text 10–11–12, you can do it by traversing from the outer table and then iterating through the rows and columns to reach the inner one. The XPath would be //table/tbody/tr[2]/td[2]/table/tbody/tr[2]/td[2].

public static void main(String[] args) {
  String url = "http://testsite.com/test/write-xpath-table.html";
  WebDriver driver = new FirefoxDriver();
  driver.get(url);
  String txtWebElement = driver.findElement(By.xpath("//table/tbody/tr[2]/td[2]/table/tbody/tr[2]/td[2]")).getText();
  System.out.println(txtWebElement);
  driver.close();
}

This way, you can iterate through the rows and columns to reach a specific cell in a web table. Now for one of the most important concepts in Selenium, which will help you in the many cases where you aren't able to retrieve text from a web element or to perform an action on it through the normal API.
Let's talk about the JavaScript Executor in detail. It is an interface which helps to execute JavaScript.

JavaScript Executor

Sometimes you are not able to click on a web element using the click() function. You can then use the JavaScript executor to perform the click on the web element. Let's have a look at the code:

WebDriver driver = new FirefoxDriver();
// create a JavascriptExecutor interface object by type-casting the driver object
JavascriptExecutor js = (JavascriptExecutor) driver;

You can now click on a web element using the commands below:

WebElement button = driver.findElement(By.xpath(""));
js.executeScript("arguments[0].click();", button);

Also, if sendKeys isn't working, you can make use of the JavaScript executor to set a value. Look at the example below:

js.executeScript("document.getElementById('id').value='value';");

You can even make use of the JavaScript executor to refresh a web page. You can do it with the following command:

js.executeScript("history.go(0)");

Sometimes getText() doesn't work, and then you have to make use of the JavaScript executor to get the text of a web page. You can do it with the following line of code:

System.out.println(js.executeScript("return document.documentElement.innerText;").toString());

You can even get the title and URL of a web page using the JavaScript executor. The procedure is very simple; have a look at the following lines of code:

System.out.println(js.executeScript("return document.title;").toString());
System.out.println(js.executeScript("return document.URL;").toString());

Desired Capabilities in Selenium WebDriver

You can define the set of configurations on which you want a particular test script to run. You can pass the browser name and version to specify the type of environment on which you want a test case to run. Let's see some of the capabilities which you can set in a test case for the IE browser:

// define IE capabilities
DesiredCapabilities cap = DesiredCapabilities.internetExplorer();
cap.setCapability(CapabilityType.BROWSER_NAME, "IE");
cap.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);

In the above capabilities, we have passed the browser name and ignored the security domains. After setting the capabilities, you can pass them to the WebDriver instance so that it executes the test on that particular configuration. Let's have a look at the complete code.
public static void main(String[] args) {
  // define IE capabilities
  DesiredCapabilities cap = DesiredCapabilities.internetExplorer();
  cap.setCapability(CapabilityType.BROWSER_NAME, "IE");
  cap.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
  System.setProperty("webdriver.ie.driver", "path of executable");
  WebDriver driver = new InternetExplorerDriver(cap);
  driver.get("http://gmail.com");
  driver.quit();
}

Handling a drop-down in Selenium WebDriver

You first have to import the package org.openqa.selenium.support.ui.Select. Then, identify from the DOM whether the drop-down is of select type or not. If it is of select type, follow the steps below. Uniquely identify the select tag, make an object of the Select class, and pass it the element from which you have to choose the options:

Select dropdown = new Select(driver.findElement(By.xpath("")));

Now, there are three main methods you can use to select any option from this Select object: selectByVisibleText, selectByIndex, and selectByValue. You can select an option from the drop-down by matching its visible text with the text passed by you, by its index, or by its value attribute. There are further methods available, such as deselectAll() to clear every selection when more than one option can be selected.

But if the drop-down is not of select type, you can't use this conventional method; you have to follow another one. Uniquely identify the web element for the drop-down and use the sendKeys() function of Selenium WebDriver to send the desired value to it:

WebElement dropdown = driver.findElement(By.xpath(""));
dropdown.sendKeys("value to be selected");

How to select checkboxes and radio buttons in Selenium

Sometimes you come across situations where you have to select checkboxes and radio buttons. You can do it easily with Selenium WebDriver: just use the click() function of WebDriver on the checkbox or radio button. You can also check whether the web element is selected: isSelected() returns false if the web element is not selected and true if it is. This way, you can handle radio buttons and checkboxes in a Selenium script.

Selenium Advanced Level Tutorial

Now you know almost all the material required for the beginner and intermediate levels, and you are proficient enough to deal with the advanced level of Selenium WebDriver. Practice those topics, and once you are done, move forward to the advanced course. Let's see what lies ahead and what can give you an edge in your interviews, putting you ahead of all the candidates who only know the basic and intermediate material.

Selenium Grid

Selenium Grid is used for running various tests on different browsers, operating systems, and machines in parallel. It uses the hub-and-node concept. You don't want all test cases to run on a single machine: you have your machine as a hub and various systems across which test cases are distributed. You call those machines nodes.
The hub is the central point, and there should be only one hub in a grid. The hub is the machine from which you distribute the test cases among all the clients; the nodes are the machines on which the test cases actually execute. There can be more than one node, and nodes can have different configurations, with different platforms and browsers.

Let's see how you can establish a Selenium Grid on your machines. You can download the Selenium Grid server from the official site and place the selenium-server-standalone JAR on the hard drive of the hub and of every node (for example, in the C: directory). Open the command prompt on the hub, go to the C: directory, and fire the command below:

java -jar selenium-server-standalone-2.30.0.jar -role hub

To check whether the Selenium Grid server is running (on port 4444 by default), visit http://localhost:4444/grid/console in your browser, and note down the hub's IP address and port. Then go to the C: directory of each node, open a command prompt, and register the node against the hub:

java -Dwebdriver.gecko.driver="C:\geckodriver.exe" -jar selenium-server-standalone-3.4.0.jar -role webdriver -hub http://<ip-address-of-hub>:4444/grid/register

When you fire the above command, go back to the grid console in the browser and refresh the page; you will see the node's IP address linked to the hub. Now you have set up machines with different configurations. But how would the hub know which test case should run on which node? This is done through desired capabilities, which declare that a test script must run on a particular set of configurations. One can do it using the source code sketched below. This way, you will be able to distribute the test cases across different machines and browsers, and do parallel testing using Selenium Grid.
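The original post's snippet for this step is missing, so here is a minimal sketch of how a test can request a particular configuration from the hub using DesiredCapabilities and RemoteWebDriver (Selenium 3 style); the hub URL below assumes the hub runs on the local machine and should be replaced with your hub's address:

import java.net.MalformedURLException;
import java.net.URL;

import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridExample {
  public static void main(String[] args) throws MalformedURLException {
    // declare the configuration this test needs
    DesiredCapabilities caps = new DesiredCapabilities();
    caps.setBrowserName("firefox");
    caps.setPlatform(Platform.WINDOWS);

    // the hub routes the session to a node that matches these capabilities
    WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), caps);
    driver.get("https://google.com");
    System.out.println(driver.getTitle());
    driver.quit();
  }
}

With several such tests pointed at the same hub, a test runner (for example, TestNG with parallel execution enabled) fans them out across the registered nodes.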
Maven and Jenkins Integration with Selenium

Maven is a build management tool which is used to make the build process easy. Maven avoids hard-coding of JARs, and you can easily share the project structure across the team using the group and artifact IDs, so every team member is on the same page. You can make a Maven project by going to File > New > Other > Maven Project. You then have to specify the group ID, artifact ID, and version. You will be prompted to select a template; for starting out, select the quickstart template. You get a folder structure with two source folders, src/main/java and src/test/java: in the java folder you maintain everything except tests, while in the test folder you maintain all the test cases. You will have a pom.xml file where you define all the dependencies, so that Maven can download them from the Maven repository and place them in the .m2 repository on your machine. You can get a dependency's coordinates from the official Maven site and place them in pom.xml; it will download all the required JARs.

Jenkins is used for continuous integration of the latest build with the production environment. Whenever a developer fires the latest build, the smoke test starts running on it. If the build passes, it can be deployed to production; otherwise the developer and tester get a notification about the failed build. This makes the delivery cycle very fast. You can set it up by downloading the Jenkins WAR file and running it so that the Jenkins server is up and running (on port 8080 by default). After doing that, you can install all the necessary plug-ins.

You can then create a new item and set the post-build actions. Also, you can pass the path of the Git repository from which it will fetch the latest build. If you are using a local server, you pass the path of pom.xml from the system. You can even set up nightly runs with Jenkins: specify the time when you want your tests to run, they run overnight, and you get the reports on the Jenkins server the next morning. Isn't that time-saving?

Database Testing using Selenium WebDriver

Integrating Selenium tests with a database is easy in the latest version of the tool. You have to make use of the JDBC library for database connections; it allows Java to connect to databases. First, load the JDBC driver:

Class.forName("com.mysql.jdbc.Driver");

Then make a connection to the database with the following command:

Connection con = DriverManager.getConnection(URL, "userid", "password");

Now you have to send a query to the database. How will you do it? Create an object of the Statement class and then execute the query using it, receiving the result in a ResultSet:

Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select * from employee");

After doing all the operations, you can close the connection:

con.close();

In this way, you can get the result set and then do all the validations on it.

How to take screenshots in Selenium WebDriver

You should get a screenshot of the failed test cases so that you can see where exactly the problem is; this helps with debugging. Let's look at the steps involved in taking a screenshot. First, cast the driver instance to the TakesScreenshot interface:

TakesScreenshot scrShot = ((TakesScreenshot) driver);

Now you have the getScreenshotAs() method, which captures the image file:

File srcFile = scrShot.getScreenshotAs(OutputType.FILE);

You have taken the screenshot, but you have to place it somewhere. You can create a folder used for storing all the screenshots captured during the execution, and copy the file there:

FileUtils.copyFile(srcFile, destFile);

This is the way to capture screenshots. But if you want the screenshot taken only when a test case fails, you can use ITestListener. It has various methods, and one of them, onTestFailure(), performs the specified task whenever there is a failure in a test case. So you can put this code in that method, and whenever any test fails it will take the screenshot and place it in the folder specified by you.

How to drag and drop on a web page using Selenium WebDriver

If you want to drag and drop something on a web page, it is very simple. You have to make use of the Actions class, which gives you a dragAndDrop() method that drags a web element from a source and drops it on a destination:

Actions actions = new Actions(driver);
actions.dragAndDrop(sourceLocator, destinationLocator).build().perform();

Make sure that the source locator and destination locator have correct xpaths. In this way, you will be able to drag and drop any element on a web page.

How to do right click and double click on a web page

You can do a double click on a web page using the Actions class, which provides a doubleClick() method for double-clicking on a web element.
You can do it with the following lines of code:

Actions actions = new Actions(driver);
WebElement elementLocator = driver.findElement(By.id("ID"));
actions.doubleClick(elementLocator).perform();

You do a right-click in Selenium using the Actions class as well. It is very easy: the class provides a contextClick() method for right-clicking on a web element:

Actions actions = new Actions(driver);
WebElement elementLocator = driver.findElement(By.id("ID"));
actions.contextClick(elementLocator).perform();

This way, you will be able to right-click and double-click on a web element.

How to switch to an iFrame

An iFrame is a web page embedded into another web page. Whenever you want to click on a web element which is in another iFrame, you first have to switch to that iFrame and then perform the action on it. You can do the switching with:

driver.switchTo().frame(index or name or id of iframe);

Conclusion

Before learning Selenium, it's better to have a thorough understanding of an object-oriented language. The languages that Selenium currently supports include Java, Perl, C#, PHP, Ruby, and Python. We genuinely hope this tutorial has helped you understand Selenium better.
https://medium.com/@testbytessoftware/selenium-tutorial-for-beginners-step-by-step-with-examples-testbytes-6943099fb967
[]
2019-09-03 08:34:07.849000+00:00
['Selenium', 'Selenium Tutorial']
Three Things Brokers Should Know About Trucker Tools’ Digital Freight Matching
Analysts and thought leaders in logistics are calling digital freight matching the big story of the year — and it's easy to see why. Where traditional load boards come up short, digital freight matching shines. Trucker Tools' digital freight matching uses machine learning technology and powerful algorithms to automatically match your open loads with the best available trucks from your list of preferred carriers in a matter of seconds. You never have to post on load boards, sift through emails from carriers or pick up the phone to cover your loads. Check out the top three things you should know as a broker or 3PL about Trucker Tools' digital freight matching platform.

Using load boards to find and secure capacity is a time-consuming process. You have to post the load to the load board, respond to carrier inquiries about your post and negotiate the rate with the carrier. Calling or emailing carriers to find and secure capacity is equally inefficient. We've heard from brokers and 3PLs who rely on these manual processes that it can sometimes take an hour or more to cover a single load. The digital freight matching platform created by Trucker Tools helps you run your business much more efficiently. Covering a load takes minutes or seconds, not hours. You never have to field calls or emails from carriers who may be unqualified or who are outside your network of preferred carriers. The time you save by using Trucker Tools' digital freight matching decreases your cost per load and helps you maximize your profits. It also allows you to move your human resources away from repetitive manual tasks. Instead, you can reassign your employees to relationship-building with shippers and carriers to grow your business.

The beauty of digital freight matching is that it not only makes your operations more efficient, it does the same for drivers and carriers. If a driver or dispatcher doesn't have to log onto a load board, email you or pick up the phone to find out what loads you have available, that's a win for the driver. In addition to matching your loads with trucks, Trucker Tools' digital freight matching lets truckers see what loads you have available any time of day or night in the Trucker Tools driver app. Trucking companies also can view your available loads if they use our free software for carriers. Drivers and carriers can quickly find out what you have for loads and send you a rate quote directly through the driver app or through our carrier platform. Trucker Tools' Book It Now® tool even lets carriers book the load right in the driver app and records the transaction automatically in your TMS. Trucker Tools' digital freight matching platform makes it easy to book reloads with carriers, as well. When you're assigning a carrier to a load in our digital freight matching platform, you can find and book a reload with the carrier all in one step. Most carriers greatly appreciate it if you can help them eliminate or reduce deadhead miles. The time savings and efficiency that Trucker Tools' digital freight matching platform offers your carrier partners encourage them to keep pulling your loads again and again.

If you hear the words "digital freight matching" and think it's a technology that is only available to big 3PLs, shippers and freight brokers, think again. Digital freight matching is no longer just for the mega brokers and large logistics companies. Trucker Tools has made digital freight matching technology available and affordable to brokers and 3PLs of all sizes.
We're firm believers that everyone in the industry should have access to cutting-edge technology, whether you're moving 1,000 loads each month or 10,000. If you're a broker or 3PL on the smaller side who uses phone calls, load boards and emails to find and secure capacity, it's difficult to increase your volume without upping your overhead, because in order to move loads, you need to hire more people. One of the big advantages of Trucker Tools' digital freight matching is that no matter the size of your operations, our platform can help you increase the volume of freight that you move. It's simple: when you spend less time covering individual loads, you have more time available in the day to move more freight. Trucker Tools' digital freight matching platform can be the growth accelerator that takes your business to the next level.

To learn more about why now is the time to embrace digital freight matching, read How Digital Freight Matching Is Revolutionizing Transportation. Schedule a free demo of Trucker Tools' digital freight matching, real-time visibility platform and Book It Now®.
https://medium.com/trucker-tools/three-things-brokers-should-know-about-trucker-tools-digital-freight-matching-f9c4ad8e9d6a
['Tracy Neill']
2021-03-10 20:13:26.275000+00:00
['Broker', 'Shipping', 'Digital', 'Freight Shipping', 'Transportation']
Learning to Live Untethered
by: Zoe Johnsen

Joining locals in celebrating a Buddhist holiday at the village temple. Credit: Connor Flynn

Over the past year, one idea that I keep coming across is detachment from your identity. It seems paradoxical — aren't we meant to "find ourselves," discover our passions and develop our values, stand out as undeniably unique in the face of conformity? Why would we want to be transient beings, floating through life with no attachment to who we are?

I first stumbled across this concept, really considered it, during a meditation retreat. You might think of a hippie commune in the mountains, reminiscent of that infamous Mad Men finale, or of pristine white walls and middle-aged women trying to find inner peace, but the "retreat" of my experience was a humble open-air lodge and a few bamboo huts in a small village in northern Thailand. We did wear white flowy clothes, but meditated in the forest, to the sounds of cicadas, roosters, motorcycles revving and people arguing in the streets. We sat quietly with our minds four times a day, guided by a Buddhist monk in bright orange robes, followed by lessons on the religion — though to me it seemed less about religious doctrine and more about ways of seeing the world and yourself in it.

One of the main things I took from his words was the harm that comes from attachment, especially to aspects of our own identity. Without getting too much into it, a key principle in Buddhism, something that everyone must accept, is anicca: the fact that everything changes. Becoming too attached, then, sets one up for disappointment once the thing you're clinging to is inevitably lost.

Our simple sleeping quarters, with tin roofs that amplified the sound of nightly thunderstorms.

A lot of times big ideas like this are meaningless and pass right through you unless you can relate to them in the present moment, truly see them. Luckily, I was able to — I looked around me, at these thirteen other kids in their quickly-dirtying white clothes, and thought back to the "life rivers" we had shared with each other only a week ago. These rivers were crude, childish drawings of our life stories thus far, told with a quiet and raw honesty that didn't match the short time we'd known each other. Sitting now in a bamboo treehouse, swinging my legs over the edge, I had what I can only describe as an epiphany, noticing the common thread across our pitfalls. So many in our small group had struggled because we had become too attached to a part of our identity, based all our worth on it, only to have something change. However little, this change left us reeling, feeling lost in the world without that something to support us and to define us. Lost enough to make a radical change like stepping off the path expected of us, of everyone, leaving our known lives behind in an attempt to "find ourselves" again in a totally new part of the world.

Over the course of the next eight weeks I spent in Southeast Asia, I did slowly come to "find myself," though with this idea of detachment in the back of my mind, guiding my growth. I noticed myself opening up, embracing changes and new experiences and all the unknowns around me. I found myself by letting myself be free of expectations — personal, of friends and family, of culture. There was no one that I needed to be, that I was supposed to be, and realizing that allowed my true self to break through the person that I thought I was.
It was less of a discovery and more an unearthing of what had always been there, then allowing that exposed core of myself to blossom and grow towards whatever called her. I reveled in the different lives and possibilities I encountered, from the stories told by our Laotian trail guide, to the eccentric and careening lives of my program leaders in their 30 short years, to window-seat daydreams of all the different ways my own story could go.

Papae Retreat, rainy and a vibrant green, set in the forests north of Chiang Mai.

I'd like to think that coming home and being put so easily back into that box of expectations is just a bump in the road, though I do feel myself fighting those intrinsic ideas of limitation too frequently. However, I'm also reminded of my past self and joys every so often, brought back to the inspiration I first felt in that bamboo treehouse. I accumulated a huge number of travel books when I first came home, walking out of the library with foot-tall stacks almost weekly. One book in particular served as the first real reminder, urging me to solidify my thoughts into a permanent mindset, a view of life as impermanent. I felt it again a few days into the new year, while driving towards the Chicago sunset with the life-long friends I'd made on my 10-week trip. It's our first time together since being in an entirely different world, one that at this moment I'm reminded wasn't just a fever dream. For some reason it comes again after a hike on a cold afternoon in March, and I feel like standing in the middle of the road, arms out and screaming, a smile on my face, infectious in the way that I just can't stop it from coming.

College presents a unique challenge, though walking above towering gorges does help somewhat; it sparks that feeling of spontaneity and, honestly, joie de vivre. The philosophy of detachment follows me here too, into a seminar on Buddhism, as anatta: deconstruction of the self. Weekly writing exercises on its value remind me that I'll never forget it. On those bad days we all have, feeling stuck and limited and near-paralyzed, I try to remind myself that my present situation is one choice in a life of a million choices, one moment of a thousand moments. Nothing is the be-all end-all, and I wish for everyone to know that: myself, friends I've just made, strangers I pass on the sidewalk. It'll work out, and if it doesn't, there's another path. There are seven billion different ways to live, and I bet another billion beyond that. The only certainty is that nothing is certain — but I've found that's the beauty in it all.
https://medium.com/guac-magazine/learning-to-live-untethered-7d1b8f781999
['Guac Magazine Editors']
2019-03-12 16:40:45.221000+00:00
['Buddism', 'Travel', 'Retreats', 'Chiang Mai', 'Meditation']
And again Tableau. Powerful tips for connecting to the data.
Photo by Sam Dan Truong on Unsplash

When I started working with Tableau Server, it was a real pain to understand all the ins and outs: how to connect to data, how to optimise it, how to keep the dashboard up to date, etc. Now, after getting to know Tableau through experience, I can offer you a handful of tips that can facilitate your data workflow with Tableau.

Most of the time, use data extraction. There are two ways to connect to a data source: via a live connection or via an extraction. In a nutshell, a live connection queries the data source in real time, without copying data to Tableau Server. An extraction is a scheduled copy, refreshed every hour/day/period; all of the data is copied from the data source to Tableau Server. My recommendation is to only use a live connection when you need your data updated every second or every minute. Be aware, though, that heavy use of extractions will impact Tableau Server's performance, and that can cause your team/your coworkers to get a bit grumpy. Always try to keep in touch with your infrastructure team.

Separate data sources and visualisations. I'd like to mention just a couple of advantages of separation: 1. It's the first step toward a general, holistic approach of unifying data/content management, thereby allowing all users to find the right data in one place and not duplicate data. 2. This approach helps optimize performance on the server side. Try to reuse data sources as often as possible.

Don't use complex SQL queries. Only connect to tables or join tables; all preprocessing should be done outside of Tableau. It's important to separate the visualisation flow from the data preprocessing. This will increase the server's performance.

Schedule data extract refreshes appropriately. Refreshing data extracts during working hours may take a long time, as multiple processes/jobs/queries might be utilizing the databases at the same time. You should schedule your data refreshes during idle or non-working time.

Prioritize schedule refreshes. There may be many data refreshes connected to different workbooks in a refresh schedule. Some data refreshes involve an enormous amount of data and take a lot of time, while others involve less data and take less time. Some data refreshes are critical for the business, while others are implemented out of curiosity. You should always look for the sweet spot here.

Use incremental refresh instead of full refresh. An incremental refresh appends new records to the existing records in an extract. A full refresh deletes the whole extract and reloads old and new records. It is preferable to choose an incremental refresh, as it takes much less time, unless there is a mandatory business requirement to reload the whole range of data.

Good luck with your dashboards!
https://medium.com/@markeltsefon/and-again-tableau-powerful-tips-for-connecting-to-the-data-89808e149605
['Mark Eltsefon']
2020-12-11 13:45:01.323000+00:00
['Tableau', 'Visualization']
6 Things to Know to Get Started With Python Data Classes
3. Equality/Inequality Comparisons

Besides the initialization and representation methods, the dataclass decorator also implements the comparison-related functionality for us. We know that for a regular custom class, we can't have meaningful comparisons between instances if we don't define the comparison behaviors. Consider a custom class that doesn't use the dataclass decorator: two instances with the same values for all attributes are evaluated as unequal, because custom class instances are compared by their identities by default. In this case, the two instances are two distinct objects, and they're deemed to be unequal. However, with a data class, such an equality comparison evaluates to True. This is because the dataclass decorator automatically generates the __eq__ special method for us. Specifically, the equality comparison is conducted as if each instance were a tuple containing the fields in the order they are defined. Because the two data class instances have fields of the same values, they're considered equal.

How about inequality comparisons, such as greater than and less than? They're also possible with the dataclass decorator, by specifying the order parameter of the decorator (order=True). Similar to the equality comparisons, data class instances are compared as if they were tuples of their fields, and these tuples are compared lexicographically. For a proof of concept, the sketch below includes just two fields, and as you can see, the comparison results are based on the tuples' order.
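The code gists embedded in the original post are not reproduced here, so the following is a minimal sketch of the behavior described, using hypothetical RegularPoint and Point classes:

from dataclasses import dataclass


class RegularPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y


@dataclass(order=True)
class Point:
    x: int
    y: int


# regular instances are compared by identity, so this prints False
print(RegularPoint(1, 2) == RegularPoint(1, 2))  # False

# the generated __eq__ compares instances as tuples of their fields
print(Point(1, 2) == Point(1, 2))  # True

# order=True also generates <, <=, >, >= using lexicographic tuple order
print(Point(1, 2) < Point(1, 3))  # True, because (1, 2) < (1, 3)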
https://medium.com/better-programming/6-things-to-know-to-get-started-with-python-data-classes-c795bf7e0a74
['Yong Cui']
2020-11-18 17:07:40.546000+00:00
['Data Science', 'Python', 'Software Development', 'Programming', 'Artificial Intelligence']
Understanding Convolutional Neural Network with Malaria Cell Image Detection
In this article, we'll learn what a Convolutional Neural Network (CNN) is and implement one for the Malaria Cell Image dataset. I got the dataset from Kaggle. A CNN is a multilayer neural network that is good at identifying patterns within datasets. It uses mathematics to extract the important features of the data in order to classify it. Because these networks are good at pattern recognition, they are mostly used with images. They can also work with other data, with one condition: the data should have structure, i.e. shuffling the data must change its meaning.

Diving deep

To understand CNNs in detail, we need to understand two concepts: convolutions and pooling.

Convolutions

A convolution is the part of processing the image that reads its patterns, and it is one type of layer inside our CNN. It uses a filter matrix to extract the most important features of the image. Most of the time this filter matrix is a 3x3 grid, but it's possible to change that if need be. The filter is slid across the image matrix, and at each position the overlapping values are multiplied and summed. The following diagram visualizes this. Filters are very useful when dealing with images. You can see more examples of such filters from here. See how these filter values change the aesthetics of the image and highlight particular patterns.

Pooling

Pooling is used for decreasing the dimensionality of the image. It keeps the most important pixels of the image and discards all the others. The image below shows how MaxPooling works in our neural network. Notice how it reduces a 4x4 matrix to a 2x2 while retaining the information of the important features. Further layers of convolutions and pooling are stacked to extract the patterns. This step also helps in decreasing the dimensions before feeding the images to the dense layers ahead.

Into the code

We'll begin by importing the libraries. Our dataset contains two folders with different images: parasitized and uninfected. These images should be preprocessed before being passed to the model. This step is crucial because it has a major impact on the accuracy of the model. For this example, we looped through all the images in the directories and resized every image to 50x50. The images are then appended to the data list and their respective labels to the labels list. We convert the data into a NumPy array for passing into the model, and then shuffle the arrays. Next, we separate the training and testing images. The image arrays are divided by 255 to normalize the vectors. Wondering why? Pixels in any image are represented as values between 0 and 255, so dividing by 255 scales the values to between 0 and 1, a normalized range that is easier for the network to learn from.

Training the model

A convolutional neural network consists of multiple layers that learn from the data step by step and pass weights to the following layers. It should consist of the layers mentioned below:

Conv2D for the convolution layers
MaxPooling2D for decreasing the image dimensions
Flatten for converting the result into a flattened array
A Dense layer with softmax activation for the output

We can of course add other layers if required, but this is the standard format used when working with images. The summary of this model looks like this. Then, we need to compile our model with a loss function, metrics, and an optimizer. We're using adam as the optimizer and categorical_crossentropy as the loss function. Finally, we fit the model with the training images and labels. This gives an accuracy of 99.11% at the end of 20 epochs, and a test accuracy of 96.11%, which is really good. Let's plot the graphs of accuracy and loss over time.
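The article's original notebook isn't reproduced above, so here is a minimal Keras sketch of the layer stack it describes; the filter counts and dense-layer width are illustrative assumptions, not the author's exact values:

from tensorflow.keras import layers, models

# Standard CNN stack: convolutions + pooling, then flatten into dense layers.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(50, 50, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(2, activation='softmax'),  # two classes: parasitized, uninfected
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# x_train/x_test would be the 50x50 RGB arrays described above, divided by 255;
# y_train/y_test would be one-hot labels.
# model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))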
https://medium.com/aubergine-solutions/understanding-convolutional-neural-network-with-malaria-cell-image-detection-80bd2328ecd3
['Vivek Padia']
2020-10-13 04:48:54.937000+00:00
['Deep Learning', 'Image Classification', 'Python', 'Convolutional Network']
Apache Spark 3.0 Sneak Peek
Apache Spark has remained strong over the years and is now coming back with one of its major releases, continuing its goal of Unified Analytics: blending the batch and streaming worlds into one. Let's look at some of its features:

Improved Optimizer and Catalog
Delta Lake (ACID transactions) + Linux Foundation
Koalas: bringing Spark scale to Pandas
Python upgrade
Deep learning
Kubernetes
Scala version upgrade
Graph API — Graph and Cypher script
GPU support, along with Project Hydrogen
Java upgrade
Yarn upgrade
Binary files

Improved Optimizer and Catalog

i) Pluggable Data Catalog (DataSourceV2): pluggable catalog integration, improved pushdown, and unified APIs for streaming and batch, e.g.:

df.writeTo("catalog.db.table").overwrite($"year" === "2019")

ii) Adaptive Query Execution: makes better optimization decisions during query execution. For example, it inspects the sizes of the tables involved and, if one of the tables is small, automatically changes a Sort Merge Join into a Broadcast Join, and so on.

Dynamic Partition Pruning speeds up expensive joins: based on a filter on the dimension table (the small table), the fact table (the large table) is also filtered, making the join easier and more optimal.

Delta Lake

Delta Lake has been open-sourced for quite some time and has gained popularity, given its ease of implementation and upgrade path for any existing Spark application. I believe this is a next generation of Data Lake, which helps overcome the Data Swamp as well as the limitations of the Lambda and Kappa architectures. Now, with the Linux Foundation backing it, this program will step up a notch. Here are some of the features which help us move one step closer towards Unified Analytics:

ACID transactions
Schema enforcement
Scalable metadata handling
Time travel

Note: More details related to Delta Lake will be added once I resume my upcoming daily posts soon (follow me or the hashtag #jayReddy meanwhile).

Koalas: bringing Spark scale to Pandas

Koalas was released recently and is a big add-on for Python developers, both data engineers and data scientists, because of the similarity between its DataFrames and Pandas. They can now scale up from a single-node environment to a distributed environment without having to learn Spark DataFrames separately. It is integrated into the Python data science ecosystem (e.g. numpy, matplotlib) and currently covers roughly:

60% of the DataFrame / Series API
60% of the DataFrameGroupBy API
15% of the Index / MultiIndex API
80% of the plot functions
90% of multi-index columns

Python upgrade

Support is expected to move completely from Python 2 to Python 3.

Deep learning

You can request GPUs in RDD operations, i.e. you can specify how many GPUs to use per task in an RDD operation, e.g. for DL training and inference. YARN + Docker support lets you launch your Spark application with GPU resources, so you can easily define the DL environment in your Dockerfile.

Kubernetes

Hosting clusters via Kubernetes, whether on-premises or in the cloud, is the next big thing. The ease of deployment and management and the spin-up time far exceed those of other container orchestrators such as Mesos and Docker Swarm. New capabilities include:

spark-submit with mutating webhook confs to modify pods at runtime
Auto-discovery of GPU resources
GPU isolation at the executor pod level
spark-submit with a pod template
Specifying the number of GPUs to use for a task (RDD stage, Pandas UDF)

Kubernetes orchestrates containers and supports several container runtimes, including Docker. Spark (version 2.3+) ships with a Dockerfile that can be used for this purpose and customized to specific application needs.

Scala version upgrade

Scala 2.12.

Graph API — Graph and Cypher script

The Spark Graph API has a new add-on: a graph module with Property Graph and Cypher script, covering Cypher query execution, query result handling, and Property Graph storing/loading. The idea behind having a separate module for the API is to allow multiple implementations of a Cypher query engine. Graph queries will have their own Catalyst and will follow a similar principle to SparkSQL.

GPU support, along with Project Hydrogen

NVIDIA has the best GPUs and has by far surpassed the other vendors; Spark 3.0 works best with them. (The NVIDIA RTX 2080 is something to watch out for.)

Listing GPU resources
Auto-discovery of GPUs
GPU allocation to a job, and fall-back
GPUs for Pandas UDFs
GPU utilisation and monitoring
Support for heterogeneous GPUs (AMD, Intel, Nvidia)

Java upgrade

With every new JDK release from the Java community, we can see it moving one step closer towards functional programming. The release of Java 8 was the beginning of this, starting with lambda expressions. Here's an example of a variable declaration.

Prior to Java 10:

String text = "Hello Java 9";

From Java 10 and higher:

var text = "Hello Java 10 or Java 11";

Yarn upgrade

GPU scheduling support
Auto-discovery of GPUs
GPU isolation at the process level

Here's the configuration setup to support GPUs from Spark or YARN version 3 onwards.

In resource-types.xml:

<configuration>
  <property>
    <name>yarn.resource-types</name>
    <value>yarn.io/gpu</value>
  </property>
</configuration>

In yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource-plugins</name>
  <value>yarn.io/gpu</value>
</property>

Binary files

Another file format has been added to support unstructured data; you can use it to load images, videos and so on. The limitation is that it cannot perform a write operation.

val df = spark.read.format(BINARY_FILE).load("Path")

Now that you have a glimpse of the next major Spark release, you can check out the Spark 3.0 preview version. If you liked this article, then you can check out my article on

Note: Delta Lake and Koalas can either be part of Spark 3.0 or remain as separate entities as part of Databricks.
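As a hedged illustration of how a couple of these features surface in user code, here is a minimal PySpark sketch; the two configuration keys and the binaryFile source ship with Spark 3.0, while the app name and path are placeholder assumptions:

from pyspark.sql import SparkSession

# Turn on two of the optimizer features described above (Spark 3.0+).
spark = (SparkSession.builder
         .appName("spark3-preview")  # hypothetical app name
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")
         .getOrCreate())

# Load unstructured files through the new (read-only) binary file source.
df = spark.read.format("binaryFile").load("/data/images")  # hypothetical path
df.select("path", "length").show()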
https://towardsdatascience.com/apache-spark-3-0-sneek-peak-284da5ad4166
['Jayvardhan Reddy']
2019-11-25 04:03:05.397000+00:00
['Big Data', 'Artificial Intelligence', 'Data Engineering', 'Data Science', 'Apache Spark']
Worry and Anxiety Are Only Displaced with Trust
The calligraphy

I read it every time I sat in what I call my worry chair. A former girlfriend of mine was an artist who gave us a scripture written in calligraphy. She hand-painted it and gave it to us as a wedding gift. The calligraphy was hung on a narrow portion of the wall of our first home next to the front door. It hung directly across from my worry chair, but I had to look up a bit to read it.

Not long after we got married, the construction job I had ended, and I remained unemployed for several months other than odd jobs and some house painting. We were young and expecting our firstborn son. So I worried. A lot. I worried about money, the upcoming birth of our son, the responsibility of parenting, and how I would provide for my young family. Sometimes I spent hours in my worry chair. When my worrying session was finished, I got up emotionally drained and worse off than before I sat down. Even as I wasted time on all my worrying, I would read this scripture over and over again. I didn't just memorize it; its truth embedded itself in my heart and soul. What was embedded in my heart took a while to reach my mind. I'll try to explain why later.

Permission given by artist – Rosemary Buczek – The Gilded Quill

A truth to live by

This scripture became a lifelong source of encouragement for me. More than that, it became the bedrock truth of our life together as a couple, as parents, and during more than four decades of ministry together.

Trust in the Lord with all thine heart; and lean not unto thine own understanding. In all thy ways acknowledge him, and he shall direct thy paths. (Proverbs 3:5–6 KJV)

These two verses in Proverbs express the essence of a life lived by faith—trust. Let me share with you how this truth became embedded in my heart and life. The Book of Proverbs contains many of these couplets of godly wisdom and practical guidance to live by. Let's look at these verses line by line from the King James Version as I learned it.

“Trust in the Lord with all thine heart”

At first glance, this seems pretty basic and simple. But… how does a person learn to trust the Lord with all their heart? Think about it. Is there anything or anyone you really give your whole heart to? If you're like most of us, probably not. As many have said, trust is easily broken but not so easily gained or restored. Trusting God is a learning process for all of us. It's not that God is untrustworthy. Quite the opposite! Each of us needs to learn to trust God just like a child's implicit, natural trust in their parents.

During my early-marriage unemployment, we saw God provide for us in ways we never imagined. One day we found cash in an unmarked envelope left in our mailbox on the street in front of our house. At one point, a young unmarried but pregnant friend stayed with us for a while and gave us money for groceries. This was a humbling experience for me as a guy, but also a lesson in trusting the Lord. These two examples are just the beginning of the many ways God taught us to trust Him. Often through ways that made no sense to us and seemed paradoxical to our friends. This brings us to the second part of verse 5.

“…lean not unto thine own understanding”

Faith is not provable in an empirical sense. Neither is trust. Faith is a personal trust in God. It is developed in us as we learn to trust the Lord through life experiences, as I shared above. Faith in God isn't something to be reasoned out with God, although plenty of us try to do this. God is amazingly patient.
All we need to do is observe Jesus with His disciples. Jesus did one miracle among the people found in all four of the gospels—the feeding of the 5,000 (Matt 14:13–21; Mark 6:34–44; Luke 9:11–17; John 6:4–13). It amazed the disciples more than all those fed with the five loaves and two fish from a little boy in the crowd. When you read these gospel accounts, it becomes somewhat obvious Jesus stretched the disciples' faith way beyond what seemed reasonable to them. It never entered their mind to do what Jesus did. After several years of ministry experience in two previous ministries, the Lord made it clear we were to establish a new church in a new area. We started with no following, no funding, and no specific place to meet. Twelve years later, when our firstborn was a senior in high school and the church had grown to a few hundred people, God called us to move our family to the Philippines where we lived and served for fifteen years. As we prepared for this major move for our family, many people in the church and in our community asked us why we would leave such a “successful” ministry. It made no sense to most people, but we knew without a doubt it was what the Lord was leading us to do. Here’s the bottom line on this—we don’t need to figure it all out before we step out in faith. In one sense, this is the point of faith. Not that it’s illogical, but it requires a trust in God beyond what is obvious to others. “In all thy ways acknowledge him” This may be the simplest part of learning to trust God, but it is contrary to our nature. When it comes to faith and trusting God, we tend to be childish in the wrong way. We want our own way. We want to do things by our self. A simple way to understand how to “acknowledge Him” in all our ways is to invite God into all we do, even the simplest of things. When we bring the Lord into all we do throughout a day, we acknowledge Him as our partner or companion. When we involve the Lord in everything we do, talking with Him becomes somewhat natural. This is what Paul meant when he told the young believers in Thessalonica to “pray without ceasing” (1 Thess 5:17). He doesn’t mean to stop what we’re doing to pray before we do anything or to sit around praying all the time, but to pray or converse with God as we go about whatever we’re doing. Shortly before exhorting these believers to “pray without ceasing,” He warns them not to be idle (1 Thess 5:14), and in a second letter he warns these believers if a person is unwilling to work, they shouldn’t eat! (2 Thess 3:10) It’s not complicated or mystical. Think of it more like how a toddler or preschooler wants their parent to see what they’re doing or have done. They want our attention and involvement in what they’re doing. So, as we did as children with our parents, include the Lord in what we do, no matter how insignificant it may seem to us at the time. Learning to live by faith, to trust in God, is a lifelong process. “…he shall direct thy paths” A question I often heard as a pastor was, “how can I know the will of God for my life?” This is akin to asking, “what’s my purpose in life?” Answering either of these questions can be tricky or simple. If you want a clearly laid out plan for your life, I think you’re setting yourself up for disappointment, even if you’re a fatalist. I don’t know of anyone who wants to be lost. Wandering isn’t the same as being lost. If you’ve ever experienced being lost, really lost, and without a clue what direction to go (I have), you know how scary that can be. 
Of course, sometimes we don't know we're lost. When my oldest was a young boy (maybe kindergarten-age), we lost him while shopping. Er, I should say, I lost track of him. We were in a huge shopping mall and began to retrace where we'd been. As I raced through a department store looking for his little blonde head, I almost went right by him. I found him sitting on the floor in front of a TV screen watching a college football game. He didn't know he was lost. He knew where he was all along.

When it comes to direction and purpose in life, God knows where we are all the time. Nothing we do takes Him by surprise. I mean, God being God, He knows all things, is ever-present, and is all-powerful. This is where genuine faith comes in. For me, the essence of faith is found in the letter of Hebrews within Chapter 11, often referred to as the “faith chapter.”

No one can please God without faith. Whoever goes to God must believe that God exists and that he rewards those who seek him (Heb 11:6 GW).

It should be obvious, but I'll state it anyway. A person needs to believe in God in order to have faith in Him. But beyond the obvious part of faith, it's important to see how personal this is. It's a personal promise that God will honor our trust in Him. This is the essence of faith—it's personal.

A simple and personal life application

Personally, I think knowing the will of God for my life is pretty simple. It's personal and proportional—how well I know the will of God is based on how well I know Him. A simple way to see this is with preschool children and their parents. This is when it's most obvious, but it kind of carries into and through adulthood. Early on, even at two or three years old, a child learns which parent to go to when they want something. They learn this by trial and error—through the training school of rewards and consequences and a battle of wills. In other words, they learn who is an easy touch for one thing or another they want. If this sounds like manipulation, well, it kind of is, but it's also part of the learning process. Here's a simple example or two.

Son to mom— “Mom, can I play baseball in the house?”
Mom— “NO! You know better than that!”
Son to dad— “Dad, can I play baseball in the house?”
Dad— “I guess so, just be careful (as he changes channels with the remote).”

That may seem like a silly example, but you get the idea, and it really seems to hold true throughout life. At least it has with my four kids and their kids. And, well, with my personal relationship with the Lord. Throughout the years, my wife and I chose to follow the Lord's guidance even when it was difficult to do so or to see where it might lead, and even when others thought our faith was misplaced. So far, the Lord's never let us down, and we know He won't in the future. He has provided for us in ways we never expected or imagined, and blessed us with experiences and opportunities to serve Him that most people can only imagine. We're thankful for God's faithfulness and for the simplicity of living a life of childlike faith. Although it's had its share of difficulties, those are far outweighed by the fun (yes, fun!) and fulfillment we've experienced together. Living a life of genuine faith is simple yet challenging—challenging to our pride and our desire to control our own destiny. Here's how Eugene H. Peterson—a pastor, professor, and author—sums it up in The Message (the Bible in contemporary language)—

Trust God from the bottom of your heart; don't try to figure out everything on your own.
Listen for God’s voice in everything you do, everywhere you go; he’s the one who will keep you on track. (Proverbs 3:5–6 MSG) Here’s the story mentioned above related to Proverbs 3:5–6— Trip is a seasoned pastor and missionary — a teacher and writer by the grace of God — committed to making what is abstract and conceptual, simple and clear, and to challenge people to think and process the truth of God. You can read more about him and see his other writings at www.word-strong.com
https://medium.com/koinonia/worry-and-anxiety-are-only-displaced-with-trust-e88114edd5f3
['Trip Kimball']
2020-12-07 19:18:59.760000+00:00
['Scripture', 'Worry', 'Christianity', 'God', 'Trust']
Should you buy the 2020 iPhone SE?
Apple finally released the long-awaited iPhone SE refresh. This new iPhone is not entirely new: it mixes some old components with newer ones. It features the iPhone 8 design, with only a few subtle changes on the outside. It now comes with a black front on every color model, unlike earlier iPhones, which had a white front on the gold and silver color models.

iPhone SE (right) and iPhone 8 (left) side by side (image credit: Apple)

Both phones feature the same 13 hours of battery life and 1 meter of water resistance, and both have the same 4.7-inch display, carried over from the iPhone 8 and larger than the 1st-gen iPhone SE's. Today that display is considered small, and if this were an Android it would be classified as “compact.” The iPhone SE rocks a single wide camera with portrait mode. Face ID is something you will not find on the iPhone SE, since it has the older design without a TrueDepth camera. You get an A13 chip with up to 256 GB of storage and a third-generation Neural Engine.

The iPhone SE does look dated with a forehead and chin. (Image credit: cnet.com)

You don't get EarPods with the iPhone SE anymore. However, with all of these trade-offs, the phone only costs $400. Overall, the iPhone SE is a solid option for those looking to purchase a phone for an older parent, or for someone who just doesn't care about modern technology. Honestly, I give this phone an 8/10 rating. My reason is that, for an extra 100 dollars, you can get an iPhone XR. The iPhone XR comes with a modern design, better battery life, and Face ID, and is just one step ahead in the long run.
https://medium.com/@vilikhorak/should-you-buy-the-2020-iphone-se-df6f273607a7
['William Khou']
2020-11-18 00:58:58.686000+00:00
['Tech', 'Consumer Electronics', 'Apple', 'Iphone Se']
Best way to validate your Database model in Python
In this example I will show how to validate faster and more cleanly with Orator, Bottle, and Python. You could use Flask or Django, but in the example I'm using BottlePy.

When I started using Python for the backend of my projects, I ran into a problem that the Phoenix framework for Elixir and Mongoose for NodeJS had solved a long time ago. It's a lot better to validate the request data for a new database insert from the model instead of from the controller, so I built a simple library that, along with Orator, gives you an easy way to maintain your validations with a clean syntax.

import bcrypt

from orator import Model, SoftDeletes
from orator_validator import Validator


class User(Model, SoftDeletes, Validator):
    __table__ = 'users'
    __connection__ = 'local'
    __fillable__ = ['name', 'last_name', 'email', 'password']
    __guarded__ = ['id', 'password']
    __hidden__ = ['password', 'created_at', 'updated_at']

    def token_keys(self):
        return ['id', 'name', 'email']


class ValidateClass(object):

    def saving(self, user):
        user.validate('name', require=True)
        user.validate('last_name', require=True)
        user.validate('email', require=True, regex="your_email_regex")
        # Minimum six characters
        user.validate('password', require=True, regex="^.{6,}$")
        user.errors()
        user.password = bcrypt.hashpw(
            user.password.encode('utf-8'), bcrypt.gensalt()
        ).decode('utf-8')


User.observe(ValidateClass())

In this simple example I validate that the user sends name and last_name; if they don't, an error is raised letting you know the field is required. You can also use a regex type of validation; because we can't all agree on the best regex for validating emails, just put in the one you prefer.

This leaves you with a fast-to-program environment for the controllers. I just do this to handle the errors and create a new user:

import json

from bottle import request, abort


class ExampleController(object):

    def create(self, request_json=None):
        '''
        Function dedicated to creating new restaurant admin users
        return: user
        rtype: json string
        '''
        if not request_json:
            request_json = request.json
        try:
            return User.create(request_json).to_json()
        except Exception as e:
            self.handle_exceptions(e)

    def handle_exceptions(self, error):
        '''
        Function dedicated to handling errors with queries
        param: error: type Exception
        return: abort_error
        '''
        if str(type(error)) == "<class 'orator_validator.ValidatorError'>":
            self.abort_error(error.status_code, json.loads(error.body))
        else:
            self.abort_error(500, str(error))

    def abort_error(self, status_code, msg=None, dict_error=None):
        '''
        This is the abort_error function used to finish the request
        param: int status_code: the http error code
        param: str msg: the message
        param: dict dict_error: if you return multiple errors
        return: abort, which finishes the request
        '''
        if not dict_error:
            dict_error = dict()
        dict_error["code"] = status_code
        dict_error["msg"] = msg
        abort(status_code, str(json.dumps(dict_error)))

The problem I encountered after using this new library was that updates were taking more time than inserts, so I updated the library to validate on updates too.
from orator import Model, SoftDeletes
from orator_validator import Validator


class ItemData(Model, SoftDeletes, Validator):
    __table__ = 'item_data'
    __connection__ = 'local'
    __fillable__ = [
        'humidity', 'quality', 'weight', 'stacks', 'sample', 'item_id',
        'entry_date', 'lot', 'amount_paid', 'exchange_rate', 'liquidated'
    ]


class ValidateClass(object):

    def saving(self, item_data):
        item_data.validate('item_id', require=True)
        item_data.errors()

    def updating(self, item_data):
        item_data.validate_update('id', guarded=True)
        item_data.validate_update('item_id', guarded=True)
        item_data.validate_update('entry_date', guarded=True)
        item_data.validate_update(
            'amount_paid',
            function_callback=self._set_liquidated_bool,
            item_data=item_data
        )
        item_data.errors()

    def _set_liquidated_bool(self, item_data):
        '''
        Here you can add code to omit the need for cron jobs or triggers
        param: item_data
        ptype: object
        '''
        pass


ItemData.observe(ValidateClass())

The validate_update function has the same options as validate for creates, but the main difference between them is the guarded parameter: when you activate that flag and the user tries to update that part of the model, an error is raised because it's forbidden. The function_callback works like a trigger: if the user sends the value, it triggers the callback. The good part about this is that you have all the power of Python, and in most cases you will not be required to use cron jobs if the API is well thought out. The update controller looks like this:

    def update_one(self, item_data_id):
        '''
        This function is dedicated to updating the item_data to match some other data
        param: item_id
        ptype: integer
        return: updated_item_data
        rtype: json dumps
        '''
        try:
            item_data = ItemData.where('id', item_data_id).first()
            item_data.update(request.json)
            return item_data.to_json(default=self.datetime_json)
        except Exception as e:
            self.handle_exceptions(e)

To install the library, just run:

$ pip install orator-validator

The GitHub repo, if you want to report an issue or make a PR, is.
https://medium.com/@alfonsocvu/best-way-to-validate-your-database-model-in-python-2f60995dc153
[]
2020-11-20 17:46:19.476000+00:00
['Python', 'Validation', 'Bottle', 'Flask', 'Web Development']
The Human Mind and Usability: Cognitive Biases
What is a cognitive bias?

A cognitive bias is an error that occurs when humans are processing and interpreting information in the world around them, and it affects the decisions and judgments that they make. Our brains are capable of processing massive amounts of information, but they also have their limitations. Cognitive biases are often the result of simplifying information to help our brains avoid cognitive overload. They help us make sense of the world around us and result in faster decision-making. Some of them are related to memory: the way you remember an event might differ from how it happened in reality, and this can result in biased decision-making or problem-solving. They can also relate to attention: since our attention span is very limited, people have to be selective about what they pay attention to in their surroundings.

The concept of cognitive biases was first introduced by two researchers, Amos Tversky and Daniel Kahneman, in 1972. Since then, academics have described many cognitive biases that influence our daily lives. In this article we want to show you the most common biases that you need to be aware of when doing user research.

Cognitive biases you need to be aware of as a UX Researcher

Anchoring Bias

We tend to “anchor” our decisions on the first piece of information we receive. For example, if you see an item you're used to paying $10 for reduced to $8 in a Black Friday deal, the reduced price will feel like a bargain. However, it may be that the original price was $8 anyway; you just need an anchor to compare it to, and shopping sites know this all too well and are happy to provide you with one. The problem is that even when you know this, you just can't ignore it. How can we avoid the Anchoring Bias?

Framing Effect Bias

The manner in which choices are presented to us also affects how we view them. A good example of this is a study in which participants watched a film of a traffic accident and then answered questions about the event, including the question “About how fast were the cars going when they contacted each other?” Other participants received the same information, except that the verb “contacted” was replaced by either hit, bumped, collided, or smashed. Even though all of the participants saw the same film, the wording of the questions affected their answers: the speed estimates (in miles per hour) were 31, 34, 38, 39, and 41, respectively. How can we counteract the Framing Effect Bias?

False Consensus Bias

The fact that we form opinions in favor of our own personal beliefs is an example of the False Consensus Bias. As a UX Researcher, I have often fallen into the trap of the False Consensus Bias when writing, e.g., survey questions, unconsciously phrasing questions with the assumption that our users would appreciate the same UX features that I appreciate. Even though the core goal in UX design is to set aside your personal beliefs in favor of the wants and needs of your audience, we are only human and like to see the product through our own lens, making it difficult to imagine that others would see it differently. How can we avoid the False Consensus Bias?

Friendliness Bias

While doing research I often see this type of bias emerging, especially with people who like to agree with and support others in general. It can happen for many reasons, including seeing the researcher as a professional whose opinion must therefore be valued. People also try to answer your questions with the least amount of effort; they will avoid wasting time or energy building up any resistance to the task at hand. The Friendliness Bias can immensely undermine your hard work of gathering data, as you will get end results that are biased and therefore useless. How can we avoid the Friendliness Bias?

Our conclusion is that…

…user feedback is always fundamental for building any digital product! But you need to be aware of the cognitive biases our brains create to absorb the information in our surroundings, in order to get “clean” and unbiased data. By simply understanding what each bias means and by breaking down the ways it appears during the user feedback gathering process, you can put measures in place to overcome misleading preconceptions and gather the most unprejudiced feedback possible. Are you interested in more psychology topics related to UX Design? Then stay tuned for further blog entries to come in our “Human Mind and Usability” section!

Sources which have been used for this blog entry and where you can read more about the topic:
https://medium.com/@smartdesigndigital/the-human-mind-and-usability-cognitive-biases-12db6e07d19a
['Smart Design Digital']
2020-12-09 10:50:36.247000+00:00
['UX Design', 'Bias', 'Cognitive Bias']
Beyond Excel: how data is leaving spreadsheets behind
Since the 80s, spreadsheet software like Excel has been used to crunch, analyse and present data. But now the sheer volume of data that most organisations handle means spreadsheets are becoming defunct. With 21% of businesses moving towards other software solutions in the US alone, organisations are slowly waking up to the need for bigger and better data handling. But what does this mean for the technology we will interface with and the skill sets we will need moving forward?

Problems with Excel

Excel has been a solid workhorse for several decades, but a number of issues make it unsuitable for the kind of data loads modern businesses deal with. With manual inputting and a lack of consolidation across different users, errors are almost inevitable and often costly. A relatively minor error cost the Canadian power company TransAlta over $24 million, and J.P. Morgan lost a massive $6 billion due to operational errors with Excel. With four out of five CFOs citing problems with their spreadsheets, clearly something needs to change.

Spreadsheets are often not integrated with other systems, such as accounting and enterprise resource management systems. Collaboration between users is often overly complicated, with manual updates needing to be communicated to all other users. And because spreadsheets weren't designed to handle the huge amounts of data required by modern businesses, they often have to be segmented into more manageable data sets, which then become incredibly difficult to consolidate, making it almost impossible to see the big picture. Poor data visualisation also means that data is often presented in an unclear or even misleading way, leading to key insights being missed. All of which can make spreadsheet data unpleasant, boring and even intimidating, especially for non-technical staff.

The way forward

Many businesses are now turning to cloud-based solutions where large amounts of data can be handled in an integrated way, connecting various systems and maintaining and presenting a single source of truth. Collaboration also becomes easier: gone are the days of the torturous process of consolidating all the various changes and inputs made by different users. With cloud-based software, multiple users can work on the same task with real-time updates and instant communication, meaning that all changes are tracked and errors can be quickly pinpointed.

AI is the other big step change. Big data is about integrating information from all systems in an up-to-date, relevant and clean manner so that it can be analysed effectively by machine learning algorithms to provide novel insights. It automates the often tediously repetitive manual inputting tasks and eliminates the possible errors from the procedure. It also provides easy-to-understand, intuitive visualisation tools which enable people to quickly grasp the most relevant and important points, leading to greater insight and access for people from across the workplace.

New skills and opportunities

Adoption of this new technology ultimately means more freedom for the humans in the chain. Whereas before they had to be number crunchers who spent large portions of their working time manually inputting and updating data, they are now more able to engage the creative side of their minds, free of mindless tasks, free of the fear of error and free of the headache of trying to analyse opaque data.
Our invaluable human brains can now focus on what they are best at — finding creative insights from the data presented to unlock and power new and innovative business solutions. At Sciant, we connect your systems to unlock the potential of your data, by building big data platforms with full visualisation so data can be exchanged in real-time allowing each and every department to access only the information they need to drive business performance.
https://medium.com/sciant/beyond-excel-how-data-is-leaving-spreadsheets-behind-b52edee73f45
[]
2020-01-07 08:59:44.353000+00:00
['Data Optimization', 'Efficency', 'Data Visualization', 'Digital Transformation', 'Big Data']
An Overview of Kubernetes and an Industry Use Case of Kubernetes
Containers have been helping teams of all sizes solve issues with consistency, scalability, and security. Using containers, such as Docker containers, allows you to separate the application from the underlying infrastructure. Gaining that separation requires some new tools in order to get the most value out of containers, and one of the most popular tools for container management and orchestration is Kubernetes.

What is Kubernetes?

♦ Kubernetes (also known as k8s or “kube”) is an open-source container orchestration tool designed to automate deploying, scaling, and operating containerized applications. Kubernetes can support data center outsourcing to public cloud service providers or can be used for web hosting at scale. Websites and mobile applications with complex custom code can be deployed using Kubernetes on commodity hardware to lower the cost of web server provisioning with public cloud hosts and to optimize software development processes. The Kubernetes project is written in the Go programming language, and you can browse its source code on GitHub.

♦ Kubernetes was originally created by Google as an open source project in 2014. Today, Kubernetes is a rapidly growing open source community, with engineers from Google, Red Hat, and many other companies actively contributing to the project. Additionally, the Cloud Native Computing Foundation, a project of the Linux Foundation, operates to provide a common home for the development of Kubernetes and other applications seeking to offer modern application infrastructure solutions.

The need for Kubernetes

♦ We love containers, as they provide a lightweight mechanism for isolating an application's environment. Containers are a streamlined way to build, test, deploy, and redeploy applications on multiple environments, from a developer's local laptop to an on-premises data center and even the cloud. However, what happens if your container dies? Or even worse, what happens if the machine running your container fails? Containers do not provide a solution for fault tolerance. Or what if you have multiple containers that need the ability to communicate: how do you enable networking between them? How does this change as you spin individual containers up and down? Container networking can easily become an entangled mess. Lastly, suppose your production environment consists of multiple machines: how do you decide which machine to use to run your container?

♦ Kubernetes is often described as a container orchestration platform. A container orchestration platform manages the entire lifecycle of individual containers, spinning up and shutting down resources as needed. If a container shuts down unexpectedly, the orchestration platform reacts by launching another container in its place. On top of this, the orchestration platform provides a mechanism for applications to communicate with each other even as the underlying individual containers are created and destroyed.

How Kubernetes works

♦ Kubernetes is an example of a well-architected distributed system. It treats all the machines in a cluster as a single pool of resources. It takes on the role of a distributed operating system by effectively managing scheduling, allocating resources, monitoring the health of the infrastructure, and even maintaining the desired state of infrastructure and workloads. Kubernetes is an operating system capable of running modern applications across multiple clusters and infrastructures, on cloud services and in private data center environments.

Kubernetes Terminology

1. Pods
A pod is the smallest execution unit in Kubernetes. A pod encapsulates one or more applications. Pods are ephemeral by nature; if a pod (or the node it executes on) fails, Kubernetes can automatically create a new replica of that pod to continue operations. Pods include one or more containers (such as Docker containers).

2. Kube proxy
The kube-proxy routes traffic coming into a node from the service. It forwards requests for work to the correct containers.

3. Kubelet
A kubelet tracks the state of a pod to ensure that all the containers are running. It provides a heartbeat message every few seconds to the control plane. If a replication controller does not receive that message, the node is marked as unhealthy.

4. Nodes
A Kubernetes node manages and runs pods; it's the machine (whether virtualized or physical) that performs the given work. Just as pods collect individual containers that operate together, a node collects entire pods that function together. When you're operating at scale, you want to be able to hand work over to a node whose pods are free to take it.

5. API Server
The API server exposes a REST interface to the Kubernetes cluster. All operations against pods, services, and so forth are executed programmatically by communicating with the endpoints it provides.

6. Scheduler
The scheduler is responsible for assigning work to the various nodes. It keeps watch over the resource capacity and ensures that a worker node's performance is within an appropriate threshold.

7. Controller manager
The controller-manager is responsible for making sure that the shared state of the cluster is operating as expected. More accurately, the controller manager oversees various controllers which respond to events (e.g., a node going down).

8. etcd
etcd is a distributed key-value store that Kubernetes uses to share information about the overall state of a cluster. Additionally, nodes can refer to the global configuration data stored there to set themselves up whenever they are regenerated.

9. Kubernetes Master
This is the main entry point for administrators and users to manage the various nodes. Operations are issued to it either through HTTP calls or by connecting to the machine and running command-line scripts.

Kubernetes is an orchestration tool for containerized applications. It is responsible for:

Deploying images and containers
Managing the scaling of containers and clusters
Resource-balancing containers and clusters
Traffic management for services

Who uses Kubernetes?

2,253 companies reportedly use Kubernetes in their tech stacks, including Google, Shopify, and Slack.

Use Case of Kubernetes

CASE STUDY: The New York Times

The New York Times is an American daily newspaper based in New York City with a worldwide influence and readership. Founded in 1851 and known as the newspaper of record, The New York Times is a digital pioneer: its first website launched in 1996, before Google even existed.

Challenge
When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. “We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center,” says Deep Kapadia, Executive Director, Engineering at The New York Times.

Solution
To get the most out of the cloud, Kapadia was tapped to lead a new Delivery Engineering Team that would “design for the abstractions that cloud providers offer us.” In mid-2016, they began looking at the Google Cloud Platform and its Kubernetes-as-a-service offering, GKE, and the team decided to adopt them.

Impact
Speed of delivery increased. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was “just a few seconds to a couple of minutes,” says Engineering Manager Brian Balser. Adds Li: “Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary.” Adopting Cloud Native Computing Foundation technologies allows for a more unified approach to deployment across the engineering staff, and portability for the company. In early 2017, the first production application — the nytimes.com mobile homepage — began running on Kubernetes, serving just 1% of the traffic. Today, almost 100% of the nytimes.com site's end-user-facing applications run on GCP, with the majority on Kubernetes.

Conclusion

In conclusion, Kubernetes is an exciting project that allows users to run scalable, highly available containerized workloads on a highly abstracted platform. Thank you!
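Since the overview stays conceptual, here is a small, hedged sketch of talking to the API server described above, using the official Python client (pip install kubernetes); the kubeconfig-based authentication is an assumption about your local setup:

from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes kubectl is already configured).
config.load_kube_config()

v1 = client.CoreV1Api()  # typed wrapper over the API server's REST interface

# List every pod the scheduler has placed, across all namespaces.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)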
https://medium.com/@paragrahate/an-overview-of-kunernetes-and-industry-use-case-of-kubernetes-b3c8fbbaa4e5
['Parag Rahate']
2020-12-26 07:16:13.111000+00:00
['Case Study', 'Kubernetes', 'Kubernetes Cluster', 'Kubernetes Engine', 'New York Times']
Here’s the Most powerful Autoresponder you can own for life (no monthly fees ever! OPEN UP)
Grab It!!! → Life-Time Email Autoresponder Account — No Monthly Charges! (MailPanda)

What Exactly Is MailPanda?

MailPanda is premium email marketing software which allows you to send unlimited emails directly to unlimited subscribers. MailPanda also has the easiest email editor for converting templates. To start getting more opens, sales, clicks and conversions, all it takes is three easy steps…

· Upload your contacts — just upload your existing lists, or create the opt-in templates, in no time. You don't have to fear losing or rejecting a single lead, and you are immediately ready for an extremely profitable campaign.
· Send unlimited emails — select one of the high-converting email campaigns, customize it to your business or campaign, then schedule it or send it straight away. Just relax while MailPanda does all the grunt work for you, and you are all set to get the best out of your campaigns.
· Schedule or send right away — select the list, press the send button to send to unlimited email addresses, and sit back. Let MailPanda accomplish all the grunt work while you enjoy more profits.

What Can MailPanda Do For You?

· Send unlimited emails to unlimited subscribers
· Upload and email unlimited leads
· The most advanced email marketing platform ever
· Skyrocket your open and click rates
· Generate more leads from any blog, ecommerce or WordPress site
· 100+ high-converting templates for webforms, emails and more
· Spam score checker to ensure maximum deliverability
· GDPR and CAN-SPAM compliant
· Complete step-by-step video training and tutorials included
· With commercial license

WHO IS BEHIND THIS AMAZING PRODUCT?

Let me introduce the prominent figure behind this outstanding software: Daniel Adetunji. He has been in this online marketing business for the last 5 years, and after serving 50,000+ customers and generating over 2 million in revenue, one thing was super clear. Throughout the journey, he has gained a great deal of experience and perfected his skills in designing products that can help users maximize their potential for making money online. Some of his successfully launched products that you may have heard of are NoClick Profits, Instant Video Sales Letters, InstaFunnel Formula, Flexsocial, SociOffer, SociClicks and SociLeadMessenger, and I advise you to keep an eye out for more wonderful products to come in the future.

=>> Get EARLY ACCESS to MailPanda Software Here

How Does MailPanda Differ From Other Tools?

No more guessing games, and no more losing your business to cash-sucking email autoresponders. Step up: just upload, send your email, and let the software take care of the rest.

· Forget paying $100/month for an autoresponder
· Forget low inboxing and open rates
· Forget the fear of your autoresponder account being suspended
· Forget the fear of your imported list being scrutinized and rejected
· Unlimited emails, unlimited leads, unlimited campaigns
· High inboxing by reverse-engineering the latest email algorithms
· List management — import, custom fields
· Single opt-in/double opt-in feature
· Personalize your emails for high engagement
· No blocking, no leads-upload restriction
· 100% secure system and backup of data
· Spam score checker
· Commercial license to start your own email marketing agency

MailPanda Review — Reasons Why MailPanda Is Good Software

♦ POWER OF SUBSCRIBERS
♦ BROADCASTS OR AUTORESPONDERS
♦ SKYROCKET YOUR OPEN AND CLICK RATES
♦ TAKE AUTOMATION TO THE NEXT LEVEL
♦ CRAFT CONTENT YOUR WAY
♦ PERSONALIZE CONTENT TO FOSTER ENGAGEMENT
♦ DFY EMAIL TEMPLATES
♦ MEDIA INTEGRATION
♦ 100% SECURE SYSTEM AND BACKUP OF DATA
♦ DEEP ANALYTICS

MailPanda Review — Pros and Cons

Pros
· Easily save, draft and duplicate campaigns to save time and boost productivity
· Complete step-by-step video tutorials
· CAN-SPAM and GDPR compliant
· Complete and dedicated support
· No coding, design or technical skills required
· Regular updates
· Newbie-friendly and fully cloud-based software
· With commercial license

Cons
· No cons found so far

MailPanda Review — My Final Thoughts

It's every marketer's headache: you spend time crafting the perfect message for your subscribers, with a clever subject and a creative body. You hit send and, voila! Or so you expect. Instead, your open rate tanks. A small segment of your audience reads your message. You waste time and don't get the ROI you expected. Low email open rates are a pain. What's going on? Your emails are landing in spam, open rates are at an all-time low, and your autoresponder isn't helpful at all. Gmail, Outlook and Yahoo are filtering your messages, and they never get read. The list you import is constantly being rejected, or 20% of your imports get scrutinized. And you still pay hundreds of dollars for non-effective, non-performing email autoresponders, until one bad day your account gets suspended, just because your autoresponder doesn't find you “legit”. Check out the crazy open and click rates I got for a simple email I sent using MailPanda. I love this tool. You will love it too.

=>> Get EARLY ACCESS to MailPanda Software Here
https://medium.com/@kinginsurance99/heres-the-most-powerful-autoresponder-you-can-own-for-life-no-monthly-fees-ever-open-up-8f89baa94ddf
[]
2020-12-21 10:39:40.699000+00:00
['Autoresponder', 'Email Marketing', 'Email', 'Mailpanda Review', 'Mailpanda']
Album Review: Telas // Nicolás Jaar
It has been a busy year for Nicolás Jaar. The producer returned to his roots with a mid-tempo industrial noise album in February, before turning that on its head with Cenizas in March. This had always been a part of the plan, it seems. The gloomy latter piece was rendered a detox of negativity — one last hack to soak it all in and purge any residual bad vibes. And rightly so. Where else could the artist have gone, having already dug around the limits of texture, space and sound? Telas (which means fabrics) took a little more time to be realised, to process that long, trawling dirge of Cenizas (ashes) and come up with a response. The new album is built as a four-part concept, roughly the same length as the last but with a greater focus on order and construction: 13 songs become four, visual minimalism is swapped out for detail, and so on… Perhaps here we find the significance of ‘fabrics’ in the constant, weaving state of creation. The artist describes the record as “ a panspermic terrain where particles travel through space […] where no matter has a solid or immovable origin.” And hence the album lies in direct antithesis to the last, taking a deliberate step away from the drab landscape of Cenizas and leaning into its optimistic new horizons. In softer moments, choral undulations reminiscent of the last record guide the process, but the aim seems to be to do something different with them. Indeed, the album finds itself a new narrative set on building a new world; the purpose of self-annihilation last time around was in the name of achieving something greater, it seems. This is the end goal. Slowly, carefully, Nicolás Jaar puts the pieces back together. Telas, then, fills the role of sequel. ‘Faith Made of Silk’ wrapped up the angst of the last album, setting up Telas to rise from the ashes. The ambient soundscape feels post-apocalyptic but without a sense of catastrophe. It lacks the drama of Godspeed You! Black Emperor, for example. ‘The end’, instead, seems to symbolise a return to nature, lit up by storms and occasional violent noises, but lacking the cold organisation of human influence. In sum, Telas looks to rebuild from the known, through chaos and back towards order, stripping back overgrown layers of complexity and working on a true synthesis of old and new. The shaky foundations of new life come together little by little, laying on a careful blend of natural and digital noises. Contributions from Milena Punzi (cello) and Susanna Gonzo (vocals) give the sounds poise. In ‘Telahora’ we are introduced to a new world, still humming from the last. Sounds are simple and discrete, some ordered and others faint and uncertain. ‘Telencima’ manages to move on entirely from familiar patterns, interspersing background chatter with curious artificial voices. Instrument makers Anna Ippolito and Marzio Zorio, as well as Heba Kadry (mastering), are perhaps behind the richness of personality given to these digital sounds. Indeed, there are nods to the artificial landscapes of previous albums, but Telas feels worlds away from the physics of repetition, distortion and focus. By the midpoint, the album picks up a little, draws together some of its resources and starts forming patterns. ‘Telahumo’ — humo meaning smoke — plays off textures well, relying less on percussion and putting together a series of constants that sound more like music. Civilisation! Features enter, play their part and disappear, all sewn together into the new order. 
The author says: Telas is the “ancestral pollination between symbiotic lovers”, tied up in metaphors of spider webs, silk, mist and ritual: “Cenizas was the ashes of a destruction; Telas is the fabrics of a construction.”

The pompous conclusion is that Telas is an album that will either be understood or not. More mildly, it makes itself hard to compare. To be clear, what Telas sets out to achieve it does very well. It passes on its own terms. But it is an experience that requires a certain commitment. It is difficult to review something so far removed from everything else. Cenizas was written in “self-imposed quarantine”, locked away from drink and drugs and set to a mood of necessity. Telas has had a little more room to think, time to enunciate its exact intentions and to create something precise, albeit at the expense of accessibility. An offhand three out of five seems to miss the point. Urgh.

The album ends on an uncertain note, slowing the record down to a steady drum beat and reinstating the human inflection. And it's strange. In some ways, the moody plod of conventional instruments shares more similarity with the start of the last album (see: ‘Menysid’). I wonder how intentional this was. Is this the fate of all attempts at self-recreation? Is this the natural state of being: an inverted tendency away from entropy, towards the comfortable and familiar? A strange and inconclusive record, Telas invites more questions than it provides answers.

Words by James Reynolds

Support The Indiependent

We're trying to raise £200 a month to help cover our operational costs. This includes our ‘Writer of the Month’ awards, where we recognise the amazing work produced by our contributor team. If you've enjoyed reading our site, we'd really appreciate it if you could donate to The Indiependent. Whether you can give £1 or £10, you'd be making a huge difference to our small team.
https://medium.com/the-indiependent/album-review-telas-nicol%C3%A1s-jaar-24919facdf47
['James Reynolds']
2020-08-03 08:21:24.929000+00:00
['Nicolas Jaar', 'Music', 'Album Review', 'Chile']
AYS Daily Digest 30/11/20 What was old is new again with new map for Moria 2.0
The Arguineguín camp is emptied // Germany’s debate on whether to deport to Syria // Home Office still saying migrants are traffickers, even after Judge says no // and more… Photo by Stonsi Gr. FEATURE: New map of camp/RIC for Lesvos The Ministry of Immigration and Asylum confirmed on Sunday a map of the new Closed Controlled Structure of the Islands in Vastria, north-eastern Lesvos, within the boundaries of the Municipality of Mytilene. Construction is supposed to start around Easter and end in the fall of 2021. The camp will be accessible from a road that apparently goes to a landfill… but the camp will have “no contact” with the landfill. As Stonsi Gr reports: “It should be noted here that the position was proposed by the Mayor of Mytilene, Stratis Kytelis, and the Lesvos MP of New Democracy and 2nd Deputy Speaker of Parliament, Charalambos Athanassiou. The site proposed for the new structure falls within the administrative boundaries of the Municipality of Mytilene, specifically the Community of Nea Kydonia. During their visit to the island, the Minister of Immigration and Asylum, Notis Mitarakis, and the head of the European Action Group for Lesvos, Deputy Director-General of the European Commission’s DG HOME, Beate Gminder, visited the site, accompanied by a group of special advisers.” Only two and a half months after Moria burned down, this is what is coming. AYS will continue to report on these developments.
https://medium.com/are-you-syrious/ays-daily-digest-30-11-20-what-was-old-is-new-again-with-new-map-for-moria-2-0-52f4cc9ce14e
['Are You Syrious']
2020-12-01 18:08:38.232000+00:00
['Refugees', 'Digest', 'Spain', 'Greece', 'Germany']
How Google Featured Snippets help improve your search experience
Most days you find what you are searching for on Google, though it might take a little digging. Some days you don’t, no matter how long you look. It could be the review of a book you’re planning to buy, a recipe for a loved one’s favourite dish or even a product for your business or personal use. Isn’t it always better and much easier when the bit of information you are looking for is presented to you at the very top, even before Google’s “natural search results”? Google has a very useful feature called the Featured Snippet, and it helps users receive the information they’re looking for quickly and with the least effort. It is useful not just for the user who is looking for the information but also for the ones providing it. What is a Featured Snippet? A Featured Snippet, also known as an Answer Box, is a quick, brief and precise answer to a search query, displayed at the top of the Google search engine results page (SERP). This information is gathered from the page with the highest relevance to the search query and is displayed along with the page’s title and URL. According to this article, Google only uses pages that rank in the Top 10 to populate Featured Snippets. The most commonly used Featured Snippet formats are Paragraph, List, Table, etc. How is it useful? Featured Snippets provide the opportunity to get more clicks directly from the search results, independent of your Google ranking. Therefore, everyone wants their content to be at the top or at least in the top 10 list of links. According to research by HubSpot, content with a featured snippet gets double the number of clicks. Additionally, it was noted that even in the #1 position of the organic search results, which sits right below the Featured Snippet, HubSpot saw a boost in click-through rate (CTR) of over 114%. Graph showing how useful Google Featured Snippets are for improving click-through rate (CTR) (https://blog.hubspot.com/marketing/how-to-featured-snippet-box) The growth of Featured Snippets over the years is remarkable. According to Search Engine Land, a 2017 study found that 30% of 1.4 million tested queries returned results with a Featured Snippet, and that featured snippets get about 8% of all clicks. So getting your page into the Featured Snippet for a search query can mean a huge boost to your business. How to optimize your page for a Featured Snippet For a definition or a simple question, the aim is to provide Google with a brief answer of 40–60 words that it can display in the Featured Snippet. Additionally, a heading or subheading posing the same or a similar question is an added advantage. For a list, maintain consistency in the formatting and avoid any typos or errors, especially in formatting. According to an article on moz.com, a neatly organized page with paragraph and heading tags, containing factual information and well-organized questions, has a higher chance of being displayed in the Featured Snippet. It is also better if one article can answer multiple similar questions. The HubSpot article elaborates on ways to win a Featured Snippet: the content to be displayed should be in a <p> tag directly beneath the header tags (h2, h3, h4, etc.) and should contain keywords and, if available, images. It is best to keep the word count within the 54–58 range. The other paragraphs can then give a more detailed explanation of the topic.
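To make the paragraph guidance above concrete, here is a minimal, hypothetical audit script (Python with BeautifulSoup; the word-count thresholds and the page structure it expects are assumptions drawn from the advice above, not an official tool):

# Hypothetical audit: flag <p> blocks that directly follow a heading
# and fall outside the 40-60 word range suggested above.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def audit_snippet_candidates(html: str) -> None:
    soup = BeautifulSoup(html, "html.parser")
    for heading in soup.find_all(["h2", "h3", "h4"]):
        para = heading.find_next_sibling("p")
        if para is None:
            continue  # no answer paragraph directly under this heading
        words = len(para.get_text().split())
        status = "ok" if 40 <= words <= 60 else "consider resizing"
        print(f"{heading.get_text(strip=True)!r}: {words} words -> {status}")

# Tiny illustrative page, not a real one:
audit_snippet_candidates("<h2>What is a Featured Snippet?</h2><p>A short answer...</p>")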
To summarize, it is always best to have well-formatted and organized pages with high-quality content and images that answer multiple questions in detail. This ensures that your page and your content have higher relevance and therefore a higher chance of being picked for the Top 10 list and the Featured Snippet. Bar graph showing the number of featured snippets by word count (https://blog.hubspot.com/marketing/how-to-featured-snippet-box) SearchAI Answers What if you could add a similar feature to your own site? SearchAI Answers, offered by SearchBlox, works in a similar way: with it, new or existing customers can quickly and easily troubleshoot their problems and find the right answers to their questions. This makes the search experience better, faster and more efficient. It can easily be integrated into your existing search setup through a search box or chat, allowing users to receive the right content or information quickly and easily. SearchAI Answers in search results SearchAI Answers ensures that the customer does not have to wait for a response or sit in a call with a customer service employee, thus reducing customer service costs. It also ensures that customers get the right answer to their question immediately, which increases first contact resolution. The customer experience with SearchAI is better and more efficient, as it responds to natural language questions wherever they’re asked: in search, via chat or by voice. This saves time and effort, as it does not require manual tagging, a domain-specific taxonomy or knowledge graphs. To learn more about SearchAI Answers and try it out for yourself, click here. To learn more about SearchBlox, click here. You can reach out to us via email or call us on +1 (866) 933–3626.
https://medium.com/searchblox/how-google-featured-snippets-help-improve-the-search-experience-8ea11766bc5c
[]
2020-12-16 16:59:03.223000+00:00
['Featured', 'Google', 'SEO', 'Search', 'Customer Service']
Book Review of “NO MATTER WHAT!: 9 STEPS TO LIVE THE LIFE YOU LOVE” by Lisa Nichols
I can’t help but think of the words from that Yolanda Adams song, “The Battle Is The Lord’s”. Part of the chorus rings out to me: Remember that God only wants to use you No matter what you happen to go through right now Remember that in the midst of it all, God only wants to use you. The lyrics to that song are powerful… and so is Lisa Nichols’s book. In this book, Lisa shares with us 9 STEPS TO LIVE THE LIFE YOU LOVE. I had to put that in caps because I think most of us can think otherwise. This life can be, at times… blah. Lisa breaks the book down in terms of the Bounce-Back Muscles we all need to develop to empower our lives. The 9 Bounce-Back Muscles: Lisa breaks down the meaning of each muscle with honesty and vulnerability drawn from her own life. The examples she shares from her life are very real. Lisa does not hold back on detail or on honesty about the situations. I was shocked and encouraged to read on at the same time. I was asking myself out loud, did she just say that, and in that way? The writing style of this book is so down to earth that you feel as if Lisa is speaking with you face to face. She doesn’t just give you a rah-rah speech in this book; she calls you to an honest assessment and to action steps for your life too! This is another book I keep for reference, going back to it again and again. If you haven’t picked up a copy of this book, YOU must have it for encouragement in fighting the “battles” of your life. You can order a copy of Lisa’s book > https://amzn.to/2KbvjJi I also strongly encourage this book as a beginning read for a teenager. There are many tips to help them transform into great adults. Thanks for reading my review, donovin Other books by Lisa Nichols: Abundance Now: Amplify Your Life & Achieve Prosperity Today Chicken Soup for the African American Woman’s Soul: Laughter, Love and Memories to Honor the Legacy of Sisterhood (Chicken Soup for the Soul) Do you like eBooks? You can browse through a variety of books in my digital stores: Reasonable Otherwise Check out other book reviews of books I have read: https://bookreviewofmindbuildingbooks.blogspot.com 👕 Into T-shirts and mugs? Browse my Designs store at: donovin_4_designs 💻 Interested in starting a home business? NEW SIDE HUSTLE
https://medium.com/@donovintheblogger/my-book-review-of-no-matter-what-9-steps-to-live-the-life-you-love-by-lisa-nichols-a0bf2bd8bde4
['Walter Ray']
2021-01-14 05:22:43.031000+00:00
['No Matter What', 'Motivation', 'Lisa Nicols', 'Book Review', 'Transformation']
An archive by Azhar — 2009 > Present
Story An archive of all the works from the beginning of my career that did not get a chance to have a comprehensive case study or, in rare cases, did not make it through (AKA, I did not have the heart to leave these out…) UX Design, UI Design, Responsive design, Branding, Graphic design, Illustration
https://medium.com/azhars-work/an-archive-by-azhar-2009-present-35aeb0e6ebbc
[]
2021-03-15 11:50:08.477000+00:00
['Freelancers', 'Freelance', 'Freelancing', 'Labs']
Cameras for Transparent Production Under Robonomics Parachain Control
In this article, I want to share the details of the visual inspection use case that our team has recently built using Robonomics Network. Why? As the automation trend in industrial manufacturing reaches maturity, the demand for automated nondestructive testing (NDT) is growing rapidly. Companies are looking for ways to reduce downtime and avoid defects in final products, such as cracks, porosity, and manufacturing disorders. In particular, the automated optical inspection market was estimated at 446.5 million USD and is expected to grow by 24% a year. Moreover, the COVID-19 pandemic has significantly complicated the inspection process, highlighting the need for automated inspections. That is why we decided to demonstrate how Robonomics Network can be used to organize an automated visual inspection service. Vadim Manaenko, one of the key developers in Robonomics, assembled the solution and tested it at his favorite coffee shop in Saint Petersburg. But rest assured, this software stack will satisfy the requirements of even the most regulated sectors, such as the aviation and automotive industries. How it works On the hardware side, this particular implementation uses a single-board computer, a thermal printer for QR codes, a camera, and a big red button so that the barista can start and stop video recording. When the system receives a signal from the button, it creates a QR code with a link to the video and begins recording. When the button is pressed a second time, the recording stops, and the video is published to IPFS. The IPFS hash is then available through the Robonomics platform and stored there securely. These devices leverage the Robonomics Network software stack to communicate and store value. The Robonomics platform gives an easy way to connect many heterogeneous devices together and record data using the latest Web3 technologies. The result is an IoT system with an unprecedented level of security, where data is not only kept secure but is also auditable via a public blockchain. Conclusion The demand for visual inspections is growing, and the security of IoT systems and data transparency are clearly what companies are looking for. This is why this use case is so interesting now. Of course, having Vadim enjoy his coffee more is a worthy goal on its own :) But this solution can be deployed in any setting where the need for secure visual inspections exists.
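As a rough illustration of that flow, here is a minimal, hypothetical Python sketch of the publish-and-label step. It is not the project’s actual code: the libraries (ipfshttpclient, qrcode), the file name, and the gateway URL are all assumptions, and the real demo prints the QR code when recording starts rather than after it stops.

# Hypothetical sketch of the "stop recording -> publish -> QR label" step.
# Assumes a local IPFS daemon plus the ipfshttpclient and qrcode packages.
import ipfshttpclient  # pip install ipfshttpclient
import qrcode          # pip install qrcode[pil]

GATEWAY = "https://ipfs.io/ipfs/"  # public gateway URL, an assumption

def publish_recording(path: str) -> str:
    """Add the finished video file to IPFS and return its content hash."""
    with ipfshttpclient.connect() as client:  # local daemon on default port
        return client.add(path)["Hash"]

def print_qr_label(ipfs_hash: str) -> None:
    """Render the video link as a QR image; the demo sent this to a thermal printer."""
    qrcode.make(GATEWAY + ipfs_hash).save("label.png")

if __name__ == "__main__":
    input("Big red button: press Enter to stop recording...")
    video_hash = publish_recording("inspection.mp4")  # file the camera produced
    print_qr_label(video_hash)
    print("Recording auditable at", GATEWAY + video_hash)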
https://blog.aira.life/cameras-for-transparent-production-under-robonomics-parachain-control-8d30f86d8dbf
['Vitaly Bulatov']
2021-04-09 09:03:53.534000+00:00
['Supply Chain', 'Industry 4 0', 'Blockchain', 'Transparency', 'IoT']
Three common seaborn difficulties
Explaining some aspects of using seaborn that most often confound newcomers This post aims to explain three of the most common difficulties encountered by users of seaborn, a Python library for data visualization. My hope is that this post can be a helpful resource for users who have read through some of the documentation — which uses toy datasets and focuses on simple tasks — but are now struggling to apply the lessons to their own work. You (might) need to reformat your data Seaborn’s plotting functions are most expressive when provided with a “tidy” long-form dataset. With data formatted this way, you can pass the full dataset and select the columns that you want to visualize by assigning the column names to different roles ( x , y , hue , etc.). But we often work with datasets that are not naturally stored in a tidy format. For example, you might keep a spreadsheet with your household budget that looks like this: A “messy” table representing a household budget. This is a perfectly fine representation of the data from a human perspective: it’s easy to read off the change in your food expenses from year to year. But it would be difficult to plot those changes, because the “year” variable isn’t explicitly represented. Rather, it’s represented by (a subset of) the column names. The same budget, but represented in a “tidy” long-form table. That table shows the same data after “melting” into long-form format. Now the three variables are represented in separate columns and can be explicitly assigned to roles in a plot. The command for this transformation is:

budget_long = budget.melt(
    id_vars="Category",
    var_name="Year",
    value_name="Expense",
)

It can be difficult to give a general recipe for converting data to long-form, because the details will depend on the original format, and this will be different for every dataset. It can be helpful to think backwards from the plot: what will you assign to x , y , or other roles? How is that information currently encoded in your DataFrame? Once you get the hang of it, preparing your data will become straightforward. And long-form data is useful beyond seaborn: you’ll also need this format to perform group-by aggregations in pandas or to specify a design matrix in statsmodels. But, if learning how to reformat your data still feels like an obstacle, I have good news: it might not be necessary! I said before that seaborn is most expressive when provided with long-form data. But (nearly) all seaborn functions can understand “wide-form” data too. Wide-form data can be a DataFrame, a 2D numpy array, or even a collection of vectors (perhaps of different lengths) held in a Python dictionary or list. To understand all of the possibilities, read this chapter of the seaborn user guide. The key thing is that the values inside the table (i.e. not the index or column names) must represent a single variable. The original budget table above won’t quite work, because it still represents one variable with one column in the table and a different variable across the other columns. But if you do budget.set_index("Category") , you’ll have a tidy “wide-form” table that you can visualize by passing to data ; both approaches are sketched in the example below. There are many options for passing wide-form data, but different functions will interpret it differently. The drawback is that each function has a fixed way of plotting wide-form data, and if you want to do something different, you’ll need to change the data, not the way you call the function.
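To make this concrete, here is a minimal sketch with an invented budget table (the numbers and the choice of barplot / lineplot are my own illustrative assumptions, not from the original post):

# Illustrative only: a tiny budget table with made-up numbers.
import pandas as pd
import seaborn as sns

budget = pd.DataFrame({
    "Category": ["Food", "Rent", "Travel"],
    "2019": [4800, 14400, 1200],
    "2020": [5200, 15000, 300],
})

# Long-form: melt, then assign each column to a role explicitly.
budget_long = budget.melt(id_vars="Category", var_name="Year", value_name="Expense")
sns.barplot(data=budget_long, x="Year", y="Expense", hue="Category")

# Wide-form: the index and column names carry the metadata, and the
# function decides how to use them (run separately from the barplot).
sns.lineplot(data=budget.set_index("Category"))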
Even so, wide-form data is still useful for a quick peek. And if you’d rather not think about pandas DataFrame structure, you can also pass vectors of data directly to x and y . You can even mix names that reference columns in data and vectors that directly represent other variables. For more complex plots, this approach may require writing a for-loop and calling the plotting function multiple times (rather than, say, using a long-form hue variable). Users of other statistical programming languages are sometimes made to feel bad for writing a for-loop, but I don’t think that’s true in Python. If this approach is easiest for you, you should use it! There are two kinds of plotting functions The second difficulty is typically encountered when you try to combine a seaborn plot with a matplotlib figure that has multiple axes. As you may know, matplotlib has two interfaces. The implicit interface — comprising pyplot functions like plt.plot and plt.bar — draws onto the “current axes” as tracked by an internal state machine. The explicit interface — comprising Axes methods like ax.plot and ax.bar — draws onto the specific Axes that the method is attached to.

plt.plot(x, y)                  # Plots on the "current" axes, creating it if needed
f, axs = plt.subplots(ncols=2)  # Creates a new figure with two axes
axs[0].plot(x, y)               # Plots on the first axes of the new figure
plt.plot(x, y)                  # Plots on the second axes of the new figure

Both approaches have their use: the implicit interface is quick and easy, while the explicit interface is (slightly) more verbose but better for making complex figures. Seaborn tries to support both styles too. Most plotting functions plot onto the “current” matplotlib axes by default and can be directed towards a specific existing Axes by setting the ax= parameter.

sns.lineplot(x=x, y=y)             # Plots on the "current" axes
f, axs = plt.subplots(ncols=2)     # Creates a new figure
sns.lineplot(x=x, y=y, ax=axs[0])  # Plots on the first new axes
sns.lineplot(x=x, y=y)             # Plots on the second new axes

Except that’s only true for most functions. Functions in a special subset, the “figure-level” functions, create a new figure every time they are invoked. These functions, such as relplot , displot , and catplot , work this way because they internally use a seaborn FacetGrid , an object that can create a figure where subsets of the data are shown on different axes. As a result, if you do something like

f, ax = plt.subplots()
sns.displot(data, x="a", ax=ax)
sns.displot(data, x="b", ax=ax)

you’ll end up with three figures: one with an empty Axes, and two with separate histograms. Which is not what you wanted! This behavior is explained in the user guide, but if you haven’t come across that chapter, it can be very confusing. It doesn’t help that the names don’t clearly distinguish the two kinds of functions — in retrospect, calling the figure-level functions something like relfig or catfig would have made more sense — although you can tell which kind they are by whether ax= appears in the list of parameters and by what kind of object they return. There are a few other complexities, which the user guide chapter covers in detail. Notably, the figure size is parameterized differently in the figure-level functions, and they return a FacetGrid object, which has a few helpful methods that matplotlib Axes functions lack. I generally recommend using the figure-level functions for most applications, but to make arbitrarily complex figures, you’ll need to switch to an axes-level function.
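A minimal sketch of both correct patterns (the DataFrame and the column names a and b are hypothetical):

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

data = pd.DataFrame({"a": [1, 2, 2, 3, 3, 3], "b": [2, 3, 3, 4, 4, 5]})

# Option 1 - axes-level: you own the figure, so ax= works as expected.
f, axs = plt.subplots(ncols=2)
sns.histplot(data, x="a", ax=axs[0])
sns.histplot(data, x="b", ax=axs[1])

# Option 2 - figure-level: reshape to long-form and facet in one call.
long_form = data.melt(var_name="variable", value_name="value")
sns.displot(long_form, x="value", col="variable")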
There’s a one-to-one correspondence between each axes-level function and the different kinds of plots that the figure-level functions can make. So by default displot has kind="hist" , corresponding to histplot , but displot(..., kind="kde") corresponds to kdeplot . Categorical plots will always be categorical Several seaborn functions specialize in creating plots where one of the axes corresponds to a categorical variable: a variable whose values do not (necessarily) bear a quantitative relationship to each other. Examples would include country of origin (which is both categorical and unordered) and age group (which is ordered, but still categorical). Such variables are often encoded with strings, and at the time these functions were created, matplotlib was not able to interpret string data. So the seaborn functions internally map from the data values to ordinal indices (0, 1, …, n), which are then passed to matplotlib. The surprise is that seaborn’s categorical functions always do this. As a consequence, numeric variables will be treated as categorical. The (sorted) unique values will be mapped to ordinal indices, and a label will be drawn for every value. Sometimes, this makes sense and is helpful. For example, in the “tips” dataset, the size variable is numeric, but it only takes a few evenly-spaced values, and the default tick labels that you get from the categorical pointplot are more informative than those from lineplot : Sometimes it makes sense to treat a numeric variable as categorical… But if you draw a line with more densely-sampled values, they will all be labeled, and the x axis will be impossible to read: …but other times, it makes a huge mess. It won’t help to do ax.set_xticks([20, 40]) , as that will label the 20th and 40th data points, not the data points with those numeric values (because, remember, all matplotlib sees here are the index values, not the original numbers). And even if you did set the labels properly, the plot probably wouldn’t be what you want, because each datapoint would be drawn at a fixed distance from its neighbors rather than at a distance proportional to their values. This issue also surprises users who want to layer categorical and non-categorical functions onto the same plot. Consider the following example, which calls stripplot and lineplot with the same arguments: The stripplot treats size as categorical, but the lineplot doesn’t, so the line is shifted to the right. Now that you know the strips are actually drawn at 0, 1, …, n — with the tick labels set to strings representing the corresponding values — you should understand why this figure looks the way it does. But it’s a common source of surprise and confusion. For now, my general advice would be to avoid mixing categorical and non-categorical plots on the same Axes. You can substitute pointplot for lineplot and stripplot for scatterplot where needed. These days, most matplotlib functions can handle string data, using the same basic approach as seaborn: strings are mapped to 0, 1, …, n indices. As a result, the “non-categorical” seaborn functions can handle categorical variables just fine, and the lines between the two kinds of functions have become blurred. So it’s also possible to force categorical treatment in non-categorical plots by converting your data to strings. The next release of seaborn will include some major enhancements to the categorical functions, which will further smooth away some of these difficulties.
Notably, it will become possible to maintain the original scale of numeric (or datetime) data on the “categorical” axis. But you’ll have to explicitly ask for that, so it will be good to keep a slightly modified version of this lesson in mind: categorical plots will always (by default) be categorical.
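Until then, a minimal sketch of that string-conversion workaround, using the tips dataset that ships with seaborn (the use of scatterplot here is an illustrative choice, not from the original post):

import seaborn as sns

tips = sns.load_dataset("tips")

# Numeric x: points are spaced proportionally to their values.
sns.scatterplot(data=tips, x="size", y="total_bill")

# String x: each unique value gets its own evenly spaced categorical slot.
tips_cat = tips.assign(size=tips["size"].astype(str))
sns.scatterplot(data=tips_cat, x="size", y="total_bill")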
https://medium.com/@michaelwaskom/three-common-seaborn-difficulties-10fdd0cc2a8b
['Michael Waskom']
2021-02-22 16:28:57.765000+00:00
['Data Visualization', 'Pandas', 'Data Science', 'Matplotlib', 'Seaborn']
Take a minute to take in some good
by Josh Metz, LCSW Did you know that we humans are designed to remember negative experiences better, faster and more strongly than positive experiences? It’s one of our hardwired survival skills, since it’s much more important to remember exactly where you saw that snake skin along the wooded path than the bright red cardinal perched in the tree. But remembering only the negative can create an imbalance in our autobiographical memory (that’s the type of memory that tells the story of where we’ve been and what we’ve experienced). Too many bad memories and not enough good memories can lead to stress, depression and even catastrophizing (“Nothing goes right for me because I can only recall the bad things”). We can find balance by taking the time to install positive experiences into our autobiographical memory. In his book, Hardwiring Happiness, Dr. Rick Hanson came up with a simple method to help us remember more of the good things that happen in our lives. It’s called H — E — A — L (or HEAL), it only takes about a minute, and this is how it works: HEAL from Hardwiring Happiness by Dr. Rick Hanson Step 1: HAVE a positive or beneficial experience, either in the moment or by recalling something good that happened to you recently. Step 2: ENRICH the experience by noting where you were, who you were with, and what was going on. Tap your senses to note what you saw, heard, felt, smelled or even tasted. Step 3: ABSORB the experience by allowing everything in Step 2 to wash over you and really sink in. Step 4: LINK this new strong memory to other memories, good or bad, and let yourself know that this is now a good memory that you can recall as easily as a bad memory. So, the next time something good is happening in your life or you’re in need of a little balance, take a minute to take in some good. Joshua Metz is a licensed clinical social worker and one of the lead architects of the app, Emotionary.
https://medium.com/@emotionary/take-a-minute-to-take-in-some-good-1fabe723b913
[]
2020-12-23 15:41:39.568000+00:00
['Mobile App Development', 'Emotional Wellbeing', 'Therapy', 'Mental Health', 'Happiness']
Using JavaScript in Flutter Web
Is there a hot JavaScript library that you want to use in Flutter Web but there is no equivalent for it in Dart? You are in luck! Dart was originally a language for internet browsers to begin with, so there is a sweet Dart package called js which you can use to interop between JavaScript and Dart. To provide an example, we want to access this TypeScript class on the Flutter side. Let’s create a Dog.ts file as shown below. We have: A constructor. Properties name and age of type string and number respectively. Getters for name and age . A bark method. A jump method that receives a function parameter that takes in height of type number . A sleep method that takes in an object .

// Dog.ts
class Dog {
    private _name: string;
    private _age: number;

    constructor(name: string, age: number) {
        this._name = name;
        this._age = age;
    }

    get name(): string {
        return this._name;
    }

    get age(): number {
        return this._age;
    }

    bark() {
        console.log(`${this._name}:${this._age}:: Woof!`)
    }

    jump(func: (height: number) => void) {
        func(20)
    }

    sleep(options: { bed: boolean, hardness: string }) {
        if (options.bed) {
            console.log(`${this._name} is sleeping on a ${options.hardness} bed.`)
        } else {
            console.log(`${this._name} is sleeping on the floor. :(`)
        }
    }
}

Afterwards, we’ll convert this TypeScript file into a JavaScript file with tsc --target ES5 Dog.ts . With this done, we’ll have a Dog.js file alongside Dog.ts . Note that --target ES5 is required, according to the answer provided in the Stackoverflow post here: “Normally JS interop would rely on function hoisting to ensure that an old-style JS class was in scope. ES6 classes however aren’t hoisted and aren’t available where Dart expects them to be.” We’ll leave this aside for now; let’s create an empty Flutter project and import the js library in pubspec.yaml like so:

dependencies:
  flutter:
    sdk: flutter
  js: ^0.6.1

Note: Use Flutter Beta to enable the development of Flutter Web. We’ll create an empty Dart file called Dog.dart . Inside this Dog.dart file, we’ll have our Dart <-> JS interop code.

@JS()
library dog;
// The above two lines are required

import 'package:js/js.dart';

@JS()
class Dog {
  external Dog(String name, int age);
  external String get name;
  external int get age;
  external void bark();
  external void jump(Function(int height) func);
  external void sleep(Options options);
}

@JS()
@anonymous
class Options {
  external bool get bed;
  external String get hardness;
  external factory Options({bool bed, String hardness});
}

The @JS() annotation comes with the js package and marks our Dart Dog class for interop with the JavaScript Dog class. The external keyword is put in front of the constructors, methods, and getters. If we want a custom name for the class on the Dart side, we can name it differently by adding a string as a parameter to the JS() annotation, as shown below, although this is discouraged according to the comments in the js package.

@JS("Dog")
class DartDog {
  ...
}

JavaScript objects as a parameter To pass a JavaScript object as a parameter, we’ll need to create a class and annotate it with @JS() and @anonymous . We will use the Options class as the example shown below.

@JS()
@anonymous
class Options {
  external bool get bed;
  external String get hardness;
  external factory Options({bool bed, String hardness});
}

And with that, we can pass the Options class to the sleep method like so:

@JS()
class Dog {
  ...
  external void sleep(Options options);
}

Passing Functions to JavaScript For the function parameter, we’ll need to wrap our Dart Function with allowInterop like this:

dog.jump(allowInterop((int height) {
  print(height);
}));

Hooking things up To make this work, copy the generated Dog.js from the TypeScript file into the web folder of the Flutter project, alongside index.html . Next, open up index.html and add this line before the main.dart.js script tag:

<script src="Dog.js" type="application/javascript"></script>

A full example of the index.html is shown below:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <meta content="IE=Edge" http-equiv="X-UA-Compatible">
  <meta name="description" content="A new Flutter project.">

  <!-- iOS meta tags & icons -->
  <meta name="apple-mobile-web-app-capable" content="yes">
  <meta name="apple-mobile-web-app-status-bar-style" content="black">
  <meta name="apple-mobile-web-app-title" content="ts_app">
  <link rel="apple-touch-icon" href="icons/Icon-192.png">

  <!-- Favicon -->
  <link rel="shortcut icon" type="image/png" href="favicon.png"/>

  <link rel="manifest" href="manifest.json">
  <title>ts_app</title>
</head>
<body>
  <!-- This script installs service_worker.js to provide PWA functionality to
       the application. -->
  <script>
    if ('serviceWorker' in navigator) {
      window.addEventListener('load', function () {
        navigator.serviceWorker.register('flutter_service_worker.js');
      });
    }
  </script>

  <!-- Add the import Dog.js script tag here -->
  <script src="Dog.js" type="application/javascript"></script>
  <script src="main.dart.js" type="application/javascript"></script>
</body>
</html>

We are nearly there. Create a widget with a button in the centre of the screen which will use the Dog class.

class SomeWidget extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text("Flutter2JS")),
      body: Center(
        child: RaisedButton(
          onPressed: () {
            var dog = new Dog("Bear", 12);
            dog.bark();
            print(dog.age);
            print(dog.name);
            dog.jump(allowInterop((int height) {
              print(height);
            }));
            dog.sleep(Options(bed: true, hardness: "Soft"));
          },
        ),
      ),
    );
  }
}

After that, we can create and call our JavaScript Dog class in Flutter Web! We can see the Dog class printing out in the console log. Thank you for reading!
https://liewjuntung.medium.com/use-javascript-in-flutter-web-a6eed3efb9a0
['Jt Liew']
2020-04-19 03:35:03.810000+00:00
['Flutter Web', 'Flutter', 'JavaScript']
Wanna learn Chinese? Ride the Shanghai Metro.
Photo by Sean Lim on Unsplash For most expats who live in China, learning Chinese can be a daunting undertaking: all those characters, strange sounds, and those tones (!!) make you wonder if it’s even worthwhile. Well, to any expat living in Shanghai, I’ve found a way to learn Chinese in a very simple, yet passive way: the Shanghai Metro. Impossible, you say? 不是! (No way!) It has worked for me, so of course it can and will work for you! Let’s face it: most of us are way too busy or too broke to take a formal Chinese class, but on the flipside pretty much everyone who lives in Shanghai uses the Metro as their primary mode of getting from Point A to B. Why not make the most of your time in the subway and learn some Chinese! Before we begin, a few disclaimers: While you won’t become fluent overnight, you will learn quite a few characters and how they are pronounced. It will be helpful to be familiar with pinyin; that is, the system of phonetics using the Roman alphabet. In Standard (Mandarin) Chinese, each Chinese character stands for one syllable: e.g. 上海 is ShangHai; 虹桥火车站 is HongQiao HuoChe Zhan — Hongqiao Railway Station. Photo by Touann Gatouillat Vergos on Unsplash Now let’s begin! Lesson 1 — Matching syllables to characters. Most of the Metro stops are named after the roads they are on. 衡山路 — HengShan Lu — Hengshan Road. Remembering the rule of one character, one syllable, you now have 衡 (Heng) 山 (Shan) 路 (Lu). You may have also picked up that 路 means road, a useful character to know when navigating the city. Lesson 2 — Matching English translations to characters. In other cases the name of a Metro stop is translated fully into English. 人民广场 — RenMin GuangChang — People’s Square. Looking at the characters, you have 人民 (RenMin) meaning ‘people’ and 广场 (GuangChang) meaning ‘square’ or ‘plaza’. Lesson 3 — Listening to the Metro loudspeaker On the train, the most important information you need to know is what the next stop is, and when you arrive. The phrase 下一站 (XiaYiZhan) means ‘next stop’ and 到了 (DaoLe) means ‘arriving’ or ‘arrived’. Keeping everything we have learned so far in mind, can you figure out what the following phrases mean? 下一站:衡山路。 人民广场到了。 The next time you ride the Metro, read the signs in Chinese closely, and keep your ear tuned to the Metro announcer. It will take some time, but before you know it you will be able to read quite a few characters and understand some spoken Chinese. And hopefully for you, Chinese will become less and less of an “impossible” language to learn. 好运! (HaoYun — Good luck!) This article originally appeared on an old personal blog in 2012, when I lived in China. It has been edited for clarity.
https://medium.com/@jeremylcadiz/wanna-learn-chinese-ride-the-shanghai-metro-a34daa3e1911
['Jeremy Cadiz']
2019-04-12 23:19:50.346000+00:00
['Chinese', 'Metro', 'Language Learning', 'Language Acquisition', 'Shanghai']
Day 1
As part of my journey of teaching myself software engineering, I’m reading 10 pages of The Pragmatic Programmer every weekday to learn how to develop like a more professional developer. For every 10 pages out of 500, I’ll be posting a small extract here so others can learn with me. Yesterday’s pages were just the title and contents, so think of it as day 0 and enjoy the extract for today: “Programming is about trying to make the future less painful. It’s about making things easier for our teammates. It’s about getting things wrong and being able to bounce back. It’s about forming good habits. It’s about understanding your toolset. Coding is just part of the world of being a programmer, and this book explores that world.” Starting a module in engineering this week on Systems and Software Principles should be a good foundation for making development more future-proof and less painful to fix. And after all, forming good habits is never a bad thing. That concludes day 1. See you tomorrow! #PathToSWE
https://medium.com/@mazalkov/day-1-e2c54228cbbb
[]
2021-04-13 10:22:39.214000+00:00
['LinkedIn', 'Tips', 'Programming', 'Swe', 'Education']
Nebula container orchestrator — container orchestration for IoT devices & distributed systems
Photo by Mateusz Dach on Pexels.com Let’s say, for example, you started a new job as a DevOps/Dev/SRE/etc. at a company that created a new smart speaker (think Amazon Echo or Google Home). Said device gained a lot of success and you quickly find yourself with a million clients, each with a single device at home. Sounds great, right? Now the only problem you have is: how do you handle deployments to a million devices located all across the world? You could go the way most old-school vendors do it, by releasing a package on the company website for end users to download and install themselves, but in this day and age that will quickly lose you customers to competition without such high-maintenance needs. You could create a self-updating system built into your codebase, but that will require a lot of maintenance and man-hours from the development team & even then will likely lead to problems and failures down the road. You could containerize the codebase, create on each smart speaker a single-server Kubernetes cluster and create a huge federated cluster out of all of them (as Kubernetes supports neither this scale nor latency-tolerant workers, this is required), but that will lead to huge costs from all the resources wasted only to run all said clusters. You could use Nebula Container Orchestrator — which was designed to solve exactly this kind of distributed orchestration need. As you may have guessed from the title, I want to discuss the last option from the list. Nebula Container Orchestrator aims to help devs and ops treat IoT devices just like distributed Dockerized apps. Its aim is to act as a Docker orchestrator for IoT devices, as well as for distributed services such as CDN or edge computing, that can span thousands (or even millions) of devices worldwide, and it does it all while being open-source and completely free. Different requirements lead to different orchestrators When you think about it, a distributed orchestrator has the following requirements: It needs to be latency tolerant — if the IoT devices are distributed, each will connect to the orchestrator through the Internet over a connection that might not always be stable or fast. It needs to scale out to handle thousands (and even hundreds of thousands) of IoT devices — massive-scale deployments are quickly becoming more and more common. It needs to run on multiple architectures — a lot of IoT devices use ARM boards. It needs to be self-healing — you don’t want to have to run across town to reset a device every time there is a little glitch, do you? Code needs to be coupled to the hardware — if your company manufactures the smart speaker in the example mentioned above & a smart fridge, you will need to ensure coupling of the code to the device it’s intended to run on (no packing different apps onto the same devices in the IoT use case). This is quite different from the big three orchestrators (Kubernetes, Mesos & Swarm), which are designed to pack as many different apps/microservices as possible onto the same servers in a single (or relatively few) data centers; as a result, none of them provide truly latency-tolerant connections, and the scalability of Swarm & Kubernetes is limited to a few thousand workers.
Nebula architecture Nebula was designed with a stateless RESTful manager microservice that provides a single point to manage the clusters, as well as a single point from which all containers check for updates, using Kafka-inspired monotonic-ID configuration updates in a pull-based methodology. This ensures that changes to any of the applications managed by Nebula are pulled by all managed devices at the same time, and (thanks to the monotonic ID) that all devices will always have the latest version of the configuration. All data is stored in MongoDB, which is the single source of truth for the system. On the workers’ side, Nebula is based around a worker container on each device that is in charge of starting/stopping/changing the other containers running on that device. By design, each component can be scaled out, so Nebula can grow as much as you require. You can read more about Nebula’s architecture at https://nebula.readthedocs.io/en/latest/architecture/ Nebula features As it was designed from the ground up to support distributed systems, Nebula has a few neat features that allow it to control distributed IoT systems: Designed to scale out on all of its components (IoT devices, API layer, & Mongo all scale out) Able to manage millions of IoT devices Latency tolerant — even if a device goes offline, it will be re-synced when it comes back online Dynamically add/remove managed devices Fast & easy code deployments: a single API call with the new container image tag (or other configuration changes) pushes the change to all devices of that app Simple install — MongoDB & a stateless API is all it takes for the management layer, & a single container with some envvars on each IoT device you want to manage takes care of the worker layer Single API endpoint to manage all devices Allows control of multiple devices with the same Nebula orchestrator (multiple apps & device_groups) Not limited to IoT; also useful for other types of distributed systems API, Python SDK & CLI control available A little example The following command will install a Nebula cluster for you to play with and will create an example app as well; it requires Docker, curl & docker-compose installed:

curl -L "https://raw.githubusercontent.com/nebula-orchestrator/docs/master/examples/hello-world/start_example_nebula_cluster.sh" -o start_example_nebula_cluster.sh && sudo sh start_example_nebula_cluster.sh

But let’s go over what this command does to better understand the process: 1. The script downloads and runs a docker-compose.yml file which creates: a) A MongoDB container — the backend DB where the current state of Nebula apps is saved. b) A manager container — a RESTful API endpoint; this is where the admin manages Nebula from & where devices pull the latest configuration state to match against their current state. c) A worker container — this normally runs on the IoT devices; only one is needed on each device, but as this is just an example it runs on the same server as the management-layer components. It’s worth mentioning the “DEVICE_GROUP=example” environment variable set on the worker container; this DEVICE_GROUP variable controls which Nebula apps will be connected to the device (similar to a pod concept in other orchestrators). 2. The script then waits for the API to become available.
3. Once the API is available, the script sends the following two commands:

curl -X POST \
  http://127.0.0.1/api/v2/apps/example \
  -H 'authorization: Basic bmVidWxhOm5lYnVsYQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "starting_ports": [{"81":"80"}],
    "containers_per": {"server": 1},
    "env_vars": {},
    "docker_image": "nginx",
    "running": true,
    "volumes": [],
    "networks": ["nebula"],
    "privileged": false,
    "devices": [],
    "rolling_restart": false
  }'

This command creates an app named “example” and configures it to run an nginx container listening on port 81. As you can see, it can also control other parameters usually passed to the docker run command, such as envvars, networks or volume mounts.

curl -X POST \
  http://127.0.0.1/api/v2/device_groups/example \
  -H 'authorization: Basic bmVidWxhOm5lYnVsYQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "apps": ["example"]
  }'

This command creates a device_group that is also named “example” & attaches the app named “example” to it. 4. After the app & device_group are created on the Nebula API, the worker container will pick up the changes to the device_group it has been configured to be part of (“example” in this case) and will start an Nginx container on the server. You can run “docker logs worker” to see the Nginx container being downloaded before it starts (this might take a bit if you’re on a slow connection), and after it’s completed you can access http://<server_exterior_fqdn>:81/ in your browser to see it running. Now that we have a working Nebula system running, we can start playing around with it to see its true strengths: We can add more remote workers by running a worker container on them:

sudo docker run -d --restart unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --env DEVICE_GROUP=example --env REGISTRY_HOST=https://index.docker.io/v1/ --env MAX_RESTART_WAIT_IN_SECONDS=0 --env NEBULA_MANAGER_AUTH_USER=nebula --env NEBULA_MANAGER_AUTH_PASSWORD=nebula --env NEBULA_MANAGER_HOST=<your_manager_server_ip_or_fqdn> --env NEBULA_MANAGER_PORT=80 --env nebula_manager_protocol=http --env NEBULA_MANAGER_CHECK_IN_TIME=5 --name nebula-worker nebulaorchestrator/worker

It’s worth mentioning that a lot of the envvars passed in the command above are optional (with sane defaults) & that there is no limit on how many devices we can run this command on; at some point you might have to scale out the managers and/or the backend DB, but those are not limited either. We can change the container image on all devices with a single API call — for example, replacing the container image with Apache. Similarly, we can also update any parameter of the app, such as env_vars, privileged permissions, volume mounts, etc. The full list of API endpoints, as well as the Python SDK & the CLI, is available in the documentation at https://nebula.readthedocs.io/en/latest/ Hopefully this little guide allowed you to see the need for an IoT Docker orchestrator and its use case, & should you find yourself interested in reading more you can visit the Nebula Container Orchestrator site at https://nebula-orchestrator.github.io/ or skip right ahead to the documentation at https://nebula.readthedocs.io
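For reference, here is a rough Python equivalent of the two curl calls above: a sketch using plain requests against the same endpoints (the article’s Python SDK presumably wraps these, but this shows the raw API):

# Sketch: create the "example" app and device_group over the raw HTTP API.
import requests

BASE = "http://127.0.0.1/api/v2"
AUTH = ("nebula", "nebula")  # same credentials as the Basic auth header above

app = {
    "starting_ports": [{"81": "80"}],
    "containers_per": {"server": 1},
    "env_vars": {},
    "docker_image": "nginx",
    "running": True,
    "volumes": [],
    "networks": ["nebula"],
    "privileged": False,
    "devices": [],
    "rolling_restart": False,
}

requests.post(f"{BASE}/apps/example", json=app, auth=AUTH).raise_for_status()
requests.post(f"{BASE}/device_groups/example", json={"apps": ["example"]}, auth=AUTH).raise_for_status()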
https://medium.com/hackernoon/nebula-container-orchestrator-container-orchestration-for-iot-devices-distributed-systems-45f8a9a605f8
['Naor Livne']
2019-03-13 11:39:47.571000+00:00
['IoT', 'Orchestration', 'Internet of Things', 'Distributed Systems', 'Docker']
The Stories of Pisces, Leo and Orion — Greek Mythology #4
The goddess Artemis was a strong female character who was unconventional in ancient Greece. Artemis, the twin sister of the Greek god Apollo, is also the goddess of hunting and the forest in Greek mythology. I will tell the story of the birth of Apollo and Artemis later. Artemis is said to have once told her father, Zeus, “The disadvantages of wearing traditional long dresses are much greater than the advantages for girls. I want to shorten my dress. It will be easier for me to walk, run and hunt.” The goddess Artemis therefore kept her virginity for life. She was never attracted to any man. Rather, she kept herself away from men as much as possible. However, it is known that Artemis had a short-lived friendship with Orion, the son of Poseidon, god of the seas. But it is better to call it friendship than love. Yet her twin brother Apollo, the sun god, did not like their relationship. So he decided to kill Orion by means of a trick. To this end, he called on his sister Artemis to demonstrate her skills in archery. One day when Orion was swimming in the sea, Apollo pointed at him from a distance and told Artemis to shoot an arrow. But Orion was so far away from Artemis that she could not see his face at that moment. As a result, when Artemis shot an arrow at him, Orion died instantly. Later, the goddess Artemis felt remorse for this cruel act and placed the dead Orion among the constellations in the sky in the guise of a hunting warrior. Even today, in the night sky, we all identify that constellation as “Orion”. Not only that, but Artemis also placed a hound with him among the constellations, which is now known to all as “Sirius”, the companion of Orion in the night sky. This “Sirius” is the brightest star in our night sky. Some more information: Even before Artemis, Orion loved Merope, the daughter of the Titans Oceanus and Tethys. I will tell you the story of Orion and Merope one day. And those who grew up watching the Harry Potter series from a young age, like me, now know where J.K. Rowling got the name of her character “Sirius Black”! The “Orion” constellation in our night sky is actually made up of a number of stars, one of which is called Bellatrix, the third brightest star in the constellation. You remember Bellatrix Lestrange, right? So one more surprise: the real name of Voldemort, Harry Potter’s worst enemy, was Tom Riddle, as everyone knows. But do you know what name Rowling gave Tom Riddle’s mother? You may be surprised to hear it. Rowling also named Riddle’s mother after a character in Greek mythology. Her name was Merope!
https://medium.com/illumination/the-stories-of-pisces-leo-and-orion-greek-mythology-4-9ef7743c935c
['Samrat Dutta']
2021-01-05 06:34:31.340000+00:00
['Leo', 'Greece', 'Pisces', 'Orion', 'Greek Mythology']
They Buried My Dad Today
Ashes in a cardboard box, with a Tree of Life painted on top. Dark brown soil. Good soil. A curved stone bench tucked into the corner of a short, evergreen hedge. Box woods? My Aunt Linda, Dad’s oldest sister, said words. Good words. Necessary words. Mom sat in her living room with my cat. Big Sister and Little Sister called in from Little Sister’s backyard. Uncle Peter held space, lovingly and quietly, as always, beside my aunt. They scooped dirt back into the hole with gloved hands. Uncle Peter wore a plaid tie and a wide brimmed hat. Uncle Peter, the Priest, and Aunt Linda all stood outside in the snow-frosted graveyard next to an old, brick(?) church in Portland, ME. And thus my father has found his final resting place. May he be at peace. May we all be at peace. Amen.
https://medium.com/@jamtrow/they-buried-my-dad-today-5da662871e49
['Jamie Trowbridge']
2020-12-11 20:04:58.634000+00:00
['Growing Up', 'Death', 'Family', 'Acceptance', 'Father And Son']
Why (black) Christians Get Sexuality So Wrong
Christians are supposed to be in the New Testament. Yes, the New Testament was written in order to make it possible for humanity to receive spiritual salvation. The Old Testament is PART of the Jewish Torah. If you’re not a Jew — or better, a Jewish person rooted in Jewish culture and theology — WHAT are you doing in the 21st century attempting to base your reality on a bygone 1st-century culture… one that isn’t yours ethnically, geographically, nor culturally? “The term “Torah” is used in the general sense to include both Rabbinic Judaism’s written law and Oral Law, serving to encompass the entire spectrum of authoritative Jewish religious teachings throughout history, including the Mishnah, the Talmud, the Midrash and more…” Most — no, 99.9% — of Christians don’t know anything about the Mishnah, the Midrash, the Talmud, nor those oral, unwritten aspects of the Torah handed down through time from rabbis. The only constant in the Universe is change, yet far too many of today’s “Christians” seek refuge in either a post-Civil War version of theology, or one that is as narrow and rigid as their Egos can find… probably because thinking confuses them. “One of the most helpful ways to think about this is to look at the types of laws there are in the Old Testament. The 16th-century Reformer John Calvin saw that the NT seemed to treat the OT laws in three ways. There were Civil Laws, which governed the nation of Israel, encompassing not only behaviors, but also punishments for crimes. There were Ceremonial Laws about “clean” and “unclean” things, about various kinds of sacrifices, and other temple practices. And then there were the Moral Laws, which declared what God deemed right and wrong — the 10 Commandments, for instance.” ~ Pastor JD Scholars aren’t sure the Exodus actually occurred (Moses parted the Red Sea, etc…), but if the Jews did dwell in Egypt, then they were aware of Egypt’s religion and spiritual laws. The Christian 10 Commandments appear to have been edited and summarized from Egypt’s 42 Admonitions to the Goddess Ma’at. Sadly, this wouldn’t be the last time our beautiful Jewish brothers and sisters “lifted” ideas, art, or music from dark-skinned people without giving them credit… “Thou art worthy, O Lord, to receive glory and honor and power; for Thou hast created all things, and for Thy pleasure they are, and were created.” ~Revelation 4:11 Jungian psychologist James Hillman said, “When you have a religion made up of rules and laws, when calamity hits, all you have is rules and laws.” We need compassion and empathy to grow up into wisdom and functionality. The only — ONLY — thing that New Testament Jesus said about homosexuality, transgender people, intersex people, and gay people was Matthew 19:12. “For by him all things were created, in heaven and on earth, visible and invisible, whether thrones or dominions or rulers or authorities — all things were created through him and for him.” ~Colossians 1:16 In some posts I’ve gone into the issues of how complicated — and diverse — a person’s sexuality is. Yes, sexuality is plastic and can be fluid throughout a person’s life. Yes, many people are born so intersexed that doctors have to call in genetic specialists to determine what gender a child is. And yes, the “Over Powering Mother” and the Oedipal and Electra Complexes are the norm in our unbalanced, Materialist, narcissistic culture that thinks beating and shaming children is a “good idea.” The highest attribute… the greatest virtue of Christianity is… “God is love”.
Not that in your shame or inadequacy you puff yourself up as being better than someone else — like “God” broke the mold when your rusty behind showed up late. Mind your business. God is LOVE — Love is love. © 2020
https://medium.com/@journeyman712/why-black-christians-get-sexuality-so-wrong-132c8eabd324
['Freddy G.']
2020-11-27 06:25:16.130000+00:00
['Black Hebrew Israelites', 'Thug', 'Reconquista', 'Islam', 'White Supremacy']
Life Among the Rooftop Pirates
Looking up, I notice the plant baskets hanging from the balcony railings have gone, and so has the furniture. I can still picture the chair Kitten used to jump on as a midpoint to the square Everest from which he surveyed the world with twitching whiskers, fascinated. I would gaze at his small gray and white face, hoping something would catch his attention long enough. So he wouldn’t jump back down just yet. The first time I spotted Kitten, I was locked into a staring contest with Periscope, his housemate, an orange tabby whose favorite mode of observation was sticking his neck out through the railings as far as it’d go. One day, there seemed to be an extra pair of tiny ears floating next to him. That was my view. My neighbors could survey my kingdom at leisure but I could only glimpse theirs from below. During golden hour, one of us would sit by the window with our legs resting on the edge and Periscope would stare encouragingly at the human whose fingers moved excitedly across a shiny metal rectangle according to a mysterious melody. The human would have a strange contraption around their head and smile a lot, their upper body animated, like that moment when you go from excited to asleep within a second because you’re still a kitten. Periscope stared and, sometimes, the human stared back. The long-haired one never seemed to blink, probably thanks to those small transparent eyeball covers that keep ocular globes extra moist. Cats know about your disposable contact lenses and may have eaten the odd one, which is why litter box gifts sometimes look at you. Today, Kitten and Periscope moved out. There were no goodbyes.
https://kittyhannaheden.medium.com/life-among-the-rooftop-pirates-6f93b3d7cd05
['A Singular Story']
2020-06-09 12:36:24.638000+00:00
['Fiction', 'Society', 'Humor', 'Cats', 'Netherlands']
Various Waste Management Strategies for the Construction Industry
Various Waste Management Strategies for the Construction Industry Construction waste consists of discarded materials such as blocks, bricks, concrete, glass, plastics, steel, wood, and soil generated by new building construction, refurbishment, or demolition. Unfortunately, many such materials are non-biodegradable and inert as well, and their bulkiness and excess weight only exacerbate the issue when they are dumped into landfills. Fortunately, there are many waste management techniques and junk removal companies in Canada that can help the construction industry with its waste management issues. For instance, construction waste recycling can be used to sort and salvage recoverable waste products so that they can be reused and recycled. Here, we will discuss some effective waste management techniques and how they can benefit the construction industry. The Fundamental Differences Between Regular Waste and Construction Waste Disposal First and foremost, those who work in the construction industry should be made aware of the differences between regular waste disposal and construction waste disposal. Construction waste is a very serious problem because many of the waste products generated by construction and demolition projects end up in landfills, where they sit and rot in perpetuity, leading to land and water pollution. As such, the Environmental Protection Agency has helped fund and create landfills that are specifically designed for construction and demolition waste. Moreover, in recent years there has been a shift toward redirecting waste from many conventional construction and demolition projects to recycling facilities in the city. Also, when construction waste products are deposited into a specialized construction and demolition landfill, the costs are drastically lower than dumping the waste into a conventional landfill. In fact, in many cases such specialized landfills are located near or at construction sites, which further lowers costs, such as transport costs, and makes things more convenient for the construction company and its staff. Metal and Concrete Recycling Moreover, many waste products generated by construction work, such as metals and concrete, can be recycled and reused. However, construction workers must be trained to determine which materials can be recycled and which cannot, and they must learn to segregate the two types in order to avoid issues down the road. Interestingly, the reusability of concrete has grown exponentially in recent years, as many companies now specialize in concrete recycling techniques. For instance, concrete waste can now be crushed and formed into an aggregate, which can subsequently be utilized as a type of road base. Working with concrete recycling companies is highly recommended, as a construction company can either purchase concrete aggregates at a much lower price than at retail or pay significantly less to discard its own concrete waste products. Thus, the company saves money either way, while also helping to promote environmental sustainability in the process. As for metal, it is a highly valuable commodity in the construction industry. In fact, in recent years the price of copper rose to such an extent that many thieves began to raid construction sites for their copper. As such, most metal recycling enterprises now require that companies provide proof before accepting their metal waste products, as the sale of stolen metal materials has become so commonplace.
What's more, the cost savings from reusing concrete and metals from demolition and construction zones mean that the construction enterprise can become far more competitive when it comes down to the bidding process. Construction companies can also use the opportunity to prove to their clients and prospective clients that they are an environmentally conscious company that values the health and wellbeing of its workforce and the general public. How to Create an Effective Construction Waste Management System In order to devise a construction waste management system that is cost-effective and efficient, construction enterprises need to take numerous factors into account. For instance, the company must ensure that any debris earmarked for a construction and demolition facility is stockpiled separately so that it can be loaded into a dump truck easily and efficiently and sent to the waste disposal site. Or, at the very least, such waste products should have their own roll-off bin or container. Also, materials can only be separated as intended if workers are properly taught where each waste material goes in the first place. While some may argue that training all of their employees to properly spot and segregate construction waste products is too labour- and cost-intensive, the long-term benefits more than justify the initial cost, as companies will save thousands that would otherwise have gone toward re-sorting materials. Companies will also notice a drop in the number of injury and death claims arising from workplace accidents caused by negligence or ignorance. Every year, thousands of workers are injured or killed in workplace accidents because they were not taught how to properly handle hazardous materials. Years of mishandling construction waste products can also cause lethal respiratory illnesses and several forms of cancer, so workers should be made fully aware of all possible risks and hazards that they may encounter while handling construction waste products. Junk Works If you would like to learn more about proper waste management techniques, then we can help. Junk Works is North America's number one junk removal company when it comes to green initiatives. We also guarantee you an estimate in writing, promise to beat any written estimate, and deliver service that goes above and beyond the call of duty in order to offer the best junk removal and disposal services in North America.
https://medium.com/@cindywilliams-11270/various-waste-management-strategies-for-the-construction-industry-a0ab56ec10f0
['Cindy Williams']
2019-06-26 06:04:50.567000+00:00
['Recycling', 'Waste Management Company', 'Junk Removal', 'Waste Management Services', 'Waste Management']
By-mail voter registration deadlines
Grafiti is the first search engine for graphs & charts.
https://medium.com/grafiti/by-mail-voter-registration-deadlines-17fd8963f7b4
['Adriana Navarro']
2018-10-09 15:14:13.328000+00:00
['Calendar', 'Politics', 'Voter Registration', 'Elections', 'Midterms']
Bittrex Global Launches Tokenized Stock Trading
Tesla (TSLA), Apple (AAPL), and 10 other stocks now available to trade Bittrex Global (Bermuda) Ltd. (Bittrex Global) announced that it will be listing tokenized stocks on its digital asset exchange in cooperation with DigitalAssets.AG. This product will allow traders and investors direct access to listed companies without having to use an external broker or pay additional fees. Shares can be purchased using US dollars (USD), Tether (USDT), or Bitcoin (BTC), twenty-four hours a day, seven days a week. The tokenized stocks available through Bittrex Global will allow customers to purchase fractions of a stock without needing to buy whole shares, with the risk of the tokens derived from the underlying tokenized company. Bittrex Global plans to quickly expand its offerings by giving its customers exposure to ETFs, indices, and additional asset classes. “The traditional stock exchanges of the world’s financial capitals have for centuries set the terms for engagement and trading. Clearing systems are inefficient and complex and trading small volumes can be expensive and take days, all of which is totally unnecessary given the technological advances that have been made in the last decade,” said Bittrex Global’s CEO Tom Albright. “Blockchain technology has the potential to radically broaden access to financial services, and Bittrex Global is very proud to provide people with a portal to build their capital and private wealth in a way that was unimaginable a decade ago.” “The ability to trade tokenized stocks would not be possible without the foresight and regulatory support that the Bermuda Monetary Authority (BMA) has shown,” said Bittrex Global’s CFO/COO, Stephen Stonberg. “Bittrex Global looks forward to continuing its strong partnership with the BMA to develop innovative solutions as the industry grows.” Bittrex Global’s diverse customer base can now purchase and trade the following tokenized stocks: These tokenized stocks are available even in countries where accessing US stocks through traditional financial instruments is not possible. The tokenization of stocks is the first step towards creating more dynamic and accessible markets where securitized token offerings (STOs) can attract more mature and varied investors. Tokenized stocks can be traded alongside the over 250 digital assets listed on the Bittrex Global exchange, and their launch marks a significant milestone in the adoption of blockchain technology by traditional financial services.
https://medium.com/bittrexglobal/bittrex-global-launches-tokenized-stock-trading-a1b313644483
['Bittrex Global Team']
2020-12-09 16:41:32.329000+00:00
['Apple', 'Amazon', 'Tokenized Stocks', 'Tesla']
This Mantra Kicks the Ass of All Other Mantras
The Story of Om Mani Padme Hum In ancient [country redacted], there lived a holy and revered monk. He would spend his days and nights in a cave on an island outside of the village. For miles around, people had heard stories of this monk’s wisdom. One day, a young apprentice crossed the water and entered the cave to seek the master monk’s guidance. The apprentice heard the master reciting the ancient mantra: Om Mani Padme Hum. The apprentice said to the master, “pardon me, your holiness, but I’ve noticed you are saying the mantra in an unusual pronunciation.” “Oh, thank you, my loved child,” said the master. “How then is this revered mantra enunciated?” The apprentice instructed the master on the correct way to say Om Mani Padme Hum and then left on his way via his rowboat across the water. Before the youngster could finish paddling his way home, the master had caught up to him, walking across the water. The master asked the student, “How again should I be reciting Om Mani Padme Hum? I’m afraid I’ve been saying it incorrectly and want to make sure it is right.”
https://medium.com/mystic-minds/this-mantra-kicks-the-ass-of-all-other-mantras-ea55621c5574
['Ryan Dejonghe']
2020-12-11 16:44:44.515000+00:00
['Prayer', 'Love', 'Mantra', 'Buddhism', 'Meditation']
Simple vs. Compound Interest: Computations and Misconceptions
The Two Main Formulas There are two major formulas to consider for simple and compound interest calculations. Doing lots of practice problems will help you ingrain these formulas into your memory, should you need them on the job or for use on an exam. But that’s true for a lot of formulas. Here they are: (Image: the two formulas, rendered with Art of Problem Solving’s TeXeR tool: simple interest, I = Prt; compound interest, A = P(1 + r/n)^(nt).) Of course, these formulas won’t do much good if you don’t know how to use them. There are many ways to use these formulas, and that is much of what this article will cover. The first step in exploring that is to know what all of the variables mean. I: this is the amount of interest that accumulates over some period of time. In many cases this is the variable of interest, or the thing you want to solve for. P: this is the principal amount. That’s the amount of money in the initial transaction or loan. r: this is the interest rate. This is often given as a percentage, but you need to convert the percentage to a decimal (move the decimal two places to the left and remove the percent sign) before you use it in a formula. t: this is the amount of time that passes, often given in months or years. A: this is the total amount of money that has accumulated. If a loan is made and paid back later, A will be the principal amount of the loan combined with any interest that has accrued. I’ll talk about this a bit more in the next section. n: this variable only makes sense for compound interest. It indicates how many times interest is compounded per year. In other words, how many times is more interest applied to the principal amount of the investment? The most straightforward way to use these formulas is to find I for the simple interest problem, or A for the compound interest problem. In either case you’ll need to know the values of all of the other variables in that equation. What if you want to just find the amount of interest for a compound interest problem? How can the things we’ve gone over help us? Let’s talk about that now.
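As a rough illustration of both formulas, here is a minimal sketch in Python (the principal, rate, and term are made-up example values); note the last line, which previews exactly that question: since A already includes the principal, the compound interest alone is A - P.

# Simple interest:   I = P * r * t
# Compound interest: A = P * (1 + r/n)**(n*t)

def simple_interest(P, r, t):
    """Interest earned on principal P at annual rate r (a decimal) over t years."""
    return P * r * t

def compound_amount(P, r, t, n=1):
    """Total amount A after t years at annual rate r, compounded n times per year."""
    return P * (1 + r / n) ** (n * t)

P, r, t = 1000, 0.05, 3                              # $1,000 at 5% (0.05 as a decimal) for 3 years
print(simple_interest(P, r, t))                      # 150.0    -> I
print(round(compound_amount(P, r, t, n=12), 2))      # 1161.47  -> A, compounded monthly
print(round(compound_amount(P, r, t, n=12) - P, 2))  # 161.47   -> the interest alone: A - P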
https://medium.com/@joshuasiktar/simple-vs-compound-interest-computations-and-misconceptions-44ad4afb3f7b
['Joshua Siktar']
2020-12-24 16:02:32.409000+00:00
['Loans', 'Personal Finance', 'Derivatives', 'Mathematics', 'Interest Rates']
JavaScript : ES6 or Modern JavaScript Crash Course
🔥🔥🔥🔥 One must know the ES6 or Modern JavaScript concepts before working on Angular or React. Learn this essential ES6 crash course in just one hour, with practical scenarios throughout. New ES6 syntax 00:00 Introduction to ES6 00:40 Let keyword 04:12 Const keyword 04:45 Default Parameters 07:13 Rest Operator 09:06 Spread Operator 11:10 Object Literal 13:20 Template Literal 15:16 for of Destructuring 17:02 Array Destructuring 22:12 Object Destructuring Modules in JavaScript 25:43 Modules in Javascript 25:46 What are Modules? 26:05 Create Module 27:27 Export constructor and functions 28:26 Import Module Class in JavaScript 31:40 Need of class in JavaScript 33:50 Create an Object of Class 35:40 Getter & Setter 38:53 Inheritance in Javascript Arrow Functions 45:40 Introduction to Arrow functions 46:12 Create Arrow Function 46:53 With Single Parameter 47:15 With Zero Parameter 48:02 How this works in Arrow functions 53:57 Arguments with Arrow functions 55:13 When to use arrow functions Promise in JavaScript 55:36 Introduction to Promise 55:42 What is a Promise? 56:24 Life cycle of Promise 57:22 Create Promise 1:01 Consuming Promise 1:05:00 Catching Error 1:06:21 Finally 1:06:42 Attach handlers 1:07:51 Chaining Promise
https://medium.com/@codequicklearner/javascript-es6-crash-course-c34ee8796c02
[]
2021-09-04 06:14:57.259000+00:00
['Beginner', 'JavaScript', 'Angular', 'React', 'ES6']
Founder Feature: Ruchita Verma and The Community of Peace & Love
Watch Ruchita’s pitch for The Community of Peace & Love What does your organization do? The Community of Peace & Love provides a safe, shame-free space for adolescents to be honored, seen, and heard. We offer a variety of creative sessions ranging from just talking to art, crafts, dance, yoga, journaling, meditation, and our fan-favorite manicure therapy! These sessions allow adolescents to know that their feelings are there to guide them, that they are worthy, that they are enough, and that they deserve to take up space! What was your inspiration for your organization? I was inspired to create The Community of Peace and Love after seeing how mental health shapes every aspect of our life, since our thoughts are so powerful. After seeing mental illnesses such as schizophrenia and depression affect my loved ones, I found it essential to create a platform where teens can come to take some weight off their shoulders by meeting a new friend who will listen with an open heart, and are able to leave with resources to utilize during challenging times! Teens are allowed to feel good in their own skin and are able to realize their worth through our fun activities such as building gratitude chests, five-finger breathing, EFT tapping, and more! What keeps you motivated? My love for the world and people keeps me motivated because we are in this together! Seeing change and working towards making the world a better place motivates me as well, because change is beautiful and it’s the moments that make us uncomfortable that allow us to grow the most. It’s so inspiring to see people authentically be and share themselves with the world! What does the Que Phillips Social Impact Award mean to you? The Que Phillips Social Impact Award means the world to me since it’s about making a difference in people’s lives. It’s a huge honor to receive an award named after Que, who is very inspiring himself. I plan on using the award to fund The Community of Peace and Love’s future endeavors and to donate a portion to organizations that are also helping to save lives. Fun fact(s) about yourself? Some fun facts about me are that I love to travel and I dance for about 20 hours each week! What’s your advice to other students like yourself out there who are just starting their businesses or thinking about starting their own businesses? My advice to other students like myself who are starting their businesses or are thinking about starting their own businesses is that you can do it because you are so capable!! Commit fully to your vision and do not give up no matter how hard it gets because you are so worthy of making your dreams into a reality. There will be bumps in the road, but trust yourself, let your intuition guide you, and commit to your passion because the world needs you and your light!
https://medium.com/@fulphil/founder-feature-ruchita-verma-and-the-community-of-peace-love-21581a077a9d
[]
2021-05-08 00:18:40.690000+00:00
['Youth', 'Motivation', 'Social Impact', 'Esg', 'Entrepreneurship']
Customer experience vs customer service
‘Customer service’ vs. ‘customer experience’: both terms are used in managing every aspect of your business, and the differences between the two are often confusing or blurred. So what are the actual differences between these two aspects? Customer service “Excellent customer service is the number one job in any company! It is the personality of the company and the reason customers come back. Without customers, there is no company!” Connie Elder, Founder & CEO, PEAK 10 SKIN The term service refers to what happens from the human perspective and the support the customer receives. Customer service is provided by customer-facing teams who possess all the necessary skills, such as knowledge and patience. The idea of good customer service is to help customers and provide assistance and solutions for the product or service in question. Customer service is normally applied in the case of after-sales problems. It is a reactive element and can be a single, one-off interaction. For a long time, working on strategies to improve service was seen as a cost center for the company. Customer experience “We see our customers as invited guests to a party, and we are the hosts. It’s our job every day to make every important aspect of the customer experience a little bit better.” Jeff Bezos, CEO of Amazon. Experiences are a series of continuous, daily interactions throughout the entire journey that impact feelings and emotions. A remarkable experience will help you make better decisions that increase customer loyalty, boost sales, and grow your market share. The experience covers the customer journey as a whole, from beginning to end, on the customer’s side and on the company’s side. It also covers the various interactions with the customer as they engage with every touchpoint. A proactive approach is mandatory for a good customer experience: companies have to actively seek ways to understand customers’ needs and desires. Meeting these criteria in terms of the product or service is not enough; the most important part is the emotions that the customer has towards the company. Customer experience vs customer service: the differences In contrast to customer service, which is reactive, one-off, and focused on short-term customer satisfaction, customer experience is above all proactive and aims to improve and grow the relationship with each customer over a longer horizon. This difference in vision leads to very clear differences in mentality, depending on whether or not the customer relationship management department attaches importance to customer experience management. A company that is content to ensure the minimum, with a customer service team that merely responds to customer requests as they arise, can certainly maintain a decent level of satisfaction but will rarely offer an exceptional customer experience. To become a brand that stands out from your competitors and matters in the lives of your customers, improving the digital customer experience is a step to take.
https://medium.com/@feedier/customer-experience-vs-customer-service-5ada258d999d
[]
2020-12-14 16:54:26.960000+00:00
['Customer Experience', 'Customer Service', 'Feedback', 'Experience Management']
Covariance and Correlation — Part-1, First Date
Covariance and correlation have become household terms for people working in the fields of statistics, data science, economics, and other quantitative disciplines. Correlation, which many people have heard more of, is more popular and intuitive than covariance, thanks to its etymology and its interpretation-friendly mathematical structure. However, correlation itself comes from covariance. Let’s go on our first date with these twins, covariance being the elder one and correlation the younger. The First Date You, my friend who is reading this, and I are at the same table. A square-shaped, four-legged table smiled in awe at the double date we are having: a date with covariance and correlation. You, a big fan of the game call break, proposed that we play call break there. I, an ardent fan of call break, echoed your proposal before your voice could die out. We played a few rounds of games: enthusiastic waits for cards to be fully distributed, brave calls to make the game more competitive, and cute attempts to bring luck to one’s side. The game was filled with fun, excitement, nervousness, and energy. Photo by Davids Kokainis on Unsplash When I was thinking about the calls I should be making, one thing occurred to me: the relation between the number of spades I have and the number of calls I am making. Both are random variables, and maybe the number of calls depends on the number of spades I have. Let’s suppose a random variable X as the number of spades and Y as the number of calls I make. Now, I am interested in the covariance and correlation between those random variables, to see if they are dependent. But, what is covariance? Covariance between X and Y is the degree of joint variability between the two random variables, X and Y. Mathematically, it is written as: Cov(X, Y) = E[(X - E(X))(Y - E(Y))] … … … … … (1) We can get an intuition about covariance from this mathematical expression. As covariance is the expectation of the product, its value depends on the magnitude of the differences and their signs. If both the differences (X - E(X)) and (Y - E(Y)) have the same sign, the product is positive. Similarly, differing signs lead to a negative product. In this way, we get the intuition that covariance is large and positive when X and Y both tend to be either higher or lower than E(X) and E(Y) respectively. Similarly, if they tend to move in different directions, the value is negative. In this way, covariance can be understood as the degree to which the variables vary together. (1) can be expanded to give another expression for covariance: Cov(X, Y) = E[XY] - E[X]E[Y] Covariance calculation So, I noted the number of spades I had and the number of calls I made in a table. The table can be simulated in Python (a sketch of such a simulation appears below). So, I had a dataset of 20 points. [5, 4], [2, 2], [5, 3], [4, 2], [4, 4], [3, 2], [2, 1], [3, 2], [2, 1], [3, 1], [2, 1], [2, 1], [2, 2], [1, 1], [4, 2], [5, 4], [2, 1], [2, 4], [3, 4], [4, 4] Now, I could calculate the covariance between X and Y using the dataset. Cov(X, Y) = (Σ[i] (x_i - E(X)) (y_i - E[Y])) / (N - 1), where E[X] and E[Y] are the expected values (means) of X and Y respectively. Calculating that, we got Cov(X, Y) = 1. The positive value denotes that they are positively correlated: the directions of change tend to be the same, but we do not know the strength of the relationship. This difficulty in interpretation, and the fact that covariance is not a standardized value, gives rise to another concept called correlation. If we multiply the whole dataset by 10, the covariance would be 100.
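Here is a minimal sketch of that simulation and calculation (assuming NumPy; the 20 points are the ones listed above):

import numpy as np

# The 20 recorded (spades, calls) pairs from the table
data = np.array([[5, 4], [2, 2], [5, 3], [4, 2], [4, 4], [3, 2], [2, 1],
                 [3, 2], [2, 1], [3, 1], [2, 1], [2, 1], [2, 2], [1, 1],
                 [4, 2], [5, 4], [2, 1], [2, 4], [3, 4], [4, 4]])
X, Y = data[:, 0], data[:, 1]

# Sample covariance with the (N - 1) denominator, exactly as in the formula above
cov_xy = ((X - X.mean()) * (Y - Y.mean())).sum() / (len(X) - 1)
print(cov_xy)                        # 1.0
print(np.cov(X, Y)[0, 1])            # 1.0 -- np.cov uses the same N - 1 convention by default

# Scaling the whole dataset by 10 scales the covariance by 10 * 10 = 100
print(np.cov(10 * X, 10 * Y)[0, 1])  # 100.0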
Although the covariances are different in these two cases, the relation between the variables has not changed. This is exactly where correlation becomes significant. Standardization and Correlation The scale that comes from the differences (X - E(X)) and (Y - E(Y)) creates a non-standard value for covariance. We can standardize these differences to get a standard value. The standardization is done by dividing the differences by the standard deviations. So, the new expression is: E[((X - E(X)) / SD(X))((Y - E(Y)) / SD(Y))] This is called correlation. Plucking SD(X) and SD(Y) out of the expectation expression, we get: Corr(X, Y) = E[(X - E(X)) (Y - E(Y))] / (SD(X)SD(Y)) Corr(X, Y) = Cov(X, Y) / (SD(X)SD(Y)) … … … … … (2) Now, putting the standard deviation values in (2), i.e. SD(X) = 1.21 and SD(Y) = 1.26, Corr(X, Y) = 1 / (1.21 * 1.26) = 0.65 So, this is the standard indicator. The correlation coefficient is 0.65, so X and Y are moderately correlated. As call break players can guess, the relationship is plausible, and our mathematics supports it. If the dataset is scaled by a factor of 10, the standard deviations also get scaled by a factor of 10. So, the scales cancel each other and the same value of correlation is obtained. The value of correlation varies between -1 and 1, with values near -1 indicating strong negative correlation, values near 1 indicating strong positive correlation, and values near 0 indicating very weak to no correlation. Why is correlation bounded above by 1 and below by -1? Let’s look at the variances of X + Y and X - Y. Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y) Var(X - Y) = Var(X) + Var(Y) - 2Cov(X, Y) Note that the variance of -Y is the same as that of Y. We are dealing with correlation, so take X and Y to be standardized random variables; the covariance of standardized random variables is the same as their correlation. As the random variables are standardized, the variances of X and Y are both 1. Also, variance is always greater than or equal to 0. So, 0 ≤ Var(X + Y) = 2 + 2Cov(X, Y) 0 ≤ Var(X - Y) = 2 - 2Cov(X, Y) Using these two inequalities, we can see that -1 ≤ Cov(X, Y) ≤ 1. Therefore, correlation is bounded above by 1 and below by -1. Conclusion As the correlation is not zero and is in the medium range, the random variables X and Y are dependent on each other. So, I would wish to have as many spades as possible to maximize the chance of making more calls. Correlation (and of course covariance) is a very important metric for seeing the relation between variables. During data cleaning, redundant variables are dropped with the help of the correlation between variables. However, correlation does not ensure causality. Two variables being correlated might mean that one variable causes the other, or that both variables are caused by some third variable.
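As a quick numerical check of the values above, here is a sketch assuming NumPy and reusing the same dataset:

import numpy as np

data = np.array([[5, 4], [2, 2], [5, 3], [4, 2], [4, 4], [3, 2], [2, 1],
                 [3, 2], [2, 1], [3, 1], [2, 1], [2, 1], [2, 2], [1, 1],
                 [4, 2], [5, 4], [2, 1], [2, 4], [3, 4], [4, 4]])
X, Y = data[:, 0], data[:, 1]

# Corr(X, Y) = Cov(X, Y) / (SD(X) * SD(Y)), using sample (N - 1) statistics throughout
corr = np.cov(X, Y)[0, 1] / (X.std(ddof=1) * Y.std(ddof=1))
print(round(corr, 2))                     # 0.65
print(round(np.corrcoef(X, Y)[0, 1], 2))  # 0.65 -- np.corrcoef agrees

# Scaling the data by 10 leaves the correlation unchanged
print(round(np.corrcoef(10 * X, 10 * Y)[0, 1], 2))  # 0.65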
https://medium.com/swlh/covariance-and-correlation-part-1-first-date-94b4220b350b
['Suraj Regmi']
2020-05-09 08:06:02.923000+00:00
['Statistics', 'Correlation', 'Covariance', 'Mathematics', 'Dating']
How To Survive As A Light-Skinned Asian American Woman During Racially Divisive Times
This is not the time to clog social media outlets with our anti-racist awakening story or another performative hashtag to prove to our Black co-workers we’re not racists. Please journal and save those stories for when the protests end and non-Black people stop talking about Black lives, that is when we will need to have those conversations the most. Here’s what you can do to help in four steps: Research and find one Black-led organization in support of Black lives. Donate. Encourage a non-Black person who is close to you, your friend, your mom, your sister, your neighbor to donate. DO NOT get into a fight about anti-Blackness and racism. Take their money first. If they don’t budge, show them the video of George Floyd’s murder, tell them about your research on this one specific organization. If possible, make sure they complete the donation in front of you. Repeat. If you want to survive in America, prioritize Black lives and listen to Black people. They are not the enemy, injustice is.
https://medium.com/365-ally-for-black-lives/how-to-survive-as-a-light-skinned-asian-american-woman-during-racially-divisive-times-5ea37bc4d6e
['Jee Young Park']
2020-06-03 14:42:33.124000+00:00
['Asian Culture', 'Asian American', 'Asian Women Dating', 'Asian']
Indigenous Excellence — A Pueblo Jewelry Brand’s Path to Success
Indigenous peoples are this continent’s original entrepreneurs, members of vibrant communities with centuries-old histories of trade and business. The creatives in Tribal lands laid the bedrock for their region’s culture and commerce, and, in the face of hurdles from geographical remoteness to discrimination and misrepresentation, each successful Native business is a celebration of resilience and strength. The prosperity of these artisans is a crucial element of sustainable economic development on and off reservations. National movements and policies point to the importance of Native business-owners and entrepreneurs. In 2020, the Senate Committee on Indian Affairs sponsored the Indian Community Economic Enhancement Act, promoting entrepreneurship and access to funds by reinforcing existing programs and laws. Organizations like the National Center for American Indian Enterprise Development or the Tribal Link Foundation continue to provide programs, services, and scholarships for Native entrepreneurs. For New Mexico’s 19 Pueblos, the Indian Pueblo Cultural Center is gearing up to open its new Opportunity Center and has partnered with Creative Startups to present a LABS pre-accelerator for Indigenous Creatives, a five-week intensive course designed to guide entrepreneurs to move their businesses towards substantial growth. In these Pueblos, Kirk Jewelry is a perfect example of New Mexican Indigenous entrepreneurial excellence. This jewelry brand is run by Michael and Elizabeth Kirk, a father/daughter duo born and raised in Isleta Pueblo, and their unique pieces have been featured in the Smithsonian and the American Indian Museum galleries. In a recent interview with Elizabeth, we learned more about her story and how she developed Kirk Jewelry into the success it is today, including how she set their business model apart from other artisans’ and her tips for burgeoning Native entrepreneurs. Creative Startups— What is Kirk Jewelry’s origin story? Elizabeth— My father is a Vietnam War veteran, and when he returned, my uncle (his older brother) had been taking jewelry classes at the community college. While my father had enrolled to become a computer engineer (of all things), his brother said, “Hey, you know, I think we can do this on the side and earn money while we’re going for our degrees.” So they turned my grandmother’s kitchen into their workshop, and if you could ask her (she just passed a few months ago), she’d tell you how dirty her kitchen was. She was furious — just such a clean freak! And from there, the business just took off! I don’t think either of them realized how well it would work out. CS — Why do you think it worked so well? What were they doing that piqued their customers’ interest? Photo Credit: Ungelbah Davila-Shiver EK — Even in the beginning, they set out to be different from what other people were doing. They set themselves apart from other artists by working in high-carat gold or really high-quality turquoise. And so, just a few months after they started, they both realized that it was actually making them more money than what they would be doing with their degrees — that they were pretty good at it! So they officially started the business. CS — When did you realize that you wanted to join your father in making jewelry and also take the reins business-wise? EK — When I was about eight, he had converted the garage into his workshop. I was and still very much am a daddy’s girl, so I was always wanting to be by him and doing whatever he was doing.
I would be running around, and one day I spun around and hit an oxygen tank. It fell, and he almost had a heart attack. In order to get me to sit still, he handed me a jeweler’s saw and a piece of silver and told me he wanted me to learn to make patterns. And from then on, I would just be in there working alongside him. I came into the business aspect of it at about 17, right after I had graduated from high school. My dad was looking at how he could better put things together, and I just thought, “I have the summer off, let me start pulling paperwork and getting in touch with buyers.” Then that took over, and I enabled my dad to be able to just create. I took care of getting in contact with new store owners or galleries and reestablishing connections with people he had dealt with over the past 20 years. At the time, that meant getting in contact with potential buyers and saying, “Hey, can I put together a box of our best sellers, and you can let me know if you’re interested? If not, just send it back. No big deal.” Every time I sent out a box, I would just get a check back because everyone was keeping whatever I had sent out. CS — Is that a typical business model for artisans in your area? EK — Not for the vast majority of artists. I would say a good 90% of our artists make their income attending in-person shows, be that bigger art markets or Powwow circuits. Within New Mexico, we have 19 Pueblos here, and each has a feast day with their own celebrations. People show up and sell things as well. That’s probably a big difference between myself and other artists — that I do retail and wholesale. I don’t rely on art markets in the same manner as other artists do. It costs a significant amount of money to actually travel and get out there, and really, when you get down to it, it was better to be home and create than it would be to be on the road. Out there, I’m thinking, “We’re selling all of these pieces, so we have to go back home and create new ones in order to keep going.” It was about finding a happy medium. I got to a point where I would pick which show we did better at, which had the better return or the lowest expenses. That’s how I would decide what shows we would do. CS — How did the pandemic affect that sort of business model? What was it like in the Pueblo? EK — I would say that last year was rough because our Tribe shut down all businesses from the first part of March all the way until July 31st. A great concern of mine when I was on the board of the Santa Fe Indian Market was that, if shows went away, Native artists would have little to fall back on, because the vast majority of rural Natives have slow or no internet access. Within the reservation, it’s a pretty big struggle. Because I didn’t have a need for it before the pandemic, I just launched our website, I believe, in November. When all of the art shows and gallery openings came to a halt, I saw I needed to change how I had conducted business in the past. Photo Credit: Ungelbah Davila-Shiver There’s a lot to learn with choosing photos, writing descriptions, and marketing online. It’s about looking at what would work best to engage your customer. I’m someone who’s very comfortable in person. I can tell you stories about everything we make, but I struggle to do it on video because you don’t get to see who you’re speaking to. I would say a good portion of people also struggle in that same area.
But we have a great deal of businesses coming forth to help in that aspect, and IPCC and Creative Startups’ program is a great start for a good many artists! I really think it would help them. I’m excited for what IPCC is building and what they are putting together to help entrepreneurs. It’s exciting to see it come to fruition. CS — What advice would you give to these Native entrepreneurs who are trying to start or revitalize their own businesses? EK — I would say first and foremost, get your business set up properly. That’s one thing a lot of Native artists struggle with, and it’s something I’ve had to deal with because my father always operated as a sole proprietorship. Being able to set it up properly as either a sole proprietorship or an LLC, getting the necessary tax numbers together — that’s the part we as artists do not like. It would be nice to have an outlet, like what [the accelerator] is doing, to get you in touch with someone who can walk you through those steps. And that goes along with recognizing what your strong points and weak points are and finding people who can help you. It’s something I’ve had to discipline myself on. You have to tell yourself what you can’t add to your plate, and then look for someone who has the same mindset, who knows what you are wanting to do, and work with that person. That’s exactly what I did with my website. You do still have to become familiar with the things you aren’t an expert in. If I don’t know something, I’ll take a class so that I know the basics but still find someone who can take that on. It helps you in the long run. If someone asks you questions, you’ll be able to have answers. “You also have to be able to see the opportunities available to you, reach out and establish those connections that get your name out there.” Another tip is that you also have to be able to see the opportunities available to you, reach out, and establish those connections that get your name out there. It’s a constant process, and I think that more artists need to know that — that they definitely still need to look out and stay relevant. My dad always says you either evolve or die off. He says, “Do you want to be a dinosaur, or do you want to still be here?” That’s why I say, with our jewelry, it’s about continuing the fine tradition of jewelry making and blending modern technology with traditional aesthetics. And that’s exactly what we do. It’s a tough process, but it’s definitely one that you want to stick with because you have to evolve. CS — What makes all of this hard work worth it? Why are you passionate about wearable art like your jewelry? EK — When I wake up, my day is never the same. Sometimes I’m going to be working on jewelry. Other times, I’m doing photography because I have to take snapshots of the pieces because they can fly out of the building so fast that I need to have a record somehow. There’ll be other days that I’ll be completely dressed up to do an interview or meet a client. I don’t have days that look the same all the way through. When I get up, I have to ask what hat am I wearing? It’s always an adventure. It’s different. But really, for me, it’s the connection to my father first and foremost, and then it’s where we are on the reservation and with my culture, having that connection not just as a Native but also as a person with anyone across the board. It’s my way to communicate exactly what is most important to me and to be able to share part of my world with others.
https://medium.com/creative-startups/indigenous-excellence-a-pueblo-jewelry-brands-path-to-success-be606f4a6ca0
['Creative Startups']
2021-03-23 17:39:34.807000+00:00
['Indigenous', 'Business', 'Startup', 'Life Lessons', 'Entrepreneurship']
Little Steel Strike: Remembering the 1937 Memorial Day Massacre
BY FRED GABOURY Police using guns, clubs, and tear gas attack marching strikers outside Chicago’s Republic Steel plant, May 30, 1937. | Carl Linde / AP Originally published at People’s World South Chicago, Memorial Day 1937: Mollie West was there with a group of high school seniors. Curtis Strong was there for the hell of it. Aaron Cohen was there because of the responsibilities assigned to him by the Communist Party. “There” was the field fronting the Republic Steel plant in South Chicago, site of the Memorial Day Massacre of May 30, 1937. It was the first warm day of spring. Hundreds of steelworkers, on strike against the “Little Steel” companies and backed by hundreds of supporters, some dressed in their Sunday best, had come to assert the right of the Steel Workers Organizing Committee (SWOC) to establish a picket line at the gate of the Republic Steel plant. The line was never established. Before day’s end, they would be attacked by an army of gun-toting, stick-wielding Chicago cops. Ten men would be dead or mortally wounded, countless others severely beaten and many more temporarily blinded by tear gas. Mollie was walking near the front of the group when Chicago’s finest opened fire with tear gas and pistols. “I started to run and fell down. Several others stumbled on top of me. It wasn’t very comfortable,” Mollie said in a telephone interview from Chicago. “But it may have saved my life. And it certainly kept me from being beaten with those riot sticks the cops were using.” By the time Mollie came up for air, the worst was over. “It was unbelievable what I saw,” she said. “The place looked like a battlefield.” And she saw — or felt — something else: “I looked around to see a policeman holding his gun against my back. ‘Get off the field,’ he ordered, ‘or I’ll shoot you.’” From left, Mollie West, Curtis Strong, and Aaron Cohen, as pictured in the original 1997 article from People’s Weekly World. | People’s World Archive Several people came to her rescue and carried her to the first aid station at Sam’s Place, the watering hole that SWOC had rented as headquarters during the strike against the nation’s second-tier steelmakers. Several doctors had responded to the call for public support. “They never imagined that they would need to turn it into a field hospital,” Mollie said. “But they did — just like in M*A*S*H.” Curtis hadn’t planned on doing anything that day. He was working at the Gary Works of U.S. Steel and was an active SWOC member of what is now Local 1014 of the Steelworkers union. “I thought, why should I go? Shortly after General Motors capitulated to the Auto Workers union, U.S. Steel signed a contract with SWOC.” But ever one to seek adventure, Curtis decided to go, “I thought — what the hell, why not?” he said when reached at his home in Gary. “What started as a lark became one of the most damnable experiences in my life.” Curtis thought the first shots were meant to scare people. “I just knew that no one, not even Chicago’s notoriously anti-union police, would open fire on peaceful demonstrators who were demanding the right to put up a picket line at the Republic plant.” But he soon found out how mistaken he was. “A guy about six feet away from me was hit and I started to run — and damn fast. I had set state track records when I was in high school.” 2019 marks a century since the founding of the Communist Party USA. 
To commemorate the anniversary of the oldest surviving socialist organization in the United States, People’s World has launched the article series: 100 Years of the Communist Party USA. Read the other articles published in the series and check out the guidelines about how to submit your own contribution. Aaron Cohen had been a coal miner in southern Illinois and a leader in the reform movement of the United Mine Workers of America. As such, he earned the wrath of one Van A. Bittner, UMWA district director, whose goons once beat Aaron within an inch of his life. But the heat of the class struggle can melt old relationships and forge new ones — and such was the case with Aaron Cohen and Van A. Bittner. By the time SWOC launched its drive to organize the steel industry, Bittner was running the show in Illinois and Cohen, then 28 years old, was a member of the Communist Party leadership in Chicago. Shortly after setting up shop, Bittner invited Aaron and Bill Gebert, head of Illinois CP, to a meeting where he asked Aaron to find SWOC organizers among the various nationality groups and to help get favorable coverage of the campaign in the foreign-language press. “It was a bit frosty at first,” Aaron remembers, “Bittner didn’t quite know how to deal with me. But I made the first move. I stuck out my hand and said something like, ‘We’re in this together, Van,’ and that was it.” Aaron, who now lives in the San Francisco Bay Area, described the Memorial Day event as — at least in the beginning — a “jolly kind of affair. There was a holiday spirit. Guys were walking with their girlfriends. Some brought their families and picnic lunches. There was a baseball game and things for the kids to do.” The strike began at 11 p.m. on May 26 and police had prevented the union from establishing a picket line at the Republic plant. “So we decided that the whole bunch would go down and set up a mass picket line. After all, Mayor Kelly said SWOC had the right to picket,” Aaron said. Aaron, too, couldn’t believe what was happening. “But when Alfred Causey, who was standing less than arm’s length from me, fell with four bullets in his back, I became a believer.” Aaron’s voice hardened when he added: “There was Causey laying there dead — and they were still beating him.” When the group — “at least 1,000 strong” according to George Patterson, who led the demonstration — neared Republic property, they were met by police lined up for about a quarter of a mile “protecting” the mill. “For once, we had as many pickets as there were police,” Patterson said in his oral history of the massacre. “I went up to Police Commander Kilroy who was reading from a document. ‘I ask you in the name of the people of the State of Illinois to disperse,’ he read and dropped the paper to his side with a flourish.” There was no verbal command, Patterson remembered. “When Kilroy lowered the paper, all hell broke loose. Bullets were flying, gas was flying, and then the clubbing.” When Patterson stopped running, he looked at the carnage — at the young boy limping by, bleeding from a bullet wound in his heel, at men and women lying on the ground, some dead, others mortally wounded. Patterson said he “learned about death” on the prairie before the Republic plant. 
“It doesn’t take long to know when a man falls forward on his face that he’s been killed, he’s dead, he doesn’t move anymore.” Police may have been able to cover up the massacre had it not been for Orlando Lippert, a news cameraman for the Paramount Newsreel division and his motion picture camera. Within seconds — “fewer than seven,” Lippert told a Senate investigating committee — after the assault began, he had his camera grinding away, eventually shooting several magazines of film which he sent to New York. Paramount executives withheld the film, labeling it “restrictive negatives. Clips and printing of this material absolutely forbidden.” However, the film was subpoenaed by Sen. Robert La Follette’s subcommittee of the Senate Education and Labor Committee and shown to a closed-door meeting that included Commander Kilroy, Patterson, and several reporters, some of whom wrote stories of the events depicted in the film. https://youtu.be/x1H62KeDWZI A short clip of the film shot by Paramount cameraman Orlando Lippert, originally hidden from the public. | Illinois Labor History Society Republic Steel’s Tom Girdler was the lead dog in the employer’s sleigh team that not only provoked the strike but made plans to drown it in blood in a holy war to prevent “the Communists” from taking over. And they meant business. The La Follette hearings, which began on July 2, did more than expose the Memorial Day events. Committee investigators found that Republic was the largest buyer of tear and sickening gas in the country. Republic’s private arsenal was stocked with 552 revolvers, 64 rifles, 245 shotguns, and 83,000 rounds of ammunition. The other companies had similar arms caches. One of the few national news outlets to cover the Memorial Day Massacre, the Daily Worker denounced the actions of the Chicago police in its May 31, 1937 issue. | People’s World Archives In his autobiography, Len De-Caux, first editor of CIO News, described the Little Steel Strike as a “murderous class war.” In addition to the Memorial Day massacre in Chicago: — Strikers were gassed, clubbed, and shot in Youngstown, Massillon, and Cleveland, bringing the total killed to 18. — Governors, mayors, sheriffs, and police were suborned against SWOC and the CIO, sometimes with hard cash. — The Mohawk Valley Formula, with its “citizens’ committees,” back-to-work movements, and other strike-breaking techniques was applied with vigor. — “Friends of labor” in public office betrayed SWOC, as witnessed by Franklin D. Roosevelt’s “curse on both your houses” remark at a press conference. Although the Little Steel Strike ended with only Inland signing an agreement, it has earned a place in the annals of the great battles of the American working class. In 1937 — as they had been in the Great Strike of 1919 — steelworkers were in the vanguard of the class struggle. An earlier version of this article appeared in People’s Weekly World on May 31, 1997.
https://medium.com/peoples-world/little-steel-strike-remembering-the-1937-memorial-day-massacre-34caa04c0f92
['Peoplesworld Social Media']
2019-05-24 22:34:27.877000+00:00
['History', 'Memorial Day', 'Labor', 'Culture']
How to Find Stillness, Productivity, and Enjoyment Every Day
Success is nothing more than an accumulation of positive acts. How can I succeed in business? What’s the secret to becoming a full-time writer? And where can I find the magic formula for learning new skills? Those are common personal growth questions that many people ask themselves. They believe that someone has a recipe for success and that they just need to find it. Those thinking patterns hold you back. No matter if you’re building a business, learning a new language, or improving your physique, a combination of small habits will lead to success. You need to become a little bit better every day and add a small piece to the puzzle. That’s where stillness, productivity, and enjoyment come into play. No matter what you are trying to accomplish, you’ll need those three elements daily. First, stillness will help you remain calm, focused, and determined. Productivity, on the other hand, will help you achieve more in less time. In other words, you’ll use your time wisely. Finally, you need to enjoy your endeavor to stay motivated and retain your purpose. Together, the three can help you attain any summit by creating a daily merger of calmness — ensuring that you do the work without distractions, productivity — boosting your time management, and fun — transforming arduous chores into playful challenges. How do we combine the three? There are various effective methods to incorporate these three positive states into your everyday life. On this basis, here are five ways to find stillness, productivity, and enjoyment every day.
https://medium.com/the-innovation/how-to-find-stillness-productivity-and-enjoyment-every-day-d2f49920595c
['Jack Krier']
2020-12-27 20:32:52.056000+00:00
['Self', 'Self Improvement', 'Lifestyle', 'Mindfulness', 'Productivity']
How to use Stripe’s API for download information?
Be able to use Stripe information when you need it! Stripe is an online payment processor for internet businesses; it collects all the information about the payments we process on its platform, and it has become a reference for how to write documentation for an API. If you use Stripe as your principal payment processor and want to download information from the platform for creating reports, dashboards, analytics, or simple finance control, the best way is to use its API, which has very good documentation, with examples for a good number of languages. The Stripe API has official libraries for the more popular languages and community libraries for other languages. But what happens if you want to use this API in a language without an available library? Or if you want to use this API without using a programming language at all? Stripe API - Client Libraries If you don’t want to use a programming language, you can make requests directly in your browser by creating a specific URL with enough information. If you want to use a specific language that has no dedicated library for this API, you need to know the API’s general behavior to be able to create all the requests that you need. In both cases, this article has enough information for you, but you need to take these points into account: You need to authenticate to be able to get access to your account information. The results of this API will be JSON, and you need to be able to use the information in this format. So, if you want to use this information in Excel or Google Sheets, you need to change the structure of the result. All APIs have restrictions on their use. In the case of Stripe, the principal restriction is the limit on the number of objects returned per request: between 1 and 100. Authentication Stripe’s API uses HTTP basic authentication. The normal way of using HTTP Basic Auth is with a username and password, and if you try to use the API URL directly in the browser without other information, you will get a pop-up asking you to enter this information. Stripe API - Use of API directly in the browser This is also a way to confirm what type of authentication an API uses when you don’t have access to its documentation. If you want to use an API without passing the credentials through the pop-up in your web browser (for example, in a script written in a language without support or a library for this API, or to make different calls without passing the credentials for each request), you can pass your credentials in the URL in this form: Stripe API - Pass username and password across URL For more information about this type of authentication, you can have a look here. In the case of Stripe’s API, you only need to provide your API key as the basic auth username value, with no password, when using the pop-up; and if you want to put the authentication directly in the URL, you only need to place the API key (as the username) before the at sign in front of the API host. Stripe API - Pass API key across URL If you don’t know what your API key is, you can create, view, and manage your API keys in the Stripe Dashboard. Stripe provides a test key in its documentation so you can try out the API before using your real data.
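To make the pattern concrete, here is a minimal sketch in Python using the requests library (the key below is a placeholder, and /v1/charges is just one example endpoint; treat this as an illustration of the basic-auth pattern, not official Stripe sample code):

import requests

API_KEY = "sk_test_your_key_here"  # placeholder -- use your own key from the Stripe Dashboard

# HTTP basic auth: the API key is the username and the password is left empty
response = requests.get(
    "https://api.stripe.com/v1/charges",
    auth=(API_KEY, ""),
    params={"limit": 100},  # list endpoints return between 1 and 100 objects per request
)
charges = response.json()  # the result is JSON, as noted above
print(len(charges.get("data", [])))  # number of objects returned in this page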
https://medium.com/@carloseguevarap/how-to-use-stripes-api-for-download-information-d72368a3314a
['Carlos Guevara']
2020-11-24 02:31:09.887000+00:00
['Stripe', 'Rest Api', 'Tutorial', 'Development', 'API']
We Always Give Democrats Grief For “Selling” Smaller-Than-Expected Losses As Wins
Rallying recently, in vain it turns out, for soon-to-be ex-Republican Governor of Kentucky, Matt Bevin We Always Give Democrats Grief For “Selling” Smaller-Than-Expected Losses As Wins So by all means, go ahead and celebrate the latest election results! And then get right back to work, because the big one’s still coming up next year. Hugest among the highlights was the ousting of Kentucky’s incumbent Governor Matt Bevin by Democrat Andy Beshear, despite Trump’s best efforts to push Bevin over the line. (Republicans — as Trump was quick to point out — did win several statewide offices, including electing the state’s first-ever African-American Attorney General.) This in a state where both U.S. Senators (including Senate Majority Leader Mitch McConnell), 5 out of 6 U.S. Reps., 61% of State Reps., and 76% of State Senators are Republican. We think Bevin’s downfall had a lot to do with his going after public school teachers. Much was made by his opponent of the Trump-like insults he hurled at them as he tried to cut back on benefits. And teachers just aren’t ever really the right people to anger. (In 2016, Trump won Kentucky by 30%). In Virginia, Democrats held the State Senate and flipped the House, meaning both legislative bodies and all of the highest state offices are now majority Democrat. In the case of the House, it’s the first time it’s been that way in 20 years. (In 2016, Trump lost Virginia by 5%). And in Pennsylvania, Democrats won a county council they have not controlled since the Civil War, according to the Philadelphia Inquirer. (In 2016, Trump won Pennsylvania by less than 1%). There’s one more to watch out for real soon: Louisiana Democrat and incumbent Governor John Bel Edwards faces off a week from Saturday against Republican Eddie Rispone. Edwards is an Army veteran, so Trump hasn’t been able to do his usual “hates the troops” Twitter number on him. Mostly he’s been warning that Edwards will send car insurance rates sky high. If Trump “loses” this one as well, meaning the Democratic incumbent hangs on, that might really start meaning something… But already, for Republicans to view these off-year election results as anything but a disaster is really difficult, because — as we’ve often pointed out to Democrats — you don’t win in politics unless you actually win. So now they’re the ones playing the “close is actually a win” game, with Trump leading the chorus. Especially in races where the President stumped for and strongly backed the eventual losers. In the past, when Republican candidates haven’t prevailed, Trump’s often blamed it on being spread too thin to devote enough individual attention to all the races: as if a Trump endorsement would’ve guaranteed a sure victory. Now he’s saying “Fake news will blame Trump!” But that’s the biggest bunch of BS ever. He should be slammed with the blame. Hard. Because if he’s taken all the credit for all the close wins, then it’s only fair to share some of the criticism for the losses. And Trump’s all about “fairness”, right? Here’s Republican National Committee Chair Ronna McDaniel trying to explain it all away: Yeah, but he still lost… With the shoe on the other foot, Democrats, can you now see how nonsensical and downright unbecoming that appears?
https://medium.com/discourse/we-always-give-democrats-grief-for-selling-smaller-than-expected-losses-as-wins-da77068523
['Eric J Scholl']
2019-11-07 22:09:48.809000+00:00
['Donald Trump', 'Politics', 'Republican Party', 'Democrats', 'Elections']
How to Create and Publish an npm Package
Easily create and publish an npm module to the npm repository Introduction In this tutorial, you will create your own npm package and publish it to the npm repository. By doing this, you will understand: How to create an npm package How to install it locally before publishing to test its functionality How to install and use the published package using ES6 import syntax or using a Node.js require statement How to manage semantic versioning of the package How to update the package with a new version and publish it again To be precise, you will build a package that will return a list of GitHub repositories of the specified username, sorted by the number of stars for each repository. Prerequisites You will need the following to complete this tutorial: A valid installation of Git version control Node.js installed locally, which you can do by following the instructions given on this page This tutorial was verified with Node v13.14.0, npm v6.14.4, and axios v0.20.0 Step 1 — Initial Setup Create a new folder with the name github-repos-search and initialize a package.json file:

mkdir github-repos-search
cd github-repos-search
npm init -y

Initialize the current project as a git repository by running the following command from the github-repos-search folder:

git init .

Create a .gitignore file to exclude the node_modules folder. Add the following contents inside the .gitignore file:

node_modules

Install the axios package that you will use to make a call to the GitHub API:

npm install axios@0.20.0

Your package.json will look like this now:

{
  "name": "github-repos-search",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "dependencies": {
    "axios": "^0.20.0"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

Inside the package.json file, the value for name is github-repos-search . So our package name after publishing to the npm repository will become github-repos-search . Also, the name has to be unique on the npm repository, so first check whether a package with that name already exists by navigating to https://www.npmjs.com/package/<your_repository_name_from_package_json> . Otherwise you will get an error while publishing the package to the npm repository if the name already exists. Step 2 — Writing the code Create a new file with the name index.js and add the following contents inside it:

const axios = require('axios');

// Fetch a user's repositories from the GitHub API, sorted by star count (descending)
const getRepos = async ({ username = 'myogeshchavan97', page = 1, per_page = 30 } = {}) => {
  try {
    const repos = await axios.get(
      `https://api.github.com/users/${username}/repos?page=${page}&per_page=${per_page}&sort=updated`
    );
    // Keep only the fields we care about, then sort by stars
    return repos.data
      .map((repo) => {
        return {
          name: repo.name,
          url: repo.html_url,
          description: repo.description,
          stars: repo.stargazers_count
        };
      })
      .sort((first, second) => second.stars - first.stars);
  } catch (error) {
    return [];
  }
};

getRepos().then((repositories) => console.log(repositories));

Let's understand the code first. You have created a getRepos function that accepts an optional object with username, page, and per_page properties. Then you used object destructuring syntax for getting those properties out of the object.
Passing an object to the function is optional, so it is initialized with default values when no object is passed, like this:

{ username = 'myogeshchavan97', page = 1, per_page = 30 } = {}

The reason for assigning an empty object {} as the default is to avoid an error while destructuring username from the object when no object is passed. Check out my previous article to learn about destructuring in detail.

Then, inside the function, you are making a call to the GitHub API, passing the required parameters to get the repositories of the specified user sorted by the updated date:

const repos = await axios.get(
  `https://api.github.com/users/${username}/repos?page=${page}&per_page=${per_page}&sort=updated`
);

Here, you are using async/await syntax, so the getRepos function is declared as async.

Then you are selecting only the required fields from the response using the Array map method:

repos.data
  .map((repo) => {
    return {
      name: repo.name,
      url: repo.html_url,
      description: repo.description,
      stars: repo.stargazers_count
    };
  })

Then that result is sorted in descending order of stars, so the first element in the list will be the one with the most stars:

.sort((first, second) => second.stars - first.stars);

If there is any error, you are returning an empty array in the catch block.

As the getRepos function is declared as async, you will get back a promise, so you are using the .then handler to get the result of the getRepos function call and print it to the console:

getRepos().then((repositories) => console.log(repositories));

Step 3 — Executing the code

Now, run the index.js file by executing the following command from the command line:

node index.js

You will see the first 30 repositories printed to the console. In the file, you have not provided a username, so by default my repositories are displayed. Let's change that to the following code:

getRepos({ username: 'gaearon' }).then((repositories) => console.log(repositories));

Run the file again by executing the node index.js command and you will see the output for that user. You can choose to pass the page and per_page properties to change the response, for example to get the first 50 repositories:

getRepos({ username: 'gaearon', page: 1, per_page: 50 }).then((repositories) => console.log(repositories));

Now you know that the functionality is working. Let's export this module so you can call the getRepos method from any other file. Remove the line below from the file:

getRepos({ username: 'gaearon', page: 1, per_page: 50 }).then((repositories) => console.log(repositories));

and add the line below instead:

module.exports = { getRepos };

Here, you are exporting the getRepos function as a property of an object, so if you later want to export any other function you can easily add it to the object. The above line is the same as:

module.exports = { getRepos: getRepos };

Step 4 — Testing the created npm package using the require statement

Now you are done with creating the npm package, but before publishing it to the npm repository, you need to make sure it works when used via a require or import statement. There is an easy way to check that.
Execute the following command from the command line from inside the github-repos-search folder:

npm link

Executing the npm link command creates a symbolic link for your current package inside the global npm node_modules folder (the same folder where our global npm dependencies get installed), so you can now use your created npm package inside any project.

Now, create a new folder on your desktop with any name, for example test-repos-library-node, and initialize a package.json file so you can confirm that the package is installed correctly:

cd ~/Desktop
mkdir test-repos-library-node
cd test-repos-library-node
npm init -y

If you remember, the name property in our package's package.json file was github-repos-search, so you need to require the package using that same name. Now, execute the following command from inside the test-repos-library-node folder to use the package you created:

npm link github-repos-search

Create a new file with the name index.js and add the following code inside it:

const { getRepos } = require('github-repos-search');

getRepos().then((repositories) => console.log(repositories));

Here, you have imported the package directly from the node_modules folder (this was only possible because you linked it using npm link). Now, run the file by executing it from the command line:

node index.js

You will see the correct output displayed. This proves that when you publish the npm package to the npm repository, anyone can use it by installing it and using the require statement.

Step 5 — Testing the created npm package using the import statement

You have verified that the package works by using the require statement. Let's verify it by using the ES6 import statement. Create a new React project by executing the following command from your desktop folder:

cd ~/Desktop
npx create-react-app test-repos-library-react

Now, execute the following command from inside the test-repos-library-react folder to use the package you created:

npm link github-repos-search

Now, open the src/App.js file and replace it with the following content:

import { getRepos } from 'github-repos-search';
import React from 'react';
import './App.css';

function App() {
  getRepos().then((repositories) => console.log(repositories));

  return (
    <div className="App">
      <h2>Open browser console to see the output.</h2>
    </div>
  );
}

export default App;

Start the React app by executing the following command from the terminal:

yarn start

If you check the browser console, you will see the output as expected. This proves that when you publish the npm package to the npm repository, anyone can use it by installing it and using the import statement.

Step 6 — Publish to the npm repository

Now you have verified that the package is working fine. It's time to publish it to the npm repository. Switch back to the github-repos-search project folder where you have created the npm package.
Let's add some metadata in the package.json file to display some more information about the package. Here is the final package.json file:

{
  "name": "github-repos-search",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "homepage": "https://github.com/myogeshchavan97/github-repos-search",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/myogeshchavan97/github-repos-search.git"
  },
  "dependencies": {
    "axios": "^0.20.0"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "github",
    "repos",
    "repositories",
    "sort",
    "stars"
  ],
  "author": "Yogesh Chavan <[email protected]>",
  "license": "ISC"
}

You have added homepage, repository, keywords and author for more information (these are optional). Make changes as per your GitHub repository.

Create a new GitHub repository and push the github-repos-search repository to GitHub. Navigate to https://www.npmjs.com/ and create a new account if you don't already have one. Open the terminal and, from inside the github-repos-search folder, execute the following command:

npm login

and enter your npm credentials to log in. Now, to publish it to the npm repository, run the following command:

npm publish

If you navigate to https://www.npmjs.com/package/github-repos-search in the browser, you will see your published package.

Now, let's add a readme.md file to display some information about the package. Create a new file with the name readme.md inside the github-repos-search folder with the contents from here, then try to publish it again using the npm publish command.

This time you will get an error. This is because you are publishing the module with the same version again. If you check the package.json file, you will see that the version mentioned in the file is 1.0.0. You need to increment it every time you publish a new change. So what should you increment it to? For that, you need to understand the semantic versioning concept.

Step 7 — Semantic versioning in npm

The version value is a combination of three numbers separated by the dot operator. Let's say the version is a.b.c.

The first value (a in a.b.c) specifies the major version of the package. It means this version has major code changes and might contain breaking API changes.

The second value (b in a.b.c) specifies the minor version, which contains minor changes but will not contain breaking API changes.

The third value (c in a.b.c) specifies the patch version, which usually contains bug fixes.

In our case, you just added a readme.md file, which is not an API change, so you can increment the patch version (the last number) by 1. So change the version inside the package.json file from 1.0.0 to 1.0.1 and run the npm publish command again. If you check the npm package now, you will see the updated npm package live here. To learn in detail about semantic versioning, check out my previous article.

Conclusion

In this tutorial, you created an npm package and published it to the npm repository. For the complete source code of this tutorial, check out the github-repos-search repository on GitHub. You can also see the published npm module here.

Don't forget to subscribe to get my weekly newsletter with amazing tips, tricks and articles directly in your inbox here.
https://medium.com/swlh/how-to-create-and-publish-an-npm-package-17b5e1744f26
['Yogesh Chavan']
2020-09-29 13:02:31.581000+00:00
['Nodejs', 'JavaScript', 'Development', 'Programming', 'React']
The One Question To Ask Yourself Before You Sell An Investment.
This simple question can be answered with a Yes or No. Your answer will give you clarity that few investment advisors can bring. Photo by exectium on Unsplash

I don't think there's anyone in the world who doesn't like a bargain. Maybe that's why it's so exciting to buy what you feel is a winning investment. With so much excitement and enjoyment associated with buying what may bring possible wealth and success, the other side of the equation is often overlooked.

Whether it's bonds or bitcoins, buy low and sell high is the basic mindset of all humans. It's probably the deepest groove in most investors' brains, no matter how sophisticated. A recent working paper called "Selling Fast and Buying Slow" (Akepanitaware, Di Mascio, Imas and Schmidt, 2018) confirms what many investors have long thought: buying stocks is easy, but selling them is hard. The same can be said for other types of investments.

You can Google "When to sell your investment" and get as many suggestions as there are investments. That's confusing to most investors, who understand that the circle of profit is not complete until the sale is made, and who seek advice on when the best time is to take that step. So, how do you come to the decision that is at least 50% of the profit equation — selling your investment?

The best answer is in the form of a question: If you had the cash in your pocket that this investment is worth today, would you take that cash and buy this same investment today?

If your answer is YES, then don't sell. If your answer is NO, immediately start the selling process.*

*Check with your investment advisor about the tax implications of any possible sale you may be considering.

There's a tremendous volume of investment advice offered nowadays, from bonds to bitcoins. No wonder everyone, including many advisers, is confused.
https://medium.datadriveninvestor.com/the-one-question-to-ask-yourself-before-you-sell-an-investment-9ad9c99fb5e5
['Brian Dickens Barrabee']
2021-08-27 06:44:09.501000+00:00
['Investment', 'Wealth', 'Advice', 'Selling', 'Investors']
Latest picks: In case you missed them:
https://towardsdatascience.com/latest-picks-geometric-ml-becomes-real-in-fundamental-sciences-a6cea171e7bf
['Tds Editors']
2020-12-31 14:27:33.083000+00:00
['Towards Data Science', 'Editors Pick', 'Machine Learning', 'Data Science', 'The Daily Pick']
A Brief Introduction to Change Point Detection using Python
The ruptures Package

Charles Truong adapted the ruptures package from the R changepoint package. It specifically focuses on offline change point detection, where the whole sequence is analyzed. Out of all of the Python change point options, it is the best documented. We can install it using the basic pip install command:

pip install ruptures

The package offers a variety of search methods (binary segmentation, PELT, window-based change detection, dynamic programming, etc.), as well as multiple cost functions to play around with. In this tutorial, we focus specifically on search methods.

Search Method Background

This section provides a brief background on some of the search methods available in the ruptures package, including binary segmentation, PELT, window-based change detection, and dynamic programming.

Pruned Exact Linear Time (PELT) search method: The PELT method is an exact method, and generally produces quick and consistent results. It detects change points through the minimization of costs (4). The algorithm has a computational cost of O(n), where n is the number of data points (4). For more info on the PELT method, check out this paper.

Dynamic programming search method: This is an exact method, which has a considerable computational cost of O(Qn²), where Q is the max number of change points and n is the number of data points (4). For more info on the dynamic programming search method, check out this paper.

Binary segmentation search method: This method is arguably the most established in the literature (4). Binary segmentation is an approximate method with an efficient computational cost of O(n log n), where n is the number of data points (4). The algorithm works by iteratively applying a single change point method to the entire sequence to determine if a split exists. If a split is detected, then the sequence splits into two sub-sequences (5). The same process is then applied to both sub-sequences, and so on (5). For more info on binary segmentation, check out this paper.

Window-based search method: This is a relatively simple approximate search method. The window-based search method "computes the discrepancy between two adjacent windows that move along with signal y" (6). When the two windows are highly dissimilar, a high discrepancy between the two values occurs, which is indicative of a change point (6). Upon generating a discrepancy curve, the algorithm locates optimal change point indices in the sequence (6). For more info on the window-based search method, check out this paper.

Code Example

In the code below, we perform change point detection using the search methods described above. We use the time series of daily WTI oil prices, from 2014 to now, pulled via the Energy Information Administration's (EIA) API (see this tutorial for more info on using the EIA API to pull data):

import eia
import numpy as np
import pandas as pd
import ruptures as rpt
import matplotlib.pyplot as plt

def retrieve_time_series(api, series_ID):
    """
    Return the time series dataframe, based on API and unique Series ID
    api: API that we're connected to
    series_ID: string. Name of the series that we want to pull from the EIA API
    """
    # Retrieve data by series ID
    series_search = api.data_by_series(series=series_ID)
    # Create a pandas dataframe from the retrieved time series
    df = pd.DataFrame(series_search)
    return df

"""
Execution in main block
"""
# Create EIA API using your specific API key
api_key = 'YOUR API KEY HERE'
api = eia.API(api_key)

# Pull the WTI oil price data
series_ID = 'PET.RWTC.D'
price_df = retrieve_time_series(api, series_ID)
price_df.reset_index(level=0, inplace=True)

# Rename the columns for easier analysis
price_df.rename(columns={'index': 'Date', price_df.columns[1]: 'WTI_Price'}, inplace=True)

# Format the 'Date' column and convert it into a date object
price_df['Date'] = price_df['Date'].astype(str).str[:-3]
price_df['Date'] = pd.to_datetime(price_df['Date'], format='%Y %m%d')

# Subset to only include data going back to 2014
price_df = price_df[(price_df['Date'] >= '2014-01-01')]

# Convert the time series values to a numpy 1D array
points = np.array(price_df['WTI_Price'])

# RUPTURES PACKAGE
# Change point detection with the PELT search method
model = "rbf"
algo = rpt.Pelt(model=model).fit(points)
result = algo.predict(pen=10)
rpt.display(points, result, figsize=(10, 6))
plt.title('Change Point Detection: Pelt Search Method')
plt.show()

# Change point detection with the binary segmentation search method
model = "l2"
algo = rpt.Binseg(model=model).fit(points)
my_bkps = algo.predict(n_bkps=10)
rpt.show.display(points, my_bkps, figsize=(10, 6))
plt.title('Change Point Detection: Binary Segmentation Search Method')
plt.show()

# Change point detection with the window-based search method
model = "l2"
algo = rpt.Window(width=40, model=model).fit(points)
my_bkps = algo.predict(n_bkps=10)
rpt.show.display(points, my_bkps, figsize=(10, 6))
plt.title('Change Point Detection: Window-Based Search Method')
plt.show()

# Change point detection with the dynamic programming search method
model = "l1"
algo = rpt.Dynp(model=model, min_size=3, jump=5).fit(points)
my_bkps = algo.predict(n_bkps=10)
rpt.show.display(points, my_bkps, figsize=(10, 6))
plt.title('Change Point Detection: Dynamic Programming Search Method')
plt.show()

Snapshot of the WTI Oil Price Time Series, pulled via the EIA API
Change Point Detection with Pelt Search Method, WTI Oil Price Time Series, 2014-Present
Change Point Detection with Binary Segmentation Search Method, WTI Oil Price Time Series, 2014-Present
Change Point Detection with Window-Based Search Method, WTI Oil Price Time Series, 2014-Present
Change Point Detection with Dynamic Programming Search Method, WTI Oil Price Time Series, 2014-Present

As you can see in the graphics above, the detected change points in the sequence differ based on the search method used. The optimal search method depends on what you value most when subsetting the time series. The PELT and dynamic programming methods are both exact (as opposed to approximate) methods, so they are generally more accurate.
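Before applying these detectors to real data like the WTI series, it can help to sanity-check them on synthetic data where the true change points are known. The sketch below assumes ruptures is installed and uses its pw_constant generator and precision_recall metric; the penalty and margin values are illustrative, not tuned:

import ruptures as rpt
from ruptures.metrics import precision_recall

# Piecewise-constant signal with 4 known change points
signal, true_bkps = rpt.pw_constant(500, 1, 4, noise_std=1.0)

# Exact detection with PELT; the penalty controls how many breaks are reported
pelt_bkps = rpt.Pelt(model="rbf").fit(signal).predict(pen=10)

# Approximate detection with binary segmentation, fixing the number of breaks
binseg_bkps = rpt.Binseg(model="l2").fit(signal).predict(n_bkps=4)

# Compare each result with the ground truth (margin = allowed index offset)
for name, bkps in [("PELT", pelt_bkps), ("BinSeg", binseg_bkps)]:
    p, r = precision_recall(true_bkps, bkps, margin=10)
    print(f"{name}: breaks={bkps} precision={p:.2f} recall={r:.2f}")

Raising the PELT penalty yields fewer, more conservative breaks, while lowering it finds more; experimenting this way on synthetic data makes the exact-versus-approximate trade-off easier to see before you apply a method to a real series.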
https://towardsdatascience.com/a-brief-introduction-to-change-point-detection-using-python-d9bcb5299aa7
['Kirsten Perry']
2019-08-15 11:16:23.649000+00:00
['Programming', 'Time Series Analysis', 'Statistics', 'Anomaly Detection', 'Data Science']
Recommender systems explained
In this article, I give an overview of the broad area of recommender systems and explain how the individual algorithms work. I will start with a definition. A recommender system is a technology that is deployed in an environment where items (products, movies, events, articles) are to be recommended to users (customers, visitors, app users, readers), or the opposite. Typically, there are many items and many users present in the environment, making the problem hard and expensive to solve.

Imagine a shop. A good merchant knows the personal preferences of customers. His or her high-quality recommendations make customers satisfied and increase profits. In the case of online marketing and shopping, personal recommendations can be generated by an artificial merchant: the recommender system.

To build a recommender system, you need a dataset of items and users, and ideally also interactions of users with items. There are many application domains — typically, users are customers, items products and interactions are individual purchases. In one example from our practice, users are card holders, items are card terminals and interactions are transactions. From such a dataset, rules can be generated showing how users interact with items. In this case, rules based on card transactions in the Czech Republic can be used to recommend shops to visit.

Knowledge based recommender systems

Both users and items have attributes. The more you know about your users and items, the better results can be expected. Below, I give an example of item attributes relevant for recommendation:

Item: TV2305
{
  "name": "Television TV2305",
  "short description": "HD resolution LED TV",
  "long description": "Enjoy a movie with your family on the weekend with this HD television from Micromax. With an 81cm (32) display, you can view every single detail with rich detail and clarity. This LED TV produces a resolution of 1366 x 768 pixels with a refresh rate of 60Hz to display crisper images and fluid picture movement. Play HD videos and enjoy a 178 degree viewing angle so that everyone in the family, even those at the sides, can see. Connect HD devices such as BluRay players, PlayStations or HD Set Top Boxes to this television as it has an HDMI port. You can also connect an HDD or USB device to this TV via its USB port. Get a surround sound effect in your living room as this TV comes with two 8W speakers to deliver crisp sound across all your media. With a 5 band equalizer and an auto volume leveler feature, you can enjoy a movie's soundtrack or the latest hit single the way it was meant to be heard.",
  "price": 250,
  "categories": ["Electronics", "Televisions"]
}

Such attributes are very useful, and data mining methods can be used to extract knowledge in the form of rules and patterns that are subsequently used for recommendation. For example, the item above is represented by several attributes that can be used to measure the similarity of items. Even the long text description can be processed by advanced NLP tools. Then, recommendations are generated based on item similarity, as sketched below. When users are also described by similar attributes (e.g. text extracted from the CVs of job applicants), you can recommend items based on user-item attribute similarities. Note that in this case we do not use past user interactions at all. This approach is therefore very efficient for so-called "cold start" users and items. Those are typically new users and new items.
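To make this concrete, here is a minimal sketch of attribute-based item similarity using TF-IDF over item descriptions; this is one common technique, not necessarily the one used in Recombee. It assumes scikit-learn is installed, and the three catalog entries are hypothetical stand-ins for real items like the television above:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog: item id -> text description (made up for this example)
items = {
    "TV2305": "HD resolution LED TV with HDMI and USB ports and 8W speakers",
    "TV9000": "4K ultra HD smart LED television with HDMI and USB ports",
    "RADIO1": "Portable FM radio with rechargeable battery and small speaker",
}

names = list(items)
# Turn each description into a TF-IDF vector, then compare all pairs
tfidf = TfidfVectorizer(stop_words="english").fit_transform(items.values())
sim = cosine_similarity(tfidf)  # sim[i, j] = attribute similarity of items i and j

# Rank the other items by their similarity to TV2305
i = names.index("TV2305")
ranked = sorted((j for j in range(len(names)) if j != i), key=lambda j: sim[i, j], reverse=True)
print([(names[j], round(float(sim[i, j]), 2)) for j in ranked])

On this toy catalog, the second television unsurprisingly comes out as the closest neighbor of TV2305; in a real system the vectors would also cover structured attributes such as price and categories.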
Content based recommender systems

Such systems recommend items similar to those a given user has liked in the past, regardless of the preferences of other users. Basically, there are two different types of feedback.

Explicit feedback is intentionally provided by users in the form of clicking the "like"/"dislike" buttons, rating an item by a number of stars, etc. In many cases, it is hard to obtain explicit feedback data, simply because the users are not willing to provide it. Instead of clicking "dislike" for an item which the user does not consider interesting, he/she will rather leave the web page or switch to another TV channel.

Implicit feedback data, such as "user viewed an item", "user finished reading the article" or "user ordered a product", however, are often much easier to collect and can also help us to compute good recommendations. Various types of implicit feedback may include:

Interactions (implicit feedback):
- user viewed an item
- user viewed item's details
- user added an item to cart
- user purchased an item
- user has read an article up to the end

Again, you can expect better performance of a recommender system when the feedback is rich. Content based recommenders work solely with the past interactions of a given user and do not take other users into consideration. The prevailing approach is to compute attribute similarity of recent items and recommend similar items. Here I need to point out one interesting observation from our business: recommending recent items themselves is often a very successful strategy, which of course works only in certain domains and for certain positions.

Collaborative filtering

The last group of recommendation algorithms is based on the past interactions of the whole user base. These algorithms are far more accurate than the algorithms described in the previous sections when a "neighborhood" is well defined and the interaction data are clean.

Very simple and popular is the neighborhood-based algorithm (K-NN). To construct a recommendation for a user, the k nearest neighbor users (those with the most similarly ranked items) are examined. Then, the top N extra items (non-overlapping with items ranked by the user) are recommended, as sketched in the code below. This approach works perfectly fine not only for mainstream users and popular items, but also for "long-tail" users. By controlling how many neighbors are taken into consideration for a recommendation, one can optimize the algorithm and find a balance between recommending bestsellers and niche items. A good balance is crucial for the performance of the system, which will be discussed in the second part of this article.

There are two major variants of neighborhood algorithms: item-based and user-based collaborative filtering. Both algorithms operate on a matrix of user-item ratings. In the user-based approach, for user u, a score for an unrated item is produced by combining the ratings of users similar to u. In the item-based approach, a rating (u,i) is produced by looking at the set of items similar to i (interaction similarity); then the ratings by u of similar items are combined into a predicted rating. The advantage of the item-based approach is that item similarity is more stable and can be efficiently pre-computed. From our experience, user-based algorithms outperform item-based algorithms in most scenarios and databases. The only exception can be databases with a significantly lower number of items than users and a low number of interactions. The k-nearest neighbor algorithm is not the only solution to the collaborative filtering problem.
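As an illustration, here is a minimal sketch of the user-based variant on a toy interaction matrix, using only NumPy; the matrix, neighborhood size and scoring are made up for the example, and a production system would add rating normalization, sparse data structures and fast nearest-neighbor search:

import numpy as np

# Toy interaction matrix: rows = users, columns = items, 1.0 = user interacted with item
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
], dtype=float)

def recommend_user_based(R, u, k=2, n=2):
    # Cosine similarity between user u and every other user
    norms = np.linalg.norm(R, axis=1)
    sims = (R @ R[u]) / (norms * norms[u] + 1e-9)
    sims[u] = -1.0                      # never pick the user as their own neighbor
    neighbors = np.argsort(sims)[-k:]   # indices of the k most similar users
    # Score items by similarity-weighted neighbor interactions, hide already-seen items
    scores = sims[neighbors] @ R[neighbors]
    scores[R[u] > 0] = -np.inf
    return np.argsort(scores)[::-1][:n]

print(recommend_user_based(R, u=0))  # items user 0 has not seen, best first

For user 0, the most similar users both interacted with item 2, so it is recommended first; tuning k shifts the balance between bestsellers and niche items, exactly as described above.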
The rule-based algorithm uses APRIORI to generate a set of rules from the interaction matrix. The rules with sufficient support are subsequently used to generate candidate items for recommendations. An important difference between K-NN and rule-based algorithms is the speed of learning and recall. Machine learning models operate in two phases: in the learning phase, the model is constructed, and in the recall phase, the model is applied to new data. Rule-based algorithms are expensive to train, but their recall is fast. K-NN algorithms are just the opposite — therefore they are also called lazy learners. In recommender systems, it is important to update the model frequently (after each user interaction), to be able to generate new recommendations instantly. Whereas lazy learners are easy to update, rule-based models have to be retrained, which is particularly challenging in large production environments. In Recombee, we designed a lazy variant of rule-based recommendation allowing us to mine rules on the fly and update the model instantly with incoming interactions.

Rules generated from interactions of users with items. Rules can be visualized, and this is a great tool to inspect the quality of data and problems in your database. The figure showed rules with sufficient support in the interaction matrix. Each arrow is a rule (or implication) indicating that enough users interacted with the source item and subsequently with the target item. The strength of the connection is the confidence.

Detail view of the rules above: each rule is represented by an arrow, and the size of the arrow is the confidence of the rule. These particular rules were generated from card transaction data provided by a bank. Items are "card terminals" and users are "card holders". Interactions are individual transactions. We omit labels because the data are confidential and a lot can be derived from the rules. In the first image, clusters of rules are apparent; these are apparently card terminals that are close in geographical location. There are interesting rules showing the shopping habits of users.

When you upload your data to our recommender (directly or using our Keboola app, which is even faster), we can generate such rules for you, and you can inspect interesting rules in your own data. But the primary purpose of the above rules is not data analytics but recommendations. One can generate recommendations for individual card holders based on their recent transactions (e.g. people who withdraw money from this ATM typically spend it in the following shops). A bank can build smart data products on top of such recommendations (e.g. offering a bonus for shopping in recommended places). Such data products can be generated almost everywhere. Do you have an idea how recommendation-powered data products can improve your business? Let us know and we can help you to validate it.

The last, and probably most interesting, class of collaborative filtering algorithms described here are the factorization-based approaches. The matrix of interactions is factorized into two small matrices, one for users and one for items, with a certain number of latent components (typically several hundred). The (u,i) rating is obtained by multiplying these two small matrices. There are several approaches to decomposing the matrices and training them; the simplest is a plain gradient descent technique. The error can be minimized by Stochastic Gradient Descent, Alternating Least Squares or the Coordinate Descent Algorithm.
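Here is a minimal sketch of that factorization trained with stochastic gradient descent, using only NumPy; the toy ratings, factor count, learning rate and regularization strength are illustrative values, not tuned ones:

import numpy as np

# Observed (user, item, rating) triples; everything else is unknown
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
n_users, n_items, n_factors = 3, 3, 8

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, n_factors))  # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, n_factors))  # item latent factors

lr, reg = 0.05, 0.02
for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]          # error on this observed rating
        pu = P[u].copy()               # keep the old user vector for the item update
        P[u] += lr * (err * Q[i] - reg * P[u])  # gradient step with L2 regularization
        Q[i] += lr * (err * pu - reg * Q[i])

# Predict an unobserved rating: user 2 on item 0
print(round(float(P[2] @ Q[0]), 2))

The same objective can be optimized with ALS or coordinate descent, as mentioned above; SGD is simply the easiest variant to write down.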
There are also SVD-based approaches, where the rating matrix is decomposed into three matrices. Generally, this is a very interesting and mature field of machine learning. Here is some further reading, if you are interested: the Facebook approach to scalable recommendation based on matrix factorization, dealing with implicit ratings, or various metrics.

As you can see, there are plenty of algorithms, and each algorithm has parameters that help us find a good plasticity of models. One of my further posts will discuss ensembles of recommendation models that can further improve the quality of recommendations.

How to measure the quality of algorithms? This is another complex issue. "Bad" recommendations are hard to detect and prevent in general. They are often domain specific and have to be filtered out.

Offline evaluation of recommendation algorithms, showing how ALS-based matrix factorization can outperform user-based K-NN. Details in the next post.

Our next post will be about the evaluation of recommender systems.

Online evaluation of quality, and optimization towards better performing recommendations. Details in the next post.

There are several strategies for evaluating recommenders, both offline and online; a tiny example of an offline metric follows below. In Recombee, accurate quality estimates help us optimize the parameters of the system automatically and improve the performance of the recommender for all scenarios. You can find out which combination of algorithms is most efficient for your data. We prepared a free instant account for such purposes, so you can experiment using our API or clients. Here is how to start and build your recommender system in hours. You can continue with our recent presentation on Machine Learning for Recommender Systems.
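As a concrete taste of the offline evaluation mentioned above, here is a minimal sketch of recall@k, one common offline metric, computed against held-out interactions; the recommendation lists and test sets are hypothetical, and a serious evaluation would add further metrics such as precision, catalog coverage and ranking quality:

import numpy as np

def recall_at_k(recommended, held_out, k=10):
    # Fraction of the user's held-out items that appear in the top-k recommendations
    hits = len(set(recommended[:k]) & set(held_out))
    return hits / max(len(held_out), 1)

# Hypothetical per-user top-k lists and held-out test interactions
recs = {"alice": [3, 7, 1, 9], "bob": [2, 4, 8, 5]}
test = {"alice": [7, 2], "bob": [4]}

scores = [recall_at_k(recs[u], test[u], k=4) for u in recs]
print(f"mean recall@4 = {np.mean(scores):.2f}")  # 0.75 for this toy data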
https://medium.com/recombee-blog/recommender-systems-explained-d98e8221f468
['Pavel Kordík']
2018-06-04 08:34:54.573000+00:00
['Data Science', 'Machine Learning', 'Recommendation System']
I’m Officially Breaking Up With Organized Christianity
I'm Officially Breaking Up With Organized Christianity

The three things that may have soured me on religion — but not faith — for good

Photo by Diana Simumpande on Unsplash

Although I was raised in a Protestant household, I told my parents when I was about 10 that when I grew up, I was either going to be Jewish or Catholic. I've talked about this elsewhere, but the church that we attended when I was little, the main Presbyterian behemoth in town, never did much for me. The minister who helmed the place when I was a kid was of the fire-and-brimstone variety, which just never felt right to me; then, when I was a bit older and in a confirmation class, I had a run-in that left me in tears when I asked how we knew our version of belief was "right" and, if I remember correctly, "not a cult." Both Judaism and Catholicism seemed warmer to me and less tied up with the person behind the pulpit.

I've lived my entire adult life in the Deep South, and I've grown progressively angry at and embittered with organized Christianity the longer I've lived here. This week, things have just finally come to the breaking point: I'm officially disgusted with organized Christianity, past the point that I can even consider participating in it in any real way at any time in the foreseeable future.

I realize that this is a vast generalization, but I heard the venerable Jon Meacham on television yesterday talking about the separation of church and state, a concept enshrined by our Framers, and something just clicked for me. He pointed out, quite correctly, that this cornerstone of our democracy was intended by Jefferson and all the rest to go both ways: In other words, not only did they intend to keep Christianity, and all other faiths, separate from the three branches of American government for the government's sake, but as Christians themselves, they also intended to put space between faith and democracy for the sake of Christianity. To entwine the two would sully the faith and hinder the followers' progress in regaining unity with the divine. In many places in the Bible, Christ and the apostles lay down the challenge to be in the world but not of the world. This was the other side of the Framers' goal that Meacham highlighted on TV. It was with that that I realized I'm officially done. But more on that in a minute. It was a slow boil. Here's a little more on how I arrived at this place.

The whole Catholic thing: Shortly after graduating from college, I made true on my word and converted to Catholicism. I love the rites of Catholicism — they're very comforting to me — and the liturgy that is so focused on God's word. But the Church does not cohere with my beliefs about the equality of women and God's love for all His people. Here's what I've learned in turning 50, and stick with me, because I know this is going to sound a little convoluted. I believe God loves me just as I am, with my beliefs and all. As such, part of loving God is loving who I am, and I've arrived at a point in my life where in order to honor myself, I have to stand by what I believe in. So I can't move forward in good conscience at this point in a church that does not honor women as equals, and that does not love all God's children and honor them as equals. No rite, no matter how comforting, is worth scorning one of God's children, telling a woman she is unequal to a man or a gay person that he or she is unequal to someone who does not find his or her expression of love in that same manner.
Gay ministers and gay marriage: My ex-husband happens to work in a Protestant church. His church is warm and welcoming, and I think of them as fairly progressive, although they are not necessarily typical of the entire denomination. They recently had their nationwide annual meeting, the centerpiece of which was a discussion of gay marriage in the church and gay ministers, and these proposals were hotly contested and shot down in favor of sticking to traditional doctrine relating to both. My ex was keeping me posted on the progress of the discussion and how divisive it had been, and I also had a brief conversation with the former minister at his church, a woman with whom I'd become somewhat close while she was there. She told me that it had been a heartbreaking time for many people. Again, I have so much difficulty understanding how such good-intentioned people can put so much energy into actively denying an equality in Christ's love to their brothers and sisters. It just makes no sense to me.

Christianity in government and Christian Zionism: Most disturbing of all, however, is the outsized influence that Christianity is having on the United States government, and that Christian Zionism wields on our foreign policy. This country was not created to be run as a Christian state, and yet it is increasingly being operated exactly as such, remarkably by a man who is on the books as having paid hush money to both his mistress and a porn star to keep word of his affairs quiet prior to his election to the highest office in the land. The people at the controls of our country are using their personal, faith-based ideology to dictate the policy of our government as opposed to a didactic assessment of what is actually best for the United States and her people. Our Secretary of State sits in Israel being interviewed by the Christian Broadcasting Network, drawing parallels between Queen Esther and Trump and stumping for his ultimate campaign costs. Altogether, this is both an inappropriate way to develop and execute public policy and a ridiculous way to suck up to the base, hiding policy within platitudes.

I don't want to elect a Savior. I already have one of those. I'd also like for my government to keep its religion off of my country's policies, thank you very much. I'd simply like to elect a President and let everyone decide how to run their own lives according to their own particular decisions on faith. Would it be possible to just do that next time? Please let me know the answer to that — but don't put it in the bulletin. I won't be in church.

I'm starting a mailing list to stay in touch with folks about upcoming developments with my writing and I'd really, really love it if you'd join by signing up here. There'll be some cool free stuff and giveaways very soon and you'll be the first to know about it if you do!
https://medium.com/the-ascent/im-officially-breaking-up-with-organized-christianity-370dd1d873f1
['Julie Mcclung Peck']
2019-03-25 16:31:21.327000+00:00
['Current Events', 'Christianity', 'Politics', 'Religion']
3 Product Trends From The Blurring Intersection of Fashion + Technology
The sewing machine may have been one of fashion's foremost tech-inspired revolutions. Reducing production costs with its efficiency, it enabled the majority of people to own multiple outfits and engage in more stylistic expression. The advent of the internet and other emerging technologies is similarly contributing to a new cultural dynamic — not only influencing, but even integrating with one of our oldest creative mediums. Below are 3 observations of product trends around the blurring intersection of fashion and technology that companies can potentially learn from.

1. Messaging Through Materials / Manufacturing

Japanese denim and the lore surrounding its quality created its own cult following. Knowledge of its history, construction and character had often been reserved for insiders who cared enough to know. However, with the internet enabling enthusiast communities, and social media encouraging brands to become publishers, a new platform for attention to detail has emerged. Today, materials and manufacturing can champion a compelling brand story to a much larger audience of interested consumers. For instance, consumers concerned with sustainability might find materials made out of ocean garbage like Bionic Yarn (creative directed by Pharrell Williams) speaking to their unique values and interests beyond just aesthetics. Adidas Futurecraft is an example of how innovative manufacturing (3D printing) can also be a focus around which a bigger brand story is built. Background is increasingly finding a place in the foreground and in the minds of consumers.

2. Product "Hacking" As Brand-Building

A visionary article in Fast Co Design speculates that in the near future, "products will no longer be bought off the shelf. Rather, individuals will create personalized versions, developing their own 'brands within brands' in the process." In the similar spirit of how a software engineer might build a creative application utilizing code from an existing API, maker-minded individuals and communities are already creating their own physical products and accessories. Clothing companies are beginning to incorporate more open-sourced possibilities in the form of creative influencer partnerships, DIY enablement, and customizable offerings. Brands have always been built by companies in collaboration with their communities, but today's technological enablement is allowing consumers to play a more hands-on role in the actual product-making process.

3. More Seamless Form & Function

Just as we customize our smartphones to fit our individual needs, versatile clothing is becoming increasingly popular in our on-demand, on-the-go lifestyles. This is evident in the rise of athleisure and outdoor technical performance fabrics being adapted for multi-purpose needs. Initiatives like Project Jacquard by Google hint at the next wave of how our clothing may become more interactive and dynamic in terms of its versatility. Examples ranging from haptic technology integration, like Nadi X's yoga pants that correct your posture, to bioengineered possibilities, like bacteria-coated sportswear that allows for night-glowing and actuation (for breathability), suggest a new potential — where what we wear may ultimately function more like a self-aware second skin. While technology is enabling exciting new possibilities, there is still an uncanny valley to cross, so to speak.
As Benedict Evans eloquently concluded in Fashion, Maslow and Facebook’s control of social, “Facebook writes algorithms, and designers cut the cloth, but that doesn’t mean they control what people look at or what people wear.” The trends that gain traction and ultimately become a part of our culture will likely be the most authentic reflections of who we are, at least at that particular moment in time. Thank you to @leanne_luce @stevenpurvis @maddiewest and @mallorywatkins for your feedback, conversations, and inspiration.
https://medium.com/tradecraft-traction/3-product-trends-from-the-blurring-intersection-of-fashion-technology-dcf653e808a0
[]
2019-06-09 18:34:18.332000+00:00
['Wearables', 'Fashion', 'Technology', '3D Printing', 'Startup']
For Effective UX Design Workshops, Don’t Be a Lone Wolf
(Based on a presentation given at Workday Design Week 2018)

Have you run great, effective UX design workshops that you felt were full of fantastic ideas and energetic conversations, only to later find yourself alone in a room full of post-its, wondering how to keep the ideas alive? You synthesize the findings and present exciting new concepts to your team, but despite the good intentions and effort of everyone involved, little of it ends up in the product, and the ideas are never seen or heard of again. This is a common experience we have as designers. Why? To find some answers, let's take a look at how we can make workshops — and the design process as a whole — more effective and impactful.

UX Workshops: Expectations vs. Reality

Design workshops are exciting. The energy levels before, during, and after a workshop often look like this:

Image by Stella Zubeck

Before the workshop, expectations are high as you anticipate what you'll achieve. You'll work to make the workshop happen, which takes some energy. The workshop itself is a peak, where everyone participates, generating new ideas. At the end of the workshop, the team is energized and aligned. The aftermath of workshops is often different. After the workshop, you're full of ideas, but the energy drops and momentum slows. In the worst-case scenario, little of the output is used in the long run.

The UX Workshop Design Process

Let's consider the process for designing and running a workshop. The phases look like this: You make a plan, run the workshop, synthesize what the team generates, and then find ways to share and use the output. Typically, we focus all our energy and thought on the workshop. Instead, let's take a look at something a little different. A lot of what makes a design workshop successful is how you bring others in before and after the workshop. Planning and communication early in the process are crucial. After the workshop, there's magic in the synthesis, sharing, and discussion. Keeping the energy up after a workshop requires thoughtfully bringing in your team. Continuing to involve the team in the design process will help retain some of that energy and vision. You can even make it a personal goal to actively include others in your process, especially after design workshops. Although there can be an image of the designer as a lone wolf, heroically defending user experience, there's no good way to "lone wolf" a workshop. As designers, it's important that we consider ourselves as part of a team, and it's on us to change the idea of the "lone wolf" designer. Let's start by thinking about ways to improve collaboration and inclusion — before running a workshop.

Pre-Workshop: Planning

Workshop planning is the time to ask lots of questions. Make sure you have a clear reason to do a workshop and actionable goals in mind. These can be as broad or as specific as you like (for example, get to know the team, sketch design concepts or generate solutions for a problem). Here are some key questions that can clarify early thinking before you start building a UX workshop agenda:

What are the goals of the workshop? What is its purpose?
What is the scope?
What will success look like?
Are there specific UX workshop techniques you want to use?
What artifacts do you want to make during and after the workshop?

For more questions to guide you, here's a UX workshop planning reference guide.
UX Workshop Planning Reference Guide

After collecting your thoughts, you can choose activities to target the gaps in your knowledge and generate the artifacts you want. Then it's time to bring in your team and share your vision for the workshop. To include others in a workshop plan early in the process, make the ideas:

Tangible. It helps to get your thoughts out, on a whiteboard or on paper.
Editable. Make sure you create something you can edit collaboratively — a place to collect notes, links, sketches, images, or anything else you might want to refer to later.
Shareable. While brainstorming, it's useful to generate something you can use to communicate and get feedback from other designers, PMs and your team.

An outline is a fast and easy way to include others in your earliest workshop thinking. It may seem obvious, but for any workshop you're planning, it's helpful to outline:

Goals. What do you want to achieve during the workshop?
Deliverables. What artifacts do you expect to generate?
Attendance. Who, and how many people, should attend? Will they be divided into groups?
UX workshop agenda. What activities or methods do you want to use? How much time will you need for each?

Pre-Workshop, Continued: Getting Buy-in

Before building your activities, it's a good idea to make sure you and your stakeholders are aligned. You'll want to communicate that workshops take time to build. Get buy-in from your stakeholders on:

Activities. Is everyone on board with the goals of the workshop?
Communication, expectations, and participation. Make sure there is a communication plan for the workshop. Set expectations with participants early on. Let them know their participation is valued, and whether their attendance is optional or required.
Logistics, partnership, and ownership. Make sure there is a clear plan for logistics. Who will do what? Who is facilitating activities? Who is gathering materials? Do you need to order food, or reserve a space? Make sure all roles are covered and documented.

How do you get this buy-in? Use an outline to go over your proposals for each of these areas. You can set partnership expectations with PMs, get feedback on activities, figure out who will participate, and even align on logistics — all before starting to build any workshop materials.

Building and Running the Workshop

Choosing methods, sequencing activities and building workshops is an art in itself, and a meaty topic. There are many resources available online for advice on planning and designing workshop activities. Some of our favorite resources on design thinking workshops are available from Ideo's Design Kit and Stanford University's D.School. Nielsen Norman Group's articles page is also worth exploring, with lots of inspirational content and information on user-centered design methods and workshops.
Workshops are fun for the whole team — they're a peak moment for inclusion and team participation and, therefore, a peak of energy and momentum. While planning and running a workshop takes time and effort, more heavy lifting comes after, when you start to synthesize and share the output.

Illustration by Matt Kistler

Post-Workshop: Synthesis, Refining and Sharing

After a workshop, there will probably be lots of notes, ideas to clarify and sketches to refine. You'll have more decisions to make — about what to focus on and what to validate moving forward. It isn't always clear how and when to bring stakeholders into this part of the process. Many people think a workshop is over after participating in the event. As a designer, you provide structure for what happens after your workshop, so it's up to you to keep your team involved. Designers often hesitate to ask others to offer more time after a workshop. This is a huge missed opportunity for shared understanding and collaboration. Don't synthesize by yourself, synthesize as a team!

Synthesis methods are another art form. While we can't cover the world of synthesis in detail here, check out these great methods to help you synthesize and turn data into insights. Here are some thoughts on how to make synthesis activities more inclusive and interactive for your team:

Schedule time for synthesis, and do it with others. Don't be a lone wolf.
Be open to what the team generates and learns. Document new ideas and developments.
Digitize, digitize, digitize. Be sure to translate findings, concepts or sketches into a shareable format.
Share the artifacts produced with participants and stakeholders.
Repeat. Don't be afraid to share out more than once, and via multiple channels. People lose links, miss Slack messages, and don't always pay attention to their email.

Post-Workshop: Using the Output and More Sharing

So you ran a great workshop and took the time to work through what the team produced. Now it's time to put that output to work and start refining, reviewing and designing. Have a point of view about how you envision using the output of your workshop, or using it as fodder for design. As with any design process, review and feedback on workshop output should be part of the process. Some of the most effective sharing after workshops is:

Cross-functional. Share widely, not just within design. Tell the story in whatever way you can.
Iterative, and interactive. After a workshop is a good time to post deliverables up on the wall or online, and make them accessible to others. Make it a team effort to iterate, comment and edit. This will help clarify what you're trying to generate, be it a story, experience map, user flow, sketches or vision statement.
Sketchy, not precious. Working at low-to-mid fidelity is a great way to include others in the post-workshop design process. Non-designers feel more comfortable providing feedback without feeling like the work is "done." It also helps emphasize that the team's feedback on the designs is an important part of the process — that design isn't done until it's been shared.
Informal. The less things feel like a formal design review, the better. More people will speak up and share good ideas.

Post-Workshop: Wrap and Recap

After synthesis is complete, it's time to do a final polish on deliverables and wrap everything up with a bow. Try to round up everyone who participated in the workshop and get them together again. Maybe there's a wider audience that can benefit from learning about the workshop. This is also a great time to pause, look back and celebrate with everyone who participated in the workshop. Ideas for wrapping up a workshop:

Do a formal recap. Depending on how much time has passed since the workshop, you may want to provide a refresher on the workshop activities and deliverables. When was it? Who was there? What did the team do? Why?
Look back and celebrate the work. Take the time to thank everyone who participated, and show them how their ideas will be used moving forward.
Reflect and share learnings, takeaways or opportunities. Were there any surprises? Reflect back on what the team created and, of course, take the time to share design's unique point of view.
Share deliverables both formally and informally. Slack the deliverables to the team, make a folder, or make a quick website or video. Plan to follow up on the work at a later date and let the team know what happened to their ideas. They'll be excited to share in the ownership and see the forward progress.

Closing Thoughts

As designers, much of the burden of follow-through on new ideas is on us. This means not only sharing ideas right after a workshop, but much later on. It means keeping concepts alive and accessible. It's our responsibility to keep ideas on the table, to validate, and to share them. This is one of our powers as designers. We make new possibilities come alive through our skills as problem solvers, storytellers, collaborators and prototypers. It's also our job to understand when to include others in the design process, and when to move quickly to keep good ideas from fading. How we decide to include others, how we facilitate design, and how we communicate design ultimately strengthens our output. Actively including others in our design work amplifies our impact, and by doing so, improves the products we work on. When you share, and when you include others in design, have confidence that what you're doing adds value. It's up to you to keep your stakeholders involved, and to make great design come to life.
To learn more about the collaborative design methods we’re rolling out across Workday, check out the Workday Design Playbook.
https://medium.com/workday-design/for-effective-ux-design-workshops-dont-be-a-lone-wolf-8cbd4a8a733c
['Workday Design']
2020-10-12 22:17:03.826000+00:00
['Product Design', 'Design Thinking', 'Design Workshop', 'Collaboration', 'UX']
Lions and Gazelles
As the frat boys roll into the Lower East Side––their collars popped and their Docksiders worn with salt water from the deck of Daddy's boat––I stay close to the walls hoping to go unseen. Their girlfriends are impossibly tall, their legs going all the way up, with boots that cover their knees and skirts that go nowhere. They look foreign to me, as if sometime a few thousand years ago we split off in separate directions down the evolutionary road. They are gazelles and lions while I'm a fisher cat slinking through the shadows. But back at my apartment, with the music switching between Lana Del Rey and Richard Thompson, there are limbs and whiskey that have come from a million different directions. We've come from old families and broken ones. We've come from black sands and swamps, and we've come from towering buildings with doormen who raised us as much as anyone else. We've come from trailers and mansions, our bodies and minds as varied as the changing streets that crawl off into the hidden places we don't yet know. Sometimes I wonder if our kissing and undressing is simply another way to cope with the swirling mess outside our windows. If our naked bodies, slick with sweat and beautifully bruised, let us melt into the night as much as the heels and backwards hats do. We laugh loudly and often, even as thighs part and lips become wet with anticipation. We move between staring in awe and drifting off behind closed eyes while the world holds us without thought. The elegant animals on the streets howl into the evening as we pull sounds from our own lips, drowning out the noise from below.
https://medium.com/tales-from-new-york/lions-and-gazelles-100ddcdea88e
['Ben Goodwin']
2020-08-27 15:19:01.534000+00:00
['Prose', 'Love Letters', 'Short Story', 'Fiction', 'New York City']
The Breonna Taylor Verdict has Broken our Hearts
The Breonna Taylor Verdict has Broken our Hearts It may go down as one of the most devastating verdicts of our time Photo Cred: Breonna Taylor’s family It may go down as one of the most devastating verdicts of our time. On the chilly afternoon of Wednesday 23rd September 2020, we did not receive justice for Breonna Taylor. We anticipated it, when we saw that Daniel Cameron, the Attorney General of Kentucky, was a proud Trump supporter, and was given the opportunity to speak at the Republican National Convention. We expected it when we saw that it took several months to get to this decision. It was clear when they barricaded the streets days ahead of the announcement, and when they unleashed swarms of riot-gear clad police hours before the verdict was read. And we remember the colossal national and global outcry it took to get here. We remember how even the influence of the most powerful black celebrities, coupled with the voices of millions of average citizens, was not enough to give Breonna’s life the justice she deserved. For the first time in 20 years, Oprah Winfrey did not appear on the cover of her “O” magazine; the September 2020 issue featured an image of Breonna Taylor instead. From Alicia Keys to LeBron James, from young sports champ Naomi Osaka to the most recognizable global superstar Beyoncé - brown-skinned people with more wealth than most of us could ever hope to see, used their platform and power to bring light to Breonna Taylor’s case. Renowned activist Tamika Mallory organized sit-in protests at the AG’s house in an attempt to urge him to stop dragging his feet on arresting and prosecuting the murderous cops. He had over 80 of the protesters arrested and charged. In contrast, only one of the three officers deemed responsible for Breonna Taylor’s murder will be charged, not for manslaughter as the family of the deceased wanted, not even for anything related to this young woman’s death — Brett Hankison will face 3 counts of “Wanton Endangerment”, for the bullets he fired having struck the walls of neighboring apartments, carrying a maximum of 5 years in prison. There are simply no words to describe the collective indignation we are feeling. This outcome does not stay in the realm of world news. It seems to be a reinforcement to each person, no matter their views, that black life is not to be valued or respected. In the midst of such a critical moment in history, when we hold our breaths for the revelation of who the next President will be, this outcome is a cruel reminder that those in power will do whatever it takes to keep us subjugated. Oppressive white landlords have seen this. Racist white bosses and supervisors have seen this. Every white person in this country, in whatever position of power they hold, big or small, real or imagined, has just been told that their supremacy reigns uncontested and supported by the world’s biggest empire. Every person of color who has decided to align with white supremacy has also seen this, and is encouraged to keep assimilating to the fullest extent possible. All this, after we have exhausted our energy trying to ignore Trump, rally voters, survive a pandemic, fulfil work and school requirements, resist oppression at our jobs, press through ordinary racist encounters, and be hopeful for small victories that will keep us going. Small victories like the one we have now been shamelessly denied.
While prominent anti-racist leaders assert that the fight is not over, that we will regroup and justice will be served, I must humbly admit that my faith has faltered. I am absolutely crushed by the disappointment, perhaps a little mortified by the entire situation. Millions of people rallied and cried out and resisted in a way that has never been done before, but the storm raged on willfully deaf ears. It was Breonna Taylor today, but tomorrow it could be any one of us. The State will always protect itself and its agents of destruction. Justice is being held hostage by an inherently evil system, and at this point, only God can deliver it.
https://medium.com/an-injustice/the-breonna-taylor-verdict-has-broken-our-hearts-cff4ff85bb0
['Anastasia Reesa Tomkin']
2020-09-25 18:39:20.666000+00:00
['Protest', 'Racism', 'Justice', 'Police Brutality', 'BlackLivesMatter']
Make Python Hundreds of Times Faster With a C-Extension
Photo by Michael Dziedzic on Unsplash Python is one of the most popular programming languages. It’s learned and used by students, teachers, and professionals around the world. Python provides a simple, straightforward, interpreted language that fosters creativity and freedom. Programmers have access to a community of hundreds of thousands of developers that provides an immense selection of open source packages for Python. The language manages garbage collection, memory allocation, pathnames, file descriptors, and much more that a programmer would normally need to worry about in a lower-level language. Yet, that’s both an advantage and a disadvantage. Python sometimes takes care of too many things. It blurs the fine details of what’s really happening under the hood. If you feel that way, this post is for you. We will go over the basics and fundamentals of making a C-extension to the Python interpreter. Why make a C extension? C extensions are fast, performant Python libraries that can serve several purposes. Those include: High Performance: C extensions can perform hundreds of times faster than equivalent code written in Python. This is because C functions are natively compiled, and just a thin layer over assembly code. Additionally, some tasks can be slower to perform in Python, such as string processing. Python has no concept of a character, just strings of different lengths. C, by contrast, has a very raw and efficient string composed purely of a block of memory terminated with a \0 character. Overall, C extensions provide a way to gain a powerhouse of performance in Python. Wrapping: Lots of widely used software libraries are written in C. However, many application-level systems, like web development frameworks or mobile development frameworks, are written in languages like Java or Python. C functions can’t be called directly from Python, because Python does not understand C types without converting them to Python types. However, extensions can be used to wrap C code to make it callable from Python. The building and parsing of Python types will be explained later. Low Level Tools: In Python, the degree to which one can utilize low-level and operating-system-level utilities is quite limited. Python uses a Global Interpreter Lock (GIL) that allows only one thread at a time to execute Python bytecode. This means that although some I/O-bound tasks like file writes or network requests can happen concurrently, access to Python objects and functions cannot. With C, a program has complete and unrestricted freedom to any resources it can load and use. In a C extension, the GIL can be released, allowing for multi-threaded Python workflows. The Python C API The Python language provides an extensive C API that allows you to compile and build C functions that can accept and process Python typed objects. This is done by writing a special form of a C library, one that is not only linked with the Python libraries, but creates a module object the Python interpreter imports like a regular Python module. Before we get into the building steps, let’s understand how a C function can process Python objects as input and return Python objects as output.
Let’s look at the function below:

#include <Python.h>

static PyObject* print_message(PyObject* self, PyObject* args)
{
    const char* str_arg;
    if(!PyArg_ParseTuple(args, "s", &str_arg)) {
        puts("Could not parse the python arg!");
        return NULL;
    }
    printf("msg %s ", str_arg);
    // This can also be done with Py_RETURN_NONE
    Py_INCREF(Py_None);
    return Py_None;
}

The type, PyObject* , is the dynamic type that represents any Python object. You can think of it like a base class, where every other Python object, like PyBool or PyTuple , inherits from PyObject . The C language has no true concept of classes. Yet, there are some tricks to implement an inheritance-like, polymorphic system. The details of this are beyond the scope of this guide, but one way to think about it is this:

#define TYPE_INFO int type; \
                  size_t size

struct a_t { TYPE_INFO; };
struct b_t { TYPE_INFO; char buf[20]; };

struct b_t foo;
// Fields are always ordered, this will work
((struct a_t*)&foo)->type

In the above example, both a_t and b_t share the same fields at the beginning of their definitions. This means casting struct b_t* to struct a_t* works, because the fields of a_t compose the same, prefixed portion of b_t . Parsing Arguments The function has two parameters, self and args . For now, think of self as the object the function is called from. As stated in the beginning, we will be writing our function to be called from the scope of the module. The function parses the objects within args in this statement:

if(!PyArg_ParseTuple(args, "s", &str_arg)) {

Here, the args parameter is actually a PyTuple , the same thing as a tuple in Python, such as x = (1, 2) . In the case of a normal function call in Python, with no keyword args, the arguments are packed as a tuple and passed into the corresponding C function being called. The "s" string is a format specifier. It indicates we expect and want to extract one const char* as the first and only argument to our function. More information on parsing arguments is available in the official Python docs. Returning Values In the last part of the function, we have the following statements:

Py_INCREF(Py_None);
return Py_None;

In the Python C API, the None type is represented as a singleton. Yet, like any other PyObject , we have to obey its reference counting rules and accurately adjust those as we use it. Other C Python functions may build and return other values; for more info on building values, see the official docs. This particular function is only meant to print, and by convention those usually return None . C Extensions Structure Now, we can explore the structure of how we compose the extensions that Python will actually be able to import and use within the Python runtime. To do that, we need three things. First is the definition of all the methods the extension offers. This is an array of PyMethodDef , terminated by an empty version of the struct. Next is the module definition. This basically titles the module, describes it, and points to our list of method definitions. Just like in pure Python, everything in an extension is really an object.
Lastly, we have a PyInit_ method that initializes our module when it’s imported and creates the module object:

static PyMethodDef myMethods[] = {
    { "print_message", print_message, METH_VARARGS, "Prints a called string" },
    { NULL, NULL, 0, NULL }
};

// Our Module Definition struct
static struct PyModuleDef myModule = {
    PyModuleDef_HEAD_INIT,
    "DemoPackage",
    "A demo module for python c extensions",
    -1,
    myMethods
};

// Initializes our module using our above struct
PyMODINIT_FUNC PyInit_DemoPackage(void)
{
    return PyModule_Create(&myModule);
}

Note: The name in the PyInit_ function and the name in the module definition MUST match. This code, along with our previous print_message function, should be placed in a single C file. That C file can be built into a C extension with a special setup.py file. Below is an example, which is also included in this repo:

from distutils.core import setup, Extension

# A Python package may have multiple extensions, but this
# template has one.
module1 = Extension('DemoPackage',
                    define_macros = [('USE_PRINTER', '1')],
                    include_dirs = ['include'],
                    sources = ['src/demo.c'])

setup (name = 'DemoPackage',
       version = '1.0',
       description = 'This is a demo package',
       author = '<first> <last>',
       author_email = '[email protected]',
       url = 'https://docs.python.org/extending/building',
       long_description = open('README.md').read(),
       ext_modules = [module1])

This setup file uses the Extension class from distutils.core to specify the options, such as definitions for the C preprocessor, or an include dir to use when invoking the compiler. C extensions are always built with the compiler that the running Python interpreter was built with. The Extension class is very similar to a CMake setup, specifying a target and the options to build that target with. In this repo, you will also find a MANIFEST.in file. This specifies other files we want packaged in the distribution of our Python package. It is not required unless you intend to publish the C extension. Building and Installing You can then build and install the extension with the following commands.
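The commands themselves are cut off above, so here is the standard distutils workflow you would typically run from the project root (a gap-fill based on the stock setup.py commands, not quoted from the article):

$ python setup.py build
$ python setup.py install

Once installed, the extension imports like any regular module. A minimal usage sketch in Python, reusing the DemoPackage module and print_message function defined above:

# usage sketch, assuming DemoPackage was built and installed as above
import DemoPackage

# Calls into the C function; the "s" format specifier means it
# expects exactly one string argument, and it returns None.
DemoPackage.print_message("hello from C")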
https://medium.com/swlh/make-python-hundreds-of-times-faster-with-a-c-extension-9d0a5180063e
['Joshua Weinstein']
2020-07-26 19:36:13.491000+00:00
['Coding', 'Software Development', 'Programming', 'Python', 'Technology']
Lessons for 2021: Glass Half Full?
Photo by Nolan Simmons on Unsplash One of the things about being a “grown-up” is that we have choices. Choices to act, or not. Choices to speak up, or stay quiet. Which path to take, which career to pursue, who to befriend, shaken or stirred, dark chocolate or milk, well done or rare, swipe left or right, whether to be kind or cruel. The older I get, the more I see the choices, and increasingly I recognize I’ve often made choices in the past without doing very much thinking about my alternatives. It’s probably natural: automatic decisions are efficient, time-saving, even comforting. Yet this year has been a real lesson. There have been a lot of things that made me feel like my choices were very limited, and there has been more time to think and to notice, and to consider alternatives. I didn’t have much choice when it came to public health orders to wear a mask, protect others. Those choices were made for me, and I found them easy to agree with, even reassuring: someone who knows more than me is helping us all make the right choices. Nonetheless, when it came to forgoing a European vacation, not seeing family and friends in person, doing way too much work over Zoom — those choices were also made for me, and sat less comfortably. Yet, they were also the right thing to do. So like (mostly) everyone else, I sucked it up, and hoped for better times ahead. With increasing frustration, I also watched other people make choices that didn’t sit well: ignoring the safety of others in the pursuit of “rights”; issuing pardons to convicted criminal co-conspirators; ignoring the climate crisis. So often these choices involved putting personal gain above public interest, even in the case of those who have sworn oaths to serve. It’s enough to make a person depressed. Still, I’ve made a choice. I’m choosing to notice the things that give me hope. The front line workers who put their lives on the line every day for the benefit of the rest of us. The brilliant scientists who worked tirelessly to create novel vaccines in record time. The public servants like Anthony Fauci and Bonnie Henry who demonstrate their commitment to us every day, in spite of personal attacks and cheap criticism from armchair experts. The small acts of kindness that seem to be everywhere: paying it forward at the drive through, record donations to the food bank, a thoughtful holiday gift from someone you weren’t expecting to hear from. We don’t see the smiles through our masks, but we can recognize the smiles in the eyes. Against all odds, here’s to a better year in 2021. Let’s choose hope.
https://medium.com/@corkscrewannie/lessons-for-2021-glass-half-full-2af844d006c6
[]
2020-12-24 19:28:41.631000+00:00
['New Years Resolutions', 'Covid 19', 'Choices Matter', 'New Years Reflections']
Countries where the future has already come
– I wonder what the future holds for us in 10, 20 or even 50 years? Great! So do I! Here is my foreign passport, some money, a suitcase… Let me embark on a journey to the future, starting from Domodedovo… Top 5 countries of the future Switzerland I am starting my journey to the future from a country well known among self-respecting officials. I mean Switzerland. Notwithstanding its small territory, it is known worldwide for the gigantic investments it parks in technological development and research. The average annual capital allocated to innovative projects is estimated at 16 bn CHF (108 006 998 875 RUB). While domestic utility workers, sticking to old Russian traditions, lay asphalt only when the first November snow falls, in Switzerland 3D printers and robots build houses and bridges within several days. Moreover, in spite of all the transparency and incorruptibility of its election system, the inhabitants of this amazing country keep improving their anticorruption methods. The first blockchain-based municipal elections were held here. The election ran on the state system eID, an element of Swiss digital infrastructure that allows voting by mobile phone. No need to be present at the polls; just click on this or that candidate and your vote is recorded. Imagine ordering a pizza, except it’s an election) Apart from eID, digital services such as eGovernment, eVoting, eBanking, eHealth, eEducation, and eCommerce are planned to be developed and introduced. A fully transparent system of elections and state government turns out to be real in the future that has already come to Switzerland. Japan What would a list of innovative countries be worth without Japan? It’s been a long time since the Land of the Rising Sun became synonymous with words like innovation, robotics, computer technologies, sushi, anime, hentai, oops, that’s for another blog, let’s dwell on innovations. What is the secret of the Japanese technological wonder? It’s pretty simple: innovations are generated through active scientific research and technological development, then get commercialized (i.e., they bring profit), turning into promising startups that form the basis of the country’s continuous development. “Psst! Dude, wanna see some Japanese innovations?” “Pigeon post has one doubtless advantage over the Russian Post: pigeons don’t steal smartphones from packages.” Meanwhile, the Japanese distrust pigeons. What if they used to work for the Russian Post? That is why they created a special driverless car that delivers correspondence directly to the addressee and sends SMS notifications about its arrival. Green technologies are also well established in Japan. Recently, the local company Eco Marine presented a project for ships that would use both solar and wind energy to move. This will make shipping cheaper and protect the waters of the World Ocean from contamination. Isn’t that genius? The USA Of course, it will take us a long time to forgive the Americans for what President Obama did at the Russian entrance halls. However, give or take, the Yankees keep a finger on the pulse of modern technology. Moreover, they are among the technological heavyweights. Microsoft, Apple, IBM, and other giants of the tech universe are creations of skilled and progressive specialists of the U.S. The new high-speed transport system Hyperloop can undoubtedly be called the latest hyped ongoing project. The American authorities have already approved the building of Hyperloop in Washington D.C. and New York.
The planned travel time between the two cities (330 km) is 29 minutes; a flight takes 55–75 minutes, a high-speed train almost 3 hours, and the highway 4–5 hours. 330 km in less than half an hour? At last, you can visit your grandma in Yakutsk without taking a month’s vacation at your own expense! Many thanks, Elon Musk! Sweden Despite its comparatively small territory, Sweden is always ranked in the top 5 of the prestigious Global Innovation Index, a list of the most thriving and innovative countries. Unfortunately, Russia holds only 48th place. So, what did these white-haired Vikings do? They invented a lot of useful things: matches, the pacemaker, the PC mouse, Tetra Pak food packaging, the Spotify music service, Skype, Bluetooth, and IKEA, for example. My girlfriend would put that last one at the top… Meanwhile, Sweden is one of the most progressive countries in environmental protection, renewable energy sources, waste disposal, and water cleanliness. For the Swedes, it is very important to save resources during manufacturing, use, and disposal, as well as to reduce emissions and protect nature. Sweden is a Scandinavian country where no one knows what corruption is. Even so, they do everything possible to keep officials far from it. Thus, a blockchain-based technology for real estate registration was tested over two years. The results showed the technology is efficient enough for regular use. It should be noted that blockchain accelerated a registration process that used to take from 3 to 6 months. Now it takes several hours, and neither the buyer nor the seller needs to be on the territory of the country. Israel Israel is considered one of the most developed countries of Southwest Asia in economic and industrial terms, notwithstanding the never-ending war on its territory. Even Israeli girls serve in the military, and even local top models! A nice try, recruitment office) Besides, Israel is a world leader in technologies for water resource protection and thermal energy. It is an open secret that the climate of this country is not ideal for agriculture. However, it is one of the best agricultural states, first of all thanks to domestic innovations in this sphere. ROOTS Sustainable Agricultural Technologies is the brightest example. Its technology places water-filled tubes in the ground to reach the optimal temperature. Put otherwise, if the ground is too warm, ROOTS can cool it off, and vice versa. This can considerably boost yields. The tubes can also feed the roots with water and fertilizers, which allows plants to grow under almost any conditions — from the Arctic to the Sahara. Conclusion: Strange as it may sound, the underlying conditions for a country’s thriving are not a powerful army or massive mineral deposits (undoubtedly important though they are), but the absence of corruption, strong investment in modern development, and a drive for efficiency both in manufacturing and in everyday life. What is more, please stop producing YotaPhones and LADA Kalinas. Let us stop blushing when we see the achievements of our foreign colleagues.
https://medium.com/smile-expo/countries-where-the-future-has-already-come-f9513432cd96
[]
2018-08-29 09:31:01.224000+00:00
['Future', 'Blockchain', 'Innovation', '3D Printing', 'Switzerland']
What is the best way to get leads online?
What is the best way to get leads online? How do you get more sales leads online? Getting your business into the minds of potential customers, especially if you’re an e-commerce company, can be tricky — but not impossible. Here are 4 ways you can generate more sales leads online. 1) E-mail Marketing Create an email list of people who want to hear from you, and start sending out emails on a regular basis with tips and advice that are relevant to their needs and interests. Just be sure not to abuse their inboxes! 2) LinkedIn LinkedIn is a great place to network and connect with other professionals. While you can use LinkedIn’s network to find new leads, as a marketing tool it still leaves something to be desired in terms of ease of use. If you already have a personal profile on LinkedIn, there are a few tricks that you can use to make networking easier. Use these tips to get more out of your time on LinkedIn. 3) Google+ While many businesses might think of social media as just a place to share pictures and updates, Google+ is so much more. Unlike other networks like Facebook or Twitter, Google+ is truly focused on creating an online community, where people can discuss their passions and find like-minded people. To create your ideal sales leads online, you must create content that’s valuable to these users and connect with them in ways that let them know you care about what they have to say. 4) Facebook If you have a Facebook page for your business, it’s a great idea to post frequently so that people stay up-to-date on new promotions or products. People will be more likely to buy from you if they trust that you are always offering new and exciting things, and since Facebook has become one of the most popular social media platforms in recent years, it’s a great way to gain exposure. A give-away for affiliate marketers: $100 CPA for every person who enrolls in the One Funnel Away Challenge. That means you receive 100% of the revenue when people upgrade to the challenge after registering. The summit is totally free, and will ‘sticky cookie’ the registrant to your link.
https://medium.com/@somchandramong1/what-is-the-best-way-to-get-leads-online-8fa1b0e4b6b1
['Somchandra Gurumayum']
2021-12-23 04:20:10.386000+00:00
['Linkedin Marketing', 'Google Marketing Tips', 'Sales', 'Leads Generation', 'Facebook Marketing']
100 Words On….. My Blog Rules
Photo by Glenn Carstens-Peters on Unsplash There are some basic yet simple rules for creating these blogs. First, the blog must be exactly one hundred words long, no more and no less. Second, no contractions are used, so I always write the full words instead of their abbreviated forms, such as “cannot”, “would not”, and “you are”. Third, hyphenated or compound words are considered one word, such as “brother-in-law”. Fourth, whole and decimal numbers are treated as one word, such as “1,234,567” or “3.14”. Fifth, I try to avoid repeating words to diversify my vocabulary. Finally, I really try to have fun when writing these blogs!
https://medium.com/the-100-words-project/100-words-on-my-blog-rules-8a2ed9246d06
['Digitally Vicarious']
2020-12-16 23:18:02.990000+00:00
['Blogging', 'Publishing', 'Content Creation', 'Rules', 'Writing']
When Civilizations Collapse
The survivors walked across the desert, their heads hung in sorrow and their hearts drained of hope. Some wondered if those who died were actually the lucky ones. The group of around one hundred refugees stopped every few days to bury another one of their lot who succumbed to the harsh conditions. The desert offered little sustenance but plenty of danger. Burned by the hot desert sun, the people had to fend off jackals and snakes and poisonous insects. Vultures followed them from above. Whenever civilizations collapse there have always been survivors. It is these survivors who plant the seed for new civilizations. These seeds carry the genetic imprint of the trauma experienced in the collapse of the previous civilization. History repeats itself over and over, providing endless opportunities to learn from and heal from that trauma. Their numbers dwindling, the refugees walked for months across the hills and valleys of the desert wasteland. One day they reached the top of a rise in the land and what they saw stopped everyone in their tracks. In the distance there were magnificent snow-capped mountains. In the little valley immediately below them was a wide river, its waters flowing from those distant mountains. Great joy spread through the people. Some began dancing and singing. All they had to do was follow that river and they would arrive at those mountains in just a matter of days. The glaciers atop those mountains could provide all the water the people would need for their bodies and their crops. The forests could provide the wood to build new homes and the game to further nourish them. The people could finally end their long journey and build a brand new civilization. The possibilities for true peace and joy and happiness filled the people with euphoria.
https://medium.com/grab-a-slice/when-civilizations-collapse-b62caea2fd9b
['White Feather']
2020-12-09 19:13:12.764000+00:00
['Life', 'Spirituality', 'Society', 'Fiction', 'History']
Fresh Hells
by Kate Angus 1. Limbo Hello, Facebook. Hello, Facebook Silence. 2. Lust You do not want to go on a second date with that lawyer, just admit it; you only wonder if it’s possible to be fucked out of your loneliness. 3. Gluttony Is there kale? There is no kale; all the bins are empty. You know who else doesn’t have kale? All those people dying. What’s wrong with you that you aren’t grateful? Your cupboards are a cornucopia of all you have that you don’t want. 4. Greed Sandals with gold straps of a finer plastic, blue necklace, black dress. For your book to finally be published on some minor but still notable press. To love someone who loves you back. Kids, maybe, even if your fertility might be an hourglass as the sand runs out. An apartment where the faucets do not leak. 5. Anger The nail salon radio station plays an extended version of “You’ve Got a Friend” the week after one of your favorite friendships ends. The anxiety that comes from terrible music. How much more than stifling the slide towards crying you want to type out “James Taylor can go to hell,” but one hand is in the bowl of water, the other stilled motionless as Lucy paints meticulous red upon your nails. This is probably for the better — you’d have wanted to send the text to the one who isn’t speaking to you now anyway. 6. Heresy “The soul dies when the body dies” — so say the Epicureans It took so long to love the body; hating spiders, having spider veins. Sometimes the body outlives the soul as any late night bar might show you. I want to believe that something will outlast us, but I’m so often wrong. There is an actual website called godhelpmeplease.com 7. Violence Dante counts violence against the self: profligates, suicides A pack of cigarettes at the bodega costs $14.50. The night of drinking will run another $35. 8. Fraud Fraud involves intentional deception: a willfully false representation that harms the other for personal gain. In this it is different from mistakes or false cognates/friends; for example, “fast” means “speedy” or, in German, “almost”; as in, we were swift and almost friends. 9. Treachery Those who betray fidelity, confidence or trust are found star-fished and limbs pin-wheeling beneath the frozen lake; eyes open, they can stare, but cannot speak. So saith Dante, who with his Virgil, climbed down Satan’s ragged fur to escape. There are so many different paths to what either person could call betrayal. I would trade being right for an ice-ax or a bonfire to loosen the ice-clogged rivers of our throats. Kate Angus’s work has appeared in Indiana Review, Barrow Street, Subtropics, The Awl, Gulf Coast, Court Green, Third Coast, Verse Daily and Best New Poets 2010, among other places. She lives in New York where she is a founding editor of Augury Books and teaches at Gotham Writers Workshop. Photo via Brian Auer/flickr.
https://medium.com/the-hairpin/fresh-hells-6a1e9380f8a0
['The Hairpin']
2016-06-02 02:30:02.253000+00:00
['Dante', 'Circles', 'Poetry']
How I Made an Indie Feature Film on $12500 Budget?
IT ALL STARTED WITH A DREAM Most of the filmmakers I’ve met dream of making a feature film. A film that would, and could, be the stepping-stone in their filmmaking careers. However, before that happens, filmmakers usually spend years making shorts, developing their skills in the hope of being noticed for their unique artistic taste. Filmmaking is an expensive art form. Regardless of how long, or short, your project is, it’s always going to cost money. Nowadays, equipment is relatively affordable, but you still need to find people, pay for their services, feed them, and pay for travel and locations at the very least. So low-budget filmmaking is never truly low-budget, as there is always going to be some money invested in the project. In 2011, after two years of successful film festival runs with my two short films, I decided that I was ready to make a feature film. In the beginning, I tried to go the traditional way and applied through the standard routes for funding. However, I had no recognisable names attached to my project; I hadn’t won any of the A-list film festivals, nor did I have any connections in the industry whatsoever. Unfortunately, it turned out it was impossible to break through the brick wall with just my will and desire. At that point, I knew I had to find an alternative way if I wanted to make my dream a reality. But the main question was still, HOW? In 2011 (seems like such a long time ago) crowdfunding was becoming pretty “hot” and people were trying to raise funds for a myriad of projects using IndieGoGo or Kickstarter. Since I’ve always been an early adopter and self-doer, I set my mind on using crowdfunding to get my production budget. In a nutshell, it took me six months to properly research crowdfunding and build my online following. Without that initial commitment, I don’t think I would have made my feature. THE PROJECT When I decided to raise funding for my feature film using crowdfunding, I had a completely different project in mind than the one I ended up making. I hoped to make a romantic comedy but ended up making a docudrama about sex and human trafficking. I guess it’s a perfect example of how creativity can’t be controlled or harnessed in any way. I made Anna & Modern Day Slavery on a $12500 production budget. After the production was finished, I received another $2000, which I invested in editing equipment. In 2015 I ran one more crowdfunding campaign for the post-production, and I managed to raise over $2000, which went straight into the sound and music post-production. We only had nine shooting days and $12500 to make the film happen. That was made possible only because everyone who worked on my project worked solely for reimbursement of their expenses. The idea behind making “Anna & Modern Day Slavery” was to increase awareness of sex and human trafficking as well as to raise funding for a charity or charities that help sex and human trafficking victims. The movie isn’t a documentary, nor is it a violently graphic film; it’s a fictional docudrama about one woman’s quest to solve the mystery of a European trafficking ring. “Anna and Modern Day Slavery” is what it is and it won’t be any different. I’m very proud of this project, despite all of the limitations we had and the time it took me to complete the post-production. DEVELOPMENT I started researching sex and human trafficking in the fall of 2011. Soon enough, I knew I wanted to create something that could make a difference in the lives of trafficking victims.
At first, I thought I was planning just to be the screenwriter and the producer of a short film. However, the more I researched and wrote, the more I wanted to know who my main character, Anna, was and why she was doing what she was doing. The story took over, and I ended up with 80 pages of a script. As always, I went through several re-writes, and the script we shot bore little resemblance to my first draft. Things moved pretty fast from the initial development stage, which I began in Nov. 2011, through crowdfunding (March 2012) to production (May/June 2012). During the campaign, I was still polishing the script and making changes here and there while fiercely working on getting ready to go into production.
https://medium.com/indie-filmmaking-school/how-i-made-an-indie-feature-film-on-12500-budget-14d6fbca2514
['M. Olchawska']
2020-12-15 20:54:35.901000+00:00
['Indie Filmmaking', 'Trafficking', 'Women in Film', 'Filmmaking', 'Filmmaker']
Security, Obscurity, Openness
What I think is really fascinating about security is the duality that I tried to capture in the title between openness and obscurity. Borrowing a definition from cryptography: A (crypto)system should be secure even if everything about the system, except the key, is public knowledge. Thus, security calls for openness. And vice versa: openness means trust. This is likely why we all love open source projects — instinctively, we tend to trust them more. Let’s take the classic example of iOS vs Android. If I were to ask which one is more open, of course everybody agrees it’s Android. Even though, if we zoom into Android, we know that not everything is open source. Some parts are proprietary, typically the closer we go to the hardware. And if we ask why and dig deeper, the reason for obscurity is often… security! This is pretty ironic and paradoxical. Security calls for openness, openness means trust, some parts of the systems can’t be open, and why is that!? Because of security. This cycle clearly has one good part that we want to perpetuate, and another one that we should interrupt. As a community, we should strive to make open source alternatives. Can you imagine a world without an open source operating system? And I’m not saying that everything should be open; certainly that can’t be. I’m saying that for everything there should be an open alternative, for users to choose from. This, to me, is especially important in the field I care about — security — and as we mentioned before, this isn’t the case the closer we go to the hardware, for example for hardware security keys. In August 2018, together with a group of friends, we founded SoloKeys with the goal to make open source hardware for secure applications, starting from user login. We made Solo. Solo is the first security key to be open source and implement the newest standard FIDO2, which offers the strongest level of security for two-factor authentication, and works great with Google, Facebook, GitHub, and many more. In October we launched a Kickstarter, raising $123K from about 3K backers, which is amazing for a security product, while almost all other similar campaigns unfortunately failed. In November we participated in the FIDO Alliance Interoperability Testing Event in Seoul, passing all tests. We’re morally FIDO2 certified, pending paying the certification fee. In December we shipped our first batch, and at least in the US many people got their Solo key by Christmas. Finally, in January we were at Shmoocon, presenting our journey for the first time. In addition to Solo, the consumer security key, we also offer Solo Hacker, a key with the same open source hardware and an unlocked firmware, so you can reprogram it, whether you want to learn about embedded devices or explore our security features. A big thank you to the two event hosts: HackerNoon, which is where we launched Solo for the first time, and GitHub, which is where we host our open source firmware and hardware. And a final invitation to join our community, and help us make security more open, in hardware as in software.
https://medium.com/hackernoon/security-obscurity-openness-55c14f7e9cc1
['Emanuele Cesena']
2019-04-05 07:18:15.966000+00:00
['Hackernoon Top Story', 'Authentication', 'Security', 'Developer', 'Open Source']
Changing Mind
A member of Mutrack and Inthentic. I lead, learn, and build with vision, love and care. https://piyorot.com
https://medium.com/people-development/changing-mind-f4c5f2dc716b
[]
2016-10-18 13:19:20.808000+00:00
['Decision Making', 'Life', 'Work', 'Self-awareness']
Neurotech or Neurotic?
To properly begin, then, a disarmingly simple question: what is neuroscience? The unified, age-old study of the brain, as begun by Erasistratus and Herophilos in ancient Alexandria? Well, no. The OED’s entry for the singular ‘neuroscience’ reveals youth, ambiguity and plurality — the ‘brain’ isn’t even mentioned, and the earliest non-plural usage is from 1970. When the brain is finally mentioned, there is still plurality and instability: neuroscience is not a science but comprised of sciences; it refers not only to anatomy, but also behaviour; its very demarcation already involves ‘differentiating, integrating [and] regrouping’; and finally, neuroscience is the study not of the brain, but the whole nervous system. Why then do experienced neuroscientists still make concrete, narrow claims, like the title of Swaab’s 2014 book We are our Brains? Vidal and Ortega: ‘The belief that human beings are essentially their brain […] has become extremely powerful in contemporary culture. Some scientists have, at least by their public pronouncements, contributed to reduce to the brain the range of determinants of human existence.’ Shouldn’t they say ‘we are our nervous systems’, or ‘we are our brains plus our behaviours plus anything that surfaces through differentiating, integrating and regrouping’? Historians and sociologists of science like Vidal and Ortega emphasize this, warning of the widening public acceptance of the human subject being reduced to the ‘cerebral subject’, a reduction of personhood to ‘brainhood’. The philosophical response to such an ontological quandary has been patchy, with (in)famous examples like Churchland’s Neurophilosophy only exacerbating ‘brainhood’. However, as self-styled anti-neurophilosopher Raymond Tallis rightly points out, tackling cerebral subjection on philosophical grounds, though necessary, only prompts equally necessary questions which need to bypass the reactionary dismissal of neuroscience’s genuine benefits to society. Centuries apart in their attempts, neither philosophy nor neuroscience offers a satisfactory account of human consciousness. On the contrary, as writer David Lodge highlights in a diplomatic gloss on behaviourism, science is only just catching up to philosophy. In fact, Lodge claims literature has succeeded far more than either in truly reflecting consciousness, an important point steadily being recognized by luminaries in the cognitive and neuro-sciences, such as Chomsky, Edelman, Damasio and Dennett.
https://medium.com/@seedomir/neurotech-or-neurotic-64008118cba7
['Seedomir Jeden']
2019-06-28 16:30:07.302000+00:00
['Flatiron School', 'Philosophy', 'Neuroscience', 'Programming', 'Literature']
An Artist’s Opinion on Procreate 🔍
An Artist’s Opinion on Procreate 🔍 Procreate is a professional-grade drawing app that gives you limitless customization options so you can get the most enjoyable illustration experience possible. Whether you are a beginner or an advanced artist, Procreate will work for you. Allow me to explain why. 🧐 Pros To start, it only costs a one-time payment of $10. In my opinion, Procreate is worth $50. Yes, it is that good. It’s by far the best drawing app I have ever encountered in my life, and it’s only $10? I am astounded, in the best way possible. Oh, and don’t forget: there are no advertisements within the app, so you don’t have to worry about losing focus while drawing! The whole application has a very minimalist aesthetic, from the homepage to the canvas. Artwork illustrated by the author Another famous feature of Procreate is the infinite amount of customization. You can create palettes, brushes, canvases, collections, and so much more! The entire outcome of your artwork is in your hands. There is a comprehensive variety of all the essential drawing tools like brushes, colors, and blenders, not to mention they are all notably realistic and plentiful. Are 150 pre-made brushes enough for you? 😉 Cons Though I’d like to say otherwise, even Procreate is not unconditionally perfect. First of all, it is not compatible with Android devices. If you do not use Apple devices, then you need to buy an iPad (Pro or regular) to install and use the Procreate app. Another flaw is that there is no free trial before purchase. This means that you cannot test out the app before buying it, which can be a barrier. Thank goodness it’s cheap! Artwork illustrated by the author Finally, Procreate lacks some cool features that would be useful in many situations. Tools like an outliner, graph/diagram assist, or a brush filter would make using Procreate easier and less confusing. Conclusion All in all, I say that Procreate is great for beginners but better for advanced or professional artists. Beginners don’t require as many options or as much focus, and it may be overwhelming for complete newbies. Professionals would benefit especially from the variety, focus, and peaceful aesthetic. Procreate works excellently with any kind of digital art, be it lettering, cartooning, doodling, hyperrealism, or even animation! So if you need a professional digital art app but you’re on a budget, Procreate is perfect for you! 🖌️
https://medium.com/@zairakhemani/an-artists-opinion-on-procreate-c0a8446ec529
['Zaira Khemani']
2020-12-14 06:22:43.248000+00:00
['Digital Drawing', 'Procreate', 'Drawing', 'Digital Art', 'Illustration']
Enemy Waves — first attempt. Today I did a lot of white boarding on…
Today I did a lot of whiteboarding on the tasks for Phase II. Here is the basic logic for the next 3 tasks: Aggressive Enemies Once this type of enemy comes within a certain close distance to the player, this enemy attempts to “ram” the player. Pseudo code:

if distance between enemy and player < x
    move towards player

Powerup Enemies If a powerup is in front of one of these enemies, this enemy will shoot and destroy the powerup. Pseudo code:

if powerup is in front of enemy (same x value)
    shoot to destroy powerup

Enemy Wave System Spawn enemies in waves, with each wave spawning more enemies than the prior. Pseudo code:

initiate spawning after asteroid is hit
display wave count across screen via UI
spawn x number of enemies
pause spawning
display wave count across screen via UI
unpause spawning
spawn x++ number of enemies
pause spawning
display wave count across screen via UI
... etc. (LOOP)

After getting this all out of my head and down on paper, I felt organized enough to start tackling the wave spawning system. I started by adding a wave manager to the Hierarchy and attached a wave manager script. I determined the most efficient way to accomplish this task would be to run through the same loop while resetting the number of enemies spawned, as well as incrementing the threshold of enemies needed to advance to the next wave. I’d also need to “announce” the upcoming wave of enemies via the UI (almost like different levels, to give the player a sense of advancement). I then had to modify my Asteroid code. Instead of telling the spawning manager to start spawning immediately after the asteroid is hit with a laser, I call a method on my new wave manager script to initialize spawning. I also had to add a spawned-enemy counter to my enemy script in order to keep track of how many enemies are spawned. Once this count reaches a certain threshold, the player advances to the next wave of enemies. Check out what I have so far. As you can see, there are TONS of enemies and TONS of powerups, leading me to believe that I’m not stopping and restarting spawning correctly. At first glance it seems that there may be multiple instances of the same spawning script running at the same time. Stay tuned for the actual diagnosis and solution!
https://medium.com/@kristintreder/enemy-waves-first-attempt-618353f58392
['Kristin Treder']
2020-12-17 19:02:28.334000+00:00
['Unity', 'Unity3d', 'Learning To Code']
How to config Git first time in the machine
I think you already know what Git and GitHub are. Git plays a big role in any developer’s life. Here are the basic first steps for using Git and GitHub:

Get a GitHub account.
Download and install git.
Set up git with your user name and email.

Open a terminal/shell and type:

$ git config --global user.name "Your name here"
$ git config --global user.email "[email protected]"

“Your name here” is the name you want shown as the contributor. “[email protected]” is the email registered on GitHub. Note: don’t type the $ ; that just indicates that you’re doing this at the command line. This will enable colored output in the terminal:

$ git config --global color.ui true

That’s it for this time! I hope you enjoyed this post. As always, I welcome questions, notes, comments and requests for posts on topics you’d like to read. See you next time! Happy Coding!!!!!
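To make sure the settings were saved, you can ask Git to print them back (standard Git commands, shown here as an optional sanity check; they were not part of the original post):

$ git config --global --list
$ git config --global user.name

The first command lists every global setting Git knows about (read from your ~/.gitconfig), and the second prints just the value you stored for user.name.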
https://medium.com/@rajputankit22/how-to-config-git-first-time-in-the-machine-3896e93731c1
['Ankit Kumar Rajpoot']
2020-12-20 09:53:21.101000+00:00
['Configuration', 'Local', 'PC', 'Github', 'Git']
Help! I Am Hooked On External Validation!
Help! I Am Hooked On External Validation! A psychologist’s guide to valuing your own needs properly—and gaining new confidence in yourself Image credit: tadamichi ‘But why can’t I just be happy with what I have… On the surface everybody thinks my life is perfect. Even I feel like I have it all… but still I feel empty inside. Will I ever be able to feel happy and satisfied?’ My client, Monique, looked at me with despair. Her perfected exterior provided a powerful camouflage for her fragile sense of self. The realisation had finally hit home. No amount of beautiful clothes, admiration from men, jealous colleagues, or successful ventures at work was able to hit the spot. She still did not feel good about herself. Something deep inside of her was calling for her to change… When External Validation Leads to Abandonment of the Self As a psychologist, I see hardly a day go past without someone in clinic presenting with a lack of self-love and a deep-seated wish to feel better about themselves. To not feel so dependent on others. To not have to feel so insecure all the time. To stop comparing oneself to other people and envying their achievements. If you are someone struggling with low self-worth, you might already have noticed that the ‘work’ you put into getting liked by others is not paying off. You might even resent those who do less ‘pleasing others,’ yet end up getting all the rewards. With or without realising it, you might be stuck in an unhelpful pattern of people-pleasing, self-editing, and a perpetual ‘chase’ for things outside of yourself. Meanwhile, your emotional needs may be neglected or suppressed. The boundaries that should be in place to protect your personal needs may be non-existent or weak, and contact with the self and your heartfelt values diminished. You might not even know who you are. If this is you, please do not worry. While feeling depleted and emotionally drained makes for some terrible feelings, it is important to understand that it is you and only you that is keeping yourself stuck. Even if it sounds harsh at first, there is also a sense of empowerment in knowing that you have what it takes to change and that you don’t need to wait for anyone or anything else to get it started. So if this resonates, start by making a firm decision that you have had enough pain and that you need to change. Poor Consideration of Your Own Needs Reinforces a State of Low Self-Worth Being ‘needless’ might appear to make life easy at first. It does, of course, often make life easy for people around you. Too easy, in fact. Sadly, operating without expressing or fulfilling your personal needs will inevitably lead to feeling as though you have let yourself down. It also goes without saying that other people will seem to disappoint you, since their actions are unlikely to match your hidden needs. Your true and authentic self cannot gain its full expression if you are not prepared to be honest and upfront about what you need in order to be happy. For most people, some of the habits that are based on low self-worth are barely operating consciously, and they may require that you put yourself under the microscope and open up to learn about yourself and your habits. Excessive External Validation Seeking: A Bad Habit That Gets Acquired Early in Life Excessive external validation seeking goes hand in hand with low self-worth and a feeling that lacking the approval of others means something important about one’s own value.
The perpetual chase for other people’s approval may be a response that was acquired early in life. Growing up in a family with overly critical, emotionally volatile, addicted, or unavailable parents—or a lack of unconditional love—a child might learn early to adjust themselves, to suppress their own needs, and to ‘hyper-monitor’ their environment for other people’s feelings and opinions about them. Constant appeasing of parental needs may become a way to keep unpleasant situations from unfolding, or simply to feel loved and ‘good enough.’ Early in life, the behaviours may even have made sense, if they helped you survive in your environment. In some circumstances (and also to avoid pathologising or passing blame on others) there is a totally benign reason why emotional needs went unmet, such as many siblings, parental illness, or absence due to work. There could, of course, be other reasons why some people start becoming overly dependent on external validation. Being keen to be liked by others and achieving approval and admiration from those around us is a completely normal need that most people have. But when we try to gain approval from others at the expense of our own internal validation, the balance has definitely tipped over. We should never have to abandon ourselves to be liked by others! Whilst the effects of being hooked on external validation may seem fairly innocent on the surface, the impact is often more far-reaching than people would even dare to imagine. By continuing to operate as though you are a person who does not place a high value on yourself, you can be sure other people will follow suit and treat you the same. At the end of the day, we have to teach other people how to treat us. External Validation and Its Many Disguises. Which Ones Resonate With You? Below is a small list of behaviours that can keep you stuck in a dependency on other people’s opinions of you. Are any familiar? Checking for admiration online Like a pigeon in a ‘Skinner box’ pecking at a lever until the reward comes in, you check frantically on your phone for any sort of attention or approval. It could be “likes” on your latest picture on social media, a response to something catchy you posted, or any signs that you are being noticed and admired. The disappointment if you don’t receive any is huge, but that doesn’t stop you from checking again and again. When something finally comes in, you feel like you hit the jackpot and delude yourself into thinking it was all worthwhile. When in a state of deprivation, even small drops of adoration can feel powerful and get addictive. In reality, the shortage in supply has nothing to do with them, and everything to do with you not liking yourself enough to start with. The internet can be a slippery slope even for those who start out on social media with reasonable self-esteem. Over-giving You do lots of things for other people — sometimes way more than you actually want to. While much of it feels honest and a true reflection of ‘who you are,’ from time to time you can’t help but feel bitter and resentful that others don’t do as much for you. You feel a bit taken for granted in general, and cannot understand why those who are less giving and say ‘no’ seem to be getting all the props. You watch them grudgingly as they ‘cash in’ on favours, attention, promotions, and ‘the best,’ most loving partners. This makes you doubt yourself further and leads to more over-giving, in a desperate attempt to win people over.
You often find yourself in ‘performance mode’ You have become so skilled at acting the chameleon with others that you actually don’t know who you really are underneath. You tell yourself that there is no way you can be honest about who you are, or what your needs are, since previous attempts to show the world the true you have never quite seemed to impress others. For the record: often, the real reason for failed attempts has to do with the choice of audience rather than an actual flaw or lack in you. You engage heavily in people-pleasing Being too nice, too understanding, too accommodating, and constantly feeling a bit frightened to upset others or gain disapproval. You operate with a preoccupation with what others will think or feel about you, even if logically you can understand that their opinion should not be all that important. You are known to say YES when really you would like to say NO Saying NO feels almost impossible for you. Yet, you feel a little jealous of other people who comfortably decline things that don’t suit them. Sometimes you even bitch a bit about them… but inside, you wonder why they are able to ‘get away’ with it when you feel like you wouldn’t. The real reason they are getting away with it is that healthy boundaries send out to the world a message of worthiness and value. The boundaries communicate: ‘I am not going to accept being treated badly, dumped on, taken advantage of, or anything else unpleasant. If people don’t treat me with respect, I am not going to stand for it, and I will walk away from it.’ You overshare ideas, views, opinions, etc., constantly You do this not so much because you find it engaging, but merely because you are thirsting for approval, people’s agreement, and other forms of interest that might temporarily boost your feelings about yourself. The trouble is that baring your soul to people who have not earned your respect or trust will make you feel vulnerable and ‘in need’ of a particular response in order to have your shared material validated. When this doesn’t happen, you feel twice as vulnerable and likely to think it must have been something you said. You have a habit of chasing Be it material things, academic accolades, money, or people, you often feel as though you have to work hard to get what you deserve. (Paradoxically, you still don’t feel you get what you deserve.) You feel ‘hooked’ on the opinions of others, and you yearn for approval. You have few boundaries on what you would be prepared to throw in to get the desired effect, be it your time, money, efforts, or dignity. Anything goes. At the end of the day, you feel a distinct feeling of ‘empty hollow’ when you reflect back on your actions. Even if the discrepancy between input and output is in your face, the habit of chasing is so compulsive that you struggle to stop. You compare yourself to others And you try to identify traits, themes, and behaviours in others that you quickly replicate in the hope that they will be successful for you also. If nothing else, at least you might feel like you are working on yourself. Rather than turning your attention to your amazing inner world and its creativity and uniqueness, you pour your energy into trying to ‘figure out’ what it is that others have done to succeed. Needless to say, this misallocation of your attentional resources will backfire badly. Not only are you making yourself dependent on other people’s paths, which aren’t necessarily suited for you, but you are also keeping your own abilities and skills ‘rusty’ and unused.
Short-Term vs. Long-Term Emotional Consequences of Validation-Seeking Behaviours In my work with clients, I often explain the difference between short-term and long-term emotional gains. Although it may be obvious to many of you, I find that this knowledge really is critical for change. Without this understanding, there is always a risk that the short-term effect gets interpreted as an accurate indication of whether a behaviour is useful or not. Our emotional brain is constantly seeking short-term gratification. It knows of nothing worse than discomfort of any kind, and it certainly does not like the idea of rejection and disapproval. These are states that could have got our ancestors into seriously dangerous situations, as they depended on belonging to the herd in order to survive. The trouble is that this part of our brain does not have any real intelligence or reason, and hence it will allow itself to be 'programmed' by the feeling we achieve short-term from any given behaviour. One part that often gets overlooked is how the emotional brain warms not only to the things that feel good in the short term but also to whatever action results in feeling 'less bad.' At times of emotional struggle, it will steer you towards any actions that can relieve such feelings, for example by numbing, avoiding, or deflecting… even when the behaviours involved may be outright destructive in the longer term. Even if the emotional part of the brain does not care too much about how behaviours make us feel in the long term, our higher self does! Trading in our long-term happiness (and sense of worth) for some short-term boosts is a guaranteed route to unfulfillment. It's a bit like going to the gym. If we want to see results, we have to be prepared to stay with the discomfort and the pain. If we stop every time the going gets tough, nothing will ever change. 5 Ways of Breaking the Habit of External Validation Seeking That You Can Commit to Right Now 1. Have regular self-care days (or just an evening or hour). Taking good care of yourself is an act of self-love, and one that will make you feel treated and 'honoured.' By giving love and appreciation to yourself, your dependency on other people giving you their appreciation should gradually start to lessen. Even better, you might eventually be repelled by people who cannot value you properly. Take care of yourself from the inside out. Eat nourishing foods, drink lots of water, get your rest, watch something stimulating, have a massage, put on decent clothes even when you are not seeing anyone. Do what makes you feel good about yourself and dwell in the feeling of being there for yourself! This is the ultimate self-validation. 2. Stop chasing! Be it romantic partners, friends, jobs, or material things; the act of chasing has never helped any person feel good about themselves. It keeps you stuck in a perpetual feeling of neediness, unworthiness, and a desperation to be 'chosen' and liked. In relationships of any kind, chasing establishes a dynamic in which you put yourself on the back foot. In order to feel high value, you need to act as if you have value. Even if you don't feel it yet, you have to think and act as if you do regardless. Eventually, the feeling will follow. Does this mean you are no longer allowed to date, go on social media, or make bids for attention? Not at all. It simply means stopping yourself in your tracks when you can sense that your behaviour is driven by insecurity, neediness, or self-doubt. 
There is a huge difference between behaviours done without an expectation of a particular outcome and those done with a feeling of 'need' lurking in the background. If you tune in with yourself, you will feel the difference. 3. Stop people-pleasing and 'over-accommodating.' Having no needs does not make you a better person! It just makes you far more likely to be taken advantage of, taken for granted, or viewed as someone who can easily be swayed, convinced, or is prepared to 'shrink' themselves to fit in with others. This is not who you want to be, and you have to recognise that it is OK to be nice to people without giving up on yourself in doing so. 4. Commit to building your own worth by choosing to put the attention on you. This requires a willingness to say NO to others and to apply good boundaries, by being prepared to let go of situations or relationships that no longer serve you well. Although this might sound straightforward in theory, the process tends to be challenging when faced with the draw of compulsive pleasing and clinging. When you change, some challenging emotions will surface, so whenever pain or anxiety arises, do know that this is not a sign that you are doing things wrong, but rather a sign that you are changing a habit! When you pull away from the dependency on other people's approval, you will notice a rise in anxiety. You might start doubting yourself and wonder if it is safer to return to the comfort zone of complying with what others expect of you. Change requires you to take a leap of faith. You have to trust that you are worthy enough in yourself before you actually feel it. Have faith, and accept it if you fall off the wagon from time to time. 5. Connect inwards and align your behaviour with your inner values. Identifying your values can be quite a big job. If you have no clue where to start, you can begin by mapping your reactions, triggers, likes, and dislikes in day-to-day life. Take notes and look for patterns. Are there times when you feel particularly engaged? Or happy? Likewise, there may be times when something makes you upset: ask yourself what happened and what it was about that situation that didn't sit well with you. Our feelings can be powerful messengers of our inner values and preferences. Once you have started to build an idea of what your values and preferences are, try to make sure that you honour them by adjusting your behaviour according to what makes you feel like your authentic self. Some of these steps might sound difficult and laborious. I would be lying if I said they didn't require effort, but I also want to emphasise how amazing it feels when you start changing the habits that keep you stuck in unworthiness and pain. Keep a logbook, and start tracking your progress. Even if you take it bit by bit, the rewards of being honest with yourself and honouring your needs will soon enough be self-reinforcing.
https://betterhumans.pub/help-i-am-hooked-on-external-validation-21d418db7881
['Annika Lindberg']
2021-04-29 15:09:10.893000+00:00
['Mental Health', 'Codependency', 'Confidence', 'People Pleasing', 'Self Esteem']
A Complete Guide For Alexa Echo Dot Setup
Alexa-enabled Echo devices are among the most popular smart home gadgets today. To benefit from all the features of the device, you first have to complete the Alexa Echo Dot setup. This is why we have laid out all the steps to complete the setup. Here is the complete setup guide for the Echo device, so be patient and follow along. Quick And Easy Methods For Alexa Echo Dot Setup For new Alexa users, it can be difficult to complete the Alexa setup alone. This is why we have put together the complete setup guide. Here are the easiest steps to complete the Echo Dot setup: Download the Alexa app. Plug the Echo device into the power outlet. With the help of the Alexa app, connect the Echo device to wifi. Say the wake word of the Alexa device. Use your Alexa device. Now, we are going to explain each step in a little more detail. 1. Download The Alexa App If you want to complete the Alexa Echo Dot setup, then the Alexa app is a mandatory thing that you must have downloaded. You can easily download the app to your smartphone via the Play Store or App Store. You can only download the app on an iOS device running 11.0 or above, an Android device running 6.0 or above, or Fire OS 5.3.3 or above. Before downloading the Alexa app, make sure that you are connected to a good-speed internet connection. 2. Plug In The Alexa-Enabled Device This is one of the easiest steps in the whole Alexa setup process. Note that you don't need batteries or anything else to connect the Echo device to a power source. Connect the power adapter to the Echo device and then connect it to the power outlet. When you see the blue ring light on the Echo device, it means your device has been connected. Wait until the blue ring light on the Alexa device turns orange. The orange ring light means that your device is ready for the Alexa Echo setup. You will hear a sound telling you that your device is ready to set up. 3. Connect The Alexa-Enabled Device To Wifi Via The Alexa App Now, it's time to connect your Echo device to the available wifi network. If you have purchased the latest version of the Echo device, you will be guided through the whole process of connecting the Echo device to the internet. When you are asked to enter the password, enter the right password and check it twice. Tap on the "save" button so that you won't have to enter the password again. 4. Start Talking To Your Alexa Device Once you have confirmed that your device is properly connected to the internet, say the wake word of the Echo device. The default wake word of your device is "Alexa"; you can change it if you want. To change the wake word of Alexa, open the app and go to "help and feedback". Click on "change the wake word". 5. Use Your Alexa Device We are very happy that you have finally completed the Alexa Echo Dot setup on the first go. Once you complete the setup, you can do a variety of things like playing music, setting reminders, controlling all your smart home lights, etc. Apart from all the mentioned things, you can perform thousands of tasks with your Alexa device. 6. Connect The Echo Device To An External Speaker If you want to connect the Echo device to any other speaker, open the Alexa app and tap on "+". From this option, you can easily select any device compatible with your Alexa. Last Words… We have laid out everything that is required to complete the Alexa Echo Dot setup. We really hope the steps worked for you when followed in order.
https://medium.com/technical-information-usa/a-complete-guide-for-alexa-echo-dot-setup-4c774f3f32bd
[]
2020-12-10 11:21:23.101000+00:00
['Tech', 'Services', 'Technews', 'Technology', 'Alexa']
How to Simulate a Pandemic in Python
Introduction What's a better time to simulate the spread of a disease than during a global pandemic? I don't have much more to say, so let's jump right into programming a simple disease simulation. In real life, there are hundreds of factors that affect how fast a contagion spreads, both from person to person and on a broader, population-wide scale. I'm no epidemiologist, but I've done my best to set up a fairly basic simulation that can mimic how a virus can infect people and spread throughout a population. In my program, I will be using object-oriented programming. With this method, we could theoretically customize individual people and add in more events and factors, such as more complicated social dynamics. Keep in mind that this is an introduction and serves as the most basic model that can be built on top of. Variables/Explanation Fundamentally, our program will function around a single concept: any given person who is infected by our simulation's disease has the potential to spread it to whoever they meet. Each person in our "peopleDictionary" will have a set number of friends (Gaussian randomization for realism) and they may meet any one or more of these friends on a day-to-day basis. For our starting round of simulations, we won't implement face masks or lockdowns; we'll just let the virus spread when people meet their friends and see if we can get that iconic pandemic "curve" which the news always talks about flattening. So, we'll use a Person() class and add a few characteristics. Firstly, we'll assume that some very tiny percentage of the people simulated will already have immunity to our disease from the get-go, for whatever reason. I'm setting that at 1% (in reality, it'd be far lower, but because our simulation runs so fast, a larger portion like this makes a bit more sense). At the start of the simulation, the user will be prompted to enter this percentage. Next, we have contagiousness, the all-important factor. When a person is not infected, this remains at 0. It also returns to 0 once a person ceases to be contagious and gains immunity. However, when a person is infected, this contagiousness value is somewhere between 0 and 100%, and it massively changes their chance of infecting a friend. Before we implement this factor, we need to understand the Gaussian distribution. This mathematical function allows us to generate more realistic random values between 1 and 100. Rather than the values being distributed uniformly across the spectrum, most of them cluster around the mean, making for a more realistic output: as you can see, this bell-shaped function will be a lot better for our random characteristic variables, because most people will have an average level of contagiousness rather than a purely random percentage. I'll show you how to implement this later. We then have the variables "mask" and "lockdown", which are both booleans. These will be used to add a little bit of variety to our simulation after it is running. Lastly, we have the "friends" variable for any given person. Just like contagiousness, this follows a Gaussian distribution that ends up with most people having about 5 friends that they regularly see. In our simulation, everyone lives in a super social society where, on average, a person meets with 2 people face to face every day. In real life, this is probably not as realistic, but we're using it because we don't want a super slow simulation. Of course, you can make any modifications to the code that you like. 
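Before we use the Gaussian distribution in the class, here is a minimal demo (my addition, not part of the original program) showing how scipy's norm.rvs produces values clustered around the mean, which we then scale to the 0-10 range used for the friend count:

from scipy.stats import norm

# 10,000 draws from the same distribution used for self.friends:
# mean (loc) 0.5, standard deviation (scale) 0.15, scaled up by 10.
samples = norm.rvs(size=10000, loc=0.5, scale=0.15) * 10

print(round(samples.mean(), 2))  # ~5.0: the average person has about 5 friends
print(round(samples.std(), 2))   # ~1.5: most people land roughly between 2 and 8

The same pattern, multiplied by 10 again, yields contagiousness values that cluster around 50%.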
There are also a couple of other variables that will be used actively in the simulation, and I'll get to those as we go on! Step-by-Step Walkthrough So let's get coding this simulation! First, there are three imports we have to do:

from scipy.stats import norm
import random
import time

SciPy will allow us to calculate values within the Gaussian distribution we talked about. The random library will be for any variables we need that should be purely random, and the time library is just for convenience if we want to run the simulation slowly and watch the spread of the disease. Next, we create our Person() class:

# simulation of a single person
class Person():
    def __init__(self, startingImmunity):
        if random.randint(0, 100) < startingImmunity:
            self.immunity = True
        else:
            self.immunity = False
        self.contagiousness = 0
        self.mask = False
        self.contagiousDays = 0
        # use gaussian distribution for number of friends; average is 5 friends
        self.friends = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0))

    def wearMask(self):
        self.contagiousness /= 2

Why are we passing the variable startingImmunity to this class, exactly? Remember how we could enter what percentage of the population would have natural immunity from day 1? When the user gives this percentage, for every person "spawned" into our simulation we'll use random to find out if they're one of those lucky few to already be immune, in which case the self.immunity boolean is set to True, protecting them from all infection down the line. The remaining class variables are self-explanatory, except self.friends, which uses the Gaussian distribution we talked about. It's definitely worth reading the documentation to get a better idea of how this works!

def initiateSim():
    numPeople = int(input("Population: "))
    startingImmunity = int(input("Percentage of people with natural immunity: "))
    startingInfecters = int(input("How many people will be infectious at t=0: "))
    for x in range(0, numPeople):
        peopleDictionary.append(Person(startingImmunity))
    for x in range(0, startingInfecters):
        peopleDictionary[random.randint(0, len(peopleDictionary) - 1)].contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0) * 10)
    daysContagious = int(input("How many days contagious: "))
    lockdownDay = int(input("Day for lockdown to be enforced: "))
    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay

After setting up our class, we need a function to initiate the simulation. I'm calling this initiateSim() and it'll prompt the user for four inputs: population, natural immunity percentage, contagious people at day 0, and how many days a person will stay contagious for. (The lockdown and mask prompts visible in the listing are additions we'll walk through later in the article.) This daysContagious variable should actually be random, or even better, dependent on any number of personal health conditions, such as a compromised immune system, but let's keep it like this for a basic simulation. I found from testing that it is most interesting to run the simulation with a 4–9 day contagious period. We spawn the inputted number of people into the simulation. To start the disease, we pick people at random to be our "startingInfecters". As you can see, we're assigning a Gaussian variable to each one for their level of contagiousness! (Any time a person is made contagious in the simulation, we'll repeat this process.) We return the number of days someone will stay contagious for, as mentioned. 
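As a quick sanity check (my own, hypothetical snippet, assuming the Person class above is defined), you can spawn a batch of people and confirm the class behaves as described: roughly 1% start immune, nobody starts contagious, and friend counts average out around 5:

people = [Person(1) for _ in range(10000)]  # 1% starting immunity

print(sum(p.immunity for p in people))                 # ~100 of 10,000 start out immune
print(round(sum(p.friends for p in people) / 10000, 1))  # ~5.0 friends on average
print(all(p.contagiousness == 0 for p in people))      # True: nobody is infected yet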
Now, this simulation will be done day by day, so let's set up a function:

def runDay(daysContagious, lockdown):
    # this section simulates the spread, so it only operates on contagious people, thus:
    for person in [person for person in peopleDictionary if person.contagiousness > 0 and person.friends > 0]:
        peopleCouldMeetToday = int(person.friends / 2)
        if peopleCouldMeetToday > 0:
            peopleMetToday = random.randint(0, peopleCouldMeetToday)
        else:
            peopleMetToday = 0
        if lockdown == True:
            peopleMetToday = 0
        for x in range(0, peopleMetToday):
            friendInQuestion = peopleDictionary[random.randint(0, len(peopleDictionary) - 1)]
            if random.randint(0, 100) < person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity == False:
                friendInQuestion.contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0) * 10)
                print(peopleDictionary.index(person), " >>> ", peopleDictionary.index(friendInQuestion))

The runDay function takes daysContagious for reasons explained later. In our first for loop, we're using a list comprehension to find the people who are capable of spreading the disease: that is, they are contagious and have friends. We're then calculating the number of people they could meet on that day. The maximum is 50% of their friends, and then we're using a standard random.randint() to generate how many they actually do meet on that day. Then we use another embedded for loop to randomly select each friend that was met from the peopleDictionary[]. For the friend to have a chance of being infected, they can't be immune to the disease. They also have to have a contagiousness of 0; if they're already infected, the encounter won't influence them. We then use the infecter's contagiousness percentage in a random function to find out if the friendInQuestion will be infected. Finally, if they do get infected, we go ahead and assign them a Gaussian variable for their contagiousness! I added in a simple print statement as a marker, which will allow us to follow the simulation in the console as it is running. At the end of our program, we'll add functionality to save the results to a text file anyway, but it's cool to see little tags that tell you who is infecting whom. Next part of our runDay() function:

    for person in [person for person in peopleDictionary if person.contagiousness > 0]:
        person.contagiousDays += 1
        if person.contagiousDays > daysContagious:
            person.immunity = True
            person.contagiousness = 0
            print("|||", peopleDictionary.index(person), " |||")

Basically, all we're doing here is finding all the people who are contagious and incrementing their contagiousDays variable by 1. If they've been contagious for more days than the daysContagious time the user selected, they become immune and hence their contagiousness drops to 0. (Again, another print marker to show that the given person has gained immunity.) I know I could have put this in the previous for loop, but so as not to make my programming too dense, I separated it. Sue me. 
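To see why the contagiousness percentage works as an infection probability, here is a tiny standalone illustration (hypothetical, not from the article) of the random.randint(0, 100) < contagiousness check in isolation:

import random

random.seed(42)       # fixed seed so the estimate is reproducible
contagiousness = 50   # a person of average contagiousness

trials = 100000
infections = sum(random.randint(0, 100) < contagiousness for _ in range(trials))

# Each encounter infects the friend with probability contagiousness/101,
# so a 50%-contagious person infects roughly half of the susceptible friends they meet.
print(infections / trials)  # ~0.50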
Finally, to tie it all together, we need to do a bit of admin:

lockdown = False
daysContagious, lockdownDay, maskDay = initiateSim()
saveFile = open("pandemicsave3.txt", "a")
for x in range(0, 100):
    if x == lockdownDay:
        lockdown = True
    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()
    print("DAY ", x)
    runDay(daysContagious, lockdown)
    write = str(len([person for person in peopleDictionary if person.contagiousness > 0])) + " "
    saveFile.write(write)
    print(len([person for person in peopleDictionary if person.contagiousness > 0]), " people are contagious on this day.")
saveFile.close()

This is pretty self-explanatory. We get the daysContagious value by initiating the simulation, we open our save file, then cycle through the days up to day 100. Each day we use a list comprehension to get the number of people contagious and write it to our save file. I also added one final print statement so we can track the disease's progression in the console. And that's it! I only explained the basics of the code, but let's talk about the extra variables that you may have noticed… Lockdown variable Adding a lockdown variable is quite simple. First, add this in before the section where we cycle through each of the friends a person meets (see code above):

        if lockdown == True:
            peopleMetToday = 0
        for x in range(0, peopleMetToday):

Now, you want to select when the lockdown is enforced? No problem. Add a user prompt right inside your initiateSim() function and return the new value along with daysContagious:

    lockdownDay = int(input("Day for lockdown to be enforced: "))
    return daysContagious, lockdownDay

Return it, and update the function call. Then, we need to define our lockdown boolean and set it to True when we reach the correct date:

lockdown = False
daysContagious, lockdownDay = initiateSim()
saveFile = open("pandemicsave2.txt", "a")
for x in range(0, 100):
    if x == lockdownDay:
        lockdown = True
    print("DAY ", x)

You can see that I just added 3 more lines into where we manage the simulation. Simple and easy. Then you will want to pass the lockdown boolean to your runDay() function and make sure the runDay() function can accept it:

runDay(daysContagious, lockdown)

And:

def runDay(daysContagious, lockdown):

That's the lockdown added. See the results section to find out how the implementation of a lockdown affected the spread of the disease! Facemasks Finally, we want to add facemasks. I could add all sorts of ways that this changes how a disease spreads, but for us, we'll just use it to decrease each person's contagiousness. All we have to do is give the Person() class a function that tells them to wear a face mask:

    def wearMask(self):
        self.contagiousness /= 2

Yep, we just halve their contagiousness if they wear a mask. Update initiateSim() so we can ask the user for the date the masks should come into use:

    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay

And update our call:

daysContagious, lockdownDay, maskDay = initiateSim()

Finally, we'll edit the section where we cycle through the days so that if the day reaches maskDay, we tell every person to run their wearMask() function:

    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()

If only it was this easy in real life, right? Well, what do you know, we've created a simple pandemic simulation with the ability to simulate each individual person, change attributes of the virus, enforce lockdowns, and make people wear face masks. Let's look at our results: Results I'm putting all the data gathered from my text save files into Excel. 
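If you would rather not paste the numbers into Excel, here is a minimal matplotlib sketch (my addition, assuming the space-separated save file produced by the loop above) that plots the daily contagious counts directly:

import matplotlib.pyplot as plt

# The save file is a single line of space-separated daily counts,
# e.g. "1 3 9 27 ..." written by saveFile.write(write) above.
with open("pandemicsave3.txt") as f:
    counts = [int(n) for n in f.read().split()]

plt.plot(range(len(counts)), counts)
plt.xlabel("Day")
plt.ylabel("Contagious people")
plt.title("Spread of the simulated disease")
plt.show()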
5000 people, 1 starting infecter, 1% starting immunity, 7 days contagious, no lockdown or masks: As expected, a nice smooth curve, almost mathematically perfect. By the end of the simulation, everyone has gained immunity and the cases drop to 0, which continues until all the days have completed. Now let's see what happens to the previous result when you implement some countermeasures: Now what we have here is really interesting. Take the blue line. This is the simulation without any countermeasures, just like our previous result. However, when we implement a lockdown on day 15, it has a huge effect on the orange line; the spread of the disease is curbed before it can really take off, and look at that gradual curve back down again: that's where there are no new cases and people are gradually becoming immune! We can then compare that to the gray line, where we implement lockdown just 5 days later than orange. It has a drastically lower effect because that five-day delay really made a difference to the number of cases. Finally, take a look at the yellow line. This is where we implement face masks, and it's probably the most interesting simulation of all. You can see that at day 15, there is a sudden change in the gradient of the line, which affects how fast the disease spreads. It probably would have increased much more rapidly without the face masks! Around day 21, there is a peak, and thanks to the masks, it is substantially lower than the blue line, where there were no countermeasures! There is also a tiny secondary peak, and the overall summit of the curve lasts longer than in any other simulation. Can you figure out why? Next Steps Just to clarify, this was supposed to be a simple simulation. It is, of course, very basic, with very limited parameters and functionality. However, it is incredible to see how much we can learn from a simulation that takes up barely a hundred lines of code. It really puts into perspective the impact lockdowns and face masks had. I encourage anyone reading this with a programming mindset to go out and improve my code. I'd recommend the following features:

Face masks randomly (Gaussian?) affect contagiousness

Not everyone obeys lockdown, and even for those who do, there is a chance of an infection happening, say, during a grocery shopping trip (see the sketch just below for a starting point)

A certain percentage of people wear face masks, and this varies on a day-to-day basis

More social dynamics, or parameters in general. The idea of communities.

If anyone does take on the challenge of upgrading this code, I'd love to see what results you get from playing around with the factors. Thanks for reading! 
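As a starting point for the lockdown-compliance idea in the list above, here is a hedged sketch (my own, not the author's code; the rates and names are illustrative) of how runDay's lockdown rule could allow imperfect compliance:

import random

# Hypothetical tweak: instead of forcing peopleMetToday to 0 under lockdown,
# let each person ignore the lockdown with some small probability, and give
# even compliant people a small chance of one essential-errand contact.
COMPLIANCE_RATE = 0.9        # 90% of people obey the lockdown
ERRAND_CONTACT_CHANCE = 0.1  # compliant people still meet someone on 10% of days

def contactsUnderLockdown(peopleMetToday):
    if random.random() > COMPLIANCE_RATE:
        return peopleMetToday  # non-compliant: behaves as normal
    if random.random() < ERRAND_CONTACT_CHANCE:
        return 1               # compliant, but a grocery-run contact happens
    return 0                   # compliant and fully isolated

# Inside runDay, the line `if lockdown == True: peopleMetToday = 0` would
# become: peopleMetToday = contactsUnderLockdown(peopleMetToday)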
Full code:

from scipy.stats import norm
import random
import time

peopleDictionary = []

# simulation of a single person
class Person():
    def __init__(self, startingImmunity):
        if random.randint(0, 100) < startingImmunity:
            self.immunity = True
        else:
            self.immunity = False
        self.contagiousness = 0
        self.mask = False
        self.contagiousDays = 0
        # use gaussian distribution for number of friends; average is 5 friends
        self.friends = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0))

    def wearMask(self):
        self.contagiousness /= 2

def initiateSim():
    numPeople = int(input("Population: "))
    startingImmunity = int(input("Percentage of people with natural immunity: "))
    startingInfecters = int(input("How many people will be infectious at t=0: "))
    for x in range(0, numPeople):
        peopleDictionary.append(Person(startingImmunity))
    for x in range(0, startingInfecters):
        peopleDictionary[random.randint(0, len(peopleDictionary) - 1)].contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0) * 10)
    daysContagious = int(input("How many days contagious: "))
    lockdownDay = int(input("Day for lockdown to be enforced: "))
    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay

def runDay(daysContagious, lockdown):
    # this section simulates the spread, so it only operates on contagious people, thus:
    for person in [person for person in peopleDictionary if person.contagiousness > 0 and person.friends > 0]:
        peopleCouldMeetToday = int(person.friends / 2)
        if peopleCouldMeetToday > 0:
            peopleMetToday = random.randint(0, peopleCouldMeetToday)
        else:
            peopleMetToday = 0
        if lockdown == True:
            peopleMetToday = 0
        for x in range(0, peopleMetToday):
            friendInQuestion = peopleDictionary[random.randint(0, len(peopleDictionary) - 1)]
            if random.randint(0, 100) < person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity == False:
                friendInQuestion.contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0) * 10)
                print(peopleDictionary.index(person), " >>> ", peopleDictionary.index(friendInQuestion))
    for person in [person for person in peopleDictionary if person.contagiousness > 0]:
        person.contagiousDays += 1
        if person.contagiousDays > daysContagious:
            person.immunity = True
            person.contagiousness = 0
            print("|||", peopleDictionary.index(person), " |||")

lockdown = False
daysContagious, lockdownDay, maskDay = initiateSim()
saveFile = open("pandemicsave3.txt", "a")
for x in range(0, 100):
    if x == lockdownDay:
        lockdown = True
    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()
    print("DAY ", x)
    runDay(daysContagious, lockdown)
    write = str(len([person for person in peopleDictionary if person.contagiousness > 0])) + " "
    saveFile.write(write)
    print(len([person for person in peopleDictionary if person.contagiousness > 0]), " people are contagious on this day.")
saveFile.close()

Thanks for Reading! I hope you found this entertaining and possibly inspiring! There are so many ways that you can improve this model, so I encourage you to see what you can build and see if you can simulate real life even closer. As always, I wish you the best in your endeavors!
https://towardsdatascience.com/simulating-the-pandemic-in-python-2aa8f7383b55
['Terence Shin']
2020-12-21 03:35:16.322000+00:00
['Data Science', 'Programming', 'Simulation', 'Python', 'Pandemic']
Setting up Backend (Part-2)
Welcome back! Or if you only just finished the first part, welcome to Part 2! In this second part, we will see how to handle the chunks we received from the client & play the video back in various formats depending on user bandwidth. I am hosting this on AWS; you can host wherever you want, but please ensure you have root access to the VM or permission to install applications. We will be installing applications for handling video, audio, and other multimedia files. Let's launch an EC2 instance with Ubuntu OS (t2.micro). Please select Ubuntu Server (free tier). Just accept the defaults on every page until security groups, where we need to allow a few ports, as below. Note that we are running the Node server on port 5000, and as we will be hosting the app on Netlify, we might get a mixed-content error if the server (EC2) is on HTTP. Hence, we also need to open port 443, as we will be installing a free SSL cert on the server. That's right, HTTPS for free. The final step will be to generate a key pair to connect to our instance through SSH. Generating a key pair: please save the .pem file safely, as you will only be able to download it once. Next, if you are on Windows, you can use Git Bash or PuTTY; I will be using Git Bash, since it supports the standard ssh command shown below. Please navigate to the folder containing the .pem file and type

ssh -i name_of_pem_file.pem ubuntu@ec2-14-265-896-75.ap-south-1.compute.amazonaws.com

Note: all AWS EC2 Ubuntu instances have ubuntu as the default user, so the address you will connect to will be ubuntu@<public IPv4 DNS address of the instance>. Once you are in, we are going to install a few applications. 1. FFmpeg

sudo apt update
sudo apt install ffmpeg

Once this is installed, you can verify it by running ffmpeg -version; it should return something like this:

ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)

FFmpeg is the go-to open-source application, able to decode, encode, transcode, mux, demux, stream, filter, and play pretty much anything that humans and machines have created. We will use it to convert the incoming video to various formats and to separate the audio from the video. 2. MP4Box

sudo apt-get install subversion
svn co https://svn.code.sf.net/p/gpac/code/trunk/gpac gpac
cd gpac
./configure --disable-opengl --use-js=no --use-ft=no --use-jpeg=no --use-png=no --use-faad=no --use-mad=no --use-xvid=no --use-ffmpeg=no --use-ogg=no --use-vorbis=no --use-theora=no --use-openjpeg=no
make
make install
cp bin/gcc/libgpac.so /usr/lib

Verify the install:

MP4Box -version
MP4Box - GPAC version 0.5.1-DEV-rev5619
GPAC Copyright (c) Telecom ParisTech 2000-2012

We need MP4Box to generate an MPD manifest file. A Media Presentation Description is a manifest file for MPEG-DASH streaming. For example, YouTube adjusts video quality based on users' bandwidth; that requires an MPD manifest file, a custom video player & tons of cash. But let's get back to the topic: the manifest file contains information regarding the format, resolution, codecs, etc. If you want to know more about it, please check this link out. 3. Nginx The good folks at Nginx have a very helpful page in their docs on how to install it on an EC2 instance, with pictures. We need this to redirect all incoming traffic to the port running Node and to limit the client's upload size, but mainly as a reverse proxy. What are we trying to do here? Let me explain in detail how this works. 
The client uploads a video file in chunks; we save all the chunks in a temp folder, and once the upload is complete, we merge them all and get the original file back. Once we have the file, we will convert the video into various formats with the help of FFmpeg, for example 360p, 520p & 720p, and separate out the audio track as well. After that, we provide the file paths of all those converted formats to MP4Box, which will generate an MPD file like the one below. Let's dive deeper into an MPD (Media Presentation Description) file:

<Period duration="PT30S">
  <AdaptationSet segmentAlignment="true" maxWidth="320" maxHeight="240" maxFrameRate="11988/400" par="4:3" lang="und" startWithSAP="1" subsegmentAlignment="true" subsegmentStartsWithSAP="1">
    <Representation id="2" mimeType="video/mp4" codecs="avc1.64000D" width="320" height="240" frameRate="11988/400" sar="1:1" bandwidth="171132">
      <BaseURL>only240_test_dashinit.mp4</BaseURL>
      <SegmentBase indexRangeExact="true" indexRange="1033-1136">
        <Initialization range="0-980"/>
      </SegmentBase>
    </Representation>

Periods A Period element marks a part of the content with a start time & end time. We can use multiple Periods for scenes or chapters, or to separate ads from program content. Adaptation Sets Adaptation Sets are sets of media streams. For example, a Period can have one video Adaptation Set & multiple audio sets. Let's say you are watching a movie on Netflix; for simplicity's sake, let's just say the movie is available only in 720p and in multiple languages. When you request a change in language, the Adaptation Set for the video remains the same, but a different set of audio segments matching your request is sent. Now, we all know that Netflix will have more than just one format, but ultimately, with various formats, it's just a change in Adaptation Sets for the different video and audio requests made by the user agent or the user. Then there is bandwidth, resolution, etc. Here's a good article by Brendan Long with in-depth details of MPD file syntax and its meaning. Now, once we are done with the setup, let's set up a basic Node server, nothing too fancy:

var express = require("express");
var app = express();
const cors = require("cors");
const session = require("express-session");
const key = require("./config").db;
const connectMongo = require("connect-mongo");
const MongoStore = connectMongo(session);
const fs = require("fs");
var resumable = require("./resumable-node.js")("./tmp");
const os = require("os");
const formData = require("express-form-data");
var bodyParser = require("body-parser");
const ffmpeg = require("./ffmpegArgs");

app.use(cors({
  origin: "My URL",
  methods: ["GET", "POST", "PUT", "DELETE"],
  credentials: true, // enable set cookie
  exposedHeaders: ["Content-Disposition"], // this is very imp.
}));

// Managing sessions
let sessionOptions = {
  name: "SESSID",
  secret: "mysecret",
  saveUninitialized: false,
  resave: false,
  cookie: { maxAge: 3600000, sameSite: true },
  store: new MongoStore({ url: key }),
};
app.use(session(sessionOptions));

// Host most stuff in the public folder
app.use(express.static(__dirname + "/folder_name_to_save_files"));

// Creating a temp dir for all the uploaded chunks
const options = {
  uploadDir: os.tmpdir(),
  autoClean: true,
};

// We parse the form data with express-form-data
app.use(formData.parse(options));

// All the uploaded chunks go to the below path
app.post("/upload", function (req, res) {
  // Chunks come in req.body along with chunk number, chunk size & all details
  resumable.post(req.body, req.files, function (status, filename, original_filename, identifier) {
    // If the status is done, we merge the chunks and create a single video file
    if (status === "done") {
      var s = fs.createWriteStream("./folder_save/" + filename);
      s.on("finish", function () {
        // On the finish event we clean the chunks from the drive
        resumable.clean(identifier);
        res.status(200).send();
      });
    }

Please check the below code for the resumable.js server side. It takes care of everything, from merging the incoming file to deleting the chunks in the temp folder. Once we merge all the files into one, the client sends multiple GET requests which act as a trigger to start the various format conversions; this also lets us skip a particular format if we don't want it.

app.get("/", (req, res) => {
  // Just to test
  res.send("ok");
});

// Now we hit various endpoints one by one using async & await on the frontend,
// just to keep things simple, to start the process of converting into various formats.
// To extract audio only:
app.get("/onlyaudio", (req, res) => {
  ffmpeg.onlyaudio(filename).then((response) => {
    return res.status(200).send(response)
  })
}

// Next we convert it to 240p, 520p & 720p
// please check the repo link at the bottom for the detailed code.

Lastly, we generate an MPD file by giving all the formats, including audio, to the MPD file function. To play the video, we access the MPD file directly in the public folder, like URL/folder_where_files_saved/mpdfilename, from a video player that supports DASH streaming. Nginx Config

server {
    server_name Url_with_SSL;  # managed by Certbot
    client_max_body_size 20M;
    proxy_read_timeout 3000;
    proxy_connect_timeout 3000;
    proxy_send_timeout 3000;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://localhost:5000;
        proxy_set_header Access-Control-Allow-Origin *;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        #try_files $uri $uri/ =404;
    }
}

Working Example Finally, to view the magic, please visit https://dash-uploader.netlify.app/, a small app to test these concepts. (This link is no longer valid: big server cost.) While playing the video, please open the network tab; you should see something like this. If you throttle down to slow 3G, you can see the format changing from 720p to 240p or 540p. 
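To make the conversion step concrete, here is a minimal, hypothetical sketch of the FFmpeg and MP4Box invocations described above. It is my illustration, not the repo's code; it is written in Python for brevity (the actual server drives the same binaries from Node), and the file names, heights, and bitrates are placeholders:

import subprocess

src = "upload.mp4"  # the merged file produced from the uploaded chunks

# Transcode to a few renditions; -vf scale keeps the aspect ratio,
# -an drops audio since DASH serves it as a separate stream.
for height, bitrate in [(240, "400k"), (540, "1200k"), (720, "2400k")]:
    subprocess.run([
        "ffmpeg", "-i", src, "-an",
        "-c:v", "libx264", "-b:v", bitrate,
        "-vf", f"scale=-2:{height}",
        f"video_{height}.mp4",
    ], check=True)

# Extract the audio track on its own.
subprocess.run(["ffmpeg", "-i", src, "-vn", "-c:a", "aac", "audio.m4a"], check=True)

# Hand every rendition to MP4Box to segment them and emit the MPD manifest.
subprocess.run([
    "MP4Box", "-dash", "4000", "-rap",
    "-out", "manifest.mpd",
    "video_240.mp4", "video_540.mp4", "video_720.mp4", "audio.m4a",
], check=True)

A DASH-capable player (dash.js, Shaka Player, etc.) pointed at manifest.mpd then switches between the renditions based on the measured bandwidth.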
Please check below for the detailed repo. For some reason, Safari has an issue playing this; I guess it supports a different set of video codecs. Feel free to let me know if I got something wrong.
https://medium.com/eoraa-co/setting-up-backend-part-2-38333fa62959
['Amit Rai']
2021-09-07 08:12:47.437000+00:00
['Ffmpeg', 'Node', 'Videostreamingapp', 'Videostream', 'Dash']
PR in Politeia: Process, Progress, and Pitching In
October marked a landmark for Decred: the launch of Politeia, its policy and budgetary proposal system. In the first week, six proposals were introduced, which the Decred community actively debated and voted upon. Voter participation topped 50%, a greater rate than we saw in the U.S. midterm elections. In the first round of proposals, the Decred community overwhelmingly passed an open research project that will enable investments of time into areas that would have otherwise been unlikely to find funding, as well as a proposal for nomenclature changes to ticket voting. However, for this blog post, my focus will be two public relations (PR) proposals, one which was voted down, and one which passed by a narrow margin. With voting complete, I'll take this opportunity to communicate to the Decred community why I believe we need a PR partner right now, and also explain the process that resulted in the submission of the two proposals in Politeia. Hint: they didn't randomly show up on the date of launch. Additionally, I'll reflect on what I saw of Politeia in action, detail some refinements we can make, then explain the next steps in the marketing and PR process and how you can contribute. Why We Need a PR Partner Decred is an amazing project with a unique story. Developers with real chops self-funded and rolled up their sleeves for two years, then launched the coin via airdrop. The concept of a better Bitcoin, one with fewer conflicts of interest, one with governance, and one that is truly decentralized, resonated with an audience and spawned an impassioned global community. Tanel August Lind and Sander Meentalo at Eeter developed brand icons and a website, Kyle Chivers developed videos, Decred Jesus was born. Decred began appearing at conferences and events across the U.S. and around the world; the Decred Jacket became a legend. The community grew organically, the price of Decred took off, and investment came from institutions like Placeholder VC and Blueyard Capital. Of course, these events were driven by the continued development of Decred. Decred has now taken the first two steps towards becoming a Decentralized Autonomous Entity through the implementation of consensus voting and the public proposal system, backed by Politeia. Decred also created tools to perform on-chain atomic swaps and added SPV wallet support. In the midst of the first version of Politeia, and with privacy enhancements and Lightning Network implementation forthcoming, I believe the time is right for Decred to take the next steps to communicate to the world just how far we've come, in order to expand the community of contributors and developers. We will need a dramatically larger community and user base to grow the project by one or more orders of magnitude. We must take active, larger-scale measures. The returns to community development are increasing. The more outreach we do, the more we'll be able to grow the community, and a larger autonomous community means even more events, users, and developers. It's a virtuous cycle we need to prime now. Crypto is new and exciting and Decred is funded for the long term, but there is a chance it loses relevance if it doesn't grow significantly to reflect the quality of the project. This does not mean compromising our core principles. On the contrary, it means building upon our core principles, affirming those principles, messaging from them, and amplifying them in order to attract more of what we call "smart money" so we can continue to strengthen our project. 
The goal of Decred is to build our collective intelligence, and to do that, we need to attract the right stakeholders. We want community members who value their sovereignty, act as a fiduciary of the project, and propose and execute initiatives that will best build the cryptocurrency and its applications. Our treasury is currently worth roughly USD 20 million, but it could be worth USD 2 million or USD 200 million. The right PR firm will help achieve our communications goals, as well as protect and grow our Treasury. The firms that submitted proposals in Politeia have the experience, discipline, and domain expertise to get us quality media placements to increase awareness of Decred and generate more interest. Many people have commented positively about Decred’s presence at events — they’re certainly a forum around which to rally and build the community. These typically cost USD 50–100k per large scale event. Given the scale of investment, it makes sense to be more confident we’re attending the right events, planning them intelligently to coordinate speaking opportunities, and then optimizing our presence through media training and media outreach to schedule interviews for Decred community members on the right topics with the right media outlets. This requires real experience, full time attention, and active management. The right partner will know which events are the best fit, and they might even bring negotiating leverage through their roster of clients. Section I: What Happened My involvement with Decred began with discovery work: a user survey, key stakeholder questionnaire, and a competitive analysis. Through those efforts, I formulated proposals for positioning and messaging, but have yet to share them with the community. When the former events manager left the project, Jake Yocom-Piatt asked that I assume responsibility for planning events, of which two were already in the works. When I discovered that hundreds of events take place each year across the world, I queried key members of the Decred team to see if there was consensus on which ones to attend, what form our presence should take, and what metrics could be used to make attendance decisions. There was no consensus on any of these questions, so I considered external resources who would have specific expertise in these areas. Given the plan to bring on a PR partner, I postponed the positioning and messaging work until they started in order to gain their alignment. I began the search for a PR firm by casting a wide net, reviewing in depth more than forty firms that fit into three categories: bulge bracket, B2B with fintech expertise, and pure play crypto firms. I quickly determined that only a pure play crypto firm would work because of their centrality. Decred exists deep in the cryptoverse, and it would have taken an outside firm the majority of a year simply to understand Decred, let alone be effective with media placements and messaging. Crypto-specific PR firms come with a solid base of knowledge, as well as a strong list of media contacts and event experience in the space. With the focus on pure play crypto firms, three main issues immediately made themselves apparent. First, due to the newness of cryptocurrencies, few firms exist today. As an offshoot of the narrowness of the market, most of the firms under consideration worked with or had worked with coins that we would consider to be competition. 
Finally, due to the nature of the 2017 market, the majority of the firms had focused extensively on supporting ICOs and other projects that run counter to the Decred ethos. After reviewing more than a dozen crypto specific firms and phone screening six, I narrowed the focus to two: Wachsman and Ditto PR. To comment briefly on the aforementioned issues, Wachsman had handled many ICOs, and they perform ongoing work for DASH, a project that focuses on governance and has a similar autonomous treasury. Given the different positioning of Decred (Autonomous Digital Currency) and DASH (Digital Cash), I was not concerned with conflicts of interest. Decred focuses on attracting sophisticated, active users, whereas DASH focuses on simplifying usability and maximizing transactional use. I don’t believe there is much overlap in the user segments. Regardless, Wachsman explained that they employ a team of 110 people, that there would be no team member overlap, and that there would be an internal firewall preventing information from crossing lanes. Additionally, Wachsman immediately took to the Decred project and its ethos, expressing familiarity and enthusiasm for the process. Ditto had fewer issues than Wachsman. They had done work with Riccardo Spagni (fluffypony) for Monero, which is a project the Decred community tends to respect and appreciate. They also had a smaller team and a less global footprint. With the search reduced to Wachsman and Ditto, I invited key members of the Decred strategy, marketing, writing, design, and operations teams into the vetting process. At this point, I was informed that the Politeia launch was imminent and that a press release document had already been drafted. I took the internal work the Decred team had performed and asked that both Wachsman and Ditto PR review and edit the document, and also recommend a release strategy. This was an opportunity to see the teams in action, understand how they think and work, gauge the quality of their ideas, and judge their fit with Decred. Both teams agreed to perform this work without compensation. Within days, both shops presented their recommendations to our expanded team, then responded to our subsequent questions. Their work markedly improved our press release and informed our release strategy. The release of Politeia was picked up in several influential trade publications, and blogs were released by Jake Yocom-Piatt, Richard Red, and Dustin LeFebvre. As for a decision, we could not come to a consensus of which firm would be better, for they each brought different advantages. We did, however, all agree that both firms were qualified and would be a good fit for Decred. At this point, Richard Red suggested that we wait for Politeia and have both firms submit proposals there. Everyone wondered why they hadn’t thought of that idea, and we were off. I communicated plans for the next step in the process with the agencies and coordinated proposal submissions. Politeia in Action You don’t know whether an airplane will fly until it takes off for the first time, and eighteen months of development left many of us full of anticipation intermixed with nerves. Both Wachsman and Ditto downloaded Decrediton wallets, acquired decred, paid the proposal fee and submitted their proposals. The community exploded with comments, constructive criticism, and a vibrant discussion about the need for a PR firm and the appropriate scope of work. 
Discussions took place in Politeia, on Reddit, on social media, and in a room called #proposals that was created in Matrix. The online community seemed to coalesce around certain questions such as the denomination of payment, monthly cost, and the ability to apply metrics to each firm’s efforts. Wachsman and Ditto both actively participated in the discussion, sharing their stories, references, and answering questions about their proposals, services, and teams. Wachsman came out from the start and enthusiastically asked to be compensated in decred, whereas Ditto requested a combination of decred and USD weighted towards USD. The community was searing in their condemnation of that request and did not hold back with their comments. By my estimate, the battle looked to be over for Ditto. However, reviewing the discussion in the proposal room in Matrix and in Reddit, Ditto listened and revised their proposal to integrate feedback and direction they had received from the community. The collective intelligence of Decred was working and the process was alive, iterating in real time. When discussion seemed to have died down and the two firms were content with their proposals, they agreed to authorize for voting. On Monday, October 29, a Politeia administrator opened the voting, and Politeia entered its new chapter of adjudication. Discussion continued online, and votes poured in for and against each entry over the next week. At the beginning, there were concerns over the scenarios where either both proposals passed or both failed. If both were voted down, I would have considered it the will of the stakeholders. If both passed, it would have to be considered an unanticipated edge case in our sovereignty model. Fortunately, neither was the case here. Ditto raced out to a lead, and entering the weekend, their vote tally hovered comfortably around 75% Yes. However, a large number of No votes came in over the weekend, pushing Ditto down to 54% by Sunday. Sunday night and into Monday, the last votes trickled in to buoy Ditto just over the voting threshold of 60%, finishing at 62%. In the end, the process enabled the community to vet proposals in a way that facilitated competitive refinement. The community then voted upon the matter, resulting in a satisfactory outcome. It’s the first experiment in this process, but it was a fascinating, roller coaster, nail-biting journey with a positive outcome. The future of Politeia is promising. As an aside, Politeia is new, and it’s important to clarify voting thresholds and quorum numbers. For on-chain consensus votes, the threshold is 75% with a 10% quorum, whereas with Politeia voting, the threshold is 60% with a 20% quorum. Section II: What We Learned Proposal Structure When we asked two firms to make proposals for the same scope of work, we knew we were taking a risk that neither or both of them would be approved. However, Politeia only had one type of proposal. Upon reflection on the PR proposals, I believe the community has arrived at a consensus for a two-tiered vote when professional services need to be contracted. In the future, we intend to build out a system where the first proposal would include the standard What, Why, How, Who, and When information, including scope of work and a rough budget. This proposal can be made by anyone but should be actively managed by a DCC (Decred Contractor Clearance) holder. After the standard discussion and voting process is completed, if the measure is passed, a second layer of the proposal will come to exist. 
At this time, the Decred community has committed to the endeavor, and any number of potential contractors will be free to submit their specific proposals in Politeia, all under the original proposal that has been approved. There will be internal discussion and consensus on the way this vote would work, but I would guess that a plurality of the votes with a 20% quorum would suffice to pass on the second level of voting. Abandoned Proposals As of writing, four proposals exist pre-voting in Politeia and two of them are more than two weeks old. It could be that these proposals were greeted coldly by the community and the proposer thought there was little chance of the proposal passing, or someone could have simply fallen off the grid. Either way, after a certain amount of time, we should avoid clutter in Politeia and establish an “Abandoned” tab to store these proposals. I believe two weeks is a reasonable amount of time for this, but that’s simply my opinion and the community will decide. Progress Reports Once a budgetary proposal has passed, the Decred Treasury has agreed upon an outlay of funds. As most people know in life, no news is not good news. Lack of information tends to make people nervous, so we should establish a method for communicating progress on projects and activities. At the beginning of this system, the community will likely trial out a number of different methods, including posting information in relevant Matrix rooms, Medium, or simply pointing to Github. I would argue, long term, that a tab should be built within the Politeia proposal system to track this data, particularly when the project becomes a Decentralized Autonomous Entity and these proposals’ payouts are executed via smart contracts. It’s best to have all the data regarding the lifespan of a project in one place. Communication with Community When the PR proposals went live in Politeia, there was no advance notice to the majority of the community. As such, many within the community perceived the agencies as interlopers attempting to cash in on the launch of the Decred Treasury. This should have been expected, but was overlooked due to a focus on the communications strategy and execution of the Politeia launch. Upon reading some of the reasonably directed criticism in the proposals room in Matrix, I wrote a synopsis very similar to the early section of this blog detailing why a PR partner was needed, as well as the work that had been done to date to vet and qualify the parties. The tenor of the discussion in that channel changed instantly, and I was fairly certain the issue was clarified. However, it has come to my attention recently that all that discussion was taking place in a room occupied by only 90 community members. I had quelled the concerns of a small minority of the Decred community. Going forward, I think that continuing to work with a small group of core community members to align on a strategy is a good approach. I don’t think anything of consequence should be done on behalf of Decred without the input of the community. However, once discussed and agreed upon by committee, I would make certain to clearly articulate the strategy to the wider Decred community through channels such as Matrix, Medium, Reddit, Twitter, and more. While one component of the lack of communication was ignorance, the other was intentional. 
Once the proposals were introduced, I contributed to the discussion by answering questions about the process and who would manage the PR firm if one was hired, but I avoided actively campaigning for the concept or for one particular firm over the other. This was the initial launch of Politeia, and I wanted the voice of the community to be heard. I, and others, were very careful to avoid endorsements that could have been perceived as anything close to representative or nation-state politics. I believe this ethos can be sustained going forward through the aforementioned communications that would properly introduce each proposal to the community.
Communication: Reporting
Given that 38% of the cast tickets voted against Ditto's proposal, and that 100% of the community is interested in tracking Politeia's first large-scale budgetary decision, ongoing communication regarding this process and our collective efforts and results is vital. In Politeia and in the proposal room in Matrix, many of the reservations expressed about a PR firm centered on the lack of metrics for the deliverables. These concerns are valid, and they will always exist when contracting professional services. It's also true that one can only judge what one sees, further highlighting the need for the marketing and PR team to integrate into the community and actively share information. That means soliciting input and feedback on a regular basis in chat rooms and publishing monthly synopses detailing the work performed. The community can judge me and the PR team based on the work we do, measured against the plan I detail in the "What Happens Next" section below.
The first step in the plan centers on internal alignment of the team on positioning and messaging, and representing that in an updated website. Once those blocks are set, we'll generate an integrated plan to create awareness and drive people to inquire further about Decred. One of the major tactics we'll use to achieve that will be media relations, where we'll try to get Decred contributors opining as thought leaders on relevant topics, or even get feature articles written on Decred or its governance system. Our targets will be top crypto publications and mainstream outlets such as Forbes, Fortune, Bloomberg, and TechCrunch. These types of stories will generate interest and further investigation into Decred, and we'll have the website updated and the community trained in messaging to capitalize on the opportunities. Once we build more awareness, our goal will be to capitalize on the interest by attracting more people and institutions to the Decred project. We'll see proof of this working as new contributors and developers join Matrix. Decred's event presence will be a place where progress reveals itself in visible ways. We will bring organization and a unified message of Decred, and we will plan well in advance to maximize our impact. The PR team will plan on-site interviews with key personnel ahead of each conference, on stories that are specifically relevant to the project and its event presence. The number and prominence of our speaking opportunities will also be a good proxy for the progress we make. We're looking for better stages with larger audiences and panels with other high-quality projects. At events down the line, we'll be able to assemble local teams, which will lower our costs and allow us to better connect with attendees.
We’re also looking to grow our presence in strategic markets; key areas to watch include Mexico, Brazil, Europe, and Asia Pacific. We’re actively cultivating communities in order to increase our reach and grow our event presence. This is a general overview of the plan, and it will all take time. Alignment will likely take into 2019, which means true outreach will not happen before that because pieces will not be in place to capitalize upon the opportunities. Building of communities is something that will take years, but look for Decred to appear in new places in 2019 and 2020. Ongoing Iteration Politeia is a massively powerful tool, and we’ve just seen its power with the approval of Ditto PR’s proposal. Decred now has a partner with which it can establish and execute a plan to spread the word and grow the community. It’s also a MVP — a minimum viable product. There is robust discussion throughout the chat platforms about whether voting should be transparent, whether and when the proposal submission fee should increase, whether there should be a cost to comment on a proposal, etc. The initial success of the platform has energized our community, brought it together through common experience, and sparked ideas that will make the platform better over time. We all have sovereignty, and we all have the power to change or improve the process through chat discussion and then Politeia. Small issues are already being addressed, and others are being discussed online. Join the proposals room in Matrix and if you believe strongly that certain changes should be made and consensus has been reached, take action by submitting a proposal in Politeia. The system will become what the community imagines and codes. Budgeting Responsible decision making requires information. While many of us know that the Decred Treasury currently holds approximately DCR 555,000, and that current rate of Treasury minting is DCR 1.954 per 5 minute block, it’s asking a lot of the community to extrapolate that into a budgetary concept in which an informed decision can be made upon a USD 20–25k/month expenditure, particularly when the mining rate is reduced by a factor of 100/101 approximately every 21.3 days. With that in mind, I and others are working on a budgetary model that incorporates three variables: the exchange rate of Decred, annual budgetary spend (as a % of Treasury), and the annual increase in price of Decred. This tool should give me and others a healthy understanding of the various pictures of our financial wellness based upon those three variables. I intend to use this model to build consensus around an annual marketing and communications budget to grow the Decred community. This model will be widely available and can be used by other parties, as well. If the community is entrusted with important decision making, it must have relevant data from which to base those decisions. Section III: What Happens Next? Ditto and Decred have already publicly agreed to a scope of work and terms via the approved Politeia proposal. I (Dustorf) am actively working with the Ditto team to share information, and the target date for formal onboarding is December 1. I and Ditto will arrange for onboarding meetings to determine how we will work together and who will be involved from the Decred side. Planning will involve execution across multiple channels, including marketing, events, social media, design, and the writers. 
Members of those groups will need to be involved in the planning process, and we'll need to determine how best to integrate communications into our existing platforms. In order to realize our objectives of growing Decred across the world, we will need all the active involvement we can facilitate. We will have weekly calls to discuss ongoing issues, and the Ditto team will actively participate in Matrix rooms. The core concepts of Decred will be upheld: decentralization, evolving stakeholder decision making, and deliverables before hype. Once onboarding is complete, I plan to achieve the following:
1. Gain consensus on the positioning and messaging of Decred. As a community, we need alignment on this basic issue in order to build the brand uniformly across the world and to maximize the effectiveness of our communications. Once agreed upon, content will be packaged and made available to enable community members to activate.
2. Launch a user survey. Last run in April 2018, the survey will gauge the community's views on Decred and query them on what issues they find important and what tactics they believe would be effective.
3. Update the website. The website is the first place most people and institutions go to learn about Decred. Leveraging the agreed-upon messaging, I would like to make the website easier to navigate, give various groups access to critical information, and automate processes such as institutional investor relations and community organizing.
4. Develop an integrated marketing plan. Based upon the feedback from the user survey and brainstorming done within the channels, we'll develop an integrated plan to activate Decred across the world in 2019, including community building, events, and quality video and written content. This plan will build awareness and drive people and institutions to the website and events to learn more about Decred and, hopefully, join the community.
How Can I Contribute?
We have a ton of work ahead (as you can see from the list above). We'll need contributors of every sort in various jurisdictions to help realize these objectives. We'll first need people to help flesh out and agree upon the positioning and messaging of Decred. That discussion will likely take place in the marketing room in Matrix in early December. We're looking for community organizers to introduce and educate others on Decred at local meetups to increase awareness and build the community. We're also looking to recruit others to introduce Decred in new markets, people who are educated, experienced, trusted, and respected in crypto. If you're looking to be an active community organizer, join us in the marketing room in Matrix and express your interest to me or others. We will share information, establish best practices, and determine which appeals of Decred are universal and which need to be customized for different countries or cultures. We're also looking for contributors to help generate content in the writers room. Website updates will likely take place in both the writers room and the marketing room, and communications planning will take place in the marketing room. I'll also be making a proposal in the research channel for a competitive analysis of various projects of interest. This work will help us formulate arguments for Decred relative to other projects and help us with positioning. If you join those chat groups, you'll learn more as we do in the weeks to come. If you have specific talents or ideas to share, join us.
Conclusion
Politeia is off to an amazing start.
The quality of the discussion and the way the process evolved to refine the proposals based upon community feedback demonstrated the project's premise in action. To me, it was like the Wright brothers experiencing liftoff for the first time, knowing that their dream was possible and history would be written. Politeia has been released, and it has succeeded in its first endeavors. We have identified ways to iterate and improve it, and better ways to share information with the community so it can make quality decisions. I could not be more excited about the current state of Decred, the direction things are going, and the ability of our community to band together to demonstrate a new type of currency, one where stakeholders have sovereignty. It offers powers that will continue to be conceived, developed, and unleashed over the coming months and years. Join us and help design and realize these ideas.
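Appendix: a minimal sketch of the three-variable Treasury model described in the Budgeting section above. The starting figures (DCR 555,000 balance, DCR 1.954 per 5-minute block, a 100/101 subsidy reduction roughly every 21.3 days) come from this article; the exchange rate, spend percentage, price growth, function names, and monthly granularity are illustrative assumptions, not the actual model under development.

# A minimal, illustrative Treasury projection (Python). Not the actual model.
BLOCKS_PER_DAY = 24 * 60 // 5            # one block every 5 minutes -> 288
REDUCTION_PERIOD_DAYS = 21.3             # subsidy falls by 100/101 each period

def project_treasury(years,
                     dcr_balance=555_000.0,     # Treasury holdings (DCR), from the article
                     block_subsidy=1.954,       # Treasury mint per block (DCR), from the article
                     dcr_price_usd=20.0,        # assumed exchange rate (USD per DCR)
                     annual_spend_pct=0.05,     # assumed spend as a % of Treasury per year
                     annual_price_growth=0.20): # assumed annual change in DCR price
    """Yield (month, DCR balance, USD value) month by month."""
    day = 0
    for month in range(1, years * 12 + 1):
        for _ in range(30):                       # approximate a month as 30 days
            day += 1
            if day % REDUCTION_PERIOD_DAYS < 1:   # roughly one reduction per period
                block_subsidy *= 100 / 101
            dcr_balance += block_subsidy * BLOCKS_PER_DAY
        dcr_balance -= dcr_balance * annual_spend_pct / 12    # monthly outflow
        dcr_price_usd *= (1 + annual_price_growth) ** (1 / 12)
        yield month, dcr_balance, dcr_balance * dcr_price_usd

for month, dcr, usd in project_treasury(years=2):
    if month % 6 == 0:
        print(f"month {month:2d}: {dcr:,.0f} DCR (~${usd:,.0f})")

Running the sketch with different spend percentages and price assumptions gives a rough feel for how quickly a USD 20–25k/month commitment draws down the Treasury under pessimistic versus optimistic price paths.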
https://medium.com/decred/pr-in-politeia-process-progress-and-pitching-in-d88771183dd4
['Dustin Lefebvre']
2018-11-17 19:52:45.061000+00:00
['Governance', 'Decred', 'Cryptocurrency', 'Public Relations', 'Blockchain']
Daily Horoscope: Moon in Taurus to Gemini
December 26, 2020
Moon in Taurus to Gemini, Saturday, December 26, brings two distinctly different moods to your day as the moon shifts from the fixed earth energy of steady Taurus in the daytime hours to the mutable air energy of flighty Gemini in the evening. A supportive trine aspect between the Taurus moon and Pluto in Capricorn encourages you to stay calm, centered, and grounded by doing the things that naturally give you those feelings. For some, that may be meditation or spending time in nature, but there are other mundane rituals — like putting on your sexiest outfit, organizing your workspace, or having an intense workout — that can also give you that feeling of control and success. Do aim for that centered state, because the inspiration and ideas that come from within, or the new information or events that come from without, are better accepted and integrated if you feel in control. As the moon enters Gemini this evening, it forms a trine with Saturn conjunct Jupiter in Aquarius, and with the sun conjunct Mercury in Capricorn trine to Uranus in Taurus, your world might seem like an episode of Monty Python's Flying Circus — "and now for something completely different!" A mutable T-square of the North Node in Gemini, Neptune in Pisces, and Venus in Sagittarius makes anything possible, but it also challenges you to choose from multiple options, any of which could influence your destiny. Being centered helps you see clearly and choose wisely.
Dunnea Rae, Aloha Astro
https://medium.com/@alohaastro/daily-horoscope-moon-in-taurus-to-gemini-cb0d3ebe859f
['Dunnea Rae']
2020-12-26 18:24:28.873000+00:00
['Life', 'Spirituality', 'Culture', 'Astrology', 'Horoscopes']
Best No-Code Platforms in 2021 — What to Expect
2020 saw a significant boost in the popularity of no-code platforms, as many businesses concluded the best way to grow out of an economic downturn (resulting from the pandemic) was to innovate. Will 2021 be the year that no-code application development takes hold? If so, what new innovations can enterprise buyers expect from state-of-the-art platforms like Encanvas?
Overcoming Bias
Few people who've been involved in the enterprise applications development market would be unaware of the strong bias towards coding and coding skills. The industry has been run by people who themselves trained as coders, and who believe coding offers unlimited versatility while any form of abstraction layer will inevitably lead to inflexibility: if not in the functionality they can build into apps, then in the ongoing platform architecture, making it more difficult to protect data, integrate with other systems, scale apps, or manage User and Group permissions. This has led some IT leadership teams to focus their cultures and behaviors around coders and coding, not speed-to-market and business outcomes. It was assumed, through this professional bias, that any product claiming to produce enterprise apps without coding was intended for 'citizen developers' — which has become industry speak for 'amateur.' The great thing about working with teams of awesomely clever and passionate people who want to solve a problem is that they don't see things like 'bias' as an insurmountable obstacle, but rather as just another bridge to cross.
https://medium.com/@ian-tomlin/best-no-code-platforms-in-2021-what-to-expect-979a5a1ca0a4
['Ian Tomlin']
2020-12-24 12:22:31.262000+00:00
['No Code', 'Digital Transformation', 'Fusion Team', 'Low Code', 'DevOps']
Unfinished Business
Today we’re delighted to announce funding and introduce the world to Scenery, a collaborative, intelligent video creation platform. We’re excited to bring on Ashot Petrosian as head of product and co-founder and to partner with Dave Samuel at Freestyle VC as well as other amazing investors to bring Scenery to life. In 2005 this team helped build Jumpcut.com. Jumpcut was the first web based video editor and one of the first fully functional web creativity tools. We didn’t use the word collaboration at the time, but Jumpcut allowed groups to share and edit source files and even allowed for published videos to be re-mixed. By simply pressing a button you went from viewing a video to being in your copy of the edit. We learned how creators such as iJustine built a community of artists that congregated around the craft of video editing. Photo credit to @weibel our amazing Jumpcut Creative Director and designer Today, most major productivity software categories have made the leap from desktop software to the web. Despite significant progress in web technology, professional video tools have not made this leap. With WebGL, WASM, and other advancements this will change. We can now build performant, feature rich applications integrated with the fabric of the web. That’s what Scenery is. We’re rethinking from scratch what video creation should look like, feel like, and be like today. We want to help make video creation into a team sport and build a platform for creators to create, learn, grow and teach each other (read a more about Scenery here). We’re excited to continue getting feedback from editors and teams to help us further understand problems we can help them solve. Please sign up at http://scenery.video to try early releases and give us your thoughts. This team has learned a lot from building products at Jumpcut, SnappyTV, Facebook Stories and Events, Yahoo! Video, Flickr and Twitter. Ryan and I helped lead SnappyTV, a live video post-production editor used by media companies such as the NFL and NBA which was acquired by Twitter. At Twitter we helped lead media publishing, live video and Periscope. Ashot started several companies then was instrumental in building Facebook Stories and Events. Chris Martin, who leads Scenery’s platform engineering, previously led Flickr’s platform engineering. We are excited to continue to learn how to better serve creators and their audiences through web technologies. We are hiring an amazing engineering team to join Ryan, Ashot, Chris, Kavi and Chet to crank out the amazing designs that Alyssa is dreaming up. We have made good headway in this direction, but more remains to be built, designed and imagined. If that is something that peaks your interest, let’s talk. Select Investment funds: FreeStyle VC, Precursor Ventures, Wireframe Ventures, Transmedia Capital, Uphonest Capital, Rembrandt Venture Partners Select Operators and Individuals: Kayvon Beykpour, Kevin Weil, Elizabeth Weil, Russ Fradin, Joe Bernstein, Keith Coleman, Ross Walker, Bobby Jaros, Dong Min, Brian Parker. Special thanks to David Pidwell, Ryan Peirce and Don Ryan for investing in our 3rd company Team: Alyssa, Chris, Kavi, Chet, Ashot, Ryan and Mike
https://medium.com/scenery-blog/unfinished-business-98e5fe532df3
['Mike Folgner']
2020-12-19 04:53:09.659000+00:00
['Collaboration', 'Remote Working', 'Vidéo', 'Tools', 'Video Editing']
Full-stack application development with AngularJS 11 and Asp.Net MVC Core 5.0 — Creating Combined Project [part1/4]
This is the first article of the series I am going to write about building a complete full-stack application using Angular as the front end and ASP.NET Core MVC as the back end. The series will be a comprehensive tour of the features of both technologies, and we will end up with a complete full-stack project. Below are links to each part:
This article is about using Angular and ASP.NET Core MVC together to create rich applications. Individually, each of these frameworks is powerful and feature-rich, but using them together combines the dynamic flexibility of Angular with the solid infrastructure of ASP.NET Core MVC.
Before You Start
1) Install .NET Core 5.0: https://dotnet.microsoft.com/download
2) Install Node.js: https://nodejs.org/en/
3) Install Visual Studio 2019: https://visualstudio.microsoft.com/vs/
4) Install Visual Studio Code [optional]: https://code.visualstudio.com/
5) Install SQL Server Express edition: https://www.microsoft.com/en-in/sql-server/sql-server-downloads
Creating the Project
There are several different ways to create a project that combines Angular and ASP.NET Core MVC. The approach that I use in this article relies on the @angular/cli package, used in conjunction with the .NET tools for creating a new MVC project.
Installing the @angular/cli Package
Open Windows PowerShell and execute the following command:
npm install --global @angular/cli
Once the command completes, the current version of the package (11.0.3 at the time of writing) has been successfully installed.
Creating the Angular Part of the Project
Open a new PowerShell prompt, navigate to a convenient location, and run the following command to create an Angular app:
ng new FullStackApp --directory FullStackApp/ClientApp --routing true --style css --skip-tests true --skip-git true
When the setup is complete, the result is a folder called FullStackApp/ClientApp that contains the tools and configuration files for an Angular project, along with some placeholder code to help jump-start development and check that the development tools are working.
Starting the Angular Development Tools
cd d:
cd FullStackApp/ClientApp
npm start
It can take a moment for the development tools to start and compile the project for its first use. Once the "Compiled successfully" message is shown, open a new browser window and navigate to http://localhost:4200 to see the placeholder content that is added to new Angular projects.
Creating the ASP.NET Core MVC Part of the Project
Once the Angular project has been set up, the next step is to create an ASP.NET Core project. Use a PowerShell command prompt to run the commands shown below in the FullStackApp folder:
cd d:
cd FullStackApp
mkdir ServerApp
cd ServerApp
dotnet new mvc --language C# --auth None
The dotnet new command adds all the files required for a basic ASP.NET Core MVC project to the FullStackApp/ServerApp folder, alongside the Angular project.
Preparing the Project for Visual Studio
1) In Visual Studio, go to "Open a project or solution".
2) Navigate to the FullStackApp/ServerApp folder and select the ServerApp.csproj file.
3) Visual Studio will open the project in .NET mode, as shown below.
4) Right-click the Solution item at the top of the Solution Explorer window and select Add > Existing Web Site from the popup menu.
5) Navigate to the FullStackApp folder, select the ClientApp folder, and click the Open button.
Visual Studio will add the ClientApp folder to the Solution Explorer so that you can see the contents of the Angular project, as shown below. The Solution Explorer, with both projects open, will look as shown below.
Right-click the ClientApp item, select Property Pages from the popup menu, and navigate to the Build section. Make sure that the "Build Web site as part of the solution" option is unchecked, as shown in the figure below. Select File > Save All; Visual Studio will prompt you to save the solution file, which can be used to open the Angular and ASP.NET Core MVC projects when you want to pick up a development session. Save the file in the FullStackApp folder using the name FullStackApp.sln. When you need to open the project again, open the FullStackApp.sln file; both parts of the project will be opened and displayed in the Solution Explorer window. The final layout of the FullStackApp folder will be as follows:
Preparing to Build the ASP.NET Core MVC Application
To configure the ports used when the application is started from Visual Studio, make the changes shown in the figure below to the launchSettings.json file in the ServerApp/Properties folder (the original figure is not reproduced here; a representative example appears at the end of this article).
Regenerating the Development HTTPS Certificates
The final preparatory step is to regenerate the development HTTPS certificates by running the commands shown below:
dotnet dev-certs https --clean
dotnet dev-certs https --trust
Select the YES option for each prompt that Windows presents.
Building and Running the ASP.NET Core MVC Application
The ASP.NET Core MVC part of the project can be compiled and executed from the command line or using the code editor. To build and run the project from the command line, run the following command in the ServerApp folder:
dotnet watch run
Open a browser window and navigate to https://localhost:5001; you will see the placeholder content shown in the figure below.
Connecting the Angular and ASP.NET Core Applications
The Angular and ASP.NET Core MVC applications share the same parent folder but are not connected in any way. It is possible to develop applications this way, but it is awkward. A more useful approach is to connect the two toolchains so that HTTP requests are received by ASP.NET Core and passed on to either the MVC framework or the Angular development tools, based on the request URL. There are two ways to connect the toolchains, and each is useful during a different phase of the project lifecycle:
- Managing the Angular server through ASP.NET Core
- Using the ASP.NET Core MVC proxy feature
Both approaches rely on an additional .NET package. Open a new PowerShell command prompt, navigate to the FullStackApp/ServerApp folder, and execute the following command:
dotnet add package Microsoft.AspNetCore.SpaServices.Extensions
You will find the package added to the ServerApp project's package list.
Connecting Both Apps Using the ASP.NET Core MVC Proxy Feature
With this technique, the Angular development tools are started separately from the ASP.NET Core runtime. In this approach, restarts are faster, and each part of the project responds independently, so that a change to a C# class, for example, doesn't affect the Angular development server. The drawback of this approach is that you need to run two commands and monitor two streams of output to see messages and errors. Add the statements shown below to the Startup class to configure ASP.NET Core MVC to forward requests to the Angular development server, along with statements that select the connection technique based on configuration settings.
app.UseSpa(spa => {
    string strategy = Configuration
        .GetValue<string>("DevTools:ConnectionStrategy");
    if (strategy == "proxy") {
        spa.UseProxyToSpaDevelopmentServer("http://127.0.0.1:4200");
    } else if (strategy == "managed") {
        spa.Options.SourcePath = "../ClientApp";
        spa.UseAngularCliServer("start");
    }
});
Note: do not forget to include the following directive:
using Microsoft.AspNetCore.SpaServices.AngularCli;
Adding Configuration Settings in the appsettings.Development.json File in the ServerApp Folder
"DevTools": {
    "ConnectionStrategy": "proxy"
}
Starting the Angular Development Server
To start the Angular development server, open a new PowerShell command prompt, navigate to the FullStackApp/ClientApp folder, and run:
npm start
Starting the ASP.NET Core Server
To start the ASP.NET Core server, open a second PowerShell prompt, navigate to the FullStackApp/ServerApp folder, and run the following command:
dotnet watch run
Opening Both Applications in a Browser
Open a new browser window and navigate to https://localhost:5001 for the ServerApp, and then to https://localhost:5001/app for the ClientApp.
Summary
We saw how to create a project that combines Angular and ASP.NET Core MVC. The process is a little complicated, but the result is a solid foundation that allows the Angular and MVC parts of an application to work together while preserving the toolchain used by each of them. The full code for this article can be downloaded from the GitHub repository: https://github.com/gulraizgulshan2k18/angular-mvc-fullstack
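Since the launchSettings.json figure referenced above is not reproduced in this text, here is a representative sketch of the kind of change it likely shows: pinning the ASP.NET Core server to fixed ports (5000/5001) so that the URLs used throughout the article resolve consistently. The exact contents of the original figure may differ, so treat the profile name and port values as assumptions.
{
  "profiles": {
    "ServerApp": {
      "commandName": "Project",
      "launchBrowser": true,
      // assumed ports; match whatever the original figure specifies
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}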
https://medium.com/@gulraezgulshan/part-1-full-stack-application-development-with-angularjs-11-and-mvc-core-5-0-cc9b898acc02
['Gul Raeez Gulshan']
2020-12-18 10:21:06.598000+00:00
['Angularjs', 'Full Stack', 'Aspnetcore']
Declassified CIA Files Showing Hitler’s Presence in Colombia During 1954
I know that many of you, like me, believe that Hitler died in his bunker in Berlin. I will always lean toward that theory; however, this past week I stumbled upon a declassified CIA file stating that Hitler had been seen roaming around Colombia many years after World War II had ended. The CIA investigated a possible appearance of Adolf Hitler in Colombia after World War II, according to information that has surfaced in the context of the declassification of documents relating to the assassination of US President John F. Kennedy.
The US wanting revenge for WWII
The United States had investigated rumors that Adolf Hitler survived World War II and lived in Colombia in the 1950s. Documents regarding the investigation have been public for years but have returned to public attention these days, in the context of the publication of approximately 2,800 secret documents regarding the assassination of former American president John F. Kennedy. According to these declassified documents, CIA agents received information from a former SS officer that Adolf Hitler lived in Colombia in the 1950s, in a community of former Nazis. Initially, the agents did not take the information seriously, but they later sent a report to their superiors referring to a picture. According to this note, former SS officer Phillip Citroen contacted CIA agents and informed them that he had met a man who claimed to be the former Nazi dictator. By the time agents decided to verify the information, the suspect, Adolf Schuttlemayer, had already fled to Argentina. However, the CIA was very skeptical about the information, so it gave up investigating. According to CIA documents, the former SS officer claimed that the man he met "looked very much like Adolf Hitler and claimed to be him". Citroen said he had met the Nazi dictator at a place called Residencies Coloniales in Tunja, Colombia, which he described as overcrowded with former German Nazis.
The picture from the CIA file, with the image attached
"According to Citroen, the Germans living in Tunja followed the alleged Adolf Hitler out of an idolatry of the past, addressed him as 'Elder Fuhrer' and greeted him with the Nazi salute," the CIA documents read. To prove he was telling the truth, Citroen showed agents a photo of himself together with a man who looked like the former Nazi leader. The CIA did not take the "evidence" seriously, but in 1955 a second man, codenamed Cimelody-3, told officers the same story, specifying that on repeated visits to Colombia, Phillip Citroen met once a month with the man suspected of being Adolf Hitler. Cimelody-3 also offered the agents a picture in which Citroen appears together with the man presumed to be Hitler, who was identified on the back as Adolf Schuttlemayer. According to Cimelody-3, in January 1955, Hitler fled Colombia and settled in Argentina. Based on the new information from Cimelody-3, the CIA drew up a report and sent it to their superiors, but they suggested giving up a possible investigation because they could not verify what the two men had said and the financial effort would have been too great.
https://medium.com/history-of-yesterday/declassified-cia-files-showing-hitlers-presence-in-columbia-during-1954-304da469e169
['Andrei Tapalaga']
2020-05-27 13:16:05.925000+00:00
['Hitler', 'Mystery', 'History', 'World War II', 'CIA']
Takeaways from the Enterprise UX 2015 Conference
Let’s face it: enterprise software historically has not successfully integrated meaning, emotion, and identity into UX designs. As an industry, we have a lot of work to catch up to the best of our consumer counterparts. At Salesforce, we complement traditional user testing with rapid ethnography to increase our understanding of our users’ worlds. However, it’s easy to get caught up in the what when sharing our designs with stakeholders. As a response to these presentations, Mary and our research team have focused more on the emotional side of our users when preparing for trips, debriefing stakeholders, and sharing data with other team members. Crafting design systems Enterprise UX was a fantastic opportunity to see how other teams are tackling the challenges of creating and scaling design systems. David Cronin, Executive Design Director at GE, discussed the evolution and craft behind GE’s Industrial Internet Design System, a comprehensive tool for aligning visual language, interactions patterns, and technology across numerous products, teams, and users. GE, maker of everything from aircraft engines to microwaves, has in many ways become a software company as complex as any in the world. Likewise, Phil Gilbert, General Manager at IBM Design, described the massive design effort underway at IBM. They are building a team of 1,500 designers — including many college recruits — in studios across the globe, with an IBM Design Language to support them. This gorgeous resource is the shared vocabulary for a design organization at unprecedented scale. Here at Salesforce, we have a growing Design Systems team that is busy crafting a design system not just for our own products and teams, but for the vibrant ecosystem of partners and customers that build on our platform. Through our Salesforce1 Style Guide, we’ve found that many external developers, administrators, and users are looking to our UX team for design guidance and direction. We’re working hard on tools like the style guide to support consistency and quality across our services. Fostering cross-functional collaboration Another common theme was the importance of cross-functional collaboration to success in enterprise software development. We heard this constantly from presenters at companies like Citrix, Paypal, GE, and IBM, as well as from fellow attendees on the floor and in the official conference Slack. The so-called “Three-in-a-Box” model, describing the required partnership across Design, Product Management, and Engineering in particular, certainly reflects our experience here at Salesforce. We’ve found that bringing product teams into our process early on is extremely rewarding in the long run. We include product owners and engineers in brainstorming and sketching sessions. We invite them to watch (and ask questions) during user research. Early cross-functional involvement means that everyone, not just UX, has a hand in our designs. @jarber shares stories of bringing design culture across functions at Citrix. Photo credit: Shannon Johnson After the conference, we’re inspired to take this collaboration further. We want our designers and researchers to work closely with product managers to learn about the business side of our products. We should work even more with engineers to better understand Lightning and our overall development architecture. We’ll keep bringing PMs, developers, and quality engineers to our customer site visits. 
We’ll also continue to encourage our design team to gain Salesforce certification and get a true look at the experience of our administrators. Designing for experimentation How might enterprises embrace the fail fast, fail often mantra? Lean methodology needs an organization willing to adapt and learn. Bill Scott, VP of Business Engineering at Paypal, emphasized the value of designing for “throwaway-ability” and being open to constant change. In large organizations, it is easy to fall into the trap of building for delivery versus innovation. To reinvent their checkout experience, Paypal formed lean UX/engineering teams to rapidly prototype and test new ideas. These teams stressed the importance of constant learning. Scott also presented Netflix as an example of a company that embraces experimentation by focusing on the customer and strategically carving a path to build-measure-learn, from the structure of their teams to their choice in programming languages. Netflix uses its UI as an experimentation layer. Scott ended his presentation with an open question:
https://medium.com/salesforce-ux/takeaways-from-the-enterprise-ux-2015-conference-c7d70dbd03f4
['Arthur Che']
2015-12-18 18:23:58.084000+00:00
['Events', 'Enterprise', 'Conference']
So, has anyone else ever had the world’s most unlucky Dad?
So, has anyone else ever had the world's most unlucky Dad? My Dad had a string of relentlessly stupid accidents and injuries. Here is the short list of the winners:
1. Fell into the holding tank of raw sewage at the sewage treatment plant when trying to look at a valve. After swimming in 20 feet of crap, he lost his hard hat. He realized this after he got himself out. He then jumped back in to get the hard hat so that people would not think someone was still in there. How considerate! I would have left the hat.
2. While he was standing on a ladder using a giant drill to make holes in the wood beams in the ceiling, the drill bit broke, smashed through his lip, and broke his bottom 2 teeth. My sister had her grade 12 grad that night, so my Dad elected not to go to the doctor til the next day.
3. My Dad was welding and set his pants on fire.
4. My Dad was carrying a steel beam down a hill in the freezing prairie winter. He slipped on ice and fell, and when he landed, there just happened to be a board with a nail sticking up. Well, that nail went into his ankle. My Dad (poor dumb fella) did not go to the doctor and developed tetanus…so once he realized something was wrong, he went in. Well, he got his shots and medication, but he ended up being allergic to the medication and developed hives and non-stop hiccups. My Dad, in all of his infinite wisdom, decided he was fine, until he wasn't. My brother happened to be playing hooky from school when my Dad got up to change the TV channel and then promptly fell on his face because his throat swelled up. My brother called our Grandpa, who transported my Dad to the hospital.
To say the least, my Dad has given me some great conversation starters!
https://medium.com/@jennifer.a.macmillan/so-has-anyone-else-ever-had-the-worlds-most-unlucky-dad-ef6067613812
['Jennifer A Macmillan']
2020-05-15 16:54:06.593000+00:00
['Badluck', 'Nodoctor', 'Dads']
Write For ROI Overload
Thanks for your interest in writing for ROI Overload! We're always looking for new contributors, stories, ideas, and insights. We want to cover all aspects of what successful commercialization and growth look like. This means we cover marketing, sales, strategy, tech, demand generation, and growth marketing. This publication is tailored to a community of growth-focused operators, executives, and entrepreneurs who share a love for growth, strategy, tech, and business. We publish tutorials, case studies, tools and tech, advice, industry trends, strategy tips, and expert insights. All articles are published as part of the Medium Partner Program, which means you will be paid for your work directly by Medium. We're doing our best to give you a response within 3 business days, but if you don't hear from us by then, you can assume we've passed on your article.
What We Are Looking For
- Case studies
- Analysis of real-world examples
- How-to's / tutorials
- Guides
- Ideas & opinions
- Trends
Topics We Cover
- Marketing
- Sales
- Demand gen
- Strategy & planning
- Growth marketing
- Social media
- Creative
- Paid advertising
- Branding
- Psychology
- Sales cycle
- Prospecting & funnel growth
- Martech & salestech
- SEO
- Copywriting
- Marketing data analytics
Writing Resources
Want to write about something else? Just ask! 📧 [email protected]
How to Submit an Article
When you're ready, you can submit your article using this form. We accept both drafts and already-published articles. If you are already an author with ROI Overload, submit articles via the Medium article submission feature, not through the form. Just a few housekeeping notes. If you have published with us before, that does not mean future posts will automatically be published. Each post is reviewed by our team to ensure that it meets the quality guidelines we hold ourselves and our writers accountable to. This ensures the best possible experience and content for our readers. If you don't hear back from us in three business days, please assume we passed on your submission. We receive a large number of submissions, and although we try to respond to every submission, we can't reach out to every single person who submits an article. That being said, if there are slight changes or modifications required for an article before it's posted, we'll definitely reach out. Articles that are off topic, plagiarized, or too promotional in nature (where the main theme focuses on promoting a specific brand for a stakeholder or individual with a vested interest) will not be considered for publishing. Thanks for taking a moment to read through this guide. If you have any questions or anything isn't clear, please feel free to reach out to [email protected] or leave a comment below this article, and we'll respond back asap!
https://medium.com/roi-overload/write-for-roi-overload-9a9f43f4c25e
['Scott D. Clary']
2020-11-08 00:41:08.432000+00:00
['Sales', 'Marketing', 'Write For Us', 'Business']
Will Future Aircraft Run on Hydrogen Fuel Cells or Batteries?
Aircraft are already being designed and tested to run on hydrogen fuel cells or batteries. They work for small aircraft, but can they work for jets? Can aircraft run on hydrogen fuel cells or lithium-ion batteries, like cars? Well, yes and maybe. Yes, hydrogen fuel cells and lithium-ion batteries are already being used to power lighter aircraft, but commercial jetliners may have to wait for a few advances, most likely "hybrids" initially (planes that take off using traditional fuels but then "cruise" using battery or hydrogen fuel-cell power). If hydrogen and hydrogen fuel cells become cheaper, they may be the dominant power source for jetliners, given their superior energy density and energy-to-weight ratios. Lithium-ion and lithium-metal batteries may be able to power lighter aircraft, as weight ratios are not as critical in that realm, and they are currently the cheaper technologies.
Hydrogen has an energy density per unit mass that is three times greater than jet fuel (either unleaded kerosene, Jet A, or the naphtha-kerosene blend, Jet B). However, jet fuel has four times the energy per unit volume, so a jet would have to carry significantly larger fuel tanks to run on hydrogen. Since hydrogen fuel is more volatile (combustible), most designers do not think it can be carried in the wings, where jet fuel is currently stored. Instead, a larger fuselage is envisioned, where the larger hydrogen fuel tanks can be safely stored. This would increase the size and drag coefficient of the fuselage, but since the overall weight of the hydrogen fuel would be significantly less, the performance of the jet would not be negatively affected.
There are several hydrogen-powered aircraft prototypes. The first ever built was the Russian TU155, which made its first flight in 1989 using liquid hydrogen. After test flights, no commercial version was built. Boeing Research and Technology Europe (BRTE) made a hydrogen fuel-cell powered two-seater called the DA20 in 2008, but this aircraft was extremely light and needed very little energy for takeoff. In 2010, Rapid 200-FC made an aircraft that ran on gaseous hydrogen and recorded six test flights. In 2016, the HY4 became the first passenger aircraft to run on hydrogen fuel cells. (Hawkins, 2019), (Robertson, Maniaci, Segal, Scholz, Tidey, et al., n.d.)
The first electrically powered aircraft flight was the MB-E1 in 1971. Famously, the Solar Impulse 2 flew around the world using only solar-powered electricity, and it is now being considered for some commercial applications (mostly non-passenger oriented, due to the limitations of lithium battery technology). Due to weight considerations, batteries are currently limited to lighter aircraft, and range is modest (does this sound similar to electric car technology?). It is thought that a 20-fold increase in energy density will be needed to make batteries capable of making the jump to commercial passenger aircraft. The NASA X-57 Maxwell is another recent entry into electric aircraft, a 4-seater designed to reduce fuel use, emissions, and noise. A hybrid concept has been proposed for commercial aircraft, and several startups are working on the concept, including Zunum Aero, General Electric, Volt Aero, Ampaire, Cranfield Aerospace, and the Berlin Brandenburg Aerospace Alliance. Generally speaking, the batteries are used for takeoff and landing, while an engine is used during flight for cruising speeds and to recharge the batteries (just like an auto hybrid). (Guy, 2020)
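To make the energy-density trade-off above concrete, here is a small back-of-the-envelope sketch. The three-times-per-mass and four-times-per-volume ratios come from the text; the absolute heating values, densities, and the 26,000 kg fuel load are approximate, assumed figures, so treat the output as illustrative rather than engineering data.

# Back-of-the-envelope comparison of jet fuel vs. liquid hydrogen tanks (Python).
# Approximate, assumed figures: Jet A ~43 MJ/kg at ~0.80 kg/L,
# liquid hydrogen ~120 MJ/kg at ~0.071 kg/L.

JET_A = {"mj_per_kg": 43.0, "kg_per_l": 0.80}
LH2 = {"mj_per_kg": 120.0, "kg_per_l": 0.071}

def tank_for_energy(fuel, energy_mj):
    """Mass (kg) and volume (L) of fuel needed to carry a given energy."""
    mass_kg = energy_mj / fuel["mj_per_kg"]
    volume_l = mass_kg / fuel["kg_per_l"]
    return mass_kg, volume_l

# Energy in an assumed narrow-body fuel load of ~26,000 kg of Jet A.
energy = 26_000 * JET_A["mj_per_kg"]

for name, fuel in (("Jet A", JET_A), ("LH2", LH2)):
    m, v = tank_for_energy(fuel, energy)
    print(f"{name}: {m:>9,.0f} kg in {v:>10,.0f} L")

# The output shows hydrogen at roughly 1/3 the mass but about 4x the volume,
# which is why designers look at larger fuselage tanks rather than the wings.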
The potential "hydrogen-izing" of aircraft is held back by some of the same factors that inhibit implementation in cars and trucks. Though hydrogen fuel itself is currently comparable in price to fossil fuels (and getting cheaper), fuel cells remain expensive, though costs should fall as production scales up. There are also few hydrogen fueling stations at present, which limits a traveler's options. Hydrogen fuel choices vary between CO2-free "green" varieties (typically produced by renewable-powered electrolysis, and more expensive) and "grey" types (typically produced by steam-reforming natural gas, cheaper but with a CO2 byproduct). (Van Hulst, 2019) Obviously, advances in aircraft electrification depend on better (lighter) battery technology and the commercialization of hydrogen technologies, plus economy-of-scale pricing. It would seem that government subsidies are in order, as these technological advances would improve not only aircraft but also other sectors such as auto and rail transportation. Like this article? Read more in Vern Scott's new book "Civil (Engineering) Disobedience", available on Amazon.com
https://medium.com/@scottvern/will-future-aircraft-run-on-hydrogen-fuel-cells-or-batteries-3703e2281fef
['Vern Scott']
2021-03-25 17:42:43.490000+00:00
['Aircraft', 'Hydrogen Fuel', 'Global Warming', 'Renewable Energy', 'Lithium Ion Battery']
Enough promises on climate change — it’s time to pay up
Photo by Christine Roy on Unsplash
Midnight Oil achieved global success in the '80s with their single Beds Are Burning. You'd probably know it if you heard it. And you'd probably recognise the lead singer — Peter Garrett — with his characteristic shiny head, beanpole figure and wooden-push-puppet-esque dancing style. Up until then, the Aussie pub rock band had steadily been building a cult domestic following with their politically charged lyrics on environmental issues, consumerism and militarism. But it was ultimately a song about the treatment of indigenous Australians that the world connected with. The chorus 'how can we sleep while the beds are burning?' was universally relatable — it was a proxy for the guilt and anger people felt for continuing to live their lives whilst injustices happened all around them. Today, this sentiment is more relevant than ever, as we face the prospect of runaway climate change. With each passing day, there seems to be a growing sense of unease; a feeling that we can no longer sit by as the world slowly burns. Awareness is building and protesters are becoming more fervent. We must take meaningful action to stop climate change — and fast. The problem is, stopping climate change will require substantial investment, and even those who support action on climate change have proven fickle at the slightest hint of the bill arriving.
In 2005, Jørgen Randers was asked by the Norwegian Government to chair a commission tasked with finding a way to reduce greenhouse gas (GHG) emissions by two-thirds by 2050. It was a tough brief, but the commission managed to produce a workable plan that would cost three hundred dollars per person per year. Despite Norway being the world's sixth richest nation, the plan received virtually no support from its citizens — people would rather go shopping. While this reaction may seem selfish, it is consistent with the behavioural economics concept of hyperbolic discounting, where people instinctively discount a future benefit heavily when comparing it against an immediate one. The discount tends to be larger still where the future outcome is more uncertain. Hence, the Norwegians felt that avoiding a payment of $300 today was preferable to avoiding larger (but uncertain) losses from climate change over the coming decades. When a more appropriate discount rate is applied, numerous studies — such as the landmark Stern Review — have shown that the benefits of mitigating climate change greatly outweigh the cost. The Norwegian example highlights the inherent challenge with implementing the measures required to tackle climate change — asking people to defer consumption is a hard sell.
Setting a price on carbon
The most efficient and least disruptive way to significantly reduce GHG emissions is to use market-based mechanisms, especially carbon taxes. Nobel Prize-winning economist Michael Spence emphasised the importance of carbon pricing in an interview with Business Insider: 'there are relatively few things that are almost unanimously agreed upon among economists, but this is surely one of them.' The International Monetary Fund's (IMF) October fiscal monitor report reiterated the efficiency of market-based mechanisms and cautioned that it would be difficult to achieve emissions reduction targets without them. The IMF concluded that global carbon taxes of $75 per tonne, or similarly ambitious policy measures, are needed to meet the Paris Agreement.
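As a side note on the hyperbolic discounting mentioned above, here is a minimal numerical sketch of how it can flip a decision. The $300-per-year cost comes from the Norwegian example; the benefit size, horizon, and discount parameters are illustrative assumptions, not figures from the Randers commission or the Stern Review.

# Hyperbolic vs. exponential discounting of a future climate benefit (Python).
# The parameter values below are illustrative assumptions only.

def hyperbolic(value, years, k=0.35):
    """One-parameter hyperbolic discounting: V = A / (1 + k * t)."""
    return value / (1 + k * years)

def exponential(value, years, r=0.03):
    """Standard exponential discounting: V = A / (1 + r) ** t."""
    return value / (1 + r) ** years

cost_today = 300.0     # Norwegian plan: $300 per person per year
benefit = 3_000.0      # assumed future benefit per person, 30 years out
years = 30

print(f"hyperbolic value of the benefit:  ${hyperbolic(benefit, years):,.0f}")
print(f"exponential value of the benefit: ${exponential(benefit, years):,.0f}")
print(f"cost today:                       ${cost_today:,.0f}")

# Under the steep hyperbolic discount, the $3,000 future benefit is worth less
# than the $300 cost today (~$261 vs. $300), which is one way to read the
# Norwegian reaction; a steady 3% exponential discount values the same benefit
# at ~$1,236, comfortably above the cost.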
While carbon taxes are lauded by experts, some reject the concept of market-based mechanisms on the grounds that capitalism and the quest for unlimited growth got us into this mess in the first place. Taking a more objective view, under a capitalist economy, markets are simply a collection of billions of actors making trillions of decisions to find optimal solutions within a set of constraints. Admittedly, this can result in perverse and downright unjust outcomes. However, if we were to tweak capitalism to make it more inclusive, for example by adding new environmental constraints, then markets could be a powerful ally in the fight against climate change. Despite the benefits of market-based mechanisms, most countries either do not price carbon at all or they underprice it. The global average carbon price is $2 per tonne, well short of the IMF's $75 minimum. This amounts to a global market failure — participants are continuing to make decisions that would be sub-optimal if they were made accountable for the environmental cost of their actions.
How does a carbon tax work?
A carbon tax is essentially a market constraint that incentivises businesses and consumers to change their behaviours to minimise CO2 emissions. The tax is levied on heavy emitters, but the effects spread throughout the economy. For example, let's assume that one tonne of CO2 is emitted per tonne of cement produced. Now, let's say that applying a new production method would enable a 50% reduction in CO2 emissions but would increase production costs by $25 per tonne. In a competitive market with no carbon price and where cement is a homogeneous good, a producer would lose business if it went it alone and moved to the new method. However, if a $75 carbon tax were applied in this market, it would be optimal for all firms to adopt the new method — as they would save $12.50 per tonne — and the sector's emissions would fall.
In practice, until a global carbon tax applies, schemes need to be designed to minimise 'carbon leakage'. There would be little point in taxing the cement producers from the example above if it would just cause them to move to countries without carbon taxes. Carbon tax designs to date have typically dealt with the problem of carbon leakage through exemptions or rebates, which does little to reduce emissions from the affected sectors. A new top EU climate official has a better idea — apply a carbon border adjustment tax. That is, imports from countries with a lower (or no) carbon tax would be subject to an adjustment tax equal to the difference. This ensures local producers have a level playing field but are still incentivised to reduce emissions. Further, it encourages trading partners to implement their own taxes; if their goods are going to be taxed anyway, they might as well be the ones to collect the tax revenue.
Now let's consider a different example of how a carbon tax affects consumer behaviour. Let's say you saw a great deal on a flight from London to New York for $225 one-way. By taking that flight, you would be responsible for emitting about 1 tonne of CO2. If a $75 per tonne carbon tax were applied to the aviation industry, your flight would rise to $300 (assuming the tax is fully passed on) — a 33% increase. After the tax, the price would seem less attractive, demand would fall, and airlines would cut flights and increase their focus on hybrid and electric aircraft.
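The two examples above reduce to a few lines of arithmetic; here is a small sketch that reproduces them. All of the figures ($75/tonne tax, $25/tonne process cost, 50% abatement, $225 fare, 1 tonne per flight) come from the article; the function names and the pass-through parameter are mine.

# Reproducing the article's two worked examples of a $75/tonne carbon tax (Python).

CARBON_TAX = 75.0  # USD per tonne of CO2, the IMF's recommended minimum

def cement_cost_per_tonne(emissions_t, extra_process_cost=0.0):
    """Tax plus any extra production cost, per tonne of cement."""
    return emissions_t * CARBON_TAX + extra_process_cost

old = cement_cost_per_tonne(emissions_t=1.0)                          # $75.00
new = cement_cost_per_tonne(emissions_t=0.5, extra_process_cost=25.0) # $62.50
print(f"cement: old ${old:.2f}, new ${new:.2f}, saving ${old - new:.2f}")

def flight_price(base_fare, emissions_t, pass_through=1.0):
    """Fare after the tax, assuming a given share is passed to passengers."""
    return base_fare + emissions_t * CARBON_TAX * pass_through

fare = flight_price(base_fare=225.0, emissions_t=1.0)
print(f"flight: ${fare:.0f} ({fare / 225.0 - 1:.0%} increase)")

# Output: the cleaner cement method saves $12.50 per tonne once the tax applies,
# and the $225 fare rises to $300, a 33% increase, matching the text.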
While the above example relates to discretionary spending, carbon taxes also increase the cost of essential goods, such as electricity and fuel. For this reason, there is a raft of practical challenges associated with implementing carbon taxes.
Carbon tax challenges
The IMF estimates that a carbon tax of $75 per tonne would increase energy bills by 45% and petrol prices by 15%. Unless carefully managed, such a sharp increase would be overwhelmingly rejected by the public and could possibly cause riots. Governments are acutely aware of the public backlash that even a modest carbon tax could trigger, and this prospect makes them reluctant to implement such measures. For example, the Gilets Jaunes protest movement was triggered by a planned diesel fuel tax increase of 6.5 cents per litre. The increase was a relatively modest 4% of the total cost per litre based on today's prices, although this came in the context of a much larger increase in fuel prices over the preceding 12-month period. It would be wrong, though, to jump to the conclusion that these protesters do not care about the environment. For example, 93% of French citizens support targets for the EU to become carbon neutral by 2050. Further, a communique issued by the Gilets Jaunes demanded a fairer climate change transition and made clear that they are not against carbon pricing in general. Governments should not avoid carbon taxes altogether, but rather they should be careful about how they design and implement them. Carbon taxes must include compensatory measures to reduce the disproportionate burden on poorer households. These measures could include reducing taxes on lower income bands or providing rebates to households for energy efficiency improvements.
The Australian experience — where did it all go wrong?
The Prime Minister of Australia is a position that comes with all of the perks that you might expect — a large salary, two fully-staffed residences in prime locations, a limousine and an official aircraft. If that weren't enough, the position also comes with the dubious honour of being immortalised in bust form on Prime Ministers Avenue at the Ballarat Botanical Gardens in Victoria. Or at least that was the plan. Australian politics has generally been fairly stable. Power has shifted every 10 years or so between the two big parties — the Liberals, who confusingly are actually conservatives, and Labor, who are liberals, but sometimes also act like conservatives. However, after an 11-year stint by Liberal Prime Minister John Howard, there was a volatile period, which saw the Prime Minister change 6 times over the next 11 years. This unprecedented period of change caused such a strain on the Ballarat bust tradition that the bequeathed funding dried up and both the current and former Prime Minister remain absent from the park. And it is all linked to climate change, specifically the quest to put a price on carbon.
It all started when Kevin Rudd defeated long-standing Prime Minister John Howard in the 2007 election on a platform of change. Rudd's campaign promised action on climate change, including ratifying the Kyoto Protocol and implementing an Emissions Trading Scheme (ETS). Midnight Oil's Peter Garrett even won a seat and was made the Environment Minister. Rudd commissioned a comprehensive review of climate change and worked to set a carbon price via an ETS, but ultimately he didn't have the numbers in the Senate.
After the bill was voted down, the opposition leader — Malcolm Turnbull — announced that he would support the measure, ensuring its success. A vocal section of the Liberal party, however, was firmly against setting a carbon price, and so they triggered a leadership challenge, which Tony Abbott — a self-professed climate sceptic — narrowly won. Abbott immediately withdrew support for the ETS, and the bill was defeated a second time and subsequently shelved. Meanwhile, public opinion of Rudd soured and the Labor Party had their own leadership spill. Rudd was replaced with Julia Gillard in 2010 prior to another election, which Gillard won narrowly by forming a minority government. With a more favourable Senate position, Gillard was able to implement a simpler carbon tax, starting at AU$23 per tonne. However, there was a sustained lobbying campaign to undermine the tax, which successfully generated strong public opposition. Rudd replaced Gillard again, but it was a lost cause, as Tony Abbott was swept to power in the 2013 election with the promise to 'axe the tax'. The carbon tax was revoked on 1 July 2014, just two years after it started. By 2015, Abbott was so universally disliked that the Liberals switched back to Turnbull, who went on to win the 2016 election. However, Turnbull never quite managed to overcome resistance within his party to climate change action and was eventually replaced by Scott Morrison in 2018 — a man who once brought a lump of coal into parliament. The 2019 election again focussed on climate change. Labor promised stronger action; the Liberals ran a fear campaign on what that action would cost. The latter approach was ultimately more successful — particularly in coal-rich regional Queensland — and Morrison won with an increased majority. Perhaps the one consolation to climate-conscious voters was the schadenfreude of hearing that Tony Abbott had lost his seat.
It shouldn't have been like this. Australia has a lot to lose from climate change, more than most. It faces more frequent and severe floods, heatwaves, droughts and bushfires, and it also faces the destruction of national treasures like the Great Barrier Reef. While the carbon tax design wasn't perfect, it was effective and well-considered. In its second year, CO2 emissions fell by 1.4%, which was the largest annual decrease in a decade. Emissions have steadily risen since the tax was abolished in 2014. The initial design for an ETS was broadly based on recommendations from the comprehensive Garnaut Report, and the subsequent carbon tax was a simplified design based on further advice from the Australian Productivity Commission. The carbon tax included a number of compensatory measures, such as lowering income taxes, direct offset payments for low-to-middle income households and exemptions for sensitive industries. Labor's fatal mistake, however, was underestimating the relative ease with which their opponents and lobby groups were able to influence public opinion. People soon forgot about the compensation they had received, but they were acutely aware of the increase in their energy bills. Meanwhile, they were subjected to a relentless fear campaign — the tax would bankrupt families, destroy jobs and kill the economy. The very idea of a carbon tax is now so toxic in Australia that even the Greens Party don't refer to it in their climate change policy. World leaders use Australia's experience as a textbook example of what not to do.
Could global carbon taxes work in practice?
Fortunately, other regions’ attempts to introduce carbon pricing have fared better than Australia’s. The World Bank reports that 57 different carbon pricing initiatives have either been implemented or are scheduled for implementation. Some of these initiatives have been in force for over a decade. For example, British Columbia’s carbon tax was introduced in 2008 and is widely regarded as a success story. Momentum seems to be building, which is encouraging, but there is clearly still a long way to go. Only Sweden and Switzerland have a carbon price higher than the IMF’s recommended minimum of $75 and their schemes have less than 40% coverage. For carbon taxes to have any chance of success at the level required, they must have strong public support. Advocates of climate change action have an important role to play here — support needs to move to the next level of maturity. It is not enough to demand that governments set ambitious emissions reductions targets, we must also challenge them on how they’ll achieve those targets. If carbon pricing is not a core element of their plans, we need to ask, why not? If governments try to introduce (or increase) carbon taxes, we mustn’t baulk at the first hurdle. By all means question the fairness of the scheme design, but it would be hypocritical and counter-productive to reject carbon taxes entirely. If we are genuinely committed to stopping climate change, we must be willing to pay our fair share to make it happen; we must be prepared to make sacrifices. Or revisiting Midnight Oil’s Beds are Burning: ‘the time has come to say fair’s fair; to pay the rent, to pay our share.’
https://traviselsum.medium.com/enough-promises-on-climate-change-its-time-to-pay-up-1f8cf3cf62b2
['Travis Elsum']
2019-10-31 08:17:27.923000+00:00
['Environment', 'Economics', 'Climate Change', 'Carbon Tax']