| Column | dtype | Range / distinct values |
|--------|-------|-------------------------|
| audioVersionDurationSec | float64 | 0 – 3.27k |
| codeBlock | string | lengths 3 – 77.5k |
| codeBlockCount | float64 | 0 – 389 |
| collectionId | string | lengths 9 – 12 |
| createdDate | string | 741 distinct values |
| createdDatetime | string | lengths 19 – 19 |
| firstPublishedDate | string | 610 distinct values |
| firstPublishedDatetime | string | lengths 19 – 19 |
| imageCount | float64 | 0 – 263 |
| isSubscriptionLocked | bool | 2 classes |
| language | string | 52 distinct values |
| latestPublishedDate | string | 577 distinct values |
| latestPublishedDatetime | string | lengths 19 – 19 |
| linksCount | float64 | 0 – 1.18k |
| postId | string | lengths 8 – 12 |
| readingTime | float64 | 0 – 99.6 |
| recommends | float64 | 0 – 42.3k |
| responsesCreatedCount | float64 | 0 – 3.08k |
| socialRecommendsCount | float64 | 0 – 3 |
| subTitle | string | lengths 1 – 141 |
| tagsCount | float64 | 1 – 6 |
| text | string | lengths 1 – 145k |
| title | string | lengths 1 – 200 |
| totalClapCount | float64 | 0 – 292k |
| uniqueSlug | string | lengths 12 – 119 |
| updatedDate | string | 431 distinct values |
| updatedDatetime | string | lengths 19 – 19 |
| url | string | lengths 32 – 829 |
| vote | bool | 2 classes |
| wordCount | float64 | 0 – 25k |
| publicationdescription | string | lengths 1 – 280 |
| publicationdomain | string | lengths 6 – 35 |
| publicationfacebookPageName | string | lengths 2 – 46 |
| publicationfollowerCount | float64 | — |
| publicationname | string | lengths 4 – 139 |
| publicationpublicEmail | string | lengths 8 – 47 |
| publicationslug | string | lengths 3 – 50 |
| publicationtags | string | lengths 2 – 116 |
| publicationtwitterUsername | string | lengths 1 – 15 |
| tag_name | string | lengths 1 – 25 |
| slug | string | lengths 1 – 25 |
| name | string | lengths 1 – 25 |
| postCount | float64 | 0 – 332k |
| author | string | lengths 1 – 50 |
| bio | string | lengths 1 – 185 |
| userId | string | lengths 8 – 12 |
| userName | string | lengths 2 – 30 |
| usersFollowedByCount | float64 | 0 – 334k |
| usersFollowedCount | float64 | 0 – 85.9k |
| scrappedDate | float64 | 20.2M – 20.2M |
| claps | string | 163 distinct values |
| reading_time | float64 | 2 – 31 |
| link | string | 230 distinct values |
| authors | string | lengths 2 – 392 |
| timestamp | string | lengths 19 – 32 |
| tags | string | lengths 6 – 263 |
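The per-column statistics above can be reproduced from the raw rows themselves. Below is a minimal sketch, assuming the dump is available locally as a CSV; the file name `medium_articles.csv` is a hypothetical placeholder, and the column typing follows the table above (numeric counts, booleans, and free-text strings).

```python
import pandas as pd

# Hypothetical path; substitute the actual export of this dataset.
df = pd.read_csv("medium_articles.csv")

# Per-column summary mirroring the table above: for booleans report the
# number of classes, for numeric columns the min/max range, and for
# string columns the length range plus the count of distinct values.
for col in df.columns:
    s = df[col].dropna()
    if pd.api.types.is_bool_dtype(s):
        print(f"{col}: bool, {s.nunique()} classes")
    elif pd.api.types.is_numeric_dtype(s):
        print(f"{col}: float64, min={s.min()}, max={s.max()}")
    else:
        lengths = s.astype(str).str.len()
        print(f"{col}: string, lengths {lengths.min()}-{lengths.max()}, "
              f"{s.nunique()} distinct values")
```

The sample rows that follow are shown verbatim, with field values listed in the same order as the columns in the table above.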
0
null
0
null
2018-03-07
2018-03-07 02:53:31
2018-03-07
2018-03-07 02:57:19
1
false
en
2018-03-07
2018-03-07 02:57:19
0
141824166daa
3.535849
1
0
0
Artificial intelligence (AI) has transformed traditional industries such as retail, medical care, social interaction, among other sectors…
3
5 ways to transform education by applying artificial intelligence Artificial intelligence (AI) has transformed traditional industries such as retail, medical care, social interaction, among other sectors. While AI has yet to widely influenced the field of education, there is much reason to expect the field will come to benefit from AI and related technologies in the future. According to the Education Research Team at Meridian Capital China, the number of education-related financing cases reached 405 in 2017, most of which mainly focus on quality education, K-12 education, and informatization of education and knowledge consumption. AI, as a major trend in educational technology and with investors’ keen interest at this stage, will likely influence the direction of emerging companies in these areas. The following are five new directions in which AI is expected to lead education: individualized learning, virtual tutoring, educational robotics, programming-based technological education, and virtual-reality-based scenario education. I. Individualized learning can be realized by artificial intelligence. For teachers in classrooms, generally speaking, it is very hard to adopt multiple teaching methodologies with a large group of students in a single session. Educators often struggle to provide students with the opportunities to learn at learners’ own pace or in the learners’ own particular style. Recently however, some startups in the education area have developed adaptive-learning systems based on AI, which helps students reap the benefits of individualized learning. With this newly developed educational system, technology helps to collect data relating to a student’s study pace and method, and then adapts to the student’s study patterns. This technology is already in use. America and Australia-based company Smart Sparrow, founded in 2011, developed a multifunctional platform for teaching design, online study, data analysis, and so on. In virtually every aspect of teaching, educators can use this system to provide interactive activities for students. The system collects data simultaneously as students complete activities and tasks. II. Virtual tutors can help parents engage in children’s study. Many parents are stuck at work, meaning they often face obstacles to helping their children with their studies. Many, for instance, face a lack of time and energy. However, artificial intelligence is capable of empowering parents to support their children. A company named Whizz Education in London, for instance, promoted a product of ‘Maths Whizz’ which is an online math tutoring software. After installing this software on computers, students can interact with virtual tutors and receive answers to their questions. After the students finish studying, parents can receive software analysis reports from the program. III. Educational robots can play the role of educators in communicating with children. Some startups have been developing robots capable of interacting and even befriending children. In 2015, New York-based company “Cogni Toys” promoted their robot called “Dino” that can talk with children. When asked questions by children, Dino searches for information online. After conversing several times with a particular child, Dino will better understand the child based on interactional assessments of temper and personality. Though the range of topics that robots are capable of talking about remains limited, the trend of educational robots may likely influence the relationship between children and education. 
IIII. Technological education based on programming and robots will become a competitive area of the industry. America has adopted STEM education including science, technology, engineering, and maths. Meanwhile, Chinese parents who have been showing strong preferences in practical education, are also in the pursuit of STEM education. However, compared with old generation, young parents in China who were born in the 1980s are more concerned about quality education. In 2017, there have been 63 financing cases relating to STEAM education, showing an increasing market in China. Startup Primo Toys has produced a type of education for teaching programming. Children as young as 3 years old can play with Cubetto, the robot designed by Primo Toys, by designing a route for the robot. Through interaction with Cubetto, children learn through practice and are enlightened with education via programming. V. Virtual-reality-based scenario education is applicable to various contexts and kinds of education. Large companies like Google and Facebook are already devoted to researching how to best apply virtual reality to education. Ireland-based company Immersive VR Education, for instance, specializes in developing education approaches focused around virtual-reality technology. For example, children can virtually experience the Apollo 11 moon landing by wearing special glasses, an approach that might ultimately give students a better understanding of the world. Though the applications of AI in education paints a promising picture for the future, some remain doubtful of the value of AI and the supposed revolution it might bring about to education. This is partly due to the fact that there has yet to be defined a clear business model that successfully combines AI with education. Based on the necessary condition that artificial intelligence requires huge amounts of capital, the lack of business model makes the future relationship between AI technology and education unclear. On the other hand, to deal with the enormous amount of knowledge accumulated globally on a daily basis, developers of a broad range of products will need more and more AI specialists. Only time will tell how - — and how much — AI and education will intertwine in the days to come. (Top photo from Sohu.com)
5 ways to transform education by applying artificial intelligence
5
5-ways-to-transform-education-by-applying-artificial-intelligence-141824166daa
2018-04-09
2018-04-09 06:42:42
https://medium.com/s/story/5-ways-to-transform-education-by-applying-artificial-intelligence-141824166daa
false
884
null
null
null
null
null
null
null
null
null
Education
education
Education
211,342
All Tech Asia
AllTechAsia is a startup media platform dedicated to providing the hottest news, data service and analysis on the tech and startup scene of Asian markets
c691af389b79
actallchinatech
894
235
20,181,104
null
null
null
null
null
null
0
null
0
6f370c71f78f
2017-11-24
2017-11-24 01:57:02
2017-11-24
2017-11-24 01:58:42
3
false
pt
2017-11-24
2017-11-24 01:58:42
6
141915aeded
2.187736
0
0
0
Por Redação
2
China planeja construir delegacia de polícia alimentada por inteligência artificial (e sem humanos) Por Redação A instalação permanecerá aberta ao público 24 horas por dia e 7 dias por semana (Crédito: Caijing Neican) A República Popular da China anunciou planos para abrir uma delegacia de polícia inteiramente alimentada por inteligência artificial (AI). A iniciativa será construída na cidade de Wuhan, com a finalidade de lidar com questões de trânsito e demais problemas relacionados a veículos e motoristas. O anúncio revela a ambição do país em se tornar um líder mundial em AI até 2030. A delegacia futurista estará aberta ao público 24 horas por dia, 7 dias por semana e, segundo as estimativas, apresentará pontos de falha inferiores se comparados às soluções atuais (tais como as delegacias online). Delegacia de polícia sem humanos A delegacia desempenhará um papel análogo ao do Department of Motor Vehicles (DMV) americano, que administra as cartas de condução e matrículas de veículo, respectivamente Carteira Nacional de Habilitação (CNH) e Certificado de Registro e Licenciamento de Veículo (CRLV), no Brasil. A delegacia administrará as cartas de condução e matrículas dos veículos dos visitantes (Crédito: Caijing Neican) A tecnologia de reconhecimento facial está sendo desenvolvida e expandida em uma escala cada vez maior na China. Como não poderia ser diferente, a instalação em Wuhan também aproveitará o sistema para fazer varreduras biométricas nos visitantes e conectá-las com as fotografias dos banco de dados locais. Os cidadãos chineses serão identificados dentro da estação com a tecnologia desenvolvida pela empresa Tencent. Com este avançado sistema, os visitantes não mais precisarão se sentar nas delegacias por longos períodos, preencher complexos formulários ou mesmo baixar aplicativos. O reconhecimento facial acessará imediatamente todas as informações dos cidadãos assim que os avistar. Em síntese, os visitantes usarão seu rosto como um cartão de identificação. Qualquer pessoa poderá, então, renovar sua “CNH” sem preencher qualquer papelada ou conversar com quaisquer funcionários. O reconhecimento facial acessará imediatamente todas as informações dos visistantes (Crédito: Caijing Neican) Liderança no campo da inteligência artificial Para vincular com a realidade brasileira, imagine comparecer em uma Superintendência da Polícia Federal para renovar seu passaporte e não precisar selecionar uma senha de atendimento ou esperar em filas. O procedimento ocorreria de forma automatizada, por meio de sistemas de reconhecimento facial. Da mesma forma, a delegacia de polícia em Wuhan prestará serviços de forma mais rápida e eficiente e, acima de tudo, sem as burocracias de sempre. A iniciativa revela o potencial da AI para facilitar procedimentos repetitivos, sem desconsiderar, é claro, os desafios que envolvem implementar a nova realidade. Como se pode observar, a China não está medindo esforços para se tornar um líder no campo da AI. Da forma como o quadro está sendo emoldurado, a maioria dos setores da vida chinesa (como supermercados, postos de gasolina e até mesmo hotéis) estará inteiramente automatizada nos próximos anos. Fonte: Futuro Exponencial
China planeja construir delegacia de polícia alimentada por inteligência artificial (e sem humanos)
0
china-planeja-construir-delegacia-de-polícia-alimentada-por-inteligência-artificial-e-sem-humanos-141915aeded
2017-11-24
2017-11-24 01:58:44
https://medium.com/s/story/china-planeja-construir-delegacia-de-polícia-alimentada-por-inteligência-artificial-e-sem-humanos-141915aeded
false
434
Por um mundo de infinitas oportunidades
null
futuroexponencial
null
Futuro Exponencial
futuro-exponencial
FUTURE,TECHNOLOGY,INNOVATION
futuroexpo
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Futuro Exponencial
null
3ffd3c65a6a5
futuroexponencial
581
63
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-09
2018-02-09 18:11:22
2018-02-13
2018-02-13 15:31:22
0
false
en
2018-02-13
2018-02-13 15:32:42
2
141de9251eff
1.554717
1
0
0
A couple weeks ago I started thinking about the subject of my final degree project. I knew I wanted it to be about chess, but I didn’t know…
4
Coding journal #5 — Choosing topic A couple weeks ago I started thinking about the subject of my final degree project. I knew I wanted it to be about chess, but I didn’t know much about computer chess, so I didn’t have a clear idea what the thesis could be. Today, after some days of gathering information, I find myself in the same position: the more I read, the more difficulties I see to making something original and achievable in a 4 months period. My current ideas are: Implement an ANN similar to DeepChess, which would be used as evaluation function comparing two boards between them. It should be trained through tens or hundreds of thousands matches. Maybe I could implement or modify also/instead the autoencoders that DeepChess uses. The problem with this subject is feasibility of training a neural net this big in such a short space of time. Implementing a DBN similar to the one in DeepChess. Building a classical chess engine, implementing minimax algorithm, alpha-beta pruning, and heuristics designed by myself. It might include — but I don’t know exactly how — some Genetic Algorithms to optimize the search. The problem with this subject is that it is completely unoriginal. A comparison between different heuristics used in other state of the art chess engines. Maybe implementing a mix of them after having the results of the study. Studying the efficiency of alpha-beta pruning and minimax versus MCTS and other search algorithms. Give up the chess idea, but stick to neural networks. Lately I have been thinking about ANNs to predict, for instance, BTC price. Besides that, here are a couple articles that I want to read in the next few days. I was reading the AlphaZero paper and I had to stop because it uses the Monte Carlo Tree Search (MCTS) algorithm, which I haven’t studied yet: Introduction to Monte Carlo Tree Search — Jeff Bradberry The subject of game AI generally begins with so-called perfect information games. These are turn-based games where the…jeffbradberry.com An introduction to Monte Carlo Tree Search Monte Carlo Tree Search not only for game AI.appsilondatascience.com Notes: Perfect information games = turn-based games where the players have no information hidden from each other and there is no element of chance in the game mechanics Ply = mitigation method for minimax algorithm, searching only to a limited number of moves ahead. MCTS does well for games with a high branching factor.
Coding journal #5 — Choosing topic
25
coding-journal-5-choosing-topic-141de9251eff
2018-02-13
2018-02-13 15:37:44
https://medium.com/s/story/coding-journal-5-choosing-topic-141de9251eff
false
412
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
AlejandroGM
null
266ef34e6e1b
amgonzalez
3
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-22
2018-04-22 18:42:12
2018-06-22
2018-06-22 22:25:16
22
false
en
2018-06-26
2018-06-26 10:24:26
16
141fd5dc1142
10.283962
10
0
0
In continuation to our previous post: Part 1: काम/Work — Intro , below is Part 1: Chapter 1. Through a first person narrative by the team…
5
The Right Brain Revolution and the Future of Work in India through our First Exploration — Lazy Eight. In continuation to our previous post: Part 1: काम/Work — Intro , below is Part 1: Chapter 1. Through a first person narrative by the team members from our first exploration, we dig into how the team built a company that addresses: The symbiosis of AI with the modern day Indian worker, corporate structure and decision making, the office, the 8 hours per day / 5-day work week, compensation models, hiring, employee learning / development and most importantly, employee happiness for the new age Indian. Enjoy. “The last few decades have belonged to a certain kind of person with a certain kind of mind — computer programmers who could crank code, lawyers who could craft contracts, MBAs who could crunch numbers. But the keys of the kingdom are changing hands. The future belongs to a very different kind of person with a very different kind of mind — creators and empathizers, pattern recognizers and meaning makers. These people — artists, inventors, designers, storytellers, caregivers, consolers, big picture thinkers — will now reap society’s richest rewards and share its greatest joys.” — Daniel Pink The Right Brain Revolution Physiological and psychological studies have long debated whether the two hemispheres in our brain have specific functions to perform. Although there isn’t any conclusive evidence yet for either case, for the sake of this exploration we will assume it holds true. The left brain is assumed to drive analytical tasks involving logical thinking, quantitative analysis and objective assessment. In other words, tasks that require repetitive actions by following a set pattern or that which could be programmed using computer logic, are performed in there. This is where smarter machines, backed by artificial intelligence, have been making a dent. These machines, on the other hand, are not yet equipped to manage tasks which involve creativity, emotional intelligence, intuition and strategic leadership. Wherein machines could be great followers and excellent performers, but they will ultimately lag to emulate creative ideas and strategic notions, which the right hemisphere of the brain performs for us. Experts foresee that in the next 50 years, those who excel in creativity — big picture thinkers, artists, inventors, designers — will rise. A paradigm shift in labour market history is not very far as Artificial Intelligence is all set to automate jobs executed by left brain thinkers, while bringing about greater gains and recognitions for people with stronger right brain activities. “The illiterates of the future will not be those who cannot read and write or code, but those who cannot connect the dots and imagine a constellation.” — Auren Hoffman The mission. “Two years ago, as our first exploration, we started Lazy Eight. Our mission was to use software, data and AI to build the world’s first creative agency on the cloud. In a macro sense we wanted to build a nimble, inclusive and sustainable future for world class right brain thinkers and disruptive brands. We started this journey in India.“ — Principal #3399ff (uiux). Why the colour hex codes? More on this later. India and its rich history of Right Brain Thinkers. Yes, even historically India has been a sensory overload. With a cultural heritage spanning over 10,000+ years and a civilization older than many others in recorded history, India has always been a seat of creativity and innovation. 
For instance, the Indus Valley Civilization led the way in metallurgy, with copper, bronze and iron commonly used to produce utensils, tools, seals and coins. The Indus cities were also noted for their urban planning with elaborate drainage systems, house layouts and advanced irrigation methods. Over the centuries, with the rise of various empires — Mauryas, Guptas, Cholas, Pandyas, Mughals — different forms of art (painting, dance, sculpture, literature) and architecture (temples, palaces, forts) developed and prospered across India. Be it the Buddhist cave paintings in Ajanta and Ellora, the grand temples of North and South India, the folk and tribal art forms of central India or the musical and literary creations of royal bards and poets, India has never ceased to awe the world with its creativity. Modern India is no different. Riding on the creative wave of its golden past and a confluence of multiple cultures, Indian painters and paintings have etched a special niche, generating curiosity and interest from art lovers the world over. Authors and musicians of Indian origin have won global accolades — Booker, Pulitzer Prizes, Nobel Prizes, Grammy Awards- for their contributions. In the world of advertising, Indian admen and creative agencies have been making impactful contributions with their original and innovative content, for which they have been lauded with global honours at every leading ad-fests. “India has one of the broadest spectrums for one to work with. Whether that is sensory or societal. It provides a magical sandbox for one to test the tensibility of one’s idea, making it the perfect place to start and grow Lazy Eight.” — Principal #9933cc (uiux). Why call it Lazy Eight? “We get asked this a lot. No it’s not a team of 8 sloths (though they are our spirit animal). It’s simply another term for infinity. When working on the brand we liked the polar opposite nature of lazy and infinity. The comfort of lazy but the possibilities around infinity. Eight also works into our philosophy to break the archaic concept of 8-hour work days.“ — Principal #3399ff (uiux) Brand mark structure of Lazy Eight “The general brand language we wanted to portray was minimalistic with a bit of quirk.” — Principal #590066 (branding, illustration) The Lazy Eight 404 page. Following is their case study into the branding and design thought behind Lazy Eight. Case Study Corporate structure. “We hate titles. Everyone we bring onboard is at the top of their game, so everyone is a Principal. No one is pole positioning, which is surprisingly refreshing.” — Principal #ff33cc (growth) Below are the Principal roles currently at Lazy Eight (as of June 1, 2018). “Our hiring process has been a bit different. New Principals were brought in through a nomination process by existing Principals. So they needed to vouch for them first. We’ve now opened up the process a bit with our new website.” — Principal #33cc33 (operations) The careers page best explains why Lazy Eight is different and the type of people they are looking for (ie. care about and don’t care about). “We don’t believe in ‘Don Drapers’. So each Principal is assigned a hex colour. The client approaches us for our collective work and thinking and not because of a single member of the team.” — Principal #9f1c34 (engagements) The Secret Sauce. “Overheads have been destroying traditional creative agencies for the past 20 years. 
Madison Avenue Offices and fancy Italian leather chairs won’t cut it today.” — Principal #99ffff (operations) High overheads have been plaguing traditional creative agency models for sometime now. The chart below gives a good break down. The FTE (full time employee) multiplier is 100% — 130%. Source: aaaa.org Another great paper to dig into overhead calculations and how it’s added into FTE pricing can be found here. “We wanted to have AI and automation help us reduce overheads for Lazy Eight. As Lazy Eight has grown and evolved, so has our AI, and now supports a ton of tasks that would have normally been taken care of by internal Principals.” — Principal #4cc091 (development) Acht (8 in German) is the AI bot that Lazy Eight has developed internally. Acht helps Engagement Principals with account management and budgeting. Acht helps Principals record hours during their cycles (Lazy Eight analogy to sprints). “We dwell in a society where likes and follows have become an empty form of compensation. We decided to add a monetary value to it.”— Principal #4cc091 (development) Acht allows Principals to tip each other. On tipping a “beer”, equivalent amount of cash gets transferred from one Principal’s compensation to the other. Acht takes care of compensation. “Acht’s secret power is the amount of data it has collected. This data is immensely powerful and is allowing us to tread on brand new level of transparency this space has never seen.” — Principal #3399ff (uiux) Acht-pushed client reporting — Every Monday Acht sends a report of hourly usage to our clients. Lazy Eight has now worked with over 175 clients and Acht has the data. This enables them to better estimate projects and to increase transparency for future clients they have added them publicly to their case studies. Hyperloop One Case Study. “We value our time as much as we do our clients. But what is its true value? We just let data determine that for us.” — Principal #9f1c34 (engagements) Acht calculates a new monthly rate for Lazy Eight every month. This price and the progress of the pricing is public. This reflects on Principal compensation proportionately. The Office “When starting Lazy Eight, there wasn’t a single fully remote company here in India. We wanted to change that.” — Principal #3399ff (uiux) Lazy Eight has a “Sans cubicle” policy where Lazy Eight Principals are allowed to work from whereever they want. Currently their team of Principals are spread between India, Europe and the US. Below is a distribution of their Principals in India as of June 1, 2018. Lazy Eight Principals like to show off when they are “Sans cubicle”. Pictures share by Lazy Eight Principals internally. To maintain an in-person collaborative environment, Lazy Eight has associated themselves with Social. It is a great new concept here in India where the spaces are hybrid and operate as co-working space during the day and a live music / bar during the evenings. Currently there are 20 Socials around India. “There are many co-working spots in India now, and they are growing by the day thanks to international players jumping into the scene. But the team at Impresario (team behind Social), given their history, know what the new age Indian wants, and thus have created the best environment here in India for the remote / and in our case ‘sans-cubicle’ Principals.“ — Principal #3399ff (uiux) The 60-hour work month. “Forty hour workweeks are a relic of the Industrial Age. 
Knowledge workers function like athletes — train and sprint, then rest and reassess.” — Naval Ravikant “160 hours a month suck. We ask our Principals to commit a focused and efficient 15-hour week. There are other things to do in life — I’m currently building my own podcast network.“ — Principal #9f1c34 (engagements) Currently (as of June 1, 2018) a Lazy Eight Principal completing 60 hours per month is making: INR 17.3 Lac a year. (INR 1,730,000 / year) (this doesn’t include profit sharing). Below are the current national (India) averages for senior designers working 240–320 hours per month (with no profit sharing). The difference is staggering. Source: Glassdoor Source: Glassdoor Benefits. “We want to make sure that we take care of our Principals. They are the bedrock of Lazy Eight and their happiness is our top priority.”— Principal #99ffff (operations). Lazy Eight Principal Benefits: Lazy Eight as of June 1, 2018 Since inception a little over 2 years. Total number of clients till date: ~177 (year 1: 42 | year 2: 177) Some names include: Uber, Hyperloop One, Aditya Birla Group, etc. Total number of Principals: 24 Lazy Eight (#wip). Some of the things the team at Lazy Eight are currently working on: A hybrid concept of an office using deep work principles in India. A paid apprenticeship program for upcoming talent here in India, they are calling it Lazy Eight Academy Adding local macro economic factors into Principal compensation (eg. Big Mac Index). A ventures program (similar to Google’s 80/20 model) that works on dynamic equity based on Lazy Eight hours committed. Exploring the integration of DAO. “ We are still very much wet clay. What I love about the team here is that we are fortunate enough to work with nimble sponges that can soak in things as quick as they can react to them.” — Principal #3399ff (uiux) We asked the team at Lazy Eight to layout details of their “burns and learns”. This is a list that will be organic and uses a product they built in-house to “save paper”, they call it Kaagaz.io. Although the issues are real we asked them to not take names and keep it light. Enjoy. We’ll be checking back in 3 months to see how Lazy Eight is doing and add to this thread. The updates will reflect on our Twitter account via threads to this article. We are also testing an employment happiness bot with Lazy Eight, we’re calling it khushi.work (translates to happy work). Once we believe it’s stable and we are “khushi” with it we will be opening it up for all companies to use for free :) If you know of anyone that might like this exercise and perhaps learn something from it please share this with them. Our hardworking team will love you for it, and hopefully, those you share it with will love you for it. If you want to get in touch feel free to drop us a line at: [email protected]. We love meeting interesting, like-minded people. Up next we will be exploring left brain thinkers. Subscribe below to know when that comes out. Previous article was: Part 1: काम/Work — Intro 🙏 Jd, Vidur, T-dawg, Kobeer, Cray-berry, Bhikrum, Manku, Subho-smash, Adi, Abhinav, Sahil and Thane for taking the time to read drafts.
The Right Brain Revolution and the Future of Work in India through our First Exploration — Lazy…
264
part-1-chapter-1-the-right-brain-141fd5dc1142
2018-06-27
2018-06-27 21:18:25
https://medium.com/s/story/part-1-chapter-1-the-right-brain-141fd5dc1142
false
2,235
null
null
null
null
null
null
null
null
null
Future Of Work
future-of-work
Future Of Work
8,540
Khoob Foundation
Conversations through radical explorations.
bca97d34fae7
khoobfoundation
19
3
20,181,104
null
null
null
null
null
null
0
| Term | Term Count | |--------|------------| | this | 1 | | is | 1 | | a | 2 | | sample | 1 | | Term | Term Count | |---------|------------| | this | 1 | | is | 1 | | another | 2 | | example | 3 | TF(t) = (Número de vezes que o termo t aparece no documento) / (Número total de termos presentes no documento) TF('this', Documento 1) = 1/5 = 0.2 TF('example',Documento 1) = 0/5 = 0 TF('this', Documento 2) = 1/7 = 0.14 TF('example',Documento 2) = 3/7 = 0.43 IDF(t) = log_e(Número total de documentos / Número de documentos com o termo t presente) IDF('this', Documentos) = log(2/2) = 0 IDF('example',Documentos) = log(2/1) = 0.30 TF-IDF('this', Documento 1) = 0.2 x 0 = 0 TF-IDF('this', Documento 2) = 0.14 x 0 = 0 TF-IDF('example',Documento 2) = 0.43 x 0.30 = 0.13 c1: Human machine interface for ABC computer applications c2: A survey of user opinion of computer system response time c3: System and human system engineering testing of EPS m1: The generation of random, binary, ordered trees m2: The intersection graph of paths in trees m3: Graph minors: A survey | termo | c1 | c2 | c3 | m1 | m2 | m3 | |-----------|----|----|----|----|----|----| | human | 1 | 0 | 1 | 0 | 0 | 0 | | interface | 1 | 0 | 0 | 0 | 0 | 0 | | computer | 1 | 1 | 0 | 0 | 0 | 0 | | user | 0 | 1 | 0 | 0 | 0 | 0 | | system | 0 | 1 | 2 | 0 | 0 | 0 | | survey | 0 | 1 | 0 | 0 | 0 | 1 | | trees | 0 | 0 | 0 | 1 | 1 | 0 | | graph | 0 | 0 | 0 | 0 | 1 | 1 | | minors | 0 | 0 | 0 | 0 | 0 | 1 | human = (-1.031, 0.440) interface = (-0.318, 0.109) computer = (-0.922, -0.123) user = (-0.604, -0.232) system = (-2.031, -0.232) survey = (-0.759, -0.988) trees = (-0.035, -0.637) graph = (-0.184, -1.231) minors = (-0.152, -0.758) c1 = (-0.850, 0.214) c2 = (-1.614, -0.458) c3 = (-1.905, 0.658) m1 = (-0.013, -0.321) m2 = (-0.083, -0.942) m3 = (-0.409, -1.501) y topic: [('object', 0.29383227033104375), ('software', -0.22197520420133632), ('algorithm', 0.20537550622495102), ('robot', 0.18498675015157251), ('model', -0.17565360130127983), ('project', -0.164945961528315), ('busines', -0.15603883815175643), ('management', -0.15160458583774569), ('process', -0.13630070297362168), ('visual', 0.12762128292042879)]
16
33330f17bb4f
2017-05-10
2017-05-10 12:23:27
2017-12-12
2017-12-12 13:01:02
7
false
pt
2017-12-12
2017-12-12 13:01:02
23
14213b2d7fcd
8.253774
22
4
0
As pessoas tendem a considerar um trabalho produzido por um estudante de uma Universidade da Ivy League um trabalho muito mias bem…
5
Comparando monografias de uma Universidade brasileira e uma Universidade americana utilizando NLP As pessoas tendem a considerar um trabalho produzido por um estudante de uma Universidade da Ivy League um trabalho muito mias bem produzido do que um feito por um estudante de uma Universidade não tão conceituada. Mas de que forma esses trabalhos são diferentes? E o que os estudantes podem fazer para produzir trabalhos melhores e consequentemente se destacarem mais? Medir a qualidade dos trabalhos de uma Universidade é algo bastante complexo e portanto não é o objetivo aqui. Eu sempre tive curiosidade em investigar as diferenças entre trabalhos de instituições diferentes e para iniciar decidi explorar dois fatores envolvendo os trabalhos: a escolha dos temas e a natureza deles. Neste texto vamos abordar formas de analisar os trabalhos utilizando Processamento de Linguagem Natural. Vamos extrair palavras-chaves utilizando o algoritmo de tf-idf e classificar os trabalhos em clusters utilizando o Latent Semantic Indexing (LSI). Os Dados O conjunto de dados é composto por resumos (abstracts) de trabalhos de conclusão de curso de estudantes de uma Universidade brasileira (Universidade Federal de Pernambuco) e de uma Universidade americana (Carnegie Mellon). Todos os trabalhos foram feitos por estudantes de graduação em Ciência da Computação. Segundo o Times Higher Education World University Rankings a Carnegie Mellon tem o 6ª melhor programa de Ciência da Computação do mundo, enquanto a Universidade Federal de Pernambuco nem aparece nesse ranking. Já no ranking geral a Carnegie Mellon aparece na posição de número 23, enquanto a UFPE aparece na posição 801+. Todos os trabalhos foram produzidos entre os anos de 2002 e 2016 e cada linha do dataset possui as seguintes informações: título do trabalho resumo (abstract) do trabalho ano de publicação do trabalho universidade na qual o trabalho foi produzido Os trabalhos da Carnegie Mellon podem ser encontrados aqui e os trabalhos da Universidade Federal de Pernambuco podem ser encontrados aqui. Etapa 1 — Investigando os temas dos trabalhos Extraindo palavras-chaves (keywords) Para identificar os temas de cada trabalho a primeira estratégia que vamos utilizar é extrair as keywords. Para isso vamos utilizar o algoritmo de tf-idf. tf-idf O que o tf-idf faz é penalizar as palavras que aparecem muito em um documento e que também aparecem muito nos outros documentos. Ou seja, se uma palavra é muito recorrente em um texto mas também é muito recorrente em outros ela é penalizada; isto quer dizer que essa palavra não serve para caracterizar o determinado texto (já que ela também caracterizaria todos os outros). Vamos explorar um exemplo para visualizar isso melhor. Este exemplo aparece na página da Wikipedia sobre tf-idf. Para cada documento temos uma palavra e a quantidade de vezes que ela aparece no texto. Temos o Documento 1: E o Documento 2: Primeiro vamos usar a intuição. A palavra this aparece uma vez nos dois documentos. Por aparecer o mesmo número de vezes e em ambos os documentos, temos uma certa intuição de que ela é meio ‘neutra’. Ou seja, ela não serve para identificar os documentos. Já a palavra example aparece três vezes no Documento 2 e nenhuma vez no Documento 1. Isso é um indicativo de que esse palavra pode servir para caracterizar o Documento 2. Agora vamos para a parte matemática. Precisamos computar duas métricas: TF (Term Frequency) e IDF (Inverse Document Frequency). 
A forma do TF é definida por: Então para as palavras this e example em cada documento, temos: Já a fórmula do IDF é definida por: Por que a fórmula tem essa operação de logaritmo? Porque o tf-idf é uma heurística. A intuição é de que um termo presente em muitos documentos não é bom para discriminar o documento, e deve ter um peso menor do que um termo que ocorre em poucos documentos. A fórmula é uma implementação heurística desta intuição. — Stephen Robertson Como o usεr11852 explica no StackExchange: O aspecto enfatizado pelo logaritmo é que a relevância de um termo ou de um documento não aumenta proporcionalmente com a frequência do termo (ou do documento). Usando uma função sub-linear ajuda a diminuir esse efeito. Além disso a influência de valores muito altos ou valores muito baixos (palavras muito raras, por exemplo) também é amortizada. — Fonte Agora vamos aplicar a fórmula: Finalmente, no final nos temos: Para cada abstract dos trabalhos das Universidades eu usei o tf-idf para identificar as 4 palavras com os scores mais altos do tf-idf. Eu utilizei o CountVectorizer e o TfidfTransformer do scikit-learn. Você pode ver o notebook com o código aqui. Com as 4 keywords para cada trabalho, utilizei o WordCloud para visualizar as palavras, e os resultados foram: Keywords da UFPE Keywords da Carnegie Mellon Agrupando os trabalhos em clusters A segunda estratégia utilizada para explorar os temas dos trabalhos foi a modelagem de tópicos. Nesta etapa o algoritmo utilizado foi o Latent Semantic Indexing (LSI). Latent Semantic Indexing O algoritmo utiliza os dados do tf-idf e faz uma decomposição de matrizes para agrupar os textos em tópicos. Como Ian Soboroff mostra nos slides do curso de Information Retrieval: U é a matriz para transformar novos documentos D é a matriz que dá a importância de cada dimensão (vamos falar mais sobre essas dimensões daqui a pouco) V* é a matriz com a representação de M em k dimensões Para entender melhor vamos utilizar os seguintes títulos de documentos de dois domínios (Interação Humano Computador e Teoria dos Grafos). Esses exemplos foram retirados do artigo An Introduction to Latent Semantic Analysis. O primeiro passo é criar a matriz com a quantidade de vezes que cada termo aparece (já removemos as stop-words e utilizamos apenas os termos mais importantes de cada título): Após a aplicação dos cálculos da técnica nós calculamos as coordenadas de cada termo e cada documento. O resultado é: Usando o matplotlib para visualizar isso graficamente, temos: O resultado para os termos e documentos Legal né? Os vetores em vermelho são os documentos de Interação Humano Computador e os documento em azul são de Teoria dos Grafos. Para modelar os tópicos dos trabalhos das Universidades eu utilizei o gensim. Nos poucos testes que fiz variando as dimensões eu não encontrei muitas diferenças entre os trabalhos das Universidades (todos pareciam pertencer ao mesmo cluster). O tópico que mais diferenciou os trabalhos das Universidades foi esse: Visualmente, o resultado foi: Visualização da UFPE e da Carnegie Mellon para o tópico y Na imagem o tópico y é representado no eixo y. É possível ver que os trabalhos da Carnegie Mellon estão mais associados a ‘object’, ‘robot’ e ‘algorithm’ e os trabalhos da UFPE estão mais associados a ‘software’, ‘project’ e ‘business’. Você pode ver o netbook com o código aqui. 
Etapa 2 — Investigando a natureza dos trabalhos Eu sempre tive a impressão de que no Brasil eram produzidos muitos trabalhos sobre review de literatura, enquanto nas outras Universidades do mundo se produzia poucos trabalhos como esses. Para investigar isso resolvi analisar os títulos dos trabalhos. Normalmente quando um trabalho é um review de literatura a palavra ‘study’ aparece no título. Eu peguei então todos os títulos dos trabalhos e contabilizei as palavras que mais aparecem, para cada uma das Universidades. Os resultados foram: Palavras que mais aparecem nos títulos dos trabalhos da UFPE Palavras que mais aparecem nos títulos dos trabalhos da Carnegie Mellon Você pode ver o notebook com o código aqui. Resultados A partir da análise foi possível observar que os temas dos trabalhos não diferem muito, mas deu para visualizar o que parece ser as especialidades de cada instituição. A Universidade Federal de Pernambuco produz mais trabalhos relacionados a projetos e negócios e a Carnegie Mellon se produz mais trabalhos relacionados a robôs e algoritmos. Ao meu ver essa diferença de especialidades não é algo ruim, simplesmente cada universidade é especializada em determinadas áreas. Não foi possível observar de fato a qualidade dos trabalhos, mas um takeaway foi de que no Brasil a gente precisa produzir mais conhecimento em vez de só fazer review de literatura. Algo importante de ser pontuado é que apenas ter os melhores trabalhos não basta. O objetivo principal no início da análise era entender porque eles “são melhores” e o que nós podemos fazer para chegar lá, mas me toquei que talvez um caminho para isso seja simplesmente mostrar mais nosso trabalho e trocar mais conhecimentos com eles. Porque isso pode nos forçar a produzir coisas mais relevantes e até nos tornar melhores a partir do feedback deles. Acho que isso serve para todos, tanto para estudantes da universidades quanto para nós profissionais mesmo. Austin Klaeton tem uma frase que resume bem isso: It’s not enough to be good. In order to be found, you have to be findable. — Austin Kleon Eu apresentei essa análise na edição de 2017 da The Developer Conference em Florianópolis. Você pode ver todos os códigos e os slides aqui.
Comparando monografias de uma Universidade brasileira e uma Universidade americana utilizando NLP
207
comparando-monografias-de-uma-universidade-brasileira-e-uma-universidade-americana-utilizando-nlp-14213b2d7fcd
2018-06-13
2018-06-13 00:19:24
https://medium.com/s/story/comparando-monografias-de-uma-universidade-brasileira-e-uma-universidade-americana-utilizando-nlp-14213b2d7fcd
false
1,909
Compartilhando experiências sobre visualização de dados (dataviz) em português.
null
null
null
datavizbr
datavizbr
DATAVIZ,INFOGRAPHICS,INFOGRAFIA,DATA VISUALIZATION,BRAZIL
datavizbr
Data Science
data-science
Data Science
33,617
Déborah Mesquita
Award-winning Data Scientist 👩🏾‍💻 Loves to write and explain things in different ways✨ - http://deborahmesquita.com/
dd9e06a0a640
dehhmesquita
1,720
142
20,181,104
null
null
null
null
null
null
0
null
0
66dc44acfceb
2018-06-03
2018-06-03 13:22:35
2018-06-04
2018-06-04 08:38:02
2
false
en
2018-06-04
2018-06-04 08:40:01
8
1421852cd0a2
3.187107
12
0
0
In conjunction with our goal of reducing workplace accidents around the world by 20% before 2022, we realise the vital importance of…
5
Partnership Announcement: Scylla and Safeguard In conjunction with our goal of reducing workplace accidents around the world by 20% before 2022, we realise the vital importance of partnerships. To build an AI-powered system that will predict and prevent workplace accidents in real time, a large network of industry-partners, developers, advisors and fellow innovators is required. With this in mind, and having already formed several great partnerships, we’re happy to announce a new partnership between Safeguard and the machine-learning forerunners at Scylla. What is Scylla? Within the industry of safety-tech, Safeguard’s focus lies primarily on accident prevention, crisis communication and data-driven safety management. Scylla, on the other hand, has developed a human-behaviour detection system that is mainly applied in the law enforcement, defense and security sectors. Scylla’s AI-based platform recognizes visual activity within videos, and tunes into security cameras, surveillance systems or UAV’s (military drones). By applying machine-learning and object-classification technology, human behaviour can be detected and observed, providing highly accurate results in predicting and foreseeing dangerous activity. Specifically, Scylla’s human behaviour detection-system recognizes activities that preempt violence, crime or the use of weapons, enabling relevant action to be taken by the right authorities. What problems can Scylla solve? Currently, modern surveillance systems require constant human supervision and oversight. Investment in human resources is thus consistently required, whilst these systems are vulnerable to human fallibility. Factors such as fatigue, attention-loss or a lack of multitasking capabilities can thus play a crucial role in overseeing security breaches or dangerous behaviour within surveillanced areas. As an autonomous and ‘smart’ visual content analytical system, Scylla overcomes these issues, whilst addressing a substantial demand within the security sector for surveillance systems that are inherently more thorough. What does this partnership mean for Safeguard? In upscaling Safeguard’s platform and scope of interest into domains such as defense and security, this partnership enables us to combine Scylla’s surveillance system with Safeguard’s alarm and crisis-communication app. Apart from incorporating this technology into our upcoming accident prediction system, Safeguard will also be able to apply this surveillance-tech to more subtle and nuanced use cases. For example, cameras in a dangerous working environment would be able to detect if personnel and visitors are not wearing safety helmets and vests, enabling management staff to be notified thereof and act upon the situation. In discussing this partnership, Safeguard’s CEO Ingmar Vroege emphasises how: “Apart from enhancing our existing cooperation with Albert Stepanyan and his team, this partnership will provide us with an entry-point into a large network of international corporate clients. More importantly, however, the technological integration that we’re facilitating between Safeguard and Scylla means that our AI-system will be able to benefit from industry-leading, highly-advanced surveillance software. 
This eliminates the time it would have taken for our development team to create such software ourselves, thereby benefiting our development trajectory and timeline immensely.” In highlighting the importance of this technical integration, Scylla’s CEO Albert Stepanyan notes that: “Safeguard’s alarm and crisis-communication tool is a perfectly-suited extension to our surveillance platform. Instead of building our own alarm system from scratch, we can benefit from the three years of research, development and fine-tuning that have gone into the Safeguard product. Not to mention the vast scope of potential that exists within the safety-tech space”. What does this partnership mean for you? If you’re interested in what we’re doing over at Safeguard and how we’re changing the world of workplace safety, rest assured that this partnership is crucial in bringing us closer to our goal. In being aligned with AI-pioneers like Scylla, we’re confident that no matter how optimistic our vision might be, we’re on the right track. As a Safeguard community member, you might want to learn more about our upcoming token sale and the development an open-source safety protocol: something which our industry has not yet witnessed. You can do that here. As a prospective client or organization interested in using our safety-management solution within your organization, you might want to learn more about how we’re already able to make your workplace significantly safer. You can do that here. To be a part of our active community or learn more about what we do: Visit our website here. Join our Telegram community here. Follow us on Twitter here. Connect with us on Facebook here. Follow us on Medium here. Or email [email protected]
Partnership Announcement: Scylla and Safeguard
494
partnership-announcement-scylla-and-safeguard-1421852cd0a2
2018-06-15
2018-06-15 16:03:48
https://medium.com/s/story/partnership-announcement-scylla-and-safeguard-1421852cd0a2
false
743
News and announcement about the Safeguard Token
null
SafeguardToken
null
Safeguard Token
safeguardtoken
null
SafeguardToken
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Gertjan Leemans
Blockchain Enthusiast | Aiming to prevent work related accidents with Safeguardtoken.com
31eb2ed5a436
GertjanLeemans
146
54
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-18
2018-04-18 20:20:09
2018-04-18
2018-04-18 20:24:46
3
false
en
2018-04-18
2018-04-18 20:24:46
7
142196ed8402
3.312264
2
0
0
To celebrate World Water Day 2018, we partnered with Vizzuality to host WaterHack 2018. During this environmental and social datathon…
5
How our solutions team engineered WaterHack 2018 To celebrate World Water Day 2018, we partnered with Vizzuality to host WaterHack 2018. During this environmental and social datathon, participants analyzed and visualized survey data collected by ONGAWA, a NGO promoting human development and social change with the help of new data sharing technologies. Currently, ONGAWA leads a project supported through the European Union’s Global Climate Change Alliance tackling water insecurity in Tanzanian villages in the East Usambara Mountains. ONGAWA supports populations vulnerable to the effects of climate change with an eco-village model that (1) introduces efficient management of natural resources, (2) improves water supply and sanitation services, and (3) promotes the use of sustainable technologies. Recently, ONGAWA conducted a survey on living conditions in these 8 Tanzanian villages, but the data collected from the survey’s 340 respondents needed a more accessible structure in order to be useful. During WaterHack 2018, participants helped explore this survey data using programming languages like R and Python, interface tools like pandas and CARTOframes, and Javascript libraries like D3.js and CARTO.js. Let’s take a look at what they discovered! Building a vulnerability index ONGAWA’s efforts to reduce water insecurity also aims to reduce inequality. According to Ángel Fernandez, the president of ONGAWA, water insecurity increases inequality: Water scarcity disproportionately impacts select populations including women and girls, ethnic minorities, and the disabled Currently 1.8 billion people around the world access water from a contaminated source 1,000 children die each day due to water and sanitation-related diseases Children miss 443 million days of school each year because of water and sanitation-related illnesses The inequalities resulting from water insecurity, however, disproportionately impact women and girls. When women and girls travel long distances to collect water girls are more likely to miss school, which limits their access to education and places them at a disadvantage. When women and girls collect water that is contaminated this already vulnerable population faces even greater health risks. For our datathon, we decided to explore the spatial relationship between water insecurity and gender inequality. Participants were provided access to several CSV files containing survey data and in working groups were tasked with answering the following three questions: What village has the most gender equality in terms of water collection? Do women, men, or children travel farthest to collect water, and how many buckets do they carry? What households in each village are most vulnerable to the effects of climate change? Several groups began to answer these questions by first creating a data index to accelerate data retrieval operations across several CSV files. In addition, data indices can be arranged so that column fields with geographically related values appear next to one another, which can help identify missing values and/or highlight substitute values needed. In answering question three, for instance, one group created a vulnerability index to help relief organizations. The group augmented survey data with new variables and then aggregated the data in order to (1) identify vulnerable households in each village, and (2) allocate resources so the most vulnerable households are prioritized. 
Data visualizations map water accessibility Another method for exploring this data involved building data visualizations. In the map below, each village is represented with a graduated circle proportionate to the population size. This data also reveals how long it takes for each village to travel to a water source. The data visualization below maps travel time using the same collector types as in the previous map. While the average time for all villages clocks in at an alarming 46 minutes and 45 seconds, the severity of this problem is much worse for some villages than others. Selecting the auto-style feature on average time to water source reveals that adult women in Zirai travel approximately 2 hours and 21 minutes for water. More generally, participants discovered that in households where women made most of the decisions they not only continued to collect water, but walked further and carried more liters on average than men as well. Conclusion In addition to helping raise awareness on the global water crisis, WaterHack 2018 was able to highlight the problem’s multidimensional nature and the impact water insecurity has had on gender inequality. As Earth Day 2018 approaches, it is more important now than ever to work with data sharing technologies to find sustainable solutions to combat the effects of climate change. Originally published at carto.com.
How our solutions team engineered WaterHack 2018
14
how-our-solutions-team-engineered-waterhack-2018-142196ed8402
2018-04-20
2018-04-20 05:26:11
https://medium.com/s/story/how-our-solutions-team-engineered-waterhack-2018-142196ed8402
false
732
null
null
null
null
null
null
null
null
null
Water
water
Water
11,627
CARTO
CARTO leads the world of location intelligence, empowering any organization and individual to discover and predict key insights through location data.
bad1a3c5dd60
carto
5,759
2,268
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-08
2018-02-08 17:29:35
2018-02-08
2018-02-08 17:36:38
1
false
en
2018-02-08
2018-02-08 17:36:38
5
1421ee3cc731
3.022642
0
0
0
If human history is taken as a reference point and compared with the current state of affairs (in all walks of life), we would find a clear…
5
The Future of Compliance Management If human history is taken as a reference point and compared with the current state of affairs (in all walks of life), we would find a clear pattern emerging between demand and supply. As far back as in 6000 BC, humans developed the barter system. With passing time, rules were created to accommodate complex transactions. As rudimentary as those regulations were in those times, their pace of development remained steady and was in direct correlation with ever developing human acuity and urgency of demand. Now, in the year 2018, the organization heads in different industries of the world find themselves at a juncture, where the future looks so complex that they are struggling to cope up with it. But just like in the past, when demand for a solution challenged the human brain, the mind triumphed again in the form of the Compliance Management Solution. This amazing software solution is powered by a transformative technology that is ready to shape up the world of tomorrow. Future of Compliance Management A survey of 150 leading compliance officers conducted by Accenture in 2017 revealed that compliance functions have matured, but steps needed to be taken to enhance their capabilities to increase efficiency and effectiveness. It is expected that investment in capabilities would increase for 89% of institutions (surveyed in Compliance Risk Study) over the next two years, as organizations gear up to meet the demands of the future. The survey also points towards a rise in cyber risk, which is expected to be one of the top three most challenging risks to manage in the coming years. Through this survey, we also understand that there is a need for continued innovation for countering threats posed by regulatory risks. Managing Risks in BFSI industry with Artificial Intelligence The banking, financial services, and insurance industry has been the worst affected by the rising number of regulatory compliances. As highly qualified managers, CEOs, and decision makers struggle to solve the puzzle of compliances laid down by regulatory bodies, the world awaits a helping hand from the genius minds in the IT world. After years of research and struggle, the help seems to have arrived in the form of machine learning and NLP backed software solutions that are becoming efficient by the day. Machine Learning and NLP are subsets of AI, which has been used by many organizations for minimizing compliance related risks. AI has the power to evolve and when it is integrated into a Compliance Management System, it is able to learn from the system by deciphering trends and patterns. Through constant learning, it can offer solutions to the toughest of problems that a human mind simply cannot understand. When it comes to employing an AI based solution, it becomes important to do a thorough research of the many features that it offers. A comprehensive Compliance Management Solution should ideally cover the following aspects of an organization’s business: Regulatory Change Management Risk Management Policy and Procedure Management Compliance Management Audit Management Qualification Management Learning Management Performance Management Although organizations tend to opt for a point-solution (one solution for a specific area) just to address the issue at hand and neglect the other areas where compliance overlaps and creates operational hazards. 
Due to the unpredictability of the future, it is not advisable to leave the fate of your organization in the hands of a software solution that covers only a few aspects and leaves other important areas unattended, which could hinder the organization's overall compliance objectives. Harnessing the Power of AI with Predict360's Software There are many software solutions available that are specifically designed to eliminate compliance related errors and thwart risks. Before procuring a Compliance Management Solution, you need to ensure that it meets your requirements and covers the operations of all the departments in your enterprise. As regulatory compliance is predicted to become even more complicated than it is right now, you should look to employ a compliance-intensive solution from a company that has a vision for the future. Predict360™ Compliance Management Solution is endorsed by the American Bankers Association and comes with seamlessly integrated modules that work in tandem to solve all compliance related problems. Besides standard modules for compliance management, it also offers Third Party Management to streamline complex vendor management issues. For more information about Compliance Management Solution and how it can be further enhanced as part of an integrated risk and compliance management suite, visit http://www.360factors.com/aba.
The Future of Compliance Management
0
the-future-of-compliance-management-1421ee3cc731
2018-02-08
2018-02-08 17:36:39
https://medium.com/s/story/the-future-of-compliance-management-1421ee3cc731
false
748
null
null
null
null
null
null
null
null
null
Compliance
compliance
Compliance
1,940
fahad.mateen
Digital Marketing professional overseeing www.360factors.com inbound marketing efforts. Having +3 years of experience in Digital space,
8a96425087be
fahad.mateen
1
21
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-21
2018-09-21 04:45:37
2018-09-21
2018-09-21 08:35:13
4
false
en
2018-09-21
2018-09-21 08:35:13
16
1423b9ecb603
7.262264
12
0
0
In the sixties, Philip Emeagwali grew up in a civil war-torn Nigeria but found the curiosity, resourcefulness and tenacity to obtain two…
5
AI Transformation in Africa: Are you paying attention? In the sixties, Philip Emeagwali grew up in a civil war-torn Nigeria but found the curiosity, resourcefulness and tenacity to obtain two master's degrees in mathematics and civil engineering despite incredible odds. Inspired by a 1922 science fiction novel, Emeagwali disproved naysayers and accessed the Los Alamos National Laboratory to remotely program 65,536 processors to perform 3.1 billion calculations per second. This, amongst other notable technology advances in that era, laid the foundation for the computational models behind the Artificial Intelligence (AI) machines and mechanisms of today. In modern computational circles, he is considered one of the true AI visionaries, and a pioneer to a league of great African innovators. Fast forward to 2018, and the fourth industrial revolution has been ushered in by the power of AI, with the potential to add $16 trillion to the global GDP and create 2.3 million new types of jobs according to data from McKinsey. Attentive and innovative nations are currently devoting significant investments towards better placement in the potential reformulation of the new world order. Across Africa, some visionaries have also started tackling major socio-economic challenges with AI, with a multi-sectorial focus and outcomes. South African business icon and chief supporter of AI initiatives in Africa, Andile Ngcaba, once remarked that "the blank slates in many areas of Africa represent massive opportunities for the brightest entrepreneurs and innovators". DEMO Africa, the continent's premier launchpad for startups, recently announced its 2018 class of 30 finalists for this year's edition in Casablanca, Morocco. Six of the 30 competing startups leverage AI to tackle important socio-economic challenges across industries like finance, healthcare and agriculture. From our vantage point, there are three (3) major factors that will drive Africa into the Fourth Industrial Revolution, and these factors will be powered by AI in a new age of venture creation and innovation methods. The factors that we are betting on are Courageous Founders, Ecosystem Enablers and Infrastructure Advancements. Courageous Founders The six (6) DEMO finalists leveraging AI are my new favorite class of founders, as they look to transform lives for millions on the continent across a number of industries. In the health space, Elcs research from Algeria has built Smarth.io, a radiology information system that helps hospitals manage cancer patients, provides access for doctors abroad to assist during busy periods and collects data used to train AI models to automate cancer diagnosis for millions who can't make it to the hospital. In agriculture, Complete Farmer from Ghana is looking to create food sustainability in the region by offering a secure online channel for expanding local agricultural investments while employing improved seedlings, novel farming methodologies and AI-driven insights to maximize farm output. In the industrial space, Niotek from Egypt is building industrial IoT solutions to enable manufacturing facilities to automate processes, reduce errors, improve quality of service and directly impact the bottom line. Casky from Morocco is making it safer to be a motorcyclist by offering an IoT safety and data collection device attachable to driver helmets, along with a SaaS platform with data analytics to improve riding habits and insurance premiums. 
Atlan Space, also from Morocco, is providing drones to track illegal fishing activity, while Chefaa from Egypt offers a smart on-demand prescription drug service to expand care to more people. These founders, along with the 24 other finalists, represent the fresh crop of African entrepreneurs who, like their counterparts abroad, obsess about making the world around them a better place, but also have to display a peculiar courage and tenacity to innovate through all the infrastructure and resource gaps unique to their countries. They have shown great resolve to make it this far, and hope to meet and engage with DEMO delegates who will plug their gaps and accelerate their road to impact. Ecosystem Enablers African governments, investors, entrepreneurs and innovation stakeholders, including diasporans, are beginning to build pockets of innovation ecosystems across the continent to promote the use of AI driven technologies in delivering solutions to some of the continent's greatest problems. Investors General venture investments on the continent grew about 50% last year to the tune of about $560 million. Recently the Rise Fund invested $47.5 million in Cellulant to advance the direct application of cutting-edge technologies like AI and Blockchain to drive financial inclusion. As far back as 2011, Cellulant had built e-wallets for Nigerian farmers to receive subsidies from government and increased the reach of the national subsidy program from 1 million to 17 million farmers. Cellulant intends to continue to leverage this type of reach to expand the transformation and digitization of the agricultural value chain across Africa. Innovation Catalysts Now on its 7th edition, DEMO Africa continues to provide a technology launchpad for young African startups to meet a global stage of customers, investors and enthusiasts. Moving from South to East and now North Africa via Casablanca, Morocco, the DEMO event has also focused on identifying budding ecosystems and opening them up to the rest of the world for an innovation exchange that positions that region for stronger growth and impact. This year, a good percentage of the DEMO Africa finalists are launching technologies that are AI driven. National Governments A number of African governments are also driving initiatives to foster a culture of solving problems with data, bringing transparency and efficiency to the way resources are deployed. Tunisia and Kenya are two African countries that have publicized their long-term national strategies towards growing the general awareness of AI driven initiatives that will drive economic growth. A number of national governments and their respective innovation stakeholders continue to send key delegates around the world to engage in knowledge exchanges with leading global innovation actors, and return home to implement policy and infrastructure improvements that position their countries for growth powered by AI. AI Infrastructure The nature of infrastructure necessary for sustainable AI adoption and integration continues to evolve around the world, especially as AI tools and services advance towards making the complex processes of AI more practical, expanding the required human capital pool from just statisticians and scientists to software developers, and from tier-1 engineering organizations to regular businesses. 
This encapsulation and abstraction layer is best delivered by cloud service providers around the world, who provide access to the required big data storage, low-latency serving networks and the specialized high performance computing hardware and software stack needed to catalyze scalable AI environments. Cloud and mobile penetration are key to operationalizing AI in Africa, and both depend on having a strong network capacity and connectivity backbone to support smooth distribution of smart end-user services. It is no longer news that Africa's mobile phone adoption growth rate is one of the highest in the world, with almost a billion connections and over 35 mobile network operators on the continent today. Several countries, such as Seychelles, Tunisia, Morocco and Ghana, have mobile subscription penetration rates in excess of 100%. Mobile phones are the primary distribution channels for new technology solutions on the continent and have already enabled companies to distribute AI-powered finance, health and education solutions to users in urban and rural centers alike. Cloud adoption is a bit slower, as large corporates continue to hesitate to move away from their already deployed on-premise infrastructure, but it represents a remarkable opportunity. Companies like Deviare in South Africa continue to push the cloud & digital transformation message and have achieved considerable success transitioning younger companies to the cloud and ramping up their operations to serve millions of customers. Internet penetration on the continent grew a further 20% in 2018, after growing more than seven times the global average from 2000 to 2012. Bandwidth and connectivity on the continent continue to grow with strategic investments to reduce the cost of access. Continental pioneers like Convergence Partners, Liquid Telecom and SEACOM, in collaboration with Silicon Valley players like Google, Verizon and Facebook, have deployed critical infrastructure towards faster and better priced broadband services across the continent. These investments and advancements will open the doors to new digital economies across the continent, driven by ubiquitous, high-speed networks that will deliver digital services to millions of citizens at affordable rates. As African stakeholders continue to bet on future technologies like 5G, local economies will start to set infrastructure targets towards supporting everything from self-driving cars to remote medical surgery, new immersive virtual realities, drone deliveries, AI robots, intelligent agriculture, connected cities, smart logistics and more. Africa Is Ready — Are you? As the continent embraces the Fourth Industrial Revolution, progressive-thinking early adopters are using AI to address tough socio-economic challenges and ensuring that a lot more businesses are empowered with knowledge, capital and market-ready resources to solve a myriad of multisectorial problems with cutting edge solutions. The recently concluded conference for African academics in AI — the Deep Learning Indaba — saw a total of 274 top-quality research papers showcased, and the likes of Nando de Freitas of DeepMind shared their experiences on the ingenuity and applicability of Africa-specific research in solving problems of global relevance. 
Over 500 professionals in AI from within and outside the continent came together to gain knowledge and exchange global best practices geared towards advancing the impact of AI on the continent and empowering a self-taught, youthful population that is benchmarking itself against global standards. The next stop on our journey is Casablanca at DEMO Africa this October. The DEMO platform has connected more than 3,000 local startups to a global network, created thousands of high-paying jobs, and incubated 250+ startup teams across the continent. At this year's event I will join several other industry specialists to advance the discussions on the Business of AI in Africa, and work with the DEMO Africa finalists to sharpen their technology-driven go-to-market strategies. Join us at DEMO to amplify this movement as we look to reshape the very tapestry of technology and investment in Africa. We expect the ripple effects of our efforts to travel beyond the boundaries of the continent. See you in Casablanca! About the Author Alex Tsado is a Product Marketing Lead for the Cloud Service GPU business at NVIDIA, with verticals in AI, Deep Learning and Accelerated Compute. He is also an Associate at the African Technology Foundation and will be moderating a panel on AI at DEMO Africa 2018.
AI Transformation in Africa: Are you paying attention?
127
ai-transformation-in-africa-are-you-paying-attention-1423b9ecb603
2018-09-21
2018-09-21 08:35:13
https://medium.com/s/story/ai-transformation-in-africa-are-you-paying-attention-1423b9ecb603
false
1,739
null
null
null
null
null
null
null
null
null
Africa
africa
Africa
18,978
African Technology Foundation
To bridge knowledge gaps and support the internationalization of African technologies
3742310c2df0
Innovate_Africa
360
265
20,181,104
null
null
null
null
null
null
0
# Imports
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, KFold, GridSearchCV, cross_val_score

# Load the Pima Indians diabetes dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
df = pd.read_csv(url, names=names)

# Manual 80/20 split of the full dataframe
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)
# shows shape of train set + data
print(train_df.shape)
print(train_df.head())
# shows the same thing but with the test data
print(test_df.shape)
print(test_df.head())

# Separate features and target, then split again for modelling
Y = df['class']
X = df.drop(['class'], axis=1)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=12)

# Logistic regression accuracy on a single hold-out split
model = LogisticRegression(C=20, penalty="l1", solver="liblinear")
model.fit(X_train, Y_train)
result = model.score(X_test, Y_test)
print("Accuracy: %.3f%%" % (result * 100.0))

# 5-fold cross-validation of the same model
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
results = cross_val_score(model, X, Y, cv=kfold, scoring='accuracy')
print(results)
print("Mean accuracy and STD", (results.mean() * 100.0, results.std() * 100.0))

# Grid search over C and the penalty type
logistic = LogisticRegression(solver="liblinear")
penalty = ['l1', 'l2']
C = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22]
hyperparameters = dict(C=C, penalty=penalty)
classifier = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)
best_model = classifier.fit(X, Y)
from pprint import pprint
pprint(classifier.cv_results_)

# Compare several model families with 3-fold cross-validated AUC
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

svc_scores = cross_val_score(SVC(), X, Y, cv=3, scoring='roc_auc')
print("Mean AUC Score - SVC: ", svc_scores.mean())
decision_tree_scores = cross_val_score(DecisionTreeClassifier(), X, Y, cv=3, scoring='roc_auc')
print("Mean AUC Score - Decision Tree: ", decision_tree_scores.mean())
random_forest_scores = cross_val_score(RandomForestClassifier(), X, Y, cv=3, scoring='roc_auc')
print("Mean AUC Score - Random Forest: ", random_forest_scores.mean())
naive_bayes_scores = cross_val_score(GaussianNB(), X, Y, cv=3, scoring='roc_auc')
print("Mean AUC Score - Naive Bayes: ", naive_bayes_scores.mean())
knn_scores = cross_val_score(KNeighborsClassifier(), X, Y, cv=3, scoring='roc_auc')
print("Mean AUC Score - KNN: ", knn_scores.mean())
22
null
2018-05-27
2018-05-27 16:25:52
2018-05-27
2018-05-27 21:53:56
1
false
en
2018-05-28
2018-05-28 14:38:23
1
142534848754
3.773585
1
0
0
I had a late start to this section after taking a much needed break from technology, but when I came back the great employees at Lambda…
5
Learning Model Tuning/Supervised Learning I had a late start to this section after taking a much needed break from technology, but when I came back the great employees at Lambda guided me so I could continue my education. Today I'm going to show you parts of Model Tuning and Supervised Learning in Machine Learning (mouthful I know). Cross Validation The first part I'd like to discuss is Cross Validation. We already know that we have to split the dataset into the test and training set. The problem is that when you do this, the model can overfit or underfit, because the 80 percent (training set) and 20 percent (test set) are selected randomly, so a single split may not be representative. First let's start off by splitting the data (in this case the diabetes dataset) manually: We loaded the features after we loaded the dataset, and using train_test_split we can now split the data. Just like that we have 2 sets of data. For further testing we can print the shape and the head of the dataset. We know we can get the accuracy of this dataset with LogisticRegression: The problem arises when we don't know what set of data is being chosen for the test and the training set, since it's randomized. This is where KFold comes in. KFold Imagine you have a large piece of paper and you fold it into different sizes. The more folds the piece of paper has, the more creases, or "sets of data", the machine has to iterate over, and in return we should get a better prediction and a more reliable estimate of the accuracy of the model. So what we did here is we took the Y (the class column) and the X (the features of the model) and we implemented KFold, which essentially "folded" the data and gave us a more accurate and presentable accuracy score. Now what if we wanted to do a sweep of values and find the most optimized mean result? Well, we'd use GridSearch! While my C values might not be the best numbers to test, we can actually get a reasonable answer as to what C values give us the best estimate for the model we have provided. Notice that our X and our Y really do not change even with this added code. This is the case MOST of the time, but if one needed to test new features or add more, that would change. These are just some basic concepts of Model Tuning, and if anyone is confused about the test and training sets, I like to think of them like this: X_train = features going into the model (NOT classifications). Y_train = the classes we have set, most likely presented as a 1 or a 0; in this case our Y_train would have been just the "class" column from above. X_test = the held-out features used to validate the model; the predictions made on these are compared against Y_test. Y_test = just like Y_train, these are the classes, but for the held-out test rows. I'd also like to point out that we have certain types of models that we can use. These range from SVM, Random Forest, Naive Bayes, etc. While it can be overwhelming to actually understand each model and its function or best use case, I'd like to point out how easy it is to actually implement these models into your code using libraries that already exist. SKlearn is an amazing library that gives you just what you need to test different types of models. Implementing these models is pretty easy, and once you set up your dataset for training of X and your Y it's a matter of setting your cv value and waiting for the results! 
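To make the KFold and GridSearchCV steps above concrete, here is a minimal, self-contained sketch using scikit-learn. It assumes the same Pima diabetes CSV used in the accompanying code; the fold count and the grid of C values simply mirror the walkthrough and are not tuned recommendations.

# Minimal sketch: cross-validation and a hyperparameter sweep with scikit-learn
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, GridSearchCV, cross_val_score

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
df = pd.read_csv(url, names=names)
X, Y = df.drop(['class'], axis=1), df['class']

# 5-fold cross-validation instead of trusting a single random split
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(solver='liblinear'), X, Y, cv=kfold, scoring='accuracy')
print("Mean accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))

# Sweep C and the penalty type; GridSearchCV keeps the best combination
grid = GridSearchCV(LogisticRegression(solver='liblinear'),
                    {'C': [2, 4, 6, 8, 10, 20], 'penalty': ['l1', 'l2']}, cv=5)
grid.fit(X, Y)
print(grid.best_params_, grid.best_score_)

Running this prints the cross-validated accuracy and the best C/penalty pair, which is exactly the kind of sweep described above.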
This blog post was mainly to make a point that while on paper a lot of these terms are new and confusing, with some patience and scrolling through documentation you can implement them and still be successful, as long as you know what you're implementing. As I continue my education with Lambda School I hope to grasp a deeper understanding of the world of Machine Learning and help others the best I can along my journey.
Learning Model Tuning/Supervised Learning
25
the-week-i-learned-model-tuning-supervised-learning-142534848754
2018-05-28
2018-05-28 15:42:25
https://medium.com/s/story/the-week-i-learned-model-tuning-supervised-learning-142534848754
false
947
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Brenner Haverlock
Attending Lambda School for Machine Learning
e07eaec56584
brenner.haverlock
19
23
20,181,104
null
null
null
null
null
null
0
null
0
3ef2e33bdd06
2018-09-10
2018-09-10 10:01:04
2018-09-10
2018-09-10 10:30:58
2
false
en
2018-09-10
2018-09-10 10:44:03
0
1425350b0d97
0.730503
6
0
0
Robotics is foreseen to be the next technological uprush. Many seem to admit that robots will have a powerful impact over the following…
4
Envisioning the future of robotics Robotics is foreseen to be the next technological upsurge. Many agree that robots will have a powerful impact over the coming decade, and some are betting heavily on it. Comprehending where the robotics industry is heading is not just guesswork. With the onrush of technologies such as artificial intelligence and machine learning, the potential of robotics is rapidly becoming feasible rather than a science fiction scenario. In this infographic, we present some striking insights into the technically attainable robotic capabilities we might expect in the near future.
Envisioning the future of robotics
14
envisioning-the-future-of-robotics-1425350b0d97
2018-09-10
2018-09-10 10:44:48
https://medium.com/s/story/envisioning-the-future-of-robotics-1425350b0d97
false
92
We provide offshore full-cycle software product engineering and design services.
null
trinetix
null
Trinetix
null
trinetix
null
trinetix
Robotics
robotics
Robotics
9,103
Julia Zhylina
On-line Marketing Manager in Trinetix
83e91f8c90cf
juliazhylina
28
38
20,181,104
null
null
null
null
null
null
0
null
0
467df38ba6f8
2018-08-06
2018-08-06 22:09:15
2018-08-06
2018-08-06 22:11:03
1
false
en
2018-08-08
2018-08-08 16:32:27
4
14269807b625
1.222642
0
0
0
What do AI researchers do at conferences? Check this out to find out!
5
Inside an AI Conference — Robotics Science and Systems What do AI researchers do at conferences? Check this out to find out! Originally published at www.skynettoday.com. Our mission here at Skynet Today is broadly to make understanding of AI more accessible. This involves more than just understanding the technical concepts of AI; it also means understanding how AI techniques get developed. The process of AI research is far more incremental and slow than the big milestones of the last several years may indicate. The basic unit of research is the paper, which is a summary of one or a couple of new ideas and results, as Wikipedia nicely summarizes: "In academic publishing, a paper is an academic work that is usually published in an academic journal [or conference]. It contains original research results or reviews existing results … A paper may undergo a series of reviews, revisions, and re-submissions before finally being accepted or rejected for publication. This process typically takes several months." In AI, the majority of papers are submitted to conferences: "An academic conference or symposium is a conference for researchers (not necessarily academics) to present and discuss their work. Together with academic or scientific journals, conferences provide an important channel for exchange of information between researchers." And so we get to this article's topic: the Robotics Science and Systems Conference, one of the top conferences in robotics. I happened to attend this event, and decided to record the experience for the benefit of AI researchers and non-researchers alike. Hopefully, this video makes the world of AI a little less mysterious for those not in it!
Inside an AI Conference — Robotics Science and Systems
0
inside-an-ai-conference-robotics-science-and-systems-14269807b625
2018-08-08
2018-08-08 16:32:27
https://medium.com/s/story/inside-an-ai-conference-robotics-science-and-systems-14269807b625
false
271
Putting AI News In Perspective
null
Skynet-Today-548433905531
null
Skynet Today
skynet-today
AI,DEEP LEARNING,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,ROBOTICS
skynet_today
Academia
academia
Academia
4,312
Andrey Kurenkov
An eclectic artistically inclined engineer who says things sometimes. www.andreykurenkov.com
8bac9fde4757
andreykurenkov
136
3
20,181,104
null
null
null
null
null
null
0
null
0
58769d5c41e1
2018-03-21
2018-03-21 16:44:20
2018-03-21
2018-03-21 16:45:02
5
false
en
2018-03-21
2018-03-21 16:56:56
5
14275a4e3a9a
5.203145
1
0
0
As the boom of the app era starts to die down, the search for a new platform becomes evermore important. Could chatbots be the next logical…
3
Where Do Chatbots Fit Into Your Mobile Strategy? As the boom of the app era starts to die down, the search for a new platform becomes ever more important. Could chatbots be the next logical step? The expanding messenger market Messaging apps have seen strong growth in recent years, so much so that they have overtaken social networks in popularity. Facebook Messenger alone has over 1 billion monthly users, overtaking its own sister social network in 2015. The messenger market is huge, and for good reason; it's where people communicate. While attracting users to a new platform will always remain a viable and valuable strategy for many businesses, for others the logic of 'go where your customers are' still reigns. Source: Business Insider In spite of their popularity, businesses aiming to take advantage of this shift in the market face many challenges, the largest of which revolves around resourcing. To keep the conversation open between users and your brand, there has to be someone available to keep these conversations going. Having humans handle this volume of conversations is unsustainable as a strategy for most businesses, and as such organisations are seeking to improve their mobile strategy through the use of chatbots, looking for ways to properly take advantage of this medium. Like many promising new technologies, chatbots have been clouded in hype, and it can be easy to be either overly optimistic or cynical about what they can actually deliver. The rush to deploy them without fully understanding their place in an organisation's strategy has spawned a raft of decidedly sub-par experiences for many users. This makes them easy to dismiss, but cutting through the hype and examining chatbots in the cold light of day, there are definite, valuable uses for them. Automated customer service Picture the scene: your washing machine starts leaking water everywhere but you have a date and are pressed for time; are you going to walk to your computer and wait for it to boot up, or do you just grab your phone and look for help? Until now, seeking help for situations like these has usually meant a call centre, a web-based FAQ, or perhaps live chat with a human, but research is showing consumer expectations are shifting. For most organisations, the majority of inbound queries are simple and easy to solve. Waiting to connect to a human and get a response from them is time consuming, even with more recent additions to the customer service channel mix such as Twitter and live chat. Chatbots, on the other hand, are almost instantaneous. They can resolve many simple queries, allowing customers to resolve their own problems quickly while giving them the same 'I fixed it myself' satisfaction that web-based solutions give. Source: VentureBeat While chatbots still struggle when it comes to more complex issues, this is more often than not due to their lack of context or poor communication by the user. In these cases chatbots can escalate the issue to a human, where the case takes on the form of a more traditional 'support ticket' process. While the user will likely have to wait a bit longer to get their answer, having chatbots handle the simple, consistent, easily resolved queries that clog up customer service centres and email inboxes can reduce the number of human staff an organisation has to rely on, allowing for faster and higher-quality resolution of complex issues. 
A single chatbot can deploy through multiple channels Through APIs, chatbots can connect to Facebook Messenger, Slack, Telegram, SMS, Twitter, and more. When the chatbot's main AI is updated with new abilities or intelligence, these enhancements become available across all interfaces. This versatility allows for widespread and cost-effective support on a variety of platforms, giving customers the ability to reach out to businesses on any of their preferred messaging platforms. Chatbots can follow up on their conversations A simple answer isn't always what a customer is looking for, and sometimes they need more information that they can revisit in their own time after they have left the conversation. Thankfully chatbots have this covered: they can follow up through other channels such as email or links, and send important documentation or even functional prompts for other apps, such as a calendar app. As chatbots continue to grow in popularity, improving customer satisfaction through their use will become a must for any business dealing with customer service, and will reduce the resources needed to handle calls and messages, saving on customer service costs and improving your overall service delivery. Improving customer experience The basic search and find journey, one which is critical for websites with large volumes of products or services, has not seen any significant changes to best practice in recent years; chatbots might just be the next step in improving this functionality. A good usable search and a well thought out and tested architecture have been, and will remain, the best solution for improving this journey, but what happens when a user simply doesn't know where to start looking, or doesn't have the time and patience for trial and error? Chatbots by nature are intelligent decision trees that get a user to an endpoint through a series of questions, quickening the path to content while clarifying the intent of the user. This is why chatbots are proving to be helpful as a search and find functionality. Through a series of questions, users can be guided through a smooth and more natural experience to find products, services, documents, and events. The more a user interacts with a chatbot, the more it can learn about the user, and this knowledge can then be used to create a more personalised journey with greater accuracy of returned results. Remote staff assistance Internal chatbots can also help your staff in a similar manner. They can allow remote staff to self-serve in the field, so they can keep doing their jobs without the need for extended interruptions when seeking information. This is ideal for hunting through lengthy documents for specific clauses and paragraphs while on a mobile, or finding the right answer to a customer query. Are chatbots the future? The more we have researched and experimented with chatbots, the more we have seen the true value of what they can bring to the future of UI. While they are still in their infancy, chatbots have already shown that in certain use cases they can greatly improve the experience of solving an issue, finding content, or answering queries from users. The demand for chatbots is growing, the tech is rapidly improving, and the benefits for both customer and business are proving to be significant. 
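To illustrate the 'decision tree' framing above, here is a minimal, purely hypothetical sketch of a conversation flow in Python. The node names, questions and endpoints are invented for illustration and are not taken from Screenmedia's work or any specific chatbot platform.

# Minimal sketch: a chatbot flow modelled as a decision tree.
# Each node either asks a question (answers map to child nodes) or returns an endpoint.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    question: Optional[str] = None                              # question asked at this step
    children: Dict[str, "Node"] = field(default_factory=dict)   # answer -> next node
    endpoint: Optional[str] = None                               # content/action at a leaf

# Hypothetical product-support flow
flow = Node(question="What do you need help with?", children={
    "washing machine": Node(question="Repair or purchase?", children={
        "repair": Node(endpoint="Book an engineer visit"),
        "purchase": Node(endpoint="Show washing machine catalogue"),
    }),
    "billing": Node(endpoint="Escalate to a human billing agent"),
})

def run(node: Node, answers) -> Optional[str]:
    """Walk the tree with the user's answers and return the endpoint reached."""
    for answer in answers:
        if node.endpoint is not None:
            break
        node = node.children.get(answer, Node(endpoint="Escalate to a human agent"))
    return node.endpoint

print(run(flow, ["washing machine", "repair"]))  # -> Book an engineer visit

In a real deployment the same tree (or an intent classifier in front of it) would sit behind each messaging channel's API, which is what makes the single-bot, many-channels model above workable.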
If you're interested in exploring chatbots for your business, or if you're looking to explore new digital media including voice, wearables, and the Internet of Things, get in touch to see how we can support you. Originally published at www.screenmedia.co.uk.
Where Do Chatbots Fit Into Your Mobile Strategy?
5
where-do-chatbots-fit-into-your-mobile-strategy-14275a4e3a9a
2018-03-26
2018-03-26 13:37:05
https://medium.com/s/story/where-do-chatbots-fit-into-your-mobile-strategy-14275a4e3a9a
false
1,158
Screenmedia is a BAFTA-award winning digital design practice working on the leading edge of multi-platform design and development.
null
null
null
Screenmedia Lab
screenmedia-lab
VOICE ASSISTANT,AUGMENTED REALITY,ARTIFICIAL INTELLIGENCE,UX,CHATBOTS
screenmedia
Chatbots
chatbots
Chatbots
15,820
screenmedia
Screenmedia is an award winning digital design practice. We craft compelling user experiences across web, mobile, and connected devices. www.screenmedia.co.uk
5ec7edd902de
screenmedia
485
405
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-21
2018-03-21 07:39:19
2018-03-21
2018-03-21 07:53:35
4
false
en
2018-03-21
2018-03-21 07:53:35
0
14284cf5ef5f
1.488679
4
0
0
Ubex, the pioneering advertising exchange platform, has attended the Token 2049 Conference in Hong Kong, showcasing the project to a broad…
5
Ubex at Token2049 in Hong Kong Ubex, the pioneering advertising exchange platform, has attended the Token 2049 Conference in Hong Kong, showcasing the project to a broad audience of industry professionals and enthusiasts. CEO & Co-founder of Ubex Artem Chestnov with Founder of Civic Vinny Lingham Ubex presented the project at the conference and had multiple highlights at the event, one of which was a meeting with Vinny Lingham, the founder of the Civic project. The discussion covered the development of security solutions for Ubex's internal platform currency holders and contributors. Artem Chestnov & AI Sofia robot The second major highlight was a meeting with Singularity Net representatives and their AI Sofia robot, who has already demonstrated her abilities as an advanced construct. Ubex has expressed heightened interest in collaborating with Singularity on developing algorithms, neural networks and AI solutions for the market. Ubex representatives are hopeful that negotiations with Singularity Net can proceed and develop into cooperation opportunities. Token2049 is the largest digital asset event in Asia, taking place on 20–21 March 2018 in Hong Kong and exploring the growing crypto ecosystem in depth. Ubex is conducting its full-scale roadshow and aims to visit a series of high-profile events in the blockchain industry to garner support for the pioneering project. Stay tuned for more news and updates from Ubex.
Ubex at Token2049 in Hong Kong
52
ubex-at-token2049-in-hong-kong-14284cf5ef5f
2018-05-14
2018-05-14 12:32:20
https://medium.com/s/story/ubex-at-token2049-in-hong-kong-14284cf5ef5f
false
209
null
null
null
null
null
null
null
null
null
Ubex
ubex
Ubex
116
Ubex AI
Telegram: t.me/UbexAI
bb334fb51280
ubex
6,289
2
20,181,104
null
null
null
null
null
null
0
# --- R: libraries for data prep and modelling ---
library(data.table)   # fread
library(dplyr)
library(magrittr)     # %<>%
library(Matrix)
library(xgboost)
library(adabag)       # bagging

# Read the raw viewing logs; keep sessions longer than 5 minutes after 2017-05-01
files <- list.files("Downloads/", full.names = TRUE)
final_data <- data_frame()
for (file in files) {
  data <- fread(file)
  data %<>%
    filter(played_duration > 300) %>%
    mutate(event_time = as.POSIXct(event_time)) %>%
    filter(event_time > '2017-05-01 00:00:00')
  final_data <- rbind(final_data, data)
}

# Label the end and start of each session with one of the four time slots
user_matrix <- final_data %>%
  select(user_id, event_time, played_duration) %>%
  mutate(event_start_time = event_time - played_duration) %>%
  mutate(time_slot_end = ifelse(strftime(event_time, "%H:%M:%S") >= '01:00:00' & strftime(event_time, "%H:%M:%S") < '09:00:00', 0,
                         ifelse(strftime(event_time, "%H:%M:%S") >= '09:00:00' & strftime(event_time, "%H:%M:%S") < '17:00:00', 1,
                         ifelse(strftime(event_time, "%H:%M:%S") >= '17:00:00' & strftime(event_time, "%H:%M:%S") < '21:00:00', 2,
                         ifelse(strftime(event_time, "%H:%M:%S") >= '21:00:00' | strftime(event_time, "%H:%M:%S") < '01:00:00', 3, NA))))) %>%
  mutate(time_slot_start = ifelse(strftime(event_start_time, "%H:%M:%S") >= '01:00:00' & strftime(event_start_time, "%H:%M:%S") < '09:00:00', 0,
                           ifelse(strftime(event_start_time, "%H:%M:%S") >= '09:00:00' & strftime(event_start_time, "%H:%M:%S") < '17:00:00', 1,
                           ifelse(strftime(event_start_time, "%H:%M:%S") >= '17:00:00' & strftime(event_start_time, "%H:%M:%S") < '21:00:00', 2,
                           ifelse(strftime(event_start_time, "%H:%M:%S") >= '21:00:00' | strftime(event_start_time, "%H:%M:%S") < '01:00:00', 3, NA)))))

# --- Python (Keras): simple CNN over the previous six weeks x 28 time slots ---
import numpy as np
from keras.layers import Input, Conv2D, Dropout, Dense, Flatten
from keras.models import Model

dataset = np.loadtxt("/Users/ricky/Desktop/train.csv", delimiter=",")
X = dataset[:, 0:168]    # previous six weeks
Y = dataset[:, 168:196]
X = X.reshape(51807, 6, 28, 1)
sequence_input = Input(shape=(6, 28, 1))
x = sequence_input
x = Conv2D(32, (1, 3), activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.75)(x)
x = Flatten()(x)
x = Dense(64, activation='relu')(x)
preds = Dense(28, activation='sigmoid')(x)
model = Model(sequence_input, preds)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])

# --- R: xgboost on the sparse 32-week history, predicting week 33 ---
output_vector <- xgb_train[, "week33"]
bstSparse <- xgboost(data = sparse_matrix, label = output_vector, eta = 0.1,
                     nthread = 2, nrounds = 200, objective = "binary:logistic")

# --- R: bagging ensemble of the xgboost and regression predictions ---
train_ensemble$true <- as.factor(train_ensemble$true)
bag_fit <- bagging(true ~ xgboost + lm, data = train_ensemble, mfinal = 30)
test_pred <- predict(bag_fit, test_ensemble)
10
76d582de2a0e
2018-05-29
2018-05-29 03:28:01
2018-01-24
2018-01-24 09:33:21
1
false
zh-Hant
2018-05-29
2018-05-29 03:52:02
8
142963f8286e
1.34717
0
0
0
This is my second time taking part in a competition held by KKTV. The goal this time is to use users' viewing data to predict whether they will watch shows in each time slot of the coming week.
3
KKTV Data Game practice This is my second time taking part in a competition held by KKTV. The goal of this competition is to use users' viewing records to predict whether they will watch shows in each time slot of the coming week. The organizers also thoughtfully provided several benchmarks, which we will implement one by one below: first the data preprocessing approach, then each model. Data preprocessing 1.1 data preprocessing 2. Modeling 2.1 regression 2.2 convolutional neural network 2.3 xgboost 3. Ensemble learning 3.1 bagging 1.1 data preprocessing The training data is spread across 45 csv files. We kept the most recent three months of data, treated sessions with a viewing time of more than 5 minutes as valid behaviour, and merged everything into one large dataframe. The time slots are cut exactly as defined in the data description: timeslot0 : 01:00:00 ~ 09:00:00 timeslot1 : 09:00:00 ~ 17:00:00 timeslot2 : 17:00:00 ~ 21:00:00 timeslot3 : 21:00:00 ~ 01:00:00 Sessions that span time slots are handled a bit lazily here: if a user watches from 8 to 10 o'clock, two hours are recorded in both slot0 and slot1 (something to fix later if there is a chance). 2–1. regression I was a bit shocked at first to see that regression could give such good results, because when we first looked at this problem we felt that every user's viewing behaviour should be quite different, so it should be hard to describe everyone's behaviour with a single model. We initially used Prophet, the time series forecasting method released by Facebook, to predict from each user's first 32 weeks of data, reaching an accuracy of at most about 80.3%, so it was genuinely surprising to later see regression reach such high accuracy. The data fed into the regression looks just like the chart published by the organizers: we build the regression from the first 32 weeks of data plus the timeslot and predict week 33, so each user should have 28 records, one per timeslot. The original time format was 2017–08–21–0 (the date followed by which timeslot it belongs to). We split this into date and timeslot, converted the date into the week number with format(as.Date(date), "%W"), and used the weekdays function to get the day of the week. Once the data is cleaned up, we can start building the regression model with lm. 2–2. convolutional neural network This competition was my first chance to work with a CNN. I implemented a simple CNN with tensorflow. According to the second-place winner's write-up, training on the previous six weeks of data works best, so the input consists of six weeks, 28 timeslots, and the viewing time of each timeslot. 2–3. xgboost I had briefly played with the xgboost package before but had never used it in a competition, so this was a good chance to apply it. The input is the same as for the regression: 32 weeks of data plus the time_slot. First the data has to be converted into a sparse matrix; when using sparse.model.matrix, the first column produces an intercept of 1, so when using xgboost people usually write ~.-1 to remove the intercept: sparse_matrix <- Matrix::sparse.model.matrix(week33 ~ .-1, data = xgb_train[,-1]) After that we can start building the model. 3–1. bagging In many Kaggle competitions, the top places end up using ensemble learning to further optimize their models. The idea of ensemble learning is to combine the predictions of several classifiers in order to significantly improve classification performance, and bagging is an ensemble algorithm that improves classification by combining randomly generated training sets. Model comparison The ensemble learning approach here probably still needs more experimentation; I have no idea why bagging turned out to be the worst. Reference [1] Second-place write-up [2] Official benchmark release [3] Introduction to ensemble learning
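For readers who prefer Python, here is a minimal sketch of the time-slot bucketing described above, using pandas. The column names user_id, event_time and played_duration follow this record's R code; the sample rows are invented purely for illustration.

# Minimal sketch: map viewing events to the four KKTV time slots defined above.
import pandas as pd

def time_slot(ts: pd.Timestamp) -> int:
    """Return 0-3 according to the slot boundaries at 01:00, 09:00, 17:00 and 21:00."""
    h = ts.hour
    if 1 <= h < 9:
        return 0
    if 9 <= h < 17:
        return 1
    if 17 <= h < 21:
        return 2
    return 3  # 21:00-01:00 wraps around midnight

events = pd.DataFrame({
    "user_id": [1, 1],
    "event_time": pd.to_datetime(["2017-08-21 08:30:00", "2017-08-21 22:10:00"]),
    "played_duration": [600, 1800],  # seconds
})
events["event_start_time"] = events["event_time"] - pd.to_timedelta(events["played_duration"], unit="s")
events["time_slot_end"] = events["event_time"].apply(time_slot)
events["time_slot_start"] = events["event_start_time"].apply(time_slot)
print(events[["user_id", "time_slot_start", "time_slot_end"]])

A session that crosses a slot boundary still gets both a start slot and an end slot, which mirrors the admittedly lazy handling described in the write-up.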
KKTV Data Game practice
0
kktv-data-game-practice-142963f8286e
2018-05-29
2018-05-29 03:53:19
https://medium.com/s/story/kktv-data-game-practice-142963f8286e
false
304
Share Data science, machine learning and deep learning
null
null
null
Data Scientists Playground
null
data-scientists-playground
DATA ANALYTICS,DATA SCIENCE,MACHINE LEARNING,DEEP LEARNING,DATA ANALYSIS
null
Machine Learning
machine-learning
Machine Learning
51,320
ricky chang
null
118decbb44c2
ricky94511
5
2
20,181,104
null
null
null
null
null
null
0
null
0
92170c1e63a5
2018-03-04
2018-03-04 19:30:52
2018-03-05
2018-03-05 08:01:00
1
false
en
2018-03-05
2018-03-05 09:30:21
8
142ba872c3ed
1.822642
21
0
0
Files haven’t changed since they were born in 1950. They have always been used to storage all kind of information onto bites and make it…
5
The Next Generation of Distributed & Autonomous Files. Files haven't changed since they were born in 1950. They have always been used to store all kinds of information as bits and make it accessible to everyone. Files can be adapted to newly available technologies and provide more value to the end users. That's why at FileNation we want to build the next generation of intelligent files (IPFS + Ethereum) and actively contribute to all the changes that computer files are going to face over the following years. But how do we see the files of the future? We see them as files that allow the sender to know when the file has been opened, as some popular apps already do. The next generation of files will have the option to be opened at a specific time or event, or even to auto-destruct on a specific event; files will then be created for the future as well, improving the user experience even more. This will also have to include an option to send files only to a specific person and make sure that this person is the only one allowed to open the file, perhaps even only at a specific place, as with geocaching-based files. Files will also be trackable around the world, and their spread and user engagement will be fully analyzable. We can also call them Smart Files, as they will have a bot to talk with the user, and their name and color will be automatically generated to identify the content and size of the file. We strongly believe that file sharing also has to be modified and improved: files won't be shared if one of the users with access doesn't agree, and the permissions system will also be reworked. Users will also be able to replace or modify the file in real time using new and disruptive interfaces like voice control. Finally, unlocking of files is also important. We are thinking about files that can only be unlocked if a certain number of people want to unlock them, or files that can only be unlocked once a payment is made. This is just the beginning for a new generation of files; we have the tools and the opportunities are endless. Do you have any other ideas on how Smart Files can improve file sharing? If you liked the post, please show some love and clap your 👏 hands say yeah 🎵 Follow Alex Sicart on Twitter Try FileNation Follow FileNation on Github Follow FileNation on Twitter Thanks David Andres for helping.
The Next Generation of Distributed & Autonomous Files.
295
the-next-generation-of-distributed-autonomous-files-142ba872c3ed
2018-05-26
2018-05-26 18:19:25
https://medium.com/s/story/the-next-generation-of-distributed-autonomous-files-142ba872c3ed
false
430
The simplest way to send your files around the world using IPFS. ✏️ 🗃
null
null
null
FileNation
filenation
BLOCKCHAIN,ENTREPRENEURSHIP,DECENTRALIZATION,ICO
filenation_io
Blockchain
blockchain
Blockchain
265,164
Alex Sicart Ramos
Founder & CEO @ethshasta. @Forbes 30 Under 30. Advisory board @internxt_io. Crypto investor.
ea2b113e054f
alexsicart
389
107
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-16
2017-09-16 18:01:19
2017-09-16
2017-09-16 18:11:28
0
false
en
2017-09-16
2017-09-16 18:11:28
0
142e4b947179
0.460377
0
0
0
Data cleaning is a key part of data analytics process. In our toolbelt, using awk and sed at the command line are indispensable for working…
1
Command line tools for data analytics Data cleaning is a key part of the data analytics process. In our tool belt, awk and sed at the command line are indispensable for working with text files. Here are a few examples of how one can save time on the command line. Substitute text while reading from standard input: sed -e 's/01/JAN/' -e 's/Pizza/Piazza/' Sort two files and write the de-duplicated lines to a new file: sort file1 file2 | uniq > file3 Find records containing a string and save them to a new file: grep 'icecream' somefile.csv > anotherfile.csv Apply several substitutions to a file and save the result: sed -e 's/01/JAN/' -e 's/Fashion/Fusion/' somefile.csv > anotherfile.csv Put the third comma-separated column in a new file: cut -d',' -f3 somefile.csv > anotherfile.csv
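Since awk is mentioned above but not demonstrated, here is one more example in the same spirit; the file name and column position are assumptions for illustration rather than part of the original post. Sum the values in the second comma-separated column: awk -F',' '{ total += $2 } END { print total }' somefile.csv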
Command line tools for data analytics
0
command-line-tools-for-data-analytics-142e4b947179
2018-06-15
2018-06-15 04:22:04
https://medium.com/s/story/command-line-tools-for-data-analytics-142e4b947179
false
122
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Technology and Data
www.databitly.com
85323c0cd504
databitly
0
1
20,181,104
null
null
null
null
null
null
0
null
0
f45af444892f
2018-09-17
2018-09-17 14:28:56
2018-09-18
2018-09-18 13:37:06
2
false
en
2018-09-18
2018-09-18 13:44:46
2
142ef213866d
4.439937
5
1
0
David Farrell, General Manager, IBM Watson & Cloud Platform
1
HOW A CFO CAN BECOME A "CDM" David Farrell, General Manager, IBM Watson & Cloud Platform If you're a chief financial officer (CFO), you know the challenge of making sense of a tsunami of data within your organization. The pressure to keep your enterprise agile and make better decisions is only intensifying. You must manage business growth, allocate resources, contribute to strategy, and navigate your organization by managing risks, complying with regulations and improving governance. To do so, you must take advantage of cognitive tools and artificial intelligence (AI) — systems able to adapt and learn — to make sense of all the data flowing in. Analytics and AI help digest vast amounts of data and create insights with much greater speed than human or traditional computing platforms. This translates into better and quicker decisions to drive innovation, improve operational efficiency and address capital investments. In other words, CFOs must become "CDMs" — Cognitive Data Masters. AI provides more powerful predictive methods to draw out correlations between related operational, external and financial data, such as revenue and cost changes driven by customer demand, supply chain change, weather impacts or other external factors. AI can also help CFOs make decisions in both operations and performance analysis. To address fraud, waste and abuse, finance organizations can use machine learning and stream computing to create virtual "data detectives." In governance and compliance, AI can cut through the mountains of data, documents and regulations to ensure organizations fulfill their fiduciary and legal requirements. The rewards can be substantial. We've seen significant productivity savings with IBM clients who have adopted AI: An international bank fields over 35% of customer service inquiries and successfully resolves 85% of them automatically — saving human assistance for complex, high-value customer service. A software firm speeds up customer response times by 99%, resulting in a 10-point increase in customer satisfaction. And an energy supplier reduces the time employees spend searching for expert knowledge by 75%. Here are three additional success stories: Sapporo Sapporo Group Management Ltd. manages budgets for each subsidiary of Sapporo Holdings Ltd., including the group's restaurant chains. The eateries offer membership cards to frequent customers to gain insights into their preferences. But the company's data analysts had problems accessing, analyzing and reviewing customers' data — which in turn meant the company couldn't make decisions on a host of matters, the most fundamental of which was a master budget for the restaurants. Sapporo turned to IBM to help its accountants plan budgets and improve operating efficiency. Using IBM analytic software and Watson solutions, accountants can conduct data analytics and simulations for budget planning. Sapporo employees can also cluster restaurant customers by age, gender and ordered dishes. Then, they use that data for planning marketing measures and creating new menu items. The IBM solutions also automate data analysis, discovery and visualization across the group. Analysts can examine both structured data acquired from operational systems, such as the membership management system, and unstructured data collected from social networks such as Twitter. With these cognitive capabilities, Sapporo Group Management can now plan budgets consistently across the entire group. 
They can get precise and timely answers from their data, create compelling reports to distribute throughout Sapporo and use automated alerts to monitor changes to key findings. Cargills Bank Cargills Bank Ltd. needed to enhance its existing defensive cyber security capabilities, improve monitoring and implement stronger preventive protocols to guard against sophisticated threats. As they sought the best solution, Cargills learned about IBM's Watson security solution. The offering uses AI to fill gaps in security intelligence so that analysts can act with greater speed and accuracy. The solution assists in the investigation of potential threats by analyzing security blogs, websites, and research papers along with other sources. It combines these insights with threat intelligence and security incident data to shorten cyber security investigations from weeks or days to minutes. With this new capability, Cargills is the first commercial bank in Sri Lanka to deploy AI to augment its security posture, giving it a distinct advantage in the industry. It now has a trusted advisor for investigating and qualifying security incidents with a vast breadth of information. It can rapidly navigate vast pools of knowledge to speed response times, addressing challenges related to intelligence, speed and accuracy when investigating cyber threats. Crédit Mutuel A CFO always wants to ensure that his lines of business are using the best technologies to cut expense and improve customer experience. Crédit Mutuel is using Watson to transform the customer experience. Working with Watson-based solutions trained with internal business knowledge has helped them free up time; improve the speed, relevance and accuracy of responses to queries; and ultimately reinforce relationships with their customers, providing more personalized attention. The bank is using Watson Assistant and other Watson offerings for 20,000 client advisors, aiming to increase their responsiveness and the efficiency of email processing. The benefits? One-third the processing time for simple emails, resulting in 200,000 working days/year in productivity gains. The cognitive system can address 50% of customer emails without human intervention. This translates into €40 million in estimated savings for Credit Mutuel. The way forward · Prioritize where to apply analytics and cognitive computing AI and cognitive solutions are well-suited to a defined set of challenges. As you prioritize, determine whether each challenge involves a process that today takes your staff an inordinate amount of time to seek timely answers and insights from various information sources. For each problem, establish an integrated data strategy to identify the key data sources. · Lay the foundation CFOs need to drive commonality in data, process and technology through standardization, governance and rationalization. Additionally, finance organizations should establish a center of excellence for cognitive and AI to scale expertise. The scope should include activities targeting revenue growth and risk management. · Operationalize Make sure you can operationalize and understand the value of your AI capabilities. IBM offers an AI solution that can be integrated into your existing cognitive framework — providing trust and transparency in your AI deployment. It deploys a dashboard and system-use metrics that help you decide how to scale AI while managing specific applications. 
New insights into data, better control of financial “books” across the enterprise, improved governance and risk management — all are possible, along with new competitive advantage, for those CFOs taking advantage of the revolution in cognitive and AI. Learn more about “the cognitive CFO”
HOW A CFO CAN BECOME A “CDM”
43
how-a-cfo-can-become-a-cdm-142ef213866d
2018-09-18
2018-09-18 13:44:46
https://medium.com/s/story/how-a-cfo-can-become-a-cdm-142ef213866d
false
1,075
Cognitive Voices. Discussions on latest happenings in AI and cognitive computing.
null
ibmwatson
null
Cognitive Voices
null
cognitivebusiness
COGNITIVE COMPUTING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
ibmwatson
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
David Farrell
I'm General Manager of IBM's Watson & Cloud Platform team. I'm passionate about helping clients achieve business success through cloud and cognitive technology.
886d2ca24ede
davidfarrellIBM
25
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-28
2018-04-28 13:55:00
2018-04-28
2018-04-28 14:17:40
6
false
en
2018-05-04
2018-05-04 05:50:30
9
142fbd02489b
3.006604
7
0
0
Dear Alphacats!
5
Alphacat Report - Mid-month of April Dear Alphacats! As part of our efforts to be transparent and communicate regularly with our community, we are pleased to share this mid-month report, which includes our progress during these last two weeks and our outlook for the future. Branding and Community 2018.04.17 Alphacat's introduction video begins production. It will be released along with our new official website. 2018.04.18 Dr. Liang Li — a well-known global fintech expert — joins the Alphacat team as Director of our U.S.-based laboratory. Dr. Liang Li has joined Alphacat's Research & Development (R&D) laboratory in the United States and will focus on "FinTech and Crypto assets" research. This includes the R&D of Alphacat's core financial engines: the core algorithm construction of AI prediction engines, big data analysis and processing for cryptocurrencies, core investment strategies, and other related technologies (such as automated trading). Reference: https://medium.com/@AlphacatGlobal/alphacat-us-laboratory-facility-using-ai-to-help-to-build-a-powerful-financial-engine-focused-on-9f0d5b1f35c2 2018.04.20 A handy guide on adding Alphacat tokens to NEO wallets is published. The NEO wallets that support ACAT include: O3, NEO-GUI, Neon Wallet, Morpheus Wallet, NEO Tracker, and OTCGO. Reference: https://medium.com/@AlphacatGlobal/adding-the-alphacat-token-to-your-neo-wallet-a-handy-guide-12b39d84ec4a 2018.04.23 Alphacat's Whitepaper is updated to version 1.5.0 and will be released on our new official website. The new version of our Whitepaper will be more detailed and provide a clear introduction to Alphacat; it will also include a new version of our roadmap. 2018.04.26 Alphacat's Daily Forecasting Service begins its second round of user testing. The first product powered by the ACAT prediction engine, "BTC Daily Forecasting", has undergone small-scale testing and has won the support of its users. The Alphacat team is also working on ETH, NEO, and EOS daily forecasting products. The beta versions are expected to be launched between May and July. Reference: https://medium.com/@AlphacatGlobal/eth-neo-eos-daily-forecasting-coming-soon-1995628b3881 2018.04.27 An updated version of the Alphacat Roadmap is completed and will be released on our new official website. The new roadmap re-sorts the development timeline for each product and refines the development roadmap for each product category. A segment of our new roadmap Exchange 2018.04.24 The ACAT token listing date on the HitBTC exchange is delayed. HitBTC exchange integration is currently in progress. The exact date of the listing will be announced upon official confirmation from the exchange. More exchanges are in negotiation. We will announce further status updates as soon as possible. Product Development 2018.04.24 The initial design draft of the ACAT Store homepage is completed. The Alphacat team invited the Alphacat community to vote on the design of Alphacat's store. Based on the voting results and user suggestions, the Alphacat team is working to improve the design. 2018.04.26 The beta versions of ETH, NEO, and EOS Daily Forecasting are expected to be launched between May and July. 2018.04.26 The API database is undergoing internal testing. Based on the problems noted, we fixed bugs and improved performance. 2018.04.28 The core algorithm for Real-time Forecasting of cryptocurrencies is updated to version 3.0. This version modifies the AI training and forecasting algorithm and reduces the average rate of forecasting errors. 
For more information about Alphacat: Website: www.Alphacat.io Telegram: https://t.me/alphacatglobal Medium: https://medium.com/@AlphacatGlobal Twitter: https://twitter.com/Alphacat_io Facebook: https://www.facebook.com/Alphacat.io/ Reddit: https://www.reddit.com/r/alphacat_io
Alphacat Report - Mid-month of April
296
alphacat-report-mid-month-of-april-142fbd02489b
2018-06-03
2018-06-03 02:43:30
https://medium.com/s/story/alphacat-report-mid-month-of-april-142fbd02489b
false
545
null
null
null
null
null
null
null
null
null
Cryptocurrency
cryptocurrency
Cryptocurrency
159,278
Alphacat
Alphacat is a robo-advisor marketplace that is focused on cryptocurrencies, and is powered by artificial intelligence & big data technologies. www.Alphacat.io
6300c5cec1ab
AlphacatGlobal
318
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-20
2018-09-20 12:57:54
2018-09-20
2018-09-20 13:00:49
0
true
en
2018-09-20
2018-09-20 13:14:27
0
1433edf4b116
4.581132
0
0
0
It’s a relatively common thing, to consider life as a competition. There are essentially finite resources, growing populations, and each…
5
The Importance of Bettering Yourself & Skill Acquisition It’s a relatively common thing, to consider life as a competition. There are essentially finite resources, growing populations, and each new person needs to find a way to provide for himself/herself. Some people hold a more pessimistic view, that life is a zero-sum game (one in which a person winning, requires another losing). Whatever your view of the competition in life, it’s clear that being competitive is a positive personality trait, and one that is often sought after in various markets. Making oneself a competitive candidate is presumably the goal of all young undergrads — like myself — who wish to take an impactful first step into the postgraduate world. Developing a sound strategy for this requires diligence, creativity, commitment, and — of course — competitiveness. In this post, I’m deciding to discuss a very particular part of postgrad strategy: bettering oneself by acquiring new skills. First, I should be clear about what types of skills I’ll be referring to. These “new skills” will be skills that function as extra knowledge, certifications, abilities, etc. Most importantly, they will be skills that I believe will have great utility and attractiveness going into our future reality. Predicting the future is certainly dicey business, but I’m not going to get too radical here. As much as it seems the world changes, it’s remarkable how much it stays the same. The skills I believe will be critical moving forward are: creative solution making, coding & machine learning, moral & ethical reasoning (in regards to AI), data analytics, and critical thinking. Creative solution making is essentially the lifeblood of entrepreneurship, along with self-sufficiency and determination. It takes creativity to start new businesses, build new technologies, and create new markets. We’ve witnessed the benefits that come with understanding coding and computer programming, perhaps most famously with Mark Zuckerberg creating Facebook basically out of his dorm room. The new frontier of coding seems to be machine learning, which is the field of CompSci that involves giving computers access to large amounts of data and coding them in a way that allows the computer to “learn” without giving it specific programming to do so. In other words, computers usually work by explicitly programming them to do specific tasks; they don’t sift through data on their own and use that data to predict outcomes. ML makes this possible, and is perhaps the largest step we’ve taken toward creating an Artificial Super-Intelligence. Artificial Intelligence is a whole breadbasket in and of itself, and I’ll certainly have to write a few posts about it in the near future, but here I’ll simply outline my point about the morality and ethics surrounding it. It’ll most likely be quite awhile before any Super AI is developed, meaning an AI that has general intelligence, not just narrow intelligence. However, our current world is surely heading toward a massive shift transportation-wise into self-driving cars. The self-driving car is one of the most premier narrow AI’s as far as widespread application is concerned. Car accidents account for an incredible amount of deaths worldwide, and self-driving AI has been shown to be more skilled at driving than human operators. It’s probably safe to say that we will see the transition to self-driving cars in the near future, and that comes with a whole host of moral and ethical concerns. 
The self-driving AI will surely come under scrutiny when the cars do eventually crash, and the choices the AI will need to make have to be decided by us. A simple example would be a sudden stop happening in front of a self-driving car, it finds it needs to switch lanes quickly as it cannot stop in time safely, on the right of the car is a motorcycle and to the left is a car. We know that swerving into the motorcycle almost surely would result in serious injury — if not death. However, does this mean in that situation the car should always decide to swerve into other cars? What if the car has children in it? What if the people in the car aren’t wearing seatbelts? What if the road has drop-offs on the side the other car is on? Deciding how an AI driver will handle the worst case scenarios is difficult, and the more sophisticated AI’s become, the more ethical and moral reasoning we will need. It’s certainly possible that an AI with general intelligence would need to be given some form of rights. Would a house-robot (imagine Alexa but with human-like general intelligence) be essentially a slave? General intelligence would allow it to think like a human being; it would be conscious. Does that consciousness necessitate human rights? What is it that makes us human? It seems clear to me that having a strong foundation of moral and ethical reasoning will be indispensable going forward in our technological world. - Data analytics and critical thinking are fairly self-explanatory. Our world runs on data, and analyzing it through programs like SPSS, STATA, and processes like machine learning will continue to be the standard. The usefulness of critical thinking should go without saying and cannot be overstated. An inspiration for this post came partially from the New Year’s resolution of a friend of mine, which was to do something new each week. I, myself don’t actually have a weekly goal to do something new, but I do feel that the concept is important. I try to take steps to “better myself” in whatever way I can, whether that’s reading up on an interesting subject, listening to an informative podcast, learning a new language, or even starting a blog to force myself to analyze and write on a more frequent basis. My most recent undertaking was one that I had been meaning to do for a long time now. This week, I started learning how to code. I’ve started with learning HTML5 and definitely have a lot to learn. My goal is to become comfortable and competent with fundamental program coding, adding a layer of sophistication to my personal profile that will separate me from those who haven’t done so. Obviously, there will be plenty of people who understand coding much more than I ever will, but life isn’t about being the absolute best at everything all of the time. Life, like Poker, is a game of small edges, and exploiting small advantages here and there makes quite a positive difference on long-term Expected Value (+EV). As I see it, adding new skills that are useful, pertinent, and future-oriented is a great strategy for increasing one’s competitiveness in the market. In conclusion, whether you see yourself in the job market, the venture market, or in a future emerging market, there will always be competition. You will always be competing, so why not make yourself more competitive? Find something you really want to do, and be great at it! 
Challenge yourself to be a better analyst, a better marketer, a better engineer, a better programmer, a better nurse, a better scientist, a better philosopher, a better trader, a better lawyer, a better politician, a better citizen, and — most importantly — a better person. Stay Thinking Presently, Holden
The Importance of Bettering Yourself & Skill Acquisition
0
the-importance-of-bettering-yourself-skill-acquisition-1433edf4b116
2018-09-20
2018-09-20 15:12:40
https://medium.com/s/story/the-importance-of-bettering-yourself-skill-acquisition-1433edf4b116
false
1,214
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Holden Cantrell
null
309e80cbb833
holdencantrell
24
25
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-18
2018-02-18 18:31:17
2018-02-18
2018-02-18 18:47:48
0
false
en
2018-02-18
2018-02-18 18:51:27
0
14343379e9f3
1.996226
5
0
0
I myself have heard the words, “Artificial Intelligence is going to destroy humanity” many times. Whether I am at my job, with friends, or…
5
Why Artificial Intelligence IS NOT going to take over the world. I myself have heard the words, “Artificial Intelligence is going to destroy humanity” many times. Whether I am at my job, with friends, or online, it seems that many people are under the wrong impression that the new advent of robotics and the integration of concepts such as Computer Vision, Machine Learning, and Artificial Intelligence will somehow follow an exponential curve resulting in the singularity, the point in time when Artificial Intelligence meets the intelligence of an average human. Now, I’m not going to say that this will never happen. It definitely has a chance of happening in the future, but definitely not the near future. Let’s take a look at why. Right now, Artificial Intelligence is primarily geared towards Convolutional Neural Networks, Generative Adversarial Networks (GANs), and Deep Learning. We classify all of these different methods of machine learning as artificially intelligent, which they are, but only to an extremely limited degree. Whenever we code a GAN or a Neural Network, we are programming the system to learn over time, commonly segmented into what are known as epochs. However, the system is not learning by itself! We are forcing it to learn. One important facet of what distinguishes us as human beings is our emotion, our personal connection to the ideas thrown at us. If you were a kid and your mother told you to take out the garbage, you would not always take out the garbage. Maybe you would say, “Give me 10 minutes,” or start whining. What we are doing with robotics is automatically forcing the robot to learn, then automatically evaluating the machine afterwards. Now, some will argue that this is the reason the field is known as artificial intelligence: because it is artificial, not generally intelligent. That is a very strong point to bring up, but let’s consider what the word “artificial” really means. As an analogy, we can look at oranges and orange candy. An orange is not artificially flavored, but orange candy is. What makes orange candy artificial is that it does not occur naturally. Nevertheless, orange candy still tastes just like an orange, at least most of the time. We can use this analogy to draw a comparison to how we see Artificial Intelligence. If we wish to define a field as Artificial Intelligence, should it not carry out the same processes that regular intelligence does, simply without occurring naturally? That is why Artificial Intelligence must be redefined. Overall, it is easy to say that the true definition of Artificial Intelligence does not matter, but thinking that way is not gripping the problem by the horns and dealing with it. Until America sees that the current definition of Artificial Intelligence can never reach the singularity, our economy will not be able to move as fast as it is capable of. If we want the economy to keep growing at its current pace, we need to capitalize on true AI applications. ________________ If you enjoyed this article, please follow this Medium Channel to learn more about Tensorflow/Machine Learning
Why Artificial Intelligence IS NOT going to take over the world.
154
why-artificial-intelligence-is-not-going-to-take-over-the-world-14343379e9f3
2018-03-24
2018-03-24 12:22:34
https://medium.com/s/story/why-artificial-intelligence-is-not-going-to-take-over-the-world-14343379e9f3
false
529
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Discover Artificial Intelligence
“The world’s first trillionaire will be an Artificial Intelligence entrepreneur “— Mark Cuban, Billionaire Investor/Businessman
b2402d5edcde
learndiscoverai
29
152
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-13
2018-02-13 05:14:04
2018-02-13
2018-02-13 05:14:53
0
false
en
2018-02-13
2018-02-13 05:14:53
4
14343bc8b1d1
1.932075
0
0
0
Elevators don’t give a fuck about you or your safety. You wanna bet? Go ahead. Hold the door of the next elevator you step into…
1
Tech Tomorrow Elevators don’t give a fuck about you or your safety. You wanna bet? Go ahead. Hold the door of the next elevator you step into… Not with the button! It expects that. Place your breakable forearm in front of its unbreakable door. In position? Good. Now…your mission is simple…KEEP IT OPEN. At first, the elevator may play along — “conceding” to your blockade. Don’t be fooled. Elevators aren’t known to be patient. This one is no different. Moments pass. The door nudges your arm — a “suggestion” that you get a move on. Hold your ground! Don’t let this cold, metal box tell you what’s up. Put that door back in its place! Oh shit…yah you pissed it off. Play time’s over. It’s been summoned eight floors up and you — with your gooey center and your “free will” — you’re preventing the destiny it was programmed to fulfill. In a fit of “robotic rage”, the elevator screams in your ears with its high-pitched alarm before closing its door with unmatchable mechanical strength. If an arm is the price to pay to achieve existential purpose then so be it. All attempts to resist are futile. You’ve lost control. In fact…you never had any at all. Okay. Cards on the table. I don’t actually believe that EVERY elevator is like this. Some of them. DEFINITELY. But that’s beside the point. I chose an elevator to begin this blog because it illustrates two goals that are universal to most (if not all) human invention — past, present and future: autonomy and purpose. Let’s start with purpose. A simple meaning. The singular reason that something exists. And yet, many people find it difficult (if not impossible) to define their own. Sure. People have their faith. But faith is fluid. Faith can be questioned. Machines don’t have faith. Humans build purpose right into the body (hardware) and mind (software) of machines. This is a major distinction between people and their inventions. A topic I intend to discuss further. But for now… Let’s talk autonomy. Just as simply — the ability to act independently. Most inventions up to this point are merely semi-autonomous. A modern elevator requires some sort of directive input (i.e. up or down) before it will perform the task. The physical act of moving up and down is autonomous for an elevator. But because it requires some sort of external input, it is not fully autonomous. That said, elevators are old technology. The technology of today and tomorrow is a different story. So. Am I afraid of elevators? No…well at least not yet. If they start reading minds…we’ll talk. This blog is about the tech of today and tomorrow. What’s possible? What isn’t? Should we care? Should we be scared? I hope to explore all of these questions and more… I will also be sharing more hypothetical scenarios a kin to the intro of this post. Don’t worry! They will all be backed up with Academic Research and a machine-learning algorithm programmed to fact check. HAHAHAHAHA. I’m kidding? Right?
Tech Tomorrow
0
tech-tomorrow-14343bc8b1d1
2018-02-13
2018-02-13 05:14:54
https://medium.com/s/story/tech-tomorrow-14343bc8b1d1
false
512
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Will Milvid
null
c47441ef272a
willmilvid
0
1
20,181,104
null
null
null
null
null
null
0
null
0
49c33a77be27
2018-01-03
2018-01-03 17:51:19
2018-01-03
2018-01-03 17:57:34
1
false
en
2018-01-03
2018-01-03 17:57:34
12
143514f3352d
3.611321
4
0
0
We often hear that Artificial Intelligence is present in a lot of aspects of our daily lives. Is this true though? Not sure? Let’s…
5
10 uses of AI in your Everyday Life We often hear that Artificial Intelligence is present in a lot of aspects of our daily lives. Is this true though? Not sure? Let’s accompany Anna through her day and see whether this is true. 6:34 am Anna wakes up. We didn’t hear an alarm sound putting an abrupt end to her sleep. Instead, she is wearing a smartband on her wrist. This band recognizes her movement, her heart rate and other sensory data that form her sleeping patterns. It wakes her with a gentle vibration, exactly in a phase of shallow sleep, so that she feels refreshed and vital the moment she wakes. 7:15 am Anna likes to read some news while having breakfast. It is always good to know what’s going on in the world. She picks up her phone and opens up her news feed. Wait a minute. Didn’t she have to unlock her phone? Oh right. It uses a face recognition AI and automatically unlocks for authorized users. Isn’t that convenient? Furthermore, her news app has learned her preferences over time and only highlights those news that actually are relevant to her, which makes it very efficient for her to be up-to-date. 7:40 am The news have been too captivating so Anna is late for work. The only way she can manage to get there on time now is by hailing an Uber. She tells Alexa to call an Uber for her. Alexa carries out her command while Anna puts on her shoes and luckily, the ride is there almost instantly. How lucky she is. Or is it because Uber tries to predict the frequency of rides and the directions to be prepared for the upcoming demand? 7:42 am Anna rushes out of her home and into the Uber. She didn’t turn out the lights, nor did she check the heater. But that’s no problem. Her smart home appliances turn the lights out shortly after she has left and turn down the heating automatically. Still she can be sure that, despite the cold weather outside, her home will be warm and cozy when she returns. Her smart heating has learned the usual patterns of Anna being at home and in case that should be different one day or the other, Anna can still control everything remotely using an app on her phone. 8:05 am After she arrives at the office, she usually checks her email. One email says: “Easy trick to become a Bitcoin millionaire”. This message has somehow managed to pass the line of defense of her spam protection, which filters out spam, and reliably categorizes other emails like ads and social media. Anna marks the suspicious, previously unseen email as spam and the spam filter learns, refines its rules and is better prepared for future incoming mails. 10:54 am Anna receives an email from a business partner with a meeting request. She is glad to meet, and confirms the meeting. She puts her assistant Amy into cc to schedule the meeting. Amy, however, is not a real person. Amy is an AI connected to her calendar that carries out the communication with the business partner until a suitable time slot has been found. 11:46 am Anna receives a call from her bank. She is told that there has been a suspicious transaction on her credit card and this call is to verify that it was legit. She made a day-trip across the border a couple of days ago and instantly remembers that she had spent a fair amount shopping for clothes and verifies the transactions. The fraud detection system may have misfired here, but she can still be happy that it is keeping track of her expenses and alerts for any irregular and suspicious activities. 2:27 pm Suddenly, there is some movement in Anna’s home while she is busy at the office. 
Her vacuum cleaner has turned on and is now autonomously moving around her apartment, cleaning the floors. It uses machine learning algorithms to simultaneously build an internal map of the environment and localize itself in it (SLAM), which lets it plan better routes through the rooms. 4:31 pm Anna is alerted by her office chair that her sitting position is unhealthy, and the accompanying app shows her how to adjust it. The chair has built-in sensors, and from the distribution of Anna’s weight across them it can infer her sitting position and raise an alert whenever she sits badly or stays in one position for too long. 8:02 pm It has been a long and tiring day, and Anna just wants to lie back and watch something on Netflix. She started a new TV series yesterday and can’t wait to continue it today. Though she had never heard of the series before, it was recommended to her by the system, with personalized artwork that spoke to her. And what a good recommendation it was. Anna calls it a day, and that’s where we leave her to her series. These are just some of the many examples of how AI is changing our daily life. How many of these AI and machine learning cases do you recognize? How many do you use in your own life? Thanks for reading! If you enjoyed this article, please leave a 👏 Say Hello On: Twitter | LinkedIn
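The spam filter Anna trains at 8:05 am is, under the hood, a text classifier that improves as she labels messages. Below is a minimal, purely illustrative sketch of that idea, assuming scikit-learn's naive Bayes classifier and a tiny invented set of example emails; none of this code or data comes from the article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented toy training data: 1 = spam, 0 = legitimate mail
emails = [
    "Easy trick to become a Bitcoin millionaire",
    "Meeting request for Thursday at 11 am",
    "Claim your free prize now",
    "Quarterly report attached for review",
]
labels = [1, 0, 1, 0]

# Turn the raw text into word-count features
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Fit a simple naive Bayes spam classifier
clf = MultinomialNB()
clf.fit(X, labels)

# Each time a previously unseen message is marked as spam, it can be added
# to the training set and the model refit, which is the "learning" step.
new_email = ["Become a millionaire with this easy crypto trick"]
print(clf.predict(vectorizer.transform(new_email)))  # likely flagged as spam (1)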
10 uses of AI in your Everyday Life
19
10-uses-of-ai-in-your-everyday-life-143514f3352d
2018-04-10
2018-04-10 04:34:02
https://medium.com/s/story/10-uses-of-ai-in-your-everyday-life-143514f3352d
false
904
Insurance, Tech and Startups is our topic! #InsurTech the market!
null
InsurTech.vc
null
InsurTech.vc
insurtech-vc
INSURTECH,INSURANCE,VENTURE CAPITAL,SEED INVESTMENT,STARTUPS
insurtechvc
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Babak Ahmadi
Artificial Intelligence expert, practitioner, and enthusiast dedicated to delivering AI services to the insurance industry.
56410a6adf6f
babak.ahmadi
24
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-09
2018-02-09 15:58:59
2018-02-11
2018-02-11 17:24:41
2
false
en
2018-02-11
2018-02-11 20:05:02
1
1435798ddccb
1.666352
2
0
0
An exploration of Boston property assessment data 2017
4
Understand Boston Housing Properties With Data Science and Infographic An exploration of Boston property assessment data 2017 Why I started the project The Boston housing dataset from the 1970s has long been great material for data science beginners to play with. It is used in countless machine learning and data analytics trainings. However, the data is outdated and reflects hardly any aspect of today’s situation. I have lived in Boston since 2013 and have witnessed the tremendous growth of its housing market over the past few years. A great number of new buildings have gone up, including luxury condominiums, hotels, and public facilities. Therefore, I thought it would be a good time to take a deep look into the current properties in the Boston area and find answers to a few questions I had about the current market. How I started To begin my research, I grabbed property assessment data from the Boston government’s open datasets. This dataset includes property (parcel) ownership together with value information for the Boston area in 2017. For this particular dataset, the Boston area includes downtown Boston and its neighborhoods. The Boston area included in the dataset The dataset covers a total of 75 columns across 17 kinds of registered properties. However, as a resident of Boston who works in the city, I am more interested in residential and commercial properties than in properties such as factories and farms. I cleaned up the dataset, filled in empty strings, and found some very interesting facts about the market. Tell the story via infographics Here are five findings from this exploration Boston Property Assessment 2017 infographics Conclusion A design built on real data can tell stories. We live in an age of information explosion, and good design should convey information effectively. There are many possibilities for designing with data. As a designer, I would like to use the data and make it accessible to my audience. If you would like to explore more details about the data exploration, find the information here: https://www.kaggle.com/mistyhx/boston-housing-2017-exploration
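The load-and-filter step described above can be sketched in a few lines of pandas. The file name, the 'LU' land-use column, and the code values below are assumptions made for illustration; the post only says that the 17 registered property types were narrowed down to residential and commercial parcels and that empty strings were filled in.
import pandas as pd

# Hypothetical file name for the 2017 Boston property assessment export
df = pd.read_csv("property-assessment-fy2017.csv", dtype=str)

# Assumed land-use codes: keep residential and commercial parcels only
keep_codes = ["R1", "R2", "R3", "R4", "CD", "C", "CM"]
df = df[df["LU"].isin(keep_codes)]

# Fill empty strings so later grouping and counting do not break
df = df.replace("", pd.NA).fillna("UNKNOWN")

print(df.shape)
print(df["LU"].value_counts())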
Understand Boston Housing Properties With Data Science and Infographic
16
understand-boston-housing-properties-with-data-science-and-infographic-1435798ddccb
2018-04-01
2018-04-01 08:17:48
https://medium.com/s/story/understand-boston-housing-properties-with-data-science-and-infographic-1435798ddccb
false
340
null
null
null
null
null
null
null
null
null
Infographics
infographics
Infographics
6,355
XIN HU
UI/UX designer @ Pearson Education. I am a data driven designer who design to solve day to day problems. http://www.xinhudesign.com
7062a7b98cc8
huxin1
31
184
20,181,104
null
null
null
null
null
null
0
# Transfer learning: load Inception V3 weights, freeze the base, and
# attach a new 6-class softmax head for the image categories.
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense
from keras.models import Model
from keras.optimizers import SGD

base_model = InceptionV3(weights=None, include_top=True)
base_model.load_weights('/mnt/inception-v3_weights.h5')  # pretrained Inception V3 weights saved locally
for layer in base_model.layers:
    layer.trainable = False              # freeze the pretrained layers
x = base_model.layers[-2].output         # output just before the original 1000-class head
predictions = Dense(6, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

# Imports required for the spatial clustering and map visualization
import numpy as np
import pandas as pd
import folium
from shapely.geometry import Polygon, mapping
from sklearn.cluster import DBSCAN

# Spatial clusters based on the photo locations
# (metadata is the DataFrame of geo-tagged photo records)
data = metadata[['latitude', 'longitude']]
db = DBSCAN(eps=0.0007, min_samples=8, metric='euclidean', algorithm='auto')
db.fit(data)

# Visualization of clusters with shapely and geojson
coords = metadata[['latitude', 'longitude']].values
cluster_labels = db.labels_
# -1 is DBSCAN's noise label, so exclude it from the cluster count
n_clusters = len(set(cluster_labels)) - (1 if -1 in cluster_labels else 0)
clusters = pd.Series([coords[cluster_labels == n] for n in range(0, n_clusters)])

maploc1 = folium.Map(tiles='cartodbpositron', location=[40.678361, -74.019592], zoom_start=11)
for cluster in clusters:
    if len(np.unique(cluster)) <= 2:
        # too few distinct points to form a polygon
        print('bad cluster ' + str(cluster))
        continue
    inverted = [[x[1], x[0]] for x in cluster.tolist()]  # (lat, lon) -> (lon, lat) for GeoJSON
    ring = Polygon(inverted)
    ring_hull = ring.convex_hull
    folium.GeoJson(mapping(ring_hull)).add_to(maploc1)
    # print(ring)
    # print(mapping(ring_hull))
maploc1
6
null
2018-01-22
2018-01-22 23:46:55
2018-01-30
2018-01-30 01:36:28
2
false
en
2018-05-10
2018-05-10 05:58:17
9
1436d381ddff
4.775786
18
0
0
SNAPLOC is a product that I have created as a final project of a data science bootcamp, Metis. It does automatic image classification and…
5
SnapLoc — Places of interest in a city, from geo-tagged photos SNAPLOC is a product that I have created as a final project of a data science bootcamp, Metis. It does automatic image classification and spatio temporal analysis in order to recommend the places of interest for traveling in a new city. The idea came out of a need, as I find most of the current POI recommendation services very painful to use. I love to travel and whenever I have been to a new city and felt a need to know about the most interesting places to explore near me, I generally look up interesting photos on various photo services like Instagram or 500px. I mean the choice is between - “Hawk Hill is a 923-foot peak in the Marin Headlands, just north of the Golden Gate Bridge and across the Golden Gate strait from San Francisco, California. The hill is within the Golden Gate National Recreation Area” and Hawk Hill at sunrise (Photo Credit : David Yu) If you are around Golden Gate, I highly recommend going to Hawk Hill at sunrise as you would see something similar to the pic above. How did it all start? I started looking at the pictures that people are taking and posting it on various services and saw that most of these pictures convey our interests and can be categorized into food, natural scenes/scapes, urban scenes, wildlife, birds, etc. Further, I could see a pattern of places where there were more pictures taken than others and there were different kinds of pictures taken at different locations and different times of the day. The question that I wanted to explore using these spatio-temporal patterns was, how to use this data to build an application that could figure out how to parse them and make recommendations based on the preferences of user. I also referred this paper to check the feasibility of the idea. How is this application useful ? To recommend most interesting places based on user preferences. For eg: a small cupcake shop that everyone has been posting about but unless you search for it, you would never know about it. To the growing number of photo enthusiasts, I want to provide a way to see some great pictures of well known places. For eg. there is an awesome view of Golden Gate from Hawk’s point but many do not know that. And finally I want to take the location recommender a step further to suggest a travel itinerary for a new place that user wants to travel to. it is also a photo opportunity recommender and a travel itinerary recommender based on hotspots and landmarks. Damn, how do you implement this awesome idea? To accomplish this, I looked at some 1 million geo-tagged images on Flickr from Flickr API and classified them in common categories of interest. For first part of my pipeline, I trained a deep neural net of images, which would be able to classify any new image with high accuracy. Here the use of CNN also means that you can add more categories anytime in future and becomes an automatic classifier. CNN Classification: I did a convolutional neural network with Inception V3 and got 0.81 accuracy on classifying these 6 classes. The benefit of doing the CNN here is I could add onto these categories, like what if I want to explore the location for adventures/activities that are going on around me and use transfer learning to be able to classify that category to make recommendation for that category to users. For classification, I referred this notebook. 
I popped the last layer of the trained Inception V3 model and added my own labels instead. For the second part, I took the location data of these images and did density-based clustering, which gave me hotspots as geolocations. These are the places, along with their categories, that people post about a lot. I then ranked the images based on user preference signals, photo popularity, distance from the user, and time of day, and that’s how I created a personalized recommender of images based on category, location, and time. Spatial Clustering: For clustering, I used DBSCAN to build density-based clusters from latitude and longitude. I also had to convert these clusters into polygons to see their varying shapes and sizes, for which I used Shapely and convex hulls. Finally, I queried a KD-tree to get the nearest clusters and ranked them based on user preferences, distance, and time. This way I can offer users personalized locations that are relevant at that moment. I referred to this GitHub notebook for the spatial cluster analysis. I ran the DBSCAN clustering algorithm (DBSCAN helps remove outliers and bad clusters) to get density clusters of image locations; you can see the result in the map below. How will the application look? I also made a demo of my application using Flask: you enter the latitude, longitude, and time, select the cluster of photos you want to look at on the first page, and then, by clicking on that cluster, see the images in that category. As with the Golden Gate, here is another example we can look at. What did I miss out on? If there had been more time, I could have made the application more personalized based on user profiling and interests, and given suggestions based on, say, just wildlife photography. I could also complete the itinerary creation for a new place: a map of landmarks and hotspots that can be covered in a day based on the current location and the amount of time a person wants to spend in that area. And who says that only Flickr’s geotagged images can be used? The whole world is open, and I could pull tagged images from Instagram, Facebook, and many other sites to do the same. Here is the link to the GitHub repo for this project: kalgishah02/SnapLoc SnapLoc is a product that does automatic image classification and spatio-temporal analysis in order to recommend the…github.com
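The KD-tree querying step is mentioned above but not shown in the post's code, so here is a rough sketch of what it might look like, assuming each DBSCAN cluster is summarized by a centroid; the centroid coordinates and variable names are hypothetical, and Euclidean distance on raw latitude/longitude is only an approximation that works acceptably at city scale.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical cluster centroids (latitude, longitude) derived from DBSCAN output
cluster_centroids = np.array([
    [40.6892, -74.0445],
    [40.7587, -73.9787],
    [40.7061, -73.9969],
])

# Build the KD-tree once, then query it for every user request
tree = cKDTree(cluster_centroids)

# Find the two hotspot clusters closest to the user's current position
user_location = [40.7128, -74.0060]
distances, indices = tree.query(user_location, k=2)
print(indices, distances)  # indices into cluster_centroids, nearest first
The returned indices could then be re-ranked by the user's category preferences and the time of day, as described above.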
SnapLoc — Places of interest in a city, from geo-tagged photos
86
snaploc-personalized-recommender-for-points-of-interest-in-a-city-1436d381ddff
2018-05-10
2018-05-10 05:58:18
https://medium.com/s/story/snaploc-personalized-recommender-for-points-of-interest-in-a-city-1436d381ddff
false
1,164
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Kalgi Shah
Data Scientist
7579e12f79c0
kalgishah
47
62
20,181,104
null
null
null
null
null
null
0
null
0
34ff12aadef0
2018-07-18
2018-07-18 07:04:12
2018-07-18
2018-07-18 07:07:07
1
false
en
2018-07-18
2018-07-18 07:07:07
3
14398ff6b63f
0.796226
0
0
0
One innovation above all is shaping the future of the financial industry — Artificial Intelligence (AI). The data-rich sector presents huge…
4
AI-the Next Breakthrough in Banking One innovation above all is shaping the future of the financial industry: Artificial Intelligence (AI). The data-rich sector presents huge opportunities for AI adoption, and harnessing the technology will help banking institutions gain a competitive advantage. There is one area in which banks are arguably adopting AI faster than elsewhere: the management of unstructured data. This unstructured data can come from emails, news articles, legal documents, or recorded telephone conversations. For instance, JP Morgan Chase, an American multinational investment bank and financial services company, recently introduced a platform known as Contract Intelligence (COiN). The usual process of manually reviewing annual commercial credit agreements was time-consuming. By implementing the COiN platform, which runs on a machine learning system powered by the bank's private cloud network, the documents can be analyzed and reviewed in a matter of seconds. Click here to read more.
AI-the Next Breakthrough in Banking
0
ai-the-next-breakthrough-in-banking-14398ff6b63f
2018-10-25
2018-10-25 13:10:33
https://medium.com/s/story/ai-the-next-breakthrough-in-banking-14398ff6b63f
false
158
An easy access to Banking trending News
null
null
null
Banking Technology
banking-technology
BANKING TECHNOLOGY,FINTECH,ANALYTICS,MOBILE BANKING,RISK MANAGEMENT
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
John Stones
Technology Expert
989533288ce3
john_stones
17
58
20,181,104
null
null
null
null
null
null
0
null
0
16bb545ae704
2018-07-27
2018-07-27 10:10:03
2018-08-03
2018-08-03 08:01:01
3
false
en
2018-09-27
2018-09-27 11:42:41
3
1439fcd0f553
3.463208
1
0
0
Finding a name is always momentous to a firm, and we wished to bring you behind the scene of choosing Danae Human Intelligence.
5
Danae Human Intelligence: The Story Behind the Name Titian, Danaë receiving the Golden Rain, 1560–65, oil on canvas, © Museo del Prado, Madrid. Finding a name is always a momentous step for a firm, and we wanted to take you behind the scenes of choosing Danae Human Intelligence, a name that embraces our values and goals. Danae comes from a classical myth that has inspired artists of every period. As told in Ovid’s Metamorphoses, Danae was a princess of Argos whose father, Acrisius, imprisoned her in a bronze tower to keep her from falling pregnant after an oracle had predicted that his grandson would kill the king. In the end, the king brought about his own fate. The mystery surrounding Danae’s outstanding beauty and seclusion appealed to Zeus, who took the form of golden rain to reach the young woman and seduce her. From this union their son, Perseus, was born. Feeling once more the urge to act upon destiny, Acrisius set his daughter and her newborn son adrift at sea in a chest, but Zeus and his brother Poseidon ensured the pair reached the shore. When Perseus reached adulthood, he fulfilled the prophecy, unknowingly killing his grandfather during an athletic contest. First depicted on antique ceramics, Danae came to be represented more and more throughout art history. While religious paintings were of the highest importance at the very beginning of the Renaissance, the 16th century saw the coining of hierarchies in figurative art, placing historical and mythological paintings as alternative subjects of predilection. In turn, the Old Flemish master Jan Gossaert depicted Danae in the manner of an annunciation, wearing blue cloth with sun rays above her head representing the Spiritus Sanctus, while his Italian counterpart Antonio da Correggio showed her with cupids in a similarly divine vision of procreation. Other Renaissance artists like Titian and Tintoretto departed from strictly religious subjects, with depictions of Danae increasingly highlighting her nudity and featuring gold coins instead of rain (most likely foreseeing our use of Danaes, the platform-exclusive tokens for the purchase of digital artworks’ intellectual creation and editions). Gustav Klimt, Danaë, 1907–1908, oil on canvas, © Leopold Museum, Vienna. To many artists, narratives around Greek myths made it possible to represent female nudes and erotic scenes while avoiding a direct connection to reality, drawing the viewer’s eye to sensuousness without controversy. Danae stood as the perfect subject because of the suggestive nature of her immaterial fecundation. Depictions gained in eroticism in the 18th century in works by artists like Jean-Baptiste Greuze and Anne-Louis Girodet. Ultimately, the Argive princess appeared in undiluted sensuousness in the representations of Gustav Klimt and Egon Schiele, and her contemporaneity was made visible once again with Anselm Kiefer’s Danae sculpture displayed at the Louvre in 2007. Over time, her unparalleled beauty and grace led her to symbolize the arts, creation, aesthetics, and taste. What about Human Intelligence? The acronym is just as important to the name, bringing a contemporary, digital twist to the almost divine patroness of the arts. Artificial Intelligence is a vital topic covered extensively by the media. With ominous bets on the disappearance of humanity and the rise of the machine, it is easy to understand the excitement around this futuristic technology. 
Sculptor Jack Burnham’s thoughts on art and technology perhaps best capture the quest entailed by artificial intelligence. As he explained in his 1968 book Beyond Modern Sculpture, “Behind much art extending through the Western tradition, exists a yearning to break down the psychic and physical barriers between art and living reality — not only to make an art form that is believably real, but to go beyond and furnish images capable of intelligent intercourse with their creators.” Justine Emard, Co(AI)xistence, short film, 2017. But while computer-generated artworks, such as digital paintings, videos, photography, and media involving VR and AR, are disconnected from the artist’s hand, that hand’s impact is still very present in the pieces created. As art historian Christiane Paul wrote, “It has been suggested that the creation of artworks such as paintings or drawings on a computer implies a loss of relationship with the “mark” — that is, that there is a significant lack of personality in the mark one produces on a computer screen as opposed to one on paper or canvas.” She counters this, saying, “Concept, all elements of the composition process, the writing of software, and many other aspects of digital art’s creation are still highly individual forms of expression that carry the aesthetic signature of an artist.” The mark of a digital artist is fully connected to their creativity, emotional understanding, and thought process, which a computer cannot replicate. While the relationship between human and machine is complementary, at Danae HI we see computer engineering as a tool for empowering artists to create anew. Accordingly, “Human Intelligence” captures the artist’s intrinsic value in technology-based environments.
Danae Human Intelligence: The Story Behind the Name
15
danae-human-intelligence-the-story-behind-the-name-1439fcd0f553
2018-09-27
2018-09-27 11:42:41
https://medium.com/s/story/danae-human-intelligence-the-story-behind-the-name-1439fcd0f553
false
772
Highlighting current trends and history of digital art and design weekly
null
DanaeHumanIntelligence
null
DIGITAL ART WEEKLY by Danae HI
digital-art-weekly-by-danae-hi
DIGITAL,ART,DESIGN,CONTEMPORARY,DIGITAL ART
Danae_HI
Art
art
Art
103,252
Danae HI
Danae Human Intelligence is a capital market for the appreciation of digital creation and its copyright management. A project powered by Laffy Maffei Gallery.
9e5606da9b1a
danaehi
15
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-19
2018-05-19 08:27:15
2018-05-19
2018-05-19 14:46:22
1
false
en
2018-05-30
2018-05-30 09:12:37
0
143a5b82dbf1
3.660377
0
0
0
Since the announcement of Google Duplex, I am left baffled by the critical responses to this huge artificial intelligence (AI) development…
5
Why you should not be left perplexed by Google Duplex Since the announcement of Google Duplex, I am left baffled by the critical responses to this huge artificial intelligence (AI) development. The doom scenario‘s entered my feeds quicker than booking a haircut appointment. And for me — that‘s quite a thing, mhmm. What struck me most is the sentiment of fear for an AI voice diminishing social interaction and inhibiting human behavior. Preparing for all kinds of scenario‘s where people are doomed is a healthy psychological response to a world changing so quickly around us. Predictions about the future, 5 years from now, are sometimes not even possible. And we, human beings as we are, are extremely sensitive to the idea of ‚knowing what‘s coming‘. However skepticism based on fear is a wrong starting point for understanding AI. Currently we are still operating, working and designing in a time where experimenting with AI occurs in the broadest sense possible. The accumulation of data in the last decade and the new possibilities that it has created call for unprecedented exploration. Whilst knowing this, many skeptics now call for defining the boundaries of AI exploration and production. In my opinion, the latter is still mainly based on science fiction and fear for the unknown than on scientifically proven track-records of ‘things gone wrong’. Anticipating for the worst case scenarios is possible when these scenario’s are at least a little quantifiable. But in our current AI era of exploration and endless possibilities, frankly, we cannot even motivate which scenario’s have a realistic possibility to manifest themselves. Thus adopting unidentified boundaries today just doesn’t make sense. Having this in mind, I am a supporter of focusing on the current potential of exploration. Drawing conclusions from Google Duplex, as individuals and businesses we should invest our energy in defining, in particular, which services are suitable to be handed over to AI. In parallel, these should be clearly distinguished from services where we wish to keep intact a certain degree of human interaction. Where should human conversations not be replicated? I should devote a separate story on this topic, but I am sure that we could all define some areas in which AI has already proven unmissable. Duplex is taking us a step further in that future. On this note, don’t get me wrong. I am definitely in love with the nostalgic statement on my tasty bag of chips — “Baked by Natasha!”. I am one of those examples — self-aware AND sensitive to marketing tricks. But at the end of the day I’m pretty sure that Natasha didn’t even press the button of the robotic arm which transferred 300 grams of 1 ton of freshly baked, bio potato slices to my couch on Saturday evening. Nostalgia will not help us cope with the current pace of AI development. Moreover, the topic whether Duplex should announce that it is a virtual assistant and not a real human (to the person on the other side of the line) is very hot NOW. However, it it will become obsolete and redundant by the time when 85% of appointments interaction is done by Alexa, Google Home or Siri. It is just like I fancy the idea that a lot of effort has been put by a human in reaching that 90% level of crisp. But eventually, I am 100% aware and perfectly content with the fact that it was produced by Natasha’s much bigger and advanced clone sister. I am considering Google Duplex as the one of the first ultimately successful attempts to a smart economic distribution of resources. 
One in which daily, repetitive and often socially frustrating tasks (especially if the human being on the other side of line has a bad day), are successfully executed. Meaning that whilst others teach the algorithms, I, as an individual, will gain valuable time to keep developing in my fields of interest, and ultimately — in becoming a better human. Equally important is this distribution in fields where the development of emotional intelligence skills is crucial, for example for our children. Replacing the human by a virtual assistant triggers the aspect of developing communication skills. Obviously, these can be thought by other means and through other daily tasks. Thus I doubt that the ‘communication between humans’ argument is a strong one in relation to AI. Recently, particularly this assumption has been challenged more than ever by waves of (especially young) people opposing the idea of always being connected by focusing consciously on human communication (I fancy the concept of the phone free bar too!). So there is still hope! But I bet that if you are a parent — you would prefer to focus on teaching your kids the much more demanding examples of emotional intelligence (such as empathy, motivation, self-awareness and self-regulation) instead of spending even more time for that haircut. Approaching AI questions from a state of positive criticism (“What services are suitable for AI?”) instead of pre-defined emotions as “AI is scary!”, will step-by-step and eventually steer individuals, businesses and decision-makers to a development of some realistic boundaries of AI application. Meanwhile I will let myself be positively overwhelmed by Google Duplex, but not frightened, whilst enjoying the highly-advanced baking of Natasha. This is my first story on Medium. I am here to share my views and openly exchange them for others. Especially if you dislike my content, keep reading and drop me a comment. ;)
Why you should not be left perplexed by Google Duplex
0
why-you-should-not-be-left-perplexed-by-google-duplex-143a5b82dbf1
2018-05-30
2018-05-30 09:12:38
https://medium.com/s/story/why-you-should-not-be-left-perplexed-by-google-duplex-143a5b82dbf1
false
917
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Kristina Kovatcheva
Privacy geek, tech fan, ethics explorer, innovation enthousiast
3dfa2032ad2d
kristinakovacheva_31162
7
35
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-03
2017-11-03 16:13:54
2017-11-03
2017-11-03 16:30:31
1
false
en
2017-11-03
2017-11-03 16:30:31
26
143abdccfde8
3.837736
0
0
0
(As seen on Forbes.com)
5
Artificial Intelligence In Drug Discovery: A Bubble Or A Revolutionary Transformation? (As seen on Forbes.com) Artificial intelligence (AI) has become a hot topic in the area of life sciences lately. With a growing number of groundbreaking AI use cases in other hi-tech industries — ranging from self-driving cars to speech and image recognition tools to personal assistants (you know Siri, don’t you?) — players in the biopharmaceutical industry are looking toward AI to speed up drug discovery, cut R&D costs, decrease failure rates in drug trials and eventually create better medicines. And while it’s clear there is a fair amount hype surrounding AI, the public has already developed a degree of skepticism about its real value. To realistically assess the current state of the AI field in biopharma, let’s reveal some of the historical drivers of progress in this area. Despite a halo of futurism, the field of AI is relatively old, with its official birth at a famous Dartmouth College conference in 1956. Fifteen years later, AI made its way to the medical field with growing public interest and hype. Several biomedical AI-based systems were developed during the 1970s, including Internist-1, CASNET and MYCIN. The high expectations were not met, however, and in 1973, Sir James Lighthill officially described the AI field as a total failure. Government and public interest in AI cooled down, which led to shortages in research funding. In the early 1980s, the interest was regained due to the creation of AI-based “expert systems,” which were quickly adopted worldwide. The first system of this kind, XCON, was a staggering commercial success that led to a multimillion-dollar industry by 1985. Two years later, the market of specialized AI products collapsed and the so-called “AI winter” prevailed with little hope for the field to ever re-emerge as a mainstream topic. Computer scientists would often avoid any associations with AI so as not to be regarded as “wild-eyed dreamers,” as John Markoff wrotein the New York Times in 2005. The downturns in the AI progress in the late 1970s and 1980s were due to an immature technological environment. Only in the early 1990s did the field of AI gradually advance and soon flourished on the waves of exponential growth in computational power (Moore’s law), data communication (the internet), cloud technologies (Salesforce, AWS, EC cloud, apps, etc.) and big data (aka the “big data revolution”). Here is a nice illustration that shows the availability of sufficient data to train AI models is crucial for breakthroughs in various fields. In 1997, Deep Blue, IBM’s supercomputer, defeated Garry Kasparov in chess, and this marked the first in what would be a series of milestones for AI. Since 2012, the progress in AI jumped exponentially after a major breakthrough in deep learning of neural networks. That year, the press was shaken by a wave of publications about how a computer managed to identify a cat profile from a large series of YouTube videos without any prior instructions about cats. Big Pharma’s Bet On AI AI has exciting opportunities to prosper in the biopharmaceutical field. The advances in combinatorial chemistry in the 1990s generated many millions of novel chemical compounds for testing as possible drugs. This stimulated the development of different high-throughput screening (HTS) techniques to perform such testing in relatively short terms, generating numerous public and private databases of compound bioactivities and toxicities. 
Simultaneously, a rapid progress in biology unfolded in the 1990s with advances in gene sequencing and “multi-omics” studies leading to the accumulation of billions of data points describing genes, proteins, metabolites and mapping interconnections between different biochemical processes and their phenotype manifestations. The availability of big data in life sciences and a rapid progression in deep neural networks led to a wave of AI-based startups focused on drug discovery sweeping through the biopharma industry over the last three years. A number of significant AI-big pharma collaborations were announced in 2016–2017, including Pfizer and IBM Watson, Sanofi Genzyme and Recursion Pharmaceuticals, and GSK and Exscientia, among others. Sifting Through The Hype While multiple media outlets continue to rave about how AI is the future of healthcare, this technology has yet to prove itself in the biopharmaceutical industry. As of today, there are no AI-inspired, FDA-approved drugs on the market. Also, it is important to realize that while AI-based data analytics can bring innovation at every stage of drug discovery and during the development process, this data will not magically serve as a substitute for chemical synthesis, laboratory experiments, trials, regulatory approvals and production stages. What AI can do, though, is optimize and speed up R&D efforts, minimize the time and cost of early drug discovery, and help anticipate possible toxicity risks or side effects at late-stage trials to hopefully avoid tragic incidents in human trials. It can help incorporate knowledge derived from genomics and other biology disciplines into drug discovery considerations to come up with revolutionary ideas for drugs and therapies. AI failed to deliver on its promise in the 1970s and 1980s, but now the situation is fundamentally different. The industry now has access to sufficient computational powers, commercially available cloud-based services, and it possesses big-chemical and biological data to train models. The current technological environment in conjunction with existing deep learning techniques provides exciting opportunities for major industry disruptions in the next three to five years. The necessity to start embracing AI technologies and revamping human resource strategies to create data science-driven interdisciplinary teams has become a matter of the future business sustainability for biopharma organizations. Call To Action If you want to follow latest trends and insights in drug discovery and life science innovation, please subscribe to receive monthly updates from BiopharmaTrend.com
Artificial Intelligence In Drug Discovery: A Bubble Or A Revolutionary Transformation?
0
artificial-intelligence-in-drug-discovery-a-bubble-or-a-revolutionary-transformation-143abdccfde8
2018-05-02
2018-05-02 18:07:56
https://medium.com/s/story/artificial-intelligence-in-drug-discovery-a-bubble-or-a-revolutionary-transformation-143abdccfde8
false
964
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Andrii Buvailo
Editor @ BiopharmaTrend.com
c323fbdc69bb
ABuvailo
75
339
20,181,104
null
null
null
null
null
null
0
import pandas as pd
import matplotlib.pyplot as plt
import oauth2client.client, oauth2client.file, oauth2client.tools
import gspread
from IPython.display import display

# {CHANGE} STRICTLY BYOC [Bring your own credentials]
client_id = "XXXXXXXXXXX....apps.googleusercontent.com"
client_secret = "XXXXXXXXXXXXXXXX"

# OAuth flow adapted from StackOverflow: authorize access to the Google Sheet
flow = oauth2client.client.OAuth2WebServerFlow(client_id, client_secret, 'https://spreadsheets.google.com/feeds')
storage = oauth2client.file.Storage('credentials.dat')
credentials = storage.get()
if credentials is None or credentials.invalid:
    import argparse
    flags = argparse.ArgumentParser(parents=[oauth2client.tools.argparser]).parse_args([])
    credentials = oauth2client.tools.run_flow(flow, storage, flags)
gc = gspread.authorize(credentials)

# {CHANGE} Azi Update (Responses) is the name of my Google Sheet
sheet = gc.open("Azi Update (Responses)").sheet1

# Save the sheet to a DataFrame
df4 = pd.DataFrame(sheet.get_all_records())

# Print the number of records loaded
print("loaded spreadsheet with", len(df4), " records")

# Hopefully this would not apply to your case.. but I had to change a bunch of friggin column names
df4 = df4.rename(columns={
    "Dryness around Eyes": "eyedry",
    "Ear Flapping": "earflap",
    "Food (prior to stools)": "foodPrior",
    "Exercise": "exercise",
    "Hair Loss": "hairloss",
    "Motion (Stools)": "stools",
    "Redness of Eyes": "eyered",
    "Swelling of Eyes": "eyeswell",
    "Today's Date": "date",
    "Motion Description": "stoolDesc",
    "Water Intake": "water",
    "Watering of Eyes": "eyewater",
    "Wiping of Eyes": "eyewipe",
    "Supplements": "supplements",
    "Upload a picture of Azi's face": "aziPic",
    "Temperament": "temperament",
    "Paw Chewing": "pawchew"})

# Keep the date plus the symptoms we want to plot
fr = df4[['date', 'earflap', 'hairloss', 'eyered', 'eyeswell', 'eyewater']].copy()
display(fr.head())

# Reshape to long format so each symptom becomes its own series
bd1 = pd.melt(fr, id_vars='date',
              value_vars=list(fr.columns[1:]),  # list of symptoms
              var_name='Symptoms', value_name='Severity')
bd1.pivot(index="date", columns="Symptoms", values="Severity").plot(figsize=(16, 4))

# Average the symptom values into a new score column
fr['score'] = fr[['earflap', 'hairloss', 'eyered', 'eyeswell', 'eyewater']].mean(axis=1)

# Get rid of the columns we don't need to plot
fr.drop(['earflap', 'hairloss', 'eyered', 'eyeswell', 'eyewater'], axis=1, inplace=True)

# Go ahead and plot the combined score over time
plt.plot(pd.to_datetime(fr['date'], format='%m/%d/%Y %H:%M:%S'), fr['score'])
15
null
2018-01-24
2018-01-24 04:23:08
2018-02-03
2018-02-03 07:36:01
5
false
en
2018-02-03
2018-02-03 07:36:01
3
143b8a51ed12
5.852201
2
0
0
“There are two kinds of pain. The sort of pain that makes you strong, or useless pain, the sort of pain that’s only suffering. I have no…
1
Visualizing my dog’s Atopic Dermatitis — Part 3 (Exploring data in Python and data think strategies) “There are two kinds of pain. The sort of pain that makes you strong, or useless pain, the sort of pain that’s only suffering. I have no patience for useless things.” said Frank Underwood in the opening scene of House of Cards as he went on to put Whartons’ dog out of his misery. For Frank, the choice was obvious. Prolonging the experience of life for the dog was inhuman. But for dogs that suffer from Atopic Dermatitis, things get a lot more blurry. For the uninitiated, here is the backstory. Now for the serious business of trying to make sense of it all. Before diving into the data, I’d like to go through my approach of thinking through the process and see what role data can play in achieving this goal. The objective of the project is to do three main things: Hindsight: Track relevant symptoms, medications/supplements and food intake on a daily basis. Insight: Visually describe patterns of symptoms overlaid by medications/supplements as well as food intake to try and get a sense of the trends of all the variables. Foresight: Use AI’s predictive capabilities in helping define the best dietary and medical supplementation to manage Azlan’s symptoms throughout the year. The Data Now for the daily form fields that we fill out. Timestamp: Auto Generated field from Google forms that timestamps the entry of the form Today’s Date: Filled daily, but on the odd occasion, we have had to use it for a few retroactive entries Hair Loss (Scale 0–5): Linear scale for hair loss around face (aka Alopecia) Watering of Eyes (Scale 0–5): On days when experiencing a flare up, Azlan’s eyes tend to pool with water Wiping of Eyes (Scale None, Intermittent, Frequent, Crazy): This typically occurs when there is some watering around the eyes Redness of Eyes (Scale 0–5): This should probably read redness of skin around eyes, but it is the most reliable indicator of the state of allergy for Azlan Swelling of Eyes (Scale 0–5): This symptom tends to show up around the time of a flare up and becomes pretty bad during a flare up Dryness around Eyes (Scale 0–5): Azlan normally dries up around the eye after a flare up that has required a corticosteroid Paw Chewing (Scale 0–5): While not always connected to his AD, we found this an interesting symptom to track and it seemed to be food related Motion (Stools) (Scale 0–7): If you are feeling queasy with this, skip the next one. 4 is ideal, so from an analytical perspective, this symptom has a non-zero optimum score. Motion Description: Well… exactly what the column name suggests Food (prior to stools): Comma separated list of food ingested Water Intake (Scale 0–5): Another non-zero optimum field. We think around 3 is just right. Exercise (Scale 0–10): Another non-zero optimum field. We aim for 6 everyday, which would equate to 2 hours of walking in a day. Upload a picture of Azi’s face: A daily picture of Azlan’s face Supplements: Comma separated list of supplements and medications consumed in a day by Azlan. Temperament (Balanced | Imbalanced): This should be a binary choice but on some days, Azlan is imbalanced during the start of the day and finds his balance towards the evening. Ear Flapping (Scale 0–3): Also a harbinger for flare ups. Typically, this is a leading indicator for a flare up. Irritability (Scale 0–9): On bad days this score would be close to 8. Itchiness (Scale 0–9): The higher the score, the worse everything else was, including itching. 
Also, the more we've had to intervene. Hot Spots (Scale 0–5): We've seen this symptom show up and then go away — we need to know which supplement likely helped, or whether it was a change in food. Allergens (Scale 0–10): We do have a suspicion that there is a positive correlation between dust and dander levels in Calgary and Azlan's allergy symptoms. Anything of note: Noteworthy events of the day (if any) Let's get started. Live link the Google Sheet Data into a Python Notebook Here is the code that works for my scenario. Remember to install these libraries (pandas, oauth2client, gspread) in your Anaconda environment prior to running the notebook. I've posted this as a GitHub Gist, including a link to the Excel file, so you can at the very least run this report and see what I found. Visualizing Symptom Severity (Matplotlib) The key indicators to gauge the severity of symptoms we'd want to track are: Note: We renamed the columns in the step above earflap (Ear Flapping) hairloss (Hair Loss) eyered (Eye Redness) eyeswell (Swelling around the eyes) eyewater (Watering from the eyes) Once we have the data in a dataframe "df4", we want to focus on the indicators above. We can create a copy of df4 with just the columns we want and assign it to a new dataframe "fr". We can view what the "fr" dataframe looks like using the "display" command from the IPython display library. Plotting these values on a chart is useful for getting a sense of the trend. However, to plot multiple lines we need to change the structure of the data using pandas' "pd.melt" method, a reverse pivot. Here is the code to make this happen. Then plotting with Matplotlib is pretty easy Allergy Symptom Time Series Reorganizing Data through Grouping & Aggregation While studying the chart above, I found there to be some clutter. I realized that a number of variables could be logically grouped so as not to significantly reduce the resolution of the data. For example, Symptom Severity could be represented as one value — the sum of "Eye Redness", "Eye Swelling", "Watering of Eyes", "Ear Flapping" and "Hair Loss" — to represent the severity of symptoms on any given day. This would help reduce the clutter of numbers when looking for patterns. Ok, let's visualize this. Similarly, factors influencing this score could be grouped as Environmental (Allergens, Temperature, Humidity), Food and Medication/Supplements. Lastly, annotations (Notes) could provide insight into what events may have played a role in exacerbating or helping with Azlan's Symptom Severity. For instance, on Nov 24th, Azlan managed to scratch his right eye before we could intervene. This led to a swelling that took almost a week to settle down. I'll look at Food and Meds/Supplements in the next post.
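For readers who want to try the reshaping step without the Google Sheets plumbing, here is a minimal sketch of the same melt-pivot-plot pattern on made-up data; the dates and severity values are hypothetical, and only three of the renamed symptom columns are used.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical toy data using a few of the renamed symptom columns from the post
fr = pd.DataFrame({
    "date": pd.to_datetime(["2017-11-01", "2017-11-02", "2017-11-03"]),
    "earflap": [1, 2, 0],
    "hairloss": [2, 3, 1],
    "eyered": [3, 4, 2],
})

# Reverse-pivot (melt) so each symptom/severity pair becomes its own row ...
long_form = pd.melt(fr, id_vars="date",
                    value_vars=["earflap", "hairloss", "eyered"],
                    var_name="Symptoms", value_name="Severity")

# ... then pivot back to one column per symptom and plot the time series
long_form.pivot(index="date", columns="Symptoms", values="Severity").plot(figsize=(10, 3))

# Collapse the symptoms into a single daily severity score (mean across columns)
fr["score"] = fr[["earflap", "hairloss", "eyered"]].mean(axis=1)
plt.plot(fr["date"], fr["score"])
plt.show()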
Visualizing my dog’s Atopic Dermatitis — Part 3 (Exploring data in Python and data think…
51
visualizing-my-dogs-atopic-dermatitis-part-3-exploring-data-in-python-and-data-think-143b8a51ed12
2018-02-09
2018-02-09 20:34:10
https://medium.com/s/story/visualizing-my-dogs-atopic-dermatitis-part-3-exploring-data-in-python-and-data-think-143b8a51ed12
false
1,330
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Ash S
SAP Architect, dot com entrepreneur, literature grad, 1 time copy writer, 1 time tv production guy, movie buff, 6 hcp golfer, geek dabbling in AI
eae3284d1f13
wiredash
4
3
20,181,104
null
null
null
null
null
null
0
null
0
65bf09a89a0b
2017-11-13
2017-11-13 15:17:20
2017-11-13
2017-11-13 15:26:21
3
false
en
2018-03-12
2018-03-12 09:44:13
1
143cdcbb9912
1.738679
1
0
0
Part of U+’s expansion into the US market is the start-up AlertMe. Its creation was a cooperation between the U+ studio and two Americans…
3
It's About Time We Alerted You to AlertMe. Part of U+'s expansion into the US market is the start-up AlertMe. Its creation was a cooperation between the U+ studio and two Americans, investor Walter Burr and publishing expert Adam Shapiro. AlertMe aims to help publishers generate more relevant content and reduce their dependence on social media. More specifically, AlertMe helps filter information overload from the web by personalizing content. Readers and publishers both benefit, as readers can control the content they receive, while publishers can give readers more of what they want. 'The overwhelming amount of news that comes to the reader today, along with Facebook's growing involvement in content publishing, is currently threatening the existence of smaller publishers,' said Jan Beránek, U+ founder and CEO. Similar to Spotify, the relevance of content to each user is determined by a constantly learning application algorithm. It uses artificial intelligence and data collection, processing natural language and contextual semantics. In addition, U+ developed tailor-made AlertMe solutions. These involve UX analysis, graphic design, frontend development, and a machine-learning-based backend. U+ also provided infrastructure on Amazon Web Services and API functionality. Publishers who use AlertMe, like Inc.com, have been seeing increased traffic and engagement on their sites. The open rate of AlertMe notifications is 50%, a significant improvement on publishers' usual newsletter open rates. Publishers are now reclaiming some of their power from social media channels like Facebook, on which, by necessity, they've become far too dependent. By using AlertMe, publishers can enjoy the acquisition of first-party data and a new revenue stream as they deliver their highest-performing CPM content. Specific information is what's important to each individual, and AlertMe is the only way to track granular interests and deliver extreme relevancy. It's no surprise, but if you deliver relevancy to people, they will open it. Consider yourself alerted to custom content! See more here.
It’s About Time We Alerted You to AlertMe.
1
its-about-time-we-alerted-you-to-alertme-143cdcbb9912
2018-03-12
2018-03-12 09:44:15
https://medium.com/s/story/its-about-time-we-alerted-you-to-alertme-143cdcbb9912
false
315
U+ is a global digital product development company and a tech investor. We turn ideas into real life opportunities. http://www.u.plus #exploredigital
null
usertech
null
U.plus
u-plus
USER EXPERIENCE,PROGRAMMING,DESIGN,TECHNOLOGY,STARTUP
usertech
Machine Learning
machine-learning
Machine Learning
51,320
U+
We write about building startups and digital products, the future of technology, and how to live a technologically better life. Enjoy!
575dba62c180
usertech
38
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-24
2018-02-24 11:39:22
2018-02-24
2018-02-24 12:07:01
0
false
en
2018-02-24
2018-02-24 12:09:40
3
143d280c3ae9
0.516981
1
0
0
In the age of so many useful packages like Tidyverse, we tend to forget that for some basic operations, base R’s functions can be just as…
1
Using base R to import and clean data In the age of so many useful packages like the Tidyverse, we tend to forget that for some basic operations, base R's functions can be just as useful, especially when you have no data connectivity and forgot to download your favourite packages before walking out of the door 😖. So, through our KampalR group, we took the conscious decision to walk our members through a basic one-hour tutorial on how to use a few of R's functions to import and clean data, using a fairly clean dataset from data.ug on PLE Results in Uganda from 2010–2016. We documented our walkthrough in R Markdown. Clone the repo and let us know what you think! kampalr/meetup_2 meetup_2 - Files used for the 2nd R meetup 24 February 2018github.com
Using base R to import and clean data
1
using-base-r-to-import-and-clean-data-143d280c3ae9
2018-02-24
2018-02-24 16:37:27
https://medium.com/s/story/using-base-r-to-import-and-clean-data-143d280c3ae9
false
137
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Robert Muwanga
“In God we trust. All others must bring data.” — W. Edwards Deming, statistician, professor, author, lecturer, and consultant.
66c65a8a4a09
robertmuwanga
11
45
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-26
2018-09-26 15:41:06
2018-09-26
2018-09-26 15:51:32
2
false
en
2018-09-26
2018-09-26 15:51:32
6
143d5404396d
3.387107
0
0
0
Written by Janet Hwu, User Experience Designer at Shield AI
5
Three Keys to Evaluating User Experience Written by Janet Hwu, User Experience Designer at Shield AI This past July I joined Shield AI, and so far it has been an experience designer’s dream. To work on a system that combines the physical and the digital, to create for a high-stakes use environment where design really matters, and to serve people who are committed to protecting others has been an opportunity that has proven both stimulating and challenging. As part of my responsibilities as UX (User Experience) Designer, I have been planning and conducting user evaluations for our work in development. User evaluation (also known as user testing or UX assessment) is the practice of seeking feedback from users throughout the design life-cycle, with the goal of improving their interactive experience with the final product. Here are a few key actions to take before beginning such a pursuit, inspired by the design thinking process taught at the Stanford d.school. 1. Get to know your user One of my favorite parts of UX is developing empathy for people from other walks of life. There are so many facets of the human experience to discover, and each project brings fascinating learnings on how a life can be lived. Besides being fun, though, taking the time to get to know your users is a critical step of human-centered design that can easily get overlooked under the pressure of tight schedules and hard deliverables. Stepping into the user’s shoes can involve a number of approaches — the creation of user personas, for example, is a conventional UX practice. Besides performing customary research online and through books and film, it’s also refreshing to get out of the office, track down people like your users, and get to know them as human beings. What motivates them? How do they think and why do they think that way? What line of reasoning do they follow when making decisions? At Shield AI, our users are typically service members and first responders. I’m fortunate to work alongside a number of colleagues who have served in the military themselves. Learning from and bouncing ideas off of them has already led to a number of insights to be applied to the experience we are building. 2. Pick the appropriate method Different phases of a project call for different methods of user evaluation or research. If you’re early on in the project and haven’t yet settled on a solution, consider taking a survey of your intended users to gather quantifiable data about them, or conduct one-on-one interviews to formalize the process of understanding them. You could organize ethnographic field studies wherein you observe and participate in the user’s natural environment to identify their needs, or pursue a line of contextual inquiry, which involves shadowing a user while asking them questions. If your project is already well underway and you have functional prototypes to share, usability testing is the most useful and inexpensive approach to gaining awareness on how to improve the end result. This process typically requires observing one user at a time and evaluating how easily he/she can navigate your product while trying to perform a task. This has been among my primary methods of choice so far at Shield AI. 3. Simulate the final use context This is important for usability testing in particular. Testing can be done in a lab for websites and apps that are purely digital experiences, but for products that include a physical component, more care is needed in creating the right context for evaluation. 
The more similar you can get to the proposed use case, the better your takeaways will be. Conditions of light and dark, the user’s physical and cognitive load, surrounding noise, and other environmental factors can drastically influence the way testers interact with your product. Our use cases at Shield AI are challenging to replicate for obvious reasons, but we try to get as close as we can. We use a number of off-site facilities to test our robots for reliability and usability. During usability testing, our test users are asked to simulate anticipated conditions and to emulate our intended users as closely as possible. Conclusion These are some of the considerations that we take into account in how to best serve — and design for — our customers. At Shield AI we keep the human customer at the forefront of our product development, and we know the importance of user experience in fulfilling our company's mission: to protect service members and innocent civilians with artificially intelligent systems. My teammates are driven by a meaningful objective and shared values, which is what I appreciate most about working here. The User Services & Experience team at Shield AI is hiring — Join us! https://www.shield.ai/careers/
Three Keys to Evaluating User Experience
0
three-keys-to-evaluating-user-experience-143d5404396d
2018-09-26
2018-09-26 15:51:32
https://medium.com/s/story/three-keys-to-evaluating-user-experience-143d5404396d
false
796
null
null
null
null
null
null
null
null
null
User Experience
user-experience
User Experience
25,182
Shield AI
Our mission at Shield AI is to protect service members and innocent civilians with artificially intelligent systems.
2b6175ab5
shield_ai
20
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-04
2018-02-04 13:37:44
2018-02-12
2018-02-12 19:48:09
2
false
en
2018-02-22
2018-02-22 15:52:40
0
143e438647ae
7.372013
2
0
0
The ‘Back to Basics’ series is targeted for people who are just starting their journey in machine learning. It covers a wide range of…
4
Back to Basics: the pipeline The ‘Back to Basics’ series is targeted for people who are just starting their journey in machine learning. It covers a wide range of topics, in fact, anything and everything related to machine learning. The complexity level will vary since some topics will go into the mathematical and statistical underpinnings. The motivation for the series came after spending a fair share of late nights debugging / troubleshooting deep neural networks. It seemed time to take a breather — revisit the basics and shore it up. As the series takes shape, it is hoped that deeper insight is gained and an impetus is created to pursue new concepts/research in this fast changing field. In this first part, groundwork is created, through scenario analysis, evaluating different aspects of a system that can learn, generalize and predict. Define the objective Machine learning is not a panacea. Not all projects are data science projects. And not all data science projects require machine learning to generate value. Machine-learning relies heavily on probabilistic reasoning and is not an exact science. Consequently, machine learning may not be applicable in many cases. A clear problem statement along with an understanding of how the final solution will be used provides an indication on the problem scope and potential solutions. Problem types in ML can be one of optimization, automation, control, and prediction. Solutions fall into one of three main categories — pattern discovery (unsupervised learning), predictive analytics (supervised learning), or some form of decision-making as seen in reinforcement learning applications. If there is one thing that can shut down a ML project faster than a disillusioned sponsor, it is the availability, or lack thereof, of lots (and lots) of data. Data could be either unavailable or inaccurate, incomplete and misleading. In any case, the existence of relevant data should be verified and appropriate data sources identified along with a storage/sync strategy. It is very important to have a well-defined hypothesis early on to ensure that the model is grounded on solid reasoning rather than data mining. Good visualization tools for exploratory data analysis can help demystify complexity while improving insight into the problem domain. This enables a better understanding of the structure of the data, its underlying patterns along with feature differences, contrasts and correlations. More often than not, data will be multi-dimensional increasing the complexity level by a couple of notches. Finally, governance processes to maintain data integrity and model currency should not be neglected for too long. A typical ML pipeline A typical ML pipeline takes the form of data pre-processing -> training -> verifying performance on out of sample data. These are typically automated to avoid information leakage. The specific actions performed here are specified in a model which serves as the blueprint of the entire system. The rest of the article evaluates some of the design considerations that go into the model definition. The approach used here can be abstracted to create a repeatable, consistent requirements gathering framework which can be applied to any machine learning problem. Model Definition A well-thought feature identification strategy and an algorithm selection strategy form the pillars of a robust and scalable machine learning model. Feature Engineering: How well a model generalizes is highly correlated to the predictive strength of its underlying features. 
The process to identify features is both complex and time-consuming. Libraries exist that generate scores for feature set optimization and/or separate valid signals from noise. Deep learning provides some capabilities in this regard as well. Nevertheless, feature engineering is a crucial activity and requires domain expertise to be effective. Below are some design choices to evaluate. Feature encoding — Do features need to be categorized or abstracted for a better representation of the underlying distribution? Are dummy variables needed? Feature interaction — Is there any interaction (positive or negative) between features? Combining or removing features might increase accuracy. PCA — Are features highly correlated or clustered together? Will the model benefit from dimensionality reduction or feature extraction? Noise — Is there noise in the data due to irrelevant features? Will the model benefit from discarding or filtering any features? Size — As feature size grows, memory and computing power requirements increase. A balance is needed between speed and accuracy. Algorithm Selection: Algorithms can be categorized into three major learning types — Supervised (to predict given some input), Unsupervised (to understand underlying patterns) and Reinforcement learning (to select the best course of action given a payoff structure). Algorithm selection — What is the problem type? How will the final product be used by end-users? Select an appropriate algorithm based on the problem type and usage — Supervised (regression and classification algorithms such as OLS, SVM, Decision Trees, Ensemble methods, Naive Bayes, etc.), Unsupervised (K-means, etc.), Reinforcement (Q-learning, etc.). Loss function — Which cost/loss function makes the most sense? Options include MSE, cross-entropy, maximum likelihood, etc. Optimization — Which cost optimization technique should be used? Options include stochastic gradient descent, an old workhorse that needs manual fine-tuning of the learning rate and decay rate, or one of the adaptive optimization techniques such as RMSProp and Adam. Neural networks — Will a neural network perform better than a traditional one-shot algorithm? NNs are best used for supervised learning, although this is changing. NNs are very good at separating a complex, non-linear space. If there is enough data, a NN is almost always a better option. Use appropriate non-linear kernels in a traditional ML algorithm or use neural networks, based on data availability. Hyper-parameter tuning — What parameters need to be fine-tuned? This varies significantly based on the algorithm. However, the process can be a brute-force optimization or semi-automated with off-the-shelf libraries such as scikit-learn. Bias-Variance trade-off — What is the strategy to avoid over-fitting? Select a model that best fits the true regularities — either combine multiple models or consider ensemble methods such as bagging and boosting. If everything else fails, consider adding more training data. Neural networks add a layer of sophistication to increase the predictive strength of the model. Over the past few years, deep learning networks (NNs with more than one hidden layer) have been used quite successfully to increase accuracy multiple-fold. Based on their architecture, NNs can be further classified as MLP (feed-forward multi-layer perceptron), RNN (recurrent NN for sequence data such as time series), CNN (convolutional NN for image analysis), LSTM (long short-term memory NN; a specific type of RNN), etc. 
The remaining design choices are applicable to neural networks only. Specific options that are unique to a type of NN, such as a CNN or RNN, are not covered here. Network Architecture — What is the optimal network architecture? How many hidden layers? How many neurons in each layer? Will the model benefit from a shallow, deep or wide architecture? Parameter initialization — What is the weight initialization strategy? What about the bias term? A naive zero starting weight will be unable to create the necessary asymmetry conditions. Options include small random numbers, the Xavier uniform initializer, Glorot initialization, etc. Activations — What activations are used for the hidden and output layers? Options include ReLU, sigmoid, softmax, tanh, etc. Over-fitting — What is the over-fitting strategy? Options include lowering the number of layers and/or neurons, dropout, adding more data, early stopping and regularization. Exploding / Vanishing Gradient — If the model calls for a very deep architecture (such as in sequence models), a strategy is needed to avoid an exploding / vanishing gradient. Options include gradient clipping, LSTMs and batch normalization. Data pre-processing Data pre-processing is a series of transformations that converts raw data into something that best satisfies the model specification. A majority of the rules applied here are derived from the model definition covered in previous sections. In addition, other areas to evaluate are: Bad data — Are there missing values? What are the rules to impute values? Skewed data — Did the exploratory data analysis identify outliers or skewed class distributions? Is there bias due to variations in training data? Normalization — Is normalization or standardization needed? This is typically required when features with differing means are combined or data from different sources are combined. Training data size — A large enough data size mitigates over-fitting risk. If there is not enough data, is upsampling or data augmentation possible? Training The ability of a machine learning system to generalize depends upon how well the system is trained. Training involves feeding a pre-processed, sufficiently shuffled subset of data into a machine learning system, such as a deep neural network, and generating a set of optimized parameters. These parameters are then used to answer questions or predict outcomes on data not seen before. Data splits — What is the breakup between training, validation and test data? Does a k-fold validation strategy make more sense? Hyper-parameters — What is the optimal batch size and number of epochs? Over-fitting — Is an early stopping strategy needed? Custom tests — Is there a need to create custom tests to ensure that the model does not deviate too far from its hypothesis? Model performance metrics — How will the model be evaluated for performance? Use the appropriate metric based on the learning type — Supervised (precision, accuracy, recall, confusion matrix, etc.), Unsupervised (Silhouette coefficient, etc.). For the most part, a project will undergo multiple iterations before a model demonstrates satisfactory predictive strength. Through the iterations, feature sets may get refined, hyper-parameters fine-tuned, algorithms changed, etc. An accurate change history can be very helpful for reflecting on failed tests for particularly difficult problems. 
Finally, once the model is in production, it should be periodically evaluated to ensure a good fit as real-world data changes over time, and for ongoing optimization. Summary This part covered the basics of a machine learning system. Feature engineering and selecting the right algorithm for the job are crucial. There are multiple knobs to set, and an attempt was made to list them in some logical order. Details on some of these will follow in subsequent parts. NNs were looked at from a bird's-eye view, along with considerations unique to neural networks. All of this was bootstrapped in a holistic process that creates a foundation for an enterprise to approach ML applications. In an attempt to keep the article to a reasonable and readable size, only the most important design aspects were covered. Machine learning is a rich, complex and evolving ecosystem with a very active research community. There is a high probability that this article will need updates and tweaks over time to stay current. To that end, this article should be considered simply a launchpad for further research into associated concepts. Finally, reader feedback is welcome in case there are any errors or gaps in this article or if there are any questions on machine learning in general.
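To make a few of the knobs above concrete (a train/test split, normalization inside a pipeline, brute-force hyper-parameter tuning with k-fold cross-validation, and out-of-sample metrics), here is a minimal scikit-learn sketch; the synthetic dataset, the SVM choice and the parameter grid are placeholders for illustration, not the article's prescribed setup.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical data standing in for a real, feature-engineered dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out a test set that the tuning loop never sees (avoids information leakage)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Pipeline = pre-processing (standardization) + algorithm, fitted only on training folds
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# Brute-force hyper-parameter search with 5-fold cross-validation
grid = GridSearchCV(pipe,
                    param_grid={"clf__C": [0.1, 1, 10],
                                "clf__kernel": ["linear", "rbf"]},
                    cv=5)
grid.fit(X_train, y_train)

# Evaluate on out-of-sample data with the supervised metrics mentioned above
pred = grid.predict(X_test)
print(accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))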
Back to Basics: the pipeline
4
back-to-basics-the-pipeline-143e438647ae
2018-02-24
2018-02-24 20:38:30
https://medium.com/s/story/back-to-basics-the-pipeline-143e438647ae
false
1,852
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
RJM
null
fbdd3b3ac951
rjm2017
15
0
20,181,104
null
null
null
null
null
null
0
null
0
c0c7b7b87964
2018-02-06
2018-02-06 22:56:59
2018-02-06
2018-02-06 23:16:11
3
false
en
2018-02-06
2018-02-06 23:29:53
6
14402ea13889
2.248113
1
0
0
‘The future belongs to those who learn more skills, and combine them in creative ways’ — Robert Green, Mastery
4
Font Finder gets featured on Intel DevMesh Site 'The future belongs to those who learn more skills, and combine them in creative ways' — Robert Greene, Mastery Intel DevMesh is a site where developers can showcase the groundbreaking and interesting projects they are working on in the AI and IoT space. It aims at connecting the world's brightest minds by showcasing world-class development projects and enabling developers to collaborate. It is my belief that technology and its advancements are meant to empower us to combine it with our current skills, arts and sciences to stir up a magnificent amalgamation. As a developer, it is essential to combine the technical and the creative, and learn how to package our skills in a way that can serve across sectors by coming up with creative solutions to problems. FONT FINDER In August 2017, my project Font-Finder got featured on the Intel DevMesh homepage, among two other projects selected for featuring within the month. It is currently among the top 15 projects on DevMesh, with over 180 followers. Font Finder - Intelligent typefont recognition using OCR The project features the development of a library that will detect the font of a text in an image. The library will be…devmesh.intel.com Font-Finder aims to assist designers and developers alike in typefont identification by using image recognition (OCR) to intelligently recognize and identify fonts used on physical print material or in natural scene images, all with the use of a smartphone camera. Doubling up as a designer, it has always been my wish to have a technology in place that can enable us to look to previous designs for inspiration. The whole goal of the project is to make typefonts on physical print material searchable. With the precedent of AI, developers have been provided a large array of tools and resources that they can leverage to make machines smarter and propel AI integration. I have made immense progress in the project execution, and will be open-sourcing it soon. You can follow it on GitHub for any updates 😊. BREAKING BOUNDARIES Africa is poised to be the greatest source of the next generation of the developer workforce, and as a community, we are focused on getting the world to see the amount of technical talent Africa possesses. Among the greatest DevMesh AI projects of 2017 is Ngesa Marvine's research project on addressing climate change issues in Kenya. It has made its way into famous articles and gained immense recognition. Find out more about the Hyacinth Monitor, and how he plans to predict the exact location of water hyacinth in Lake Victoria.
Font Finder gets featured on Intel DevMesh Site
11
font-finder-gets-featured-on-intel-devmesh-site-14402ea13889
2018-06-11
2018-06-11 10:28:08
https://medium.com/s/story/font-finder-gets-featured-on-intel-devmesh-site-14402ea13889
false
450
This program is designed to assist student experts in telling their story and share their expertise with other student data scientists and developers. Intel is offering exclusive access to newly optimized frameworks and technologies, hands-on training and workshops for them.
null
Intel-At-DEKUT
null
Intel Student Ambassador Program
null
intel-student-ambassador-program-dekut
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,INTEL
intelatdekut
Design
design
Design
186,228
Chris Barsolai
Intel AI Ambassador | Organizer, Nairobi AI | All things Python | For the best of AI | Live and let live
c3fdec9aef04
chrisbarsolai
36
73
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-27
2018-04-27 17:58:00
2018-04-27
2018-04-27 18:34:32
0
false
en
2018-04-27
2018-04-27 18:34:32
0
14406d22dac9
0.554717
0
0
0
27th of April is the day that united tech and entrepreneurship enthusiasts in Radisson Blu for the coolest event of the year. To sum up all…
1
Rockit Conference Moldova 2018: ideas that inspire The 27th of April was the day that united tech and entrepreneurship enthusiasts at Radisson Blu for the coolest event of the year. To sum up all the presentations and panels, I made a list of ideas from Rockit speakers: Social media promotion Show your work; everyone is an artist. Differentiate yourself, be a creator. The Internet made people too loud; people want to be heard on social media. Have a clear message that comes from the heart. Post things for a specific audience, do your research. Follow a schedule, be consistent; daily posts are most effective, and quality is important. Understand your goal and find your personal style. New digital era Artificial Intelligence (AI) will free us from boring work. Before every industrial revolution people were afraid of new technology and didn't accept it right away. Only 5% of jobs will be totally replaceable.
Rockit Conference Moldova 2018: ideas that inspire
0
rockit-conference-moldova-2018-ideas-that-inspire-14406d22dac9
2018-04-27
2018-04-27 18:34:33
https://medium.com/s/story/rockit-conference-moldova-2018-ideas-that-inspire-14406d22dac9
false
147
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Elizaveta Bazilevici
Marketing Enthusiast at Pentalog
f3ac8ef7a6c8
LizVici
32
62
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-12
2017-11-12 01:17:32
2017-11-12
2017-11-12 02:13:19
0
false
en
2017-11-12
2017-11-12 02:13:19
1
1442b110e6e
0.596226
0
0
0
Someone asked me recently if I was investing in ‘Crypto’. As it turns out I was, albeit indirectly. It’s what you might call a ‘picks and…
5
Got any Crypto Mate? Someone asked me recently if I was investing in ‘Crypto’. As it turns out I was, albeit indirectly. It’s what you might call a ‘picks and shovels’ play on the new age ‘mining’ industry. This is probably considered cheating, but have a think about NVidia. They are a leader in the field of computer processors. Crypto currencies and blockchain generally rely on distributed computing power. ‘Mining Farms’ are large banks of privately owned computer processors competing to ‘mine’ or process blockchain activity around the world. When I say compete … check out this video to get a sense of the scale of it. … and if the whole crypto craze comes crashing down, there are much broader trends to keep NVidia going … like AI, machine learning, deep learning, computer gaming, CGI in movies, virtual reality, drones, self driving cars, internet of things, health sciences, whatever other sciences and cloud or distributed computing generally.
Got any Crypto Mate?
0
got-any-crypto-mate-1442b110e6e
2017-11-12
2017-11-12 02:13:20
https://medium.com/s/story/got-any-crypto-mate-1442b110e6e
false
158
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ross McIntyre
null
52310444a4db
ross.mcintyre
1
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-11
2018-03-11 05:01:55
2018-03-11
2018-03-11 05:18:12
11
false
en
2018-03-18
2018-03-18 06:22:23
2
1442b1798b23
2.492453
0
0
0
Marketing has come a long way from the one-way single shot channel marketing. Marketing Quant, as it is called now-a-days, is two way that…
2
The Art of Quant Marketing a "JARVIS" Assistant Marketing has come a long way from one-way, single-shot channel marketing. Marketing Quant, as it is called nowadays, is two-way: it really understands and interacts with the users! More importantly, marketing is a recommendation both ways — a product manager of a software product would like to gently prod users to use a feature when a usage pattern is seen frequently, and if a marketing layer sees a usage pattern that is not in the product surface area, it will mark that as a potential future product feature for the engineers to ponder … And one can also get a quantitative score for a set of new features … Marketing can classify behaviors into usage that maps to current features (recommend), usage that spans future capabilities (input to the product team), or white space that is not addressed anywhere (a new adjacent product!). I really like the concept of path prediction, leading to the "Next Best Action". Of course, data science methodologies spanning traditional machine learning to artificial intelligence techniques are the tools of the trade … Here is a pitch that I recently developed. What says thee? I really am interested in building the JARVIS — it has many specific applications, including companionship for the elderly and disabled, children, and even the overworked billionaires in the valley! It should understand context so that it can teach with increasing levels of sophistication — just think how valuable a JARVIS could be if it could teach us Go incrementally, i.e. offer just enough skill so that we can barely beat it, and increase the level as we get better! Next: I will write about our visit to UC Berkeley, BAIR (Berkeley AI Research) and a very interesting talk by Vint Cerf — yep, that Vint Cerf, in person!
The Art of Quant Marketing a “JARVIS” Assistant
0
the-art-of-quant-marketing-a-jarvis-assistant-1442b1798b23
2018-03-18
2018-03-18 06:22:24
https://medium.com/s/story/the-art-of-quant-marketing-a-jarvis-assistant-1442b1798b23
false
316
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Krishna Sankar
null
578c7297699c
ksankar
97
2
20,181,104
null
null
null
null
null
null
0
null
0
d82dbd11f86a
2018-01-12
2018-01-12 01:02:52
2018-01-12
2018-01-12 02:38:40
1
false
en
2018-01-12
2018-01-12 02:38:40
0
14440b955037
1.6
0
0
0
In a company there is a large amount of data, which differs in its nature, form and interpretation. The first thing we need to know is what…
5
Types of Data in a company In a company there is a large amount of data, which differs in its nature, form and interpretation. The first thing we need to know is what types of data there are and how the data can be interpreted. Let's start at the beginning: statistics is the process of collecting, organizing and interpreting data. Data consist of individuals and variables. Individuals are the objects of interest in your study, in other words, what you are collecting your information or data on. They can be actual individuals or people, or they can be other objects such as cities, animals, products, stocks, anything, really. Variables are characteristics of those individuals or the attributes that you are measuring, such as height, weight, age, satisfaction with something, or the population size of a city, average income, median house price, and average temperature. A great way to store your data is in a spreadsheet. Data typically takes this form, where the rows are individuals and the columns are variables: each row represents an individual object of interest and each column a piece of information or data that was gathered about that individual object. Numerical and Categorical Data Data can take on one of two forms. It can be categorical, meaning that it classifies an observation into a specific category. "Gender" is a great example of a categorical variable. You either identify yourself as male or female. Sometimes you will hear data called qualitative. Qualitative data is categorical data, just by a different name. Or data can be numerical or quantitative. Generally, larger numbers mean more of something. Weight, height and temperature are examples of numerical variables. Further, numerical variables can be discrete or continuous. Discrete variables take on a specific value and must be an integer. The number of vehicles that you own, or the number of flights that you took this year, are examples of discrete variables. You can't have part of a car, or take part of a flight. Continuous variables can take on any value within a range. Weight and temperature are continuous variables, because something might weigh a fraction of a pound or a kilo, and temperature can be somewhere between two values on a scale.
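To make the distinction concrete, here is a small pandas sketch (the individuals and values are made up for illustration) in which the rows are individuals, the columns are variables, and each column is marked as categorical, discrete numerical or continuous numerical.

import pandas as pd

# Rows are individuals, columns are variables, as in the spreadsheet analogy
df = pd.DataFrame({
    "gender": ["male", "female", "female"],   # categorical (qualitative)
    "vehicles_owned": [1, 0, 2],              # numerical, discrete (whole numbers only)
    "weight_kg": [80.5, 62.3, 71.0],          # numerical, continuous (any value in a range)
})

# Tell pandas explicitly that gender is a category rather than free text
df["gender"] = df["gender"].astype("category")

print(df.dtypes)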
Types of Data in a company
0
types-of-data-in-a-company-14440b955037
2018-01-12
2018-01-12 02:38:42
https://medium.com/s/story/types-of-data-in-a-company-14440b955037
false
371
All about the Data Analysis with entrepreneur
null
null
null
High Data Stories
high-data-stories
DATA SCIENCE,DATA VISUALIZATION,DATA ANALYSIS,TECNOLOGY,SMALL MEDIUM ENTERPRISE
null
Data Science
data-science
Data Science
33,617
Luis Alberto Palacios
I’m a passionate about life that enjoy meeting people, living new adventures and sharing my experiences
7a4f2ad9c738
LuisAlbertoPala
97
51
20,181,104
null
null
null
null
null
null
0
null
0
b87b1a1e2aa0
2018-07-10
2018-07-10 16:37:05
2018-07-10
2018-07-10 18:27:03
2
false
en
2018-07-10
2018-07-10 18:34:08
4
144592d774b1
1.651258
0
0
0
Since NBAI tokens are transferrable now, here’s a helpful manual for transferring using MEW.
3
How to Send NBAI Tokens using MyEtherWallet (MEW) Since NBAI tokens are transferable now, here's a helpful manual for transferring them using MEW. If you don't already have a MEW wallet and need to create one, you can read our post here. Ether You will need some ETH in your wallet to pay for transfer fees. Be sure you have some before you begin. Warning: Before you continue, one of the most important rules of managing crypto online is to NEVER click on a link to access websites such as wallets and exchanges. Clicking on links to access wallets or exchanges is one of the most common ways to get your funds stolen, because you end up entering your password/private keys into a compromised website. To reinforce crypto-safety best practices, don't click on the MEW link above; instead, open a new window and type "myetherwallet.com" into your address bar. Load your Wallet Go to https://www.myetherwallet.com/#send-transaction In the Address field, enter the Ethereum address you would like to transfer your NBAI to. If it's a wallet, it needs to be a wallet for which you control the private keys. If it is an exchange, it should be an address specifically for NBAI ERC20 tokens. DOUBLE CHECK YOUR ADDRESS In Amount to Send, enter the number of NBAI tokens you would like to send. Select NBAI from the dropdown to the right of the Amount to Send field. MEW will suggest a gas limit based on current network conditions. If it does not, enter 9000 or more, or check ETH Gas Station. If you would like your transaction to be confirmed faster, adjust accordingly. Click Generate Transaction Click Send Transaction If all went well, a green bar will appear at the bottom of the page with your tx hash and a link to your tx on the blockchain. If you got a red bar, read the error it provides and correct the issue it describes. Nebula AI Team The Third Generation Blockchain — Decentralized AI Blockchain
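For developers who prefer to do the same ERC20 transfer programmatically rather than through the MEW interface, here is a rough sketch using web3.py. This is not the Nebula AI team's procedure: the RPC endpoint, token contract address, recipient, private key handling, gas limit and 18-decimal assumption are all placeholders, and the method casing follows web3.py v6 (older or newer versions differ slightly).

from web3 import Web3

# All values below are placeholders -- never hard-code a real private key
RPC_URL = "https://example-ethereum-rpc.invalid"      # your own node or provider
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"  # NBAI contract (placeholder)
RECIPIENT = "0x0000000000000000000000000000000000000001"
PRIVATE_KEY = "<never-commit-this>"

# Minimal ERC20 ABI fragment: just the standard transfer(address,uint256) function
ERC20_ABI = [{
    "constant": False,
    "inputs": [{"name": "_to", "type": "address"}, {"name": "_value", "type": "uint256"}],
    "name": "transfer",
    "outputs": [{"name": "", "type": "bool"}],
    "type": "function",
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
account = w3.eth.account.from_key(PRIVATE_KEY)
token = w3.eth.contract(address=Web3.to_checksum_address(TOKEN_ADDRESS), abi=ERC20_ABI)

amount = 100 * 10**18  # 100 tokens, assuming the token uses 18 decimals

# Build, sign and broadcast the transfer; fees are paid in ETH, as in the MEW flow
tx = token.functions.transfer(Web3.to_checksum_address(RECIPIENT), amount).build_transaction({
    "from": account.address,
    "nonce": w3.eth.get_transaction_count(account.address),
    "gas": 100_000,                 # illustrative gas limit for a token transfer
    "gasPrice": w3.eth.gas_price,
})
signed = account.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)  # .rawTransaction in web3.py v6
print(tx_hash.hex())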
How to Send NBAI Tokens using MyEtherWallet (MEW)
0
how-to-send-nbai-tokens-using-myetherwallet-mew-144592d774b1
2018-07-10
2018-07-10 18:34:08
https://medium.com/s/story/how-to-send-nbai-tokens-using-myetherwallet-mew-144592d774b1
false
336
Nebula AI is a decentralized blockchain platform where developers can deploy their Artificial Intelligence applications easily.
null
NebulaAI
null
Nebula-AI
nebula-ai
AI,BLOCKCHAIN TECHNOLOGY,MONTREAL,BLOCKCHAIN DEVELOPMENT
nebula_ai
Blockchain
blockchain
Blockchain
265,164
Nebula-AI
Nebula AI is a Montreal based decentralized blockchain platform integrated with Artificial intelligence and sharing economics.
138d9f98bcf0
nebulaai
52
3
20,181,104
null
null
null
null
null
null
0
average yen cd rates fall in latest week tokyo, feb 27 - average interest rates on yen certificates of deposit, cd, fell to 4.27 pct in the week ended february 25 from 4.32 pct the previous week, the bank of japan said. new rates (previous in brackets), were - average cd rates all banks 4.27 pct (4.32) money market certificate, mmc, ceiling rates for the week starting from march 2 3.52 pct (3.57) average cd rates of city, trust and long-term banks less than 60 days 4.33 pct (4.32) 60-90 days 4.13 pct (4.37) average cd rates of city, trust and long-term banks 90-120 days 4.35 pct (4.30) 120-150 days 4.38 pct (4.29) 150-180 days unquoted (unquoted) 180-270 days 3.67 pct (unquoted) over 270 days 4.01 pct (unquoted) average yen bankers' acceptance rates of city, trust and long-term banks 30 to less than 60 days unquoted (4.13) 60-90 days unquoted (unquoted) 90-120 days unquoted (unquoted) reuter

average yen cd rate fall latest week tokyo feb 27 average interest rate yen certificatesof deposit cd fell 427 pct week ended february 25from 432 pct previous week bank japan said new rate previous bracket average cd rate bank 427 pct 432 money market certificate mmc ceiling rate weekstarting march 2 352 pct 357 average cd rate city trust longterm bank le 60 day 433 pct 432 6090 day 413 pct 437 average cd rate city trust longterm bank 90120 day 435 pct 430 120150 day 438 pct 429 150180 day unquoted unquoted 180270 day 367 pct unquoted 270 day 401 pct unquoted average yen banker acceptance rate city trust andlongterm bank 30 le 60 day unquoted 413 6090 day unquoted unquoted 90120 day unquoted unquoted reuter

import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, GRU, Dense

maxLength = 88  # input sequence limit chosen in the post

# Tokenize the cleaned news text and pad each sequence to maxLength
max_vocab_size = 200000
input_tokenizer = Tokenizer(max_vocab_size)
input_tokenizer.fit_on_texts(totalX)
input_vocab_size = len(input_tokenizer.word_index) + 1
print("input_vocab_size:", input_vocab_size)  # input_vocab_size: 167135
totalX = np.array(pad_sequences(input_tokenizer.texts_to_sequences(totalX), maxlen=maxLength))

array([ 6943, 5, 5525, 177, 22, 699, 13146, 1620, 32, 35130, 7, 130, 6482, 5, 8473, 301, 1764, 32, 364, 458, 794, 11, 442, 546, 131, 7180, 5, 5525, 18247, 131, 7451, 5, 8088, 301, 1764, 32, 364, 458, 794, 11, 21414, 131, 7452, 5, 4009, 35131, 131, 4864, 5, 6712, 35132, 131, 3530, 3530, 26347, 131, 5526, 5, 3530, 2965, 131, 7181, 5, 3530, 301, 149, 312, 1922, 32, 364, 458, 9332, 11, 76, 442, 546, 131, 3530, 7451, 18247, 131, 3530, 3530, 21414, 131, 3530, 3530, 3])

# Embedding -> two GRU layers -> sigmoid outputs, one per tag (multi-label)
embedding_dim = 256
model = Sequential()
model.add(Embedding(input_vocab_size, embedding_dim, input_length=maxLength))
model.add(GRU(256, dropout=0.9, return_sequences=True))
model.add(GRU(256, dropout=0.9))
model.add(Dense(num_categories, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

loss: 0.1062 - acc: 0.9650 - val_loss: 0.0961 - val_acc: 0.9690

# Plot training/validation accuracy and loss from the Keras history object
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

# Predict tags for one cleaned article and keep anything above the 0.2 threshold
textArray = np.array(pad_sequences(input_tokenizer.texts_to_sequences([input_x_220]), maxlen=maxLength))
predicted = model.predict(textArray)[0]
for i, prob in enumerate(predicted):
    if prob > 0.2:
        print(selected_categories[i])

pl_uk pl_japan to_money-fx

pl_japan to_money-fx to_interest
16
null
2017-11-18
2017-11-18 07:31:23
2017-11-18
2017-11-18 07:37:47
3
true
en
2018-07-07
2018-07-07 03:25:30
4
1445bdd906c5
4.534906
4
0
0
My previous post shows how to choose last layer activation and loss functions for different tasks. This post we focus on the multi-class…
4
How to do multi-class multi-label classification for news categories My previous post shows how to choose the last layer activation and loss functions for different tasks. In this post we focus on multi-class multi-label classification. Overview of the task We are going to use the Reuters-21578 news dataset. Given a news article, our task is to give it one or multiple tags. The dataset is divided into five main categories: Topics Places People Organizations Exchanges For example, one given article could have these 3 tags belonging to two categories Places: USA, China Topics: trade Structure of the code Read the category files to acquire all 672 available tags from those 5 categories. Read all the news files and find the most common 20 tags out of the 672, which we are going to use for classification. Here is a list of those 20 tags. Each one is prefixed with its category for clarity. For instance "pl_usa" means the tag "Places: USA", "to_trade" is "Topics: trade", etc. In the previous step, we read the news contents and stored them in a list One article looks like this We start the cleanup by Only keeping characters inside A-Za-z0–9 removing stop words (words like "in", "on", "from" that don't really contain any special information) lemmatizing (e.g. turning the word "rates" into "rate") After this, our news looks much "friendlier" to our model, with each word separated by a space. Since a small portion of the articles are quite long even after the cleanup, let's limit the maximum input sequence to 88 words; this covers about 70% of all articles in full length. We could have set a larger input sequence limit to cover more articles, but that would also increase the model training time. Lastly, we turn words into ids and pad each sequence to the input limit (88) if it is shorter. Keras text processing makes this trivial. The same article will look like this, where each number represents a unique word in the vocabulary. The Embedding layer embeds each word as a vector of size 256 GRU layers (recurrent network) process the sequence data The Dense layer outputs the classification result for 20 categories After training our model for 10 epochs in about 5 minutes, we achieved the following result. The following code will generate a nice graph to visualize the progress of the training epochs. Take one cleaned-up article (each word separated by a space) through the same input tokenizer to turn it into ids. Call the model's predict method; the output will be a list of 20 floats representing the probabilities for those 20 tags. For demo purposes, let's take any tag with a probability larger than 0.2. This produces three tags the ground truth is The model got 2 out of 3 right for the given article. Summary We started by cleaning up the raw news data for the model input. Built a Keras model to do multi-class multi-label classification. Visualized the training result and made a prediction. Further improvements could be made: Cleaning up the data better Using a longer input sequence limit More training epochs The source code for the Jupyter notebook is available on my GitHub repo if you are interested. Originally published at www.dlology.com.
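Two small decisions in the walkthrough, choosing the 88-word input limit from the length distribution and turning the 20 independent sigmoid outputs into tags, can be illustrated in isolation. The sketch below uses toy stand-ins (cleaned_news, selected_categories, predicted are hypothetical placeholders, not the notebook's real variables or values).

import numpy as np

# Toy stand-ins for the notebook's own variables (hypothetical data)
cleaned_news = ["average yen cd rate fall latest week", "bank japan said rate fell"]
selected_categories = ["pl_usa", "pl_japan", "to_money-fx", "to_interest", "to_trade"]
predicted = np.array([0.05, 0.91, 0.64, 0.33, 0.10])  # one row of sigmoid outputs

# Choosing an input length: the post's limit of 88 words covers roughly 70% of articles
lengths = [len(doc.split()) for doc in cleaned_news]
print("70th percentile of article lengths:", np.percentile(lengths, 70))

# Multi-label decoding: keep every tag whose independent probability clears a threshold
threshold = 0.2
tags = [tag for tag, p in zip(selected_categories, predicted) if p > threshold]
print(tags)  # ['pl_japan', 'to_money-fx', 'to_interest']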
How to do multi-class multi-label classification for news categories
9
how-to-do-multi-class-multi-label-classification-for-news-categories-1445bdd906c5
2018-07-10
2018-07-10 13:10:57
https://medium.com/s/story/how-to-do-multi-class-multi-label-classification-for-news-categories-1445bdd906c5
false
1,056
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Chengwei Zhang
Programmer and maker. Love to write deep learning articles.| Website: https://DLology.com | GitHub: https://github.com/Tony607
dc7e7f0185be
chengweizhang2012
416
19
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-18
2017-12-18 08:02:49
2017-12-18
2017-12-18 08:04:31
0
false
en
2017-12-18
2017-12-18 08:04:31
1
144639f28adb
0.298113
0
0
0
Just take a little glance with this article of karan bajaj which promote AI to help you in your work, find out here if you think that…
5
Will AI take your job? The Definitive Guide Will AI take your job? The Definitive Guide I was in a global role. 10% of the time I worked, the rest of the time I pushed paper around. Pre-alignment meetings…www.karanbajaj.com Take a look at this article by Karan Bajaj, which promotes AI as a way to help you in your work, and decide for yourself whether artificial intelligence will be a good invention to help us. Read more.
Will AI take your job? The Definitive Guide
0
will-ai-take-your-job-the-definitive-guide-144639f28adb
2017-12-18
2017-12-18 08:04:32
https://medium.com/s/story/will-ai-take-your-job-the-definitive-guide-144639f28adb
false
79
null
null
null
null
null
null
null
null
null
Karanbajaj
karan-bajaj
Karanbajaj
1
rosekaran567
null
8c73fe9d3d67
rosekaran567
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-30
2018-07-30 09:19:13
2018-07-30
2018-07-30 21:58:20
3
false
en
2018-07-30
2018-07-30 21:58:20
0
14465b6f57fd
3.972642
1
0
0
The linked list is unlike a typical array. Though still a collection of items, it’s elements are not stored in contiguous memory. A great…
5
Data Structure Stories: The Linked List The linked list is unlike a typical array. Though still a collection of items, its elements are not stored in contiguous memory. A great advantage of this is that linked lists are limited only by the memory available to a program. They do not run out of reserved space like an array and do not require expensive allocation operations. The way I like to think of them is as a chain. Each element is a "node". These nodes each contain data, and a pointer to the next node in the list. Thus, the whole list forms a chain of pointers between nodes. Because we have these explicit references, it is not necessary for the nodes to be stored next to each other in memory. One thing to note about this configuration is that you often get poorer cache performance as a result. A simple linked list. Each node contains a value and the memory address of the next node. The Head of the list is the first element. The rest of the list is the Tail. If I know the first node of the list, the head, I can follow all the pointers and traverse the whole list. The rest of the list is often called the tail. It is the sub-list of everything except the head. The above example is the simplest form of linked list, often referred to as a singly linked list. As you can probably tell, we can go forwards through the list, but we have no pointers to traverse backwards with. Furthermore, what happens when we reach the end of the list? Every node has a pointer to the next, but there is no next at the final node. Usually this value is assigned as null, to signal the end of the list. This leads to the infamous traversal condition "while(node.next != null)". An empty linked list obviously consists of no nodes; a sentinel linked list, however, always contains some nodes. This means always holding placeholder nodes in memory, namely a start and end node. The normal list can then be attached to these nodes. These sentinel nodes are intended to simplify list operations and allow safe de-referencing of all other nodes. Evidently, the drawback here is the need to occupy more memory with these sentinel nodes. Linked lists are not without issue, but they are also very powerful for fast insertion and deletion of nodes. In fact they are far better than arrays at those operations! Furthermore, singly linked lists can easily be decomposed into heads and tails as we traverse the list. This leads to fluent and easily understood recursive algorithms. Linked lists can be used to implement other data structures like stacks or queues. History The linked list was invented in 1955–56 by Herbert Simon, Allen Newell, and Cliff Shaw at RAND Corporation. Its original purpose was as a core data structure in their IPL language for Artificial Intelligence. Over the years it has seen wide usage in major programming languages like LISP. Performance Lookup As we have seen, we must traverse a linked list from the head via its pointers in order to reach nodes. This means we get a linear indexing performance of O(n), far from desirable when compared to an array. To find the node with the value 3 I have to traverse twice from the head of the list. Insertion and Deletion A clear advantage of the linked list approach is that we do not have to shift elements around to insert or delete nodes. We simply have to rearrange pointers, a constant, or O(1), operation. Since we may often know the head or last node of a list, we can directly insert new nodes there in O(1) time. 
On the other hand, if we are to insert into the middle of the list we have to incur the time it takes to traverse and then insert the element, an average complexity case of O(n/2) + O(1). Speeding up Lookups There are two ways that linked lists can be sped up. The first is to use a move-to-front heuristic. When an element is found it is moved to the front of the list. This provides a small version of caching; one hopes that recently used elements will be accessed again soon. A second option is to use an external data structure to index the linked list. This data structure could hold references to nodes. The disadvantage of this approach is the need to update the external data structure in response to manipulations of the list. Drawbacks I alluded to the first drawback of linked lists previously. If the nodes are not contiguous in memory, then we can’t take advantage of caching as much as we would like. Secondly, we must traverse the list to perform most operations. Finally, it is important to consider the fact that the pointers we use take up memory, as well as the actual data in the nodes. Alternatives Though we have looked at singly linked lists, there are also doubly linked lists. In a doubly linked list, each node has a pointer to the next node, but also a pointer to the previous node. This gives one more flexibility in manipulating the list, as backwards traversal is now possible. Conversely, the list now occupies even more memory, with twice as many pointers as before. A doubly linked list. Each node contains a pointer to the next and the previous nodes. There are also multiply linked lists, where each node has two or more references to other nodes. This type is a generalised super-set of the doubly linked list. Circular linked lists are the last type. In a circular linked list, the final node’s next node is the head of the list, forming a cycle from end to start.
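Below is a minimal sketch of the singly linked list described above, written in Python purely for illustration; the class and method names (Node, LinkedList, push_front, find, delete) are my own and not from the original article. It shows the O(1) insertion at the head and the O(n) pointer-following traversal, including the "while the node is not null" loop mentioned earlier.

```python
class Node:
    """A single link in the chain: a value plus a pointer to the next node."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node  # None marks the end of the list


class LinkedList:
    def __init__(self):
        self.head = None  # an empty list has no head node

    def push_front(self, value):
        """O(1) insertion: the new node simply points at the old head."""
        self.head = Node(value, self.head)

    def find(self, value):
        """O(n) lookup: follow the pointers from the head until we hit None."""
        node = self.head
        while node is not None:          # the "while (node.next != null)" idea
            if node.value == value:
                return node
            node = node.next
        return None

    def delete(self, value):
        """Pointer surgery once the node is located (locating it is O(n))."""
        prev, node = None, self.head
        while node is not None:
            if node.value == value:
                if prev is None:
                    self.head = node.next
                else:
                    prev.next = node.next
                return True
            prev, node = node, node.next
        return False


# Usage: build 3 -> 2 -> 1 and look a value up by traversal.
lst = LinkedList()
for v in (1, 2, 3):
    lst.push_front(v)
assert lst.find(3) is lst.head
```

Note how delete only rewires pointers once the target node is found; no elements are shifted, which is exactly the advantage over arrays described above.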
Data Structure Stories: The Linked List
29
data-structure-stories-the-linked-list-14465b6f57fd
2018-07-30
2018-07-30 21:58:21
https://medium.com/s/story/data-structure-stories-the-linked-list-14465b6f57fd
false
907
null
null
null
null
null
null
null
null
null
Programming
programming
Programming
80,554
James du Plessis
Software Engineer with a passion for programming, gaming, music, and physical sciences.
59a8048c7446
duplessisjdp96
2
1
20,181,104
null
null
null
null
null
null
0
null
0
6d6dcd6ce26d
2018-02-27
2018-02-27 16:23:08
2018-02-27
2018-02-27 16:30:40
47
false
en
2018-02-28
2018-02-28 10:06:03
52
14471176ab12
9.156604
0
0
0
This was first published on October 6th, 2017 at Subir Mansukhani’s Mailing List, Subirority Complex.
5
Subirority Complex — Issue #4 This was first published on October 6th, 2017 at Subir Mansukhani’s Mailing List, Subirority Complex. GENERAL Google’s People + AI Research Initiative Sets Out to Solve Artificial Stupidity | WIRED Injecting more humanity into artificial intelligence could help society — and Google’s business. The Data Science Product Manager — New Roles For The New Business Reality | IoT For All Without a quality data science product manager, projects languish in endless research or flop in the transition from prototype to production. Guide to attribution modeling — 5 tips to understand your marketing efforts better How to Map Behavioral Metrics Into Your Key Business Drivers In this article, I attempt to answer the question by suggesting seven of the most important behavioral metrics that drive business. Data liquidity in the age of inference — O’Reilly Media Probabilistic computation holds too much promise for it to be stifled by playing zero sum games with data. Google’s Stunning New Toy Shows the Wild Future of Machine Learning Teach a machine with nothing but your webcam. What Motivates Employees More: Rewards or Punishments? It depends whether you’re trying to encourage or discourage action. In Six Seconds, Giphy Could Make Billions | The future of business With 300 million daily users and every major media company as a partner, Giphy’s got a feeling it can shake up the internet advertising business. MACHINE LEARNING Google and Uber’s Best Practices for Deep Learning — Intuition Machine — Medium There is more to building a sustainable Deep Learning solution that what is provided by Deep Learning frameworks like TensorFlow and PyTorch. These frameworks are good enough for research, but they… A new kind of pooling layer for faster and sharper convergence In the max-pooling layer (used in almost all state of the art vision tasks and even some NLP tasks) you throw away roughly 75% of the activations. I wanted to design a new kind of pooling layer that… Convolutional Attention Model for Natural Language Inference In this article I’d like to show you a model I used for the Quora question pairs competition. First, I’ll describe a Decomposable Attention Model for Natural Language Inference (Parikh et al., 2016… The hippocampus as a ‘predictive map’ | DeepMind In our new paper, in Nature Neuroscience, we apply a neuroscience lens to a longstanding mathematical theory from machine learning to provide new insights into the nature of learning and memory. Specifically, we propose that the area of the brain known as the hippocampus offers a unique solution to this problem by compactly summarising future events using what we call a “predictive map.” PyTorch tutorial distilled — Towards Data Science — Medium In this post, I will cover some basic principles as some advanced stuff at the PyTorch. WaveNet launches in the Google Assistant | DeepMind Twelve months ago we published details of WaveNet, a deep neural network for generating raw audio waveforms that was capable of producing more realistic-sounding speech than existing text-to-speech techniques. We have now updated the model so that it is faster, higher quality and able to run at Google scale. As of today, we are proud to announce that this new model is being used to generate the Google Assistant voices for US English and Japanese across all platforms. 
A Brief Survey of Deep Reinforcement Learning GitHub — huggingface/torchMoji: A pyTorch implementation of the DeepMoji model: state-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc torchMoji — A pyTorch implementation of the DeepMoji model: state-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc GitHub — apple/coremltools: Converter tools for Core ML. coremltools — Converter tools for Core ML. Live Anomaly Detection compute tier. Detection and filtering of anomalies in live data is of paramount importance for robust decision making. To this end, in this talk we share techn… Deep Learning with R | DataScience+ For R users, there hasn’t been a production grade solution for deep learning (sorry MXNET). This post introduces the Keras interface for R and how it can be used to Sentiment analysis with Apache MXNet — O’Reilly Media Using deep neural networks to make sense of unstructured text. Discovering similarities across my Spotify music using data, clustering and visualization With the help of Spotify’s audio features, unsupervised learning, and visualization techniques, I delved into my music to find similarities between it. Numpy VS Tensorflow: speed on Matrix calculations — Towards Data Science — Medium In this post I wanna share my experience in matrix calculations. At the end of the post will become more clear which of the two libraries has to be used for calculations which do not require hours of… The Beginner’s Guide to Text Vectorization | MonkeyLearn Blog TECHNOLOGY Gartner’s top 10 technology trends for 2018 — SD Times Software Development News A simplified guide to gRPC in Python — Engineering@Semantics3 Google’s gRPC provides a framework for implementing RPC (Remote Procedure Call) workflows. By layering on top of HTTP/2 and using protocol buffers, gRPC promises a lot of benefits over conventional… Building a notebook platform for 100,000 users — Scott Sanderson (Quantopian) — YouTube Scott Sanderson describes the architecture of the Quantopian Research Platform, a Jupyter Notebook deployment serving a community of over 100,000 users, expl… Vis Academy Classes and tutorials from the Uber Visualization team Process Tutorial Series: Explorr — Part One (User Persona & Wireframes) So what is this you may ask? How is this any different to the countless text/image/video based tutorials that are readily available? Ok. So we’ve all absorbed a tutorial in some shape or form… The Open Home Lab Stack — Hacker Noon This is an article I’ve put together to create an Open source Home Network stack using various technologies which are mostly free however all have paid subscriptions as well. Before I start with this… Create Powerpoint presentations from R with the OfficeR package | R-bloggers For many of us data scientists, whatever the tools we use to conduct research or perform an analysis, our superiors are going to want the results as a Microsoft blogdown: Creating Websites with R Markdown A guide to creating websites with R Markdown and the R package blogdown. Google launches Cloud Firestore, a new document database for app developers | TechCrunch Google today launched a new database service for Firebase, its platform for app developers. The new Firestore database complements the existing Firebase.. From design patterns to category theory A crash course on Serverless with Node.js — Hacker Noon Regardless of your developer background, it’s inevitable you’ve heard the term Serverless in the past year. 
The word has been buzzing around in my ears for longer than I dare say. For too long have I… Foundations of streaming SQL [Strata NYC 2017] — Google Slides DATAVIX/UI/VIZ Priming and Anchoring Effects in Visualization Timelines Revisited Designing with AI — Elegant Tools — Medium Behind the scenes, AI helps make Facebook smarter and easier to use. We use it to help translate text so people can understand each other better, to recognize what’s in images so visually impaired… Revealed: the IIB Awards 2017 Longlist — Information is Beautiful Awards Announcing the 2017 Kantar Information is Beautiful Awards longlist. Journey Mapping is Key to Gaining Empathy — UX Planet This article is for people who already have a basic understanding of journey mapping, but if you are totally new to the tool and how to create one, UX Lady has a great example where she uses the… An Introduction to Interaction Flows — UX Planet As UX Designers, we put care and critical thought in our projects. Every interaction is planted with a specific purpose for the user. This is why it makes it that much more painful for us when the… The best Slack groups for UX designers — uxdesign.cc Tons of companies are using Slack to organize and facilitate how their employees communicate on a daily basis. Slack has now more than 5 million daily active users and more than 60,000 teams around… Modeling Color Difference for Visualization Design The why and how of effective design critiques — uxdesign.cc Critiques are a time-proven way of pushing design ideas forward. Art and design schools have used them as key teaching venues for decades. And while common in corporate teams, I suspect they’re often… Story Curves Exploring Histograms An interactive essay on the joys and pitfalls of histograms VR-AR Graphmented — Product Hunt Graphmented — Create stunning charts using AR (Patent Pending).. (iPhone, Productivity, and Analytics) Read the opinion of 24 influencers. Discover 3 alternatives like FnordMetric and AnyChart AMC Theaters Invests $20 Million In Virtual Reality Development AMC will help launch VR experiences in locations including its theaters. How VR Saves Lives In The OR Virtual Reality technology is going to train surgeons, improve patient outcomes, save countless lives and billions of dollars. Burberry Turns to Apple for Augmented-Reality Fashion App — Bloomberg Burberry Group Plc is the latest luxury brand to experiment with augmented reality, adding technology from Apple Inc. to its smartphone app as fashion retailers race to find new ways to engage with big spenders online. Virtual reality and augmented reality are the future of digital advertising — Quartz Ever met somebody who professed an abiding love for pop-up ads? What about autoplay videos — the ones squirreled away in some undiscoverable corner of a website, singing jiggles at you while you scamper to find the pause button? Yeah, me neither. We live in times of great division, but when it comes to modern marketing techniques,… https://xkcd.com/539/
Subirority Complex — Issue #4
0
subirority-complex-issue-4-14471176ab12
2018-05-31
2018-05-31 05:01:54
https://medium.com/s/story/subirority-complex-issue-4-14471176ab12
false
1,605
Change gears with Clutch.AI: the best in enterprise AI
null
clutchai
null
Clutch.AI
clutch-ai
AI,STARTUP,MACHINE LEARNING,DEEP LEARNING,PREDICTIVE ANALYTICS
clutch_ai
Machine Learning
machine-learning
Machine Learning
51,320
Clutch.AI
4 tools & 1 service that will change the way AI is used across industries, from #fintech & #sales to #insurance & beyond! Incubated @KhoslaLabs #AI #ML #startup
9f35ffb4ed61
clutch_ai
35
52
20,181,104
null
null
null
null
null
null
0
null
0
a0cad8bd07a5
2018-09-12
2018-09-12 08:10:18
2018-09-12
2018-09-12 08:14:01
1
false
en
2018-09-12
2018-09-12 08:14:01
0
144992bd6523
0.913208
1
0
0
Fetch are delighted to announce the arrival of our newest member of staff, Tom Nicholson. Tom will be assisting us as Machine Learning…
1
Machine learning specialist Tom Nicholson joins Fetch.AI Fetch are delighted to announce the arrival of our newest member of staff, Tom Nicholson. Tom will be assisting us as a Machine Learning Researcher. After becoming fascinated by artificial intelligence as a child, Tom studied Computer Science at St Andrews, taking minors in Maths and Psychology. During his degree, he focused on the boundaries of AI and Human-Computer Interaction, and his work in this field was later published. Tom also holds an MPhil in Machine Learning from Cambridge University. While studying for his master's, he specialised in reinforcement learning and probabilistic modelling. Since graduating, Tom has worked in AI-focused start-ups, where he has applied knowledge of deep learning, probabilistic modelling and reinforcement learning in areas such as natural language processing and automated trading. Tom has retained an interest in the latest developments in academia while remaining focussed on the technology’s practical applications. He is always seeking to develop novel ways to design, build and deploy innovative AI solutions to real-world problems. This desire is shared by everyone at Fetch. We look forward to taking the next steps together.
Machine learning specialist Tom Nicholson joins Fetch.AI
1
machine-learning-specialist-tom-nicholson-joins-fetch-ai-144992bd6523
2018-09-12
2018-09-12 08:14:02
https://medium.com/s/story/machine-learning-specialist-tom-nicholson-joins-fetch-ai-144992bd6523
false
189
AI and digital economics company
null
fetchaiplatform
null
Fetch.AI
fetch-ai
BLOCKCHAIN TECHNOLOGY,CRYPTO,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,ECONOMICS
fetch_ai
Machine Learning
machine-learning
Machine Learning
51,320
Chris Atkin
Digital Marketing Coordinator at Fetch.AI
dbeb6bfbd22
chris.atkin
12
1
20,181,104
null
null
null
null
null
null
0
null
0
61d8f53e661f
2018-08-15
2018-08-15 18:54:42
2018-09-10
2018-09-10 17:41:01
5
false
en
2018-09-10
2018-09-10 23:48:42
5
144b4b4c8b9a
3.184277
8
0
0
A former Redditer shares how millions of anonymous comments and users show our collective human preferences
5
Alexis Ohanian and Serena Williams. Reddit has decentralized content distribution in an unexpected way. Reddit is a “Crowdsourced Relevancy Engine” A former Redditer shares how millions of anonymous comments and users show our collective human preferences Luis Bitencourt-Emilio dropped out of Machine Learning, and formal education, in 2004. ML had an abysmal 40% accuracy rate back then. He came back with a learning vengeance to build a relevancy engine — the one Reddit uses to determine what exactly is relevant to humans these days. I sat through his talk at Big Data Day LA last month. He kept a steady pace, telling his story of learning alongside machines. His recent run at Reddit was the main draw of the talk. What he learned there was only possible from his own history of learning — and commanding machine platforms to do the same. If you want to go deeper — here’s the full slide deck and presentation from Big Data Day 2018 Source: A 90’s software wizard analyst At Microsoft, they (Luis and his machines) sorted feedback. They used sentiment analysis to identify negative responses to improve products like Office. Think of direct feedback instead of the dreaded paperclip. Simple visuals gave a clear view of user experience. Red=bad ; Green=good At WorkPop, they learned how to sort “uniqueness to a job.” The monster flows of applicants and resumes needed machine learning to create a new way to rank candidates. They already knew the old, chronological way meant nothing. Applying early or recently has nothing to do with how you fit. At Reddit, the Recommended tab gave them a starting stream of data to see what users like in real time. At first, they used up and down votes to link users, then subtracted a global mean (the average of what everybody prefers). Data sparseness was the main issue preventing further learning. When Subreddits were added, they offered diversity, further categories and labels for content. The engineers of Reddit concluded: We have built a Crowdsourced Relevancy Engine It used 10TB of training data, 50M features, and 5M parameters. The engine had to be improved: Machine Learning makes the engine personalizable. Let’s add dashboards and visualizations to make refined data front and center, so we can each see relevance clearly. Focus: Remove default subreddits, cluster similar subreddits Use Natural Language Processing to decide what is a subreddit, understand and filter all the quips and unclear human language. “Subreddit Algebra” = themes can be added or subtracted to form hybrid subreddits to recommend They created Reddit Cartographers — a mixture of librarians and data analysts. They developed the “Reddit RabbitHole” — a place where your time disappears in an endless trail of clicks. They learned how to connect posts, and deliver post-to-post recommendations without a user being in the same category or Subreddit. The machines changed how they learned. The iteration of the architecture started with searching text, then moved on to processing language, and finally to Deep Learning with Images. Relevancy is a delicate subject. A relevancy engine lets us discover things we likely care about — or didn’t know we cared about. It’s a tool. It can easily go wrong when machines do all the learning and repeat the same assumptions. YouTube has been widely mocked for its Recommended Videos failure and irrelevance, going from kids’ cartoons to fetish videos in a few clicks. A solid, customizable relevancy engine needs you to learn.
Engineers and initial co-learners like Luis get the process started — and create Rabbit Holes to imitate long schooldays for their learning machines. All this learning though, what’s the point? Maybe there’s a meta-Subreddit course I can take next time I visit the online University of Reddit. Interested in bold and honest reactions to future trends? Join Inevitable / Human where QuHarrison Terry prompts active writers like me for realtalk on tech and trends that matter. The future is first written in communities like Inevitable / Human.
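As an illustration of the vote-centering and "Subreddit Algebra" ideas described above, here is a toy sketch in Python. It is entirely my own construction, not Reddit's code: the vote matrix, the subreddit embedding vectors, and the helper functions (cosine, nearest) are invented for demonstration, under the assumption that subreddits can be represented as vectors learned from voting behaviour.

```python
import numpy as np

# Fake user-by-subreddit vote matrix: +1 upvote, -1 downvote, 0 no interaction.
votes = np.array([
    [ 1,  1,  0, -1],
    [ 1,  0,  1,  0],
    [-1,  1,  0,  1],
], dtype=float)

# Subtract the global mean of observed votes, so each score measures preference
# relative to "what everybody prefers"; this is what a recommender would factorize.
global_mean = votes[votes != 0].mean()
centered = np.where(votes != 0, votes - global_mean, 0.0)

# Hypothetical embedding vectors for subreddits (e.g. learned from such votes).
subs = {
    "r/cooking":   np.array([0.9, 0.1, 0.0]),
    "r/baking":    np.array([0.8, 0.2, 0.1]),
    "r/chemistry": np.array([0.2, 0.8, 0.3]),
    "r/science":   np.array([0.1, 0.9, 0.2]),
    "r/gaming":    np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query, exclude=()):
    """Return the subreddit whose embedding is most similar to the query vector."""
    return max((s for s in subs if s not in exclude),
               key=lambda s: cosine(subs[s], query))

# "Subreddit algebra": cooking - baking + chemistry lands nearest to science in
# this made-up data, the kind of hybrid recommendation described in the talk.
query = subs["r/cooking"] - subs["r/baking"] + subs["r/chemistry"]
print(nearest(query, exclude=("r/cooking", "r/baking", "r/chemistry")))
print(centered)  # the mean-centered preferences a recommender would work from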
Reddit is a “Crowdsourced Relevancy Engine”
125
reddit-is-a-crowdsourced-relevancy-engine-144b4b4c8b9a
2018-09-11
2018-09-11 18:38:49
https://medium.com/s/story/reddit-is-a-crowdsourced-relevancy-engine-144b4b4c8b9a
false
623
Futurism articles bent on cultivating an awareness of exponential technologies while exploring the 4th industrial revolution.
null
null
null
FutureSin
null
futuresin
TECHNOLOGY,FUTURE,CRYPTOCURRENCY,BLOCKCHAIN,SOCIETY
FuturesSin
Machine Learning
machine-learning
Machine Learning
51,320
Travis Kellerman
If I listen carefully, a collective future whispers — and it sounds a little crazy. @traviskellerman
1209a2e54db0
traviskellerman
623
254
20,181,104
null
null
null
null
null
null
0
null
0
a49517e4c30b
2017-10-25
2017-10-25 17:29:53
2017-10-25
2017-10-25 20:36:16
2
false
en
2017-11-30
2017-11-30 18:10:52
1
144d2e156db5
2.677673
0
0
0
In the movie Steve Jobs, Michael Fassbender who plays Jobs, draws an ideological split with Seth Rogen’s Wozniak, over opposing…
5
So much to choose from The Vendor Bias In the movie Steve Jobs, Michael Fassbender, who plays Jobs, draws an ideological split with Seth Rogen’s Wozniak over opposing perspectives on the democratization of computing. Jobs wanted closed, end-to-end control while Woz wanted it open. We all know how it went down when we buy a $70 Thunderbolt cable today. In the consumer space it is easier to be arrogant and control the fate of your business by deciding to keep it closed. That is the reason why a lot of the apps and devices you use today don’t play well with other apps and devices or don’t have 100% parity on a competitor’s platform. In the business space the luxury of a closed system is quickly waning as providers are realizing that open is the way to go. The Oil and Gas industry is a different story. The software side of the business is controlled by three primary players — Schlumberger, Halliburton, Baker Hughes (now GE). Then there are slightly smaller players like Petroleum Experts, IHS, Kappa and so on. But a large fraction of the market is controlled by companies that can be counted on your fingers. This isn’t great for the customers in a few ways — Due to the price slump of oil everyone is keeping a close eye on their cash flow. But even though there are better, cheaper alternatives out there for the existing software, the organizations are unable to switch to a more reasonable option because replacement cost is high and the learning curve is steep. The replacement cost is high because none of the software plays well with another vendor’s software, so replacement will mean completely yanking out the existing infrastructure and putting in a new one. It’s quite disruptive and no one likes that. The learning curve is steep because Oil and Gas software is HARD. Companies spend thousands of dollars every year on getting their employees trained on a software package. So even if something cheaper comes along they do nothing because they don’t want to incur an additional cost of training on the new platform. Oil companies have various assets and within each asset there are various disciplines. Each discipline prefers its own vendors. Schlumberger’s Geoscience stack is great; so is Halliburton’s Production stack. So the asset decides to purchase from these two vendors, but now the Production Engineers and Geoscientists become siloed because, surprise surprise, these two software stacks don’t talk to each other. Just as oil companies have discipline silos, the service companies have product-line silos. And so going with one vendor doesn’t ensure seamless integration because even their own products don’t play well with each other. So what can the oil companies do? And what can the service companies do? Oil companies can select smaller vendors who are more flexible and nimble in delivering openness features and dilute their loyalties to one big player. This puts pressure on the service companies to deliver and open their ecosystem to these smaller vendors and their own size-alike peers. And the service companies need to think like Woz and not like Jobs. Corporations don’t like dealing with monoliths, so creating one isn’t helpful. Be open and flourish. We have created Nesh, a smart assistant for Oil and Gas, to be open so that she can connect to and run existing data sources and applications using a conversational interface. And also, since no one ever took a training course in texting, learning to use Nesh comes at zero cost.
Plus she is fluent in Oilfield Science and Workflows that any organization can use and benefit from out-of-the-box.
The Vendor Bias
0
the-vendor-bias-144d2e156db5
2018-04-01
2018-04-01 04:13:44
https://medium.com/s/story/the-vendor-bias-144d2e156db5
false
608
Best place to learn about Chatbots. We share the latest Bot News, Info, AI & NLP, Tools, Tutorials & More.
chatbotslife.com
ChatBotsLife
null
Chatbots Life
a-chatbots-life
CHATBOTS,BOTS,ARTIFICIAL INTELLIGENCE,CONVERSATIONAL UI,MESSAGING
Chatbots_Life
Oil And Gas
oil-and-gas
Oil And Gas
1,309
The Nesh Hooman
Creator of the Smart Assistant for Oil and Gas
ae7cf4507cb0
hellonesh
4
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-04
2017-10-04 19:47:10
2017-10-04
2017-10-04 19:55:19
1
false
en
2017-10-06
2017-10-06 03:29:45
2
144df6fa6a79
5.701887
0
0
0
There are 240 million calls made to 911 each year. Of these 240 million calls, it’s estimated that around 50% of those are misdialed. For…
5
Triage: How We Can Change 911 and Save Lives There are 240 million calls made to 911 each year. Of these 240 million calls, it’s estimated that around 50% are misdialed. For over 5 decades, since the inception of 911 in the 1960s, dispatchers have been answering calls in the chronological order that they’ve come to the Public Safety Answering Point (PSAP). Finally, thanks to advancements in location and video streaming technology, we can break away from the ineffective and resource-wasting First In First Out paradigm that controls the public safety industry. The First In First Out paradigm makes sense in theory. Each call that comes into a 911 PSAP must be answered in the order that it is made and each emergency carries equal weight. If this were a call center for tech support, the paradigm would fit. All customers, no matter where they live or what they do, are alike and of identical importance to the company. However, public safety is a far cry from tech support and it’s time to expand one of the most important elements of emergency response, triage, into how we contact 911. When you walk, roll, hobble, or are carried into an emergency room, half a dozen things happen without you realizing it. Within 30 seconds you are assessed by an experienced doctor or nurse and assigned to one of 4 categories/colors (depending on the country). These categories are: Minimal/Green — Walking wounded/uninjured. These are patients that can function freely without assistance and answer questions coherently and accurately. In an emergency situation, they can assist medical staff in helping more severe cases and can be attended to later. Injured/Yellow — Moderately wounded. These patients have cuts and lacerations and are in need of some minor form of medical attention (e.g. dressing of wounds) before being upgraded to Minimal/Green. Immediate/Red — Severely injured. These patients require immediate care and will not survive if not seen. Frequently, these cases involve a severe head injury, multiple fractures to the skeletal system, fractures to a delicate area such as the spine, an inability to answer simple questions, or unresponsiveness with vital signs still present. Immediate aid is applied, often in the staging area/emergency room, to stop heavy bleeding and ensure the airway is clear. Deceased/Black — Died. These patients have died. In a low-casualty event CPR will be initiated to try to revive vital signs, but in a mass-casualty event, they will simply be moved onto their side with their mouth open. The first responder will move to the next victim and bodies will be collected once all surviving individuals have been removed. The triage process (from the French for ‘to sort’) originated in the Napoleonic Wars and WW1 as battlefield aid stations were overrun with hundreds of patients at any one time. Today, it has been codified into a procedure that saves lives on a daily basis. The triage process exists because not all wounds are equal. Someone who has been shot is in need of more immediate attention than someone with a sprained ankle. By being able to assess and triage quickly, hospitals are maximizing their limited resources to save the most lives. The question is, why has this necessary process not been extended to other elements of public safety, namely dispatch? “A picture,” the old adage goes, “is worth a thousand words.” At Carbyne, we have a different saying: “a picture is worth a 50% drop in dispatch time.” Call Takers are the very first contact that the public have with first responders.
They are simultaneously able to assess incoming calls, glean valuable information from distressed individuals, and dispatch first responders, all while trying to reassure callers and keep them calm. Many Call Takers have also had to perform over-the-phone CPR and deliver babies if ambulances are unable to reach people in time. Those who have been Call Takers for many years find themselves developing a near ‘sixth sense’ for identifying callers who are in distress, intoxicated, or unable to speak. 911 Call Takers are, essentially, assessors, medics, therapists, and navigators all rolled into one. Today, Call Takers are overburdened with more calls than they can deal with, an aging infrastructure that has not kept up with modern technology, and a citizenry that is abandoning wired lines for wireless. Because of this, a new call paradigm is needed and we have finally reached a point, technologically speaking, where it can be implemented with ease. The most important aspect of 911 triage is the ability to visually assess the severity of a situation. For this, video calling must come to the PSAP. Seeing Is Believing “A picture,” the old adage goes, “is worth a thousand words.” At Carbyne, we have a different saying: “a picture is worth a 50% drop in dispatch time.” Carbyne has deployed our Public Safety Ecosystem (PSES) in countries across the world, bringing video, instant location, call prioritization, and texting to different PSAPs. As the number of PSAPs using our ecosystem has increased, response times have dropped. Video changes everything. Whereas once you had to answer the call, request a location, assess the situation by asking a number of validating questions, and eventually dispatch first responders, you are now able to seamlessly triage calls. Our C2i, shown below, effortlessly displays incoming video calls in a scalable display. As the fields populate with calls, supervisors are able to sort the calls by severity. Because the video displays across the C2i instantly, supervisors can make quick determinations about the severity of a particular event and send it to a dispatcher if necessary. Should a supervisor see something serious, for instance, a car crash or a gunshot victim, they can prioritize that caller over someone waiting patiently. Carbyne’s C2i displays several incoming video calls, allowing PSAP supervisors to prioritize calls based on severity. Because of the high number of ‘misdials’ to 911 (also known as ‘pocket dials’), telephone triage means that supervisors can identify and downgrade misdialed 911 calls quickly. The ability to dismiss these calls means that dispatchers no longer have to spend anywhere from 30 seconds to two minutes qualifying the call as a misdial and redialing the number to speak to the caller. The addition of instant location, prominently displayed on both our C2i and Call Taker screens, allows dispatchers to see exactly where the person is calling from. Utilizing device-based location, Wi-Fi, Bluetooth, and other elements within a smartphone, our PSES can determine locations to within a 3-foot radius. Our unique x,y,z axis positioning means that we can locate someone based on elevation. For instance, the ecosystem is able to determine not only the floor that someone is calling from but also the room. Our roll-out in countries around the world has shown that the way Call Takers answer phones has shifted with the arrival of video.
Whereas they used to answer calls with validating questions including: ● Where are you located? ● What is going on? ● Are there any injuries? ● Are there any weapons? Now, with video and instant location, the Call Takers are simply answering the phone like this: ● Hi, Michael, I see there’s a car crash at 7th Avenue and 30th St, is that correct? The PSES dramatically reduces the time to dispatch because Call Takers have greater situational awareness. Often, they can begin the process of sending first responders before they’ve even picked up the phone. While some qualifying questions are still needed, they can often be rephrased in a way that provides comfort to the caller. Because the Call Taker is presented with the caller’s information, including name, photo, and any allergies or disabilities, they can make a more personalized phone introduction. For people in a high-stress environment, as most are when they call 911, it helps to have a comforting voice on the other end of the line who knows your name. The Carbyne PSES has brought down time to dispatch, delivered greater contextual awareness, and saved lives, money, and resources for emergency services wherever it has been deployed. The new call paradigm of conducting telephone triage works because Call Takers have more information at their fingertips. It’s only thanks to this perfect storm of technologies (smartphones, IP communications, larger data networks, and video streaming) that we can witness the next evolutionary step in the way that emergency services handle calls. Prioritization and triage save lives in the hospital, and they can save even more lives in the dispatch center. By performing triage at the initial point of contact with 911, it’s possible to avoid a trip to the hospital in the first place. The following article was originally published by Carbyne CEO, Amir Elichai, on American Security Today. Reprinted with the permission of American Security Today.
Triage: How We Can Change 911 and Save Lives
0
triage-how-we-can-change-911-and-save-lives-144df6fa6a79
2018-05-22
2018-05-22 19:18:31
https://medium.com/s/story/triage-how-we-can-change-911-and-save-lives-144df6fa6a79
false
1,458
null
null
null
null
null
null
null
null
null
Homeland Security
homeland-security
Homeland Security
1,223
Carbyne911
Carbyne connects you to emergency services. We exist to save lives. http://www.carbyne-hls.com
48fa0203397a
carbyne911
17
11
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-28
2017-12-28 18:23:27
2017-12-28
2017-12-28 18:26:50
0
false
en
2017-12-28
2017-12-28 18:26:50
8
144ede5449e0
3.449057
1
0
0
2017 was a fabulous year for Deep Learning with Dr. Geoffrey Hinton’s new paper on CapsuleNet, IBM Watson’s new record on speech…
4
How to Master Deep Learning? 2017 was a fabulous year for Deep Learning with Dr. Geoffrey Hinton’s new paper on CapsuleNet, IBM Watson’s new record on speech recognition, Andrew Ng’s new AI company launch to transform the manufacturing industry, and much more. Deep Learning and Machine Learning are spreading like wildfire and are transforming almost all areas of research with their exceptional ability to identify patterns. If you are a researcher or someone who is planning for a PhD in STEM, not having machine learning skill sets at your side is going to put you at a disadvantage. Now, with all the resources in the world at your fingertips, the real question is, where should you start? But before jumping into the arena, I have three pieces of advice for you, which will make your journey of mastering deep learning easier and less frustrating. Advice 1: Rome was not built in a day Remember the time you learned how to ride a bicycle or the time your siblings learned how to walk? You didn’t ride the bike perfectly on the first attempt, nor did your siblings run a marathon. Learning takes time and you need to be patient. It is very important to not get frustrated when your first model doesn’t work or when you don’t understand a concept. Advice 2: No Free Lunch Theorem I think this is ‘the’ most frequently given advice in machine learning courses and textbooks. The idea is simple — don’t expect one model to fit all your machine learning problems. For newcomers this might be difficult to digest, as they have been told, mostly by overhyped social media and blogs, that this is ‘Artificial Intelligence’, and the layman always mistakes AI for Artificial General Intelligence and expects one model to fit them all. So, as per the No Free Lunch Theorem, if one model didn’t work out, move on and try another one. Advice 3: Don’t reinvent the wheel It’s 2018 and you don’t have to build each algorithm from scratch as we did in the early days of AI. Now you can make use of high-level APIs from frameworks such as scikit-learn and TensorFlow. At the beginning of your learning phase, the focus should be to see a running model of an algorithm or a method you learned, with minimal coding. The Path to Mastery Machine Learning by Andrew Ng (Coursera) — I don’t think there is any better place to start than Andrew Ng’s course on Machine Learning. The lectures are simple and cover almost all important topics in the field. If you are someone who is not used to MATLAB programming, it would be a good idea to skip the programming assignments, because in the real world we mostly use Python and R. By skipping these assignments you are relieved from the frustrating hours of code debugging. Here your focus is understanding the basic concepts and not coding. Deep Learning Specialization by Andrew Ng (Coursera) — Andrew Ng is an amazing professor and he is good at explaining complex concepts in a simple manner. Though the interview videos with deep learning legends such as Geoffrey Hinton, Yoshua Bengio and Ian Goodfellow are optional in the course, I strongly recommend watching them. You’ll get new ideas and get to know what the leading research labs in AI are up to. In this stage of your training you should start to get your hands dirty with coding. The courses in the Deep Learning Specialization have Python notebooks with good documentation and support, so that you are only required to write a few lines of code to see your models in action.
Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron (Textbook): By this time you should be familiar with almost all concepts and practices in Machine Learning. The focus of this stage is to strengthen your Machine Learning/Deep Learning programming skills. In almost all online courses you are given a clean dataset to start with, but in real life that’s not the case. In the Hands-On Machine Learning book the author describes various data pipelines and walks you through the steps involved in getting datasets ready for modeling. With this text as a reference you should write every line of code discussed in the book. This textbook will prepare you to become an end-to-end Machine Learning engineer. TensorFlow for Deep Learning Research (Stanford Online): While skimming through the slides and notes provided on this website you should feel confident about the concepts and code discussed. It would also be a good idea to try out the assignments in the syllabus. The focus of this phase is to build your confidence as a Machine Learning/Deep Learning engineer. Data Science from Scratch by Joel Grus (Textbook): In most cases you should be able to get away with high-level APIs and are not required to build everything from the ground up. However, if you have been challenged to do it, don’t worry: have this book handy and take up the challenge. Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville (Textbook): This is a must-have textbook in your home library if you are planning to become a Deep Learning researcher. By this time you should be fine with the concepts discussed in Part I and Part II of this textbook. So skim through the first two parts and start reading Part III in detail. Happy Modeling! This post was originally published in athulinks.com
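As a small illustration of Advice 3 above (use high-level APIs rather than re-implementing every algorithm), here is a hedged sketch in Python using scikit-learn. The dataset and model choice are mine, picked only to show how few lines a complete train-and-evaluate loop takes; it is not code from the post.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# A complete workflow with a high-level API: load data, split, fit, evaluate.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # no hand-written gradients needed
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Swapping in another estimator is a one-line change, which is exactly why seeing running models first, and saving from-scratch implementations for later, keeps the early learning phase from turning into debugging sessions.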
How to Master Deep Learning?
1
how-to-master-deep-learning-144ede5449e0
2018-06-21
2018-06-21 12:41:34
https://medium.com/s/story/how-to-master-deep-learning-144ede5449e0
false
914
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Athul Sudheesh
Cognitive & Computational Neuroscientist | Machine Learning Practitioner | https://athulsudheesh.com
d61d49273316
AthulSudheesh
18
26
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-02
2017-10-02 18:25:19
2017-10-02
2017-10-02 18:31:45
0
false
en
2017-10-03
2017-10-03 13:23:16
3
144f11f8efee
3.845283
1
0
0
Ubiquitous sensors fused with large data sets and artificial intelligence doing deep learning will be here for leaders sooner than you…
5
What New Technologies Will Really Mean for Leaders and Leadership Development Ubiquitous sensors fused with large data sets and artificial intelligence doing deep learning will be here for leaders sooner than you think. There are already quite a few examples of A.I. offering advice to highly skilled professionals in fields like health, finance, and mechanical maintenance — we have all seen the IBM Watson and GE commercials by now. Technology will provide signals or triggers, context, and advice to leaders. To do this technology will: · Enhance a leader’s understanding of context: where they are, who else is there, what is going on now with those people and what, historically, has gone on. · Provide ubiquitous sensors both for what’s happening inside the leader and for what’s happening to those around them with whom they are interacting in either multi-lateral or unilateral interactions. · Provide advice, highly tailored to the individual and the context, both of which it will understand intimately — and it will do it at the moment it is needed. The context technology provides will include, for example: the emotional state of those with whom the leader is interacting; their cognitive condition — are they thinking well at this moment; how what the leader is doing is being received and understood by those around them; the history of interactions between the leader and those around them — positive and useful or something less so; and the collective condition of the entire organization — its mood at the moment and its culture more generally. Sensors will evaluate the biometric condition of the leader and those around them, sensors will evaluate the qualities of the leader’s voice and word choice, and sensors will also evaluate non-verbal communications such as facial expressions, using all of these to make predictions useful for giving advice. A.I. fused with sensor data, context information, and large data sets will provide advice that improves over time as the data set grows, sensors improve, and the dimensions of “context” are better understood. A.I. will continuously evaluate the advice it has given and will make adjustments and, because it is driven by algorithms, it will be much more evidence-based and less susceptible to idiosyncratic bias. Just see Ray Dalio’s most recent TED Talk to understand the power of algorithm-based advice[i]. This will shift what leaders will need to “know” and make the experience of a more seasoned leader available to relative novices. Crossing this particular technology horizon will widen the talent pool and equalize to some extent the value of younger, less experienced staff with that of older, more experienced ones, provided of course that technology-enabled leaders are properly selected and developed for the preferences and abilities they will need to make the best use of the technology. What will be needed is a more abstract understanding of leadership. Leaders and those who aspire to leadership will need to understand what type of leadership challenge they have, which tool is best used for that challenge, what the tool does well, and what it does poorly. It will be less tactical because tactical advice will be provided in real time via technology. This will affect the leader selection process and the leader development approach.
The selection process for and development of leaders will emphasize the need for rising leaders who can develop a deeper understanding of leadership frameworks and models, in the same way that architects and engineers understand the theory of design and construction at a level the building project manager, carpenters, plumbers, and electricians do not need to understand. Basically, leaders will have to know the “what” and “why” because technology will provide the “how”. This will favor those with preferences for and abilities in abstract and strategic thinking, as well as the need to develop those abilities to the highest degree possible. The selection and development of leaders will shift as well to place an even greater emphasis on self-awareness so that they recognize signals, triggers, and internal emotional responses as well as the signals they are observing or receiving from those around them. While technology will provide signals regarding the emotional and cognitive state of the leader and those around them, the management of their own emotional or cognitive state will still be a skill they will need to develop via practice and more practice. Those who, by the luck of the draw or through previous effort and experience, already have a high degree of emotional and cognitive self-regulation will, however, have an advantage. Because technology will help the leader understand the context of their leadership in the very act of leading, leaders will need to be able to understand system interactions and dynamics rapidly. The selection and development of leaders will need to emphasize and address preferences and abilities for seeing systems, seeing relationships among elements, and seeing them in motion. Because technology will not likely be perfect, leaders will still need to be “in the loop” to validate what the technology is telling them and modify advice so that it is authentic when it comes from them. This will emphasize skills like improvisation, creativity, and the abstraction of principles — the selection and development of leaders will need to accommodate this. The technology, particularly in the early implementations, will likely provide a lot of data in real time, and for this reason leaders will need to be able to make sense of, or ‘fuse’, multiple data streams simultaneously for themselves, not unlike pilots in the cockpit of a fighter jet or traders using electronic trading systems. As you can see, the coming technological transformation of leadership will shift the leadership preferences and abilities required in ways that will force changes in the selection and development of leaders — some preferences and abilities may be previously unrecognized and undeveloped while others will need greater emphasis. The potential for radically diminishing the advantage of experience in leading, creating greater equality among novice and seasoned leaders, has tremendous implications for who gets selected, when they get selected, and how they are developed. [i] https://www.ted.com/talks/ray_dalio_how_to_build_a_company_where_the_best_ideas_win
What New Technologies Will Really Mean for Leaders and Leadership Development
1
what-new-technologies-will-really-mean-for-leaders-and-leadership-development-144f11f8efee
2018-03-02
2018-03-02 01:10:12
https://medium.com/s/story/what-new-technologies-will-really-mean-for-leaders-and-leadership-development-144f11f8efee
false
1,019
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jerry Abrams
The path less traveled …
1fcb45ab7708
JerryAbrams
1
4
20,181,104
null
null
null
null
null
null
0
null
0
35ef2c70d7cc
2017-10-26
2017-10-26 00:17:09
2017-10-26
2017-10-26 00:20:49
1
false
en
2017-10-26
2017-10-26 17:52:29
0
144faf4ce0bf
3.339623
1
0
0
Italian tenor Andrea Bocelli, accompanied by the Lucca Philharmonic Orchestra, delivers another pitch-perfect performance of one of the…
5
Being human in an age of machines Italian tenor Andrea Bocelli, accompanied by the Lucca Philharmonic Orchestra, delivers another pitch-perfect performance of one of the best-known arias in opera, La Donna è mobile, from the opera Rigoletto. Nothing new there but for one thing: The conductor of the orchestra is a robot. Humanoid robot YuMi was doing something many would see as a uniquely human achievement — not just playing music but interpreting it. This is one more example of the advances being made by artificial intelligence (AI) and the ways it is reaching out to touch all areas of human existence. AI technology has the potential to direct self-driving cars, drive trains autonomously, diagnose diseases, carry out surgery from afar, run audits and help transform industry in myriad ways. It is going to be part of the world’s responses to a range of challenges, whether they be caring for ageing populations, feeding the hungry, extending medical care or combatting the impact of climate change. Yet we do not understand how some AI techniques work. And its ever-increasing use raises enormous moral, ethical, practical and political issues that we are only just beginning to address. “Whether it is good or bad is tricky. The first thing is that technology is amoral — it is not benevolent or malevolent. What we are concerned with is the knock-on effects, the future of jobs, cities, social production. In the final assessment, whether future technology is good or bad will be determined by our ability to respond and adapt to the challenges,” said Bernise Ang, Founder and Executive Director of Zeroth Labs, at the Women’s Forum Global Meeting 2017 in Paris. Making things better Technological advances over the past 100 years have brought huge gains in human productivity, living standards and health. They have put millions of children in schools and helped create public welfare systems. There is no reason why AI cannot bring similar benefits to human welfare. Examples of what is possible include helping disabled people to live fuller and more rewarding lives, and creating more time for leisure, for music, for art and for caring for others. “There are so many great things that we could do,” said Kimberly Lein-Mathisen, General Manager of Microsoft Norway. Photo credit: Women’s Forum/Sipa Press What emerged from discussions at the Women’s Forum Global Meeting 2017 was that AI should augment the work of humans and not simply compete with human labour. There also needs to be clear accountability for algorithmic programs and designs, to allow tracking and tracing of the way machines operate. “We need to think of it as humans and machines, not humans versus machines. It is about making humans far better at using technology, about uniting humanity and technology,” said Paul Daugherty, Chief Technology and Innovation Officer of Accenture. With the rise of AI, companies will need to make ever more complex decisions about how ‘human’ their businesses should be. “We cannot be mesmerized by technology and forget that the core of whatever we do is our people,” warned Alexis Herman, Board Member of Coca Cola Company. Impact on jobs Whether AI will create or cut jobs is not yet clear. People have different talents than machines. However, there seems little doubt that used well, AI should boost economic productivity — even if there is little sign yet of this at the macro-economic level. 
Governments will need to become more actively involved through policy initiatives if the hoped-for productivity gains are to be achieved. What is clear is that getting the most out of AI will require a human capital revolution. “We need to pay attention much more than in previous revolutions to the quality of human capital. The regions that are successful are those that can maximise human capital,” said Gilles Babinet, Chairman of Capital Dash and Chief Digital Champion of France. The problem is that many companies look at human capital as a cost rather than as a long-term investment. Employees are used and then jettisoned when their skills no longer match requirements. “We should not just be thinking of employing today and, when a role ends, replace them,” said Anne Richards, CEO of M&G Investments. “The gig economy is a red herring,” she added. “It is actually more about employers having cheap labour.” Continuous training and learning must be built into the productive process. There also needs to be a change in mind-set. Bringing in more women is a key issue because diversity is a trigger for disruptive innovation. Leadership teams need to be drawn from different disciplines and cultures, so that disruptive technology and policy evolve together, said Tim Brown, CEO of IDEO. “Is the purpose of business to serve people, or to build better machines? We need to think about some of these challenges. If we think that AI is here to serve, we need to do things differently,” Brown added. This story is drawn from sessions at the Women’s Forum Global Meeting 2017.
Being human in an age of machines
1
being-human-in-an-age-of-machines-144faf4ce0bf
2018-06-06
2018-06-06 22:27:15
https://medium.com/s/story/being-human-in-an-age-of-machines-144faf4ce0bf
false
832
Perspectives from the Women's Forum on the issues shaping a world in transition.
null
womensforum
null
Women's Forum for the Economy and Society
womens-forum-for-the-economy-and-society
WOMEN,GENDER,GENDER EQUALITY,DIVERSITY,INCLUSION
womens_forum
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Women's Forum for the Economy & Society
World’s leading platform featuring women’s voices on major social and economic issues.
c02f04a0a964
Womens_Forum
1,459
777
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-09
2018-03-09 07:53:42
2018-03-09
2018-03-09 08:07:44
1
false
en
2018-03-09
2018-03-09 08:07:44
0
144fafeeac18
1.128302
0
0
0
Cyber-attacks have become a significant problem in recent years. With hackers continually changing their tactics, and new malware being…
5
Artificial Intelligence: The Future of Cyber Security Cyber-attacks have become a significant problem in recent years. With hackers continually changing their tactics, and new malware being released on the internet every day, cybersecurity firms can only play catch-up. But is there anything that can be done to give cybersecurity firms the upper hand against cyber-attacks? Well, in fact, there is. The answer to most cybersecurity issues lies in artificial intelligence (AI). How can AI transform the cybersecurity industry? The application of AI systems in cybersecurity can serve as a turning point in the war against cyber-attacks. AI algorithms utilize machine learning (ML) to adapt to changing cyber-attack tactics. The machine learning capability helps in identifying and countering new attacks which would have otherwise been undetectable by traditional cybersecurity protocols. AI integration with cybersecurity protocols will reduce response time to cyber-attacks since it eliminates the need to reconfigure protocols with each attack. AI’s dynamic nature equips it with the ability to keep evolving with time and exposure to more cyber-attacks, saving significant amounts of staff-hours spent configuring conventional cybersecurity protocols after each new cyber-attack. Is AI the Future of Cybersecurity? Today’s cyber-attacks are the most sophisticated and impactful we have ever seen. When you couple the frequency of such attacks with a shrinking cybersecurity workforce, you will see that AI integration into cybersecurity is the only hope left in the fight against cyber-attacks.
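To make the idea of ML catching previously unseen attacks a little more concrete, here is a hedged sketch in Python of one common approach, anomaly detection: a model learns what normal traffic looks like and flags deviations, with no signature of the specific attack required. The features, numbers, and model choice are illustrative assumptions, not a description of any particular vendor's system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy feature vectors per connection: [bytes sent, duration (s), failed logins]
normal_traffic = np.column_stack([
    rng.normal(500, 50, 1000),   # typical payload sizes
    rng.normal(2.0, 0.5, 1000),  # typical connection durations
    rng.poisson(0.05, 1000),     # failed logins are rare on normal connections
])

# Learn the shape of "normal"; no attack signatures are involved.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A previously unseen pattern: huge payload, long-lived, many failed logins.
suspicious = np.array([[5000, 60.0, 12]])
print(detector.predict(suspicious))  # -1 means the point is flagged as an anomaly
```

Because the model describes normal behaviour rather than known attacks, it does not need to be reconfigured for each new attack variant, which is the time saving the article points to.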
Artificial Intelligence: The Future of Cyber Security
0
artificial-intelligence-the-future-of-cyber-security-144fafeeac18
2018-03-09
2018-03-09 08:07:45
https://medium.com/s/story/artificial-intelligence-the-future-of-cyber-security-144fafeeac18
false
246
null
null
null
null
null
null
null
null
null
Cybersecurity
cybersecurity
Cybersecurity
24,500
Bernard Gatheru M
null
57454e5bf6b4
bernardgatherum
21
21
20,181,104
null
null
null
null
null
null
0
null
0
f885229893fe
2018-02-12
2018-02-12 18:00:51
2018-02-14
2018-02-14 13:37:29
6
false
en
2018-02-14
2018-02-14 13:37:29
1
1450150717b7
3.025472
1
0
0
With my interest in the future path of Artificial Intelligence I decided to do my blog on a subset of the topic which is Deep Learning.
2
Deep Learning With my interest in the future path of Artificial Intelligence I decided to do my blog on a subset of the topic: Deep Learning. A Little History The idea of AI has its roots in the ancient Greek myths of Hephaestus’s golden robots and Pygmalion’s Galatea, and in the Middle Ages, when there were rumors of secret mystical means of placing mind into matter. This has also been displayed in works of fiction such as Mary Shelley’s Frankenstein or Karel Capek’s R.U.R. (Rossum’s Universal Robots). Over 70 years ago, on February 20, 1947, Alan Turing put forward his thoughts on “a machine that can learn from experience” as well as his questions on its impact on jobs and society. It seems that we’ve had a fascination with automating tasks for quite some time. Neural Networks Neural networks are inspired by our understanding of the biology of our brains — all those interconnections between the neurons. But, unlike a biological brain where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation. You might, for example, take an image and chop it up into a bunch of tiles that are fed into the first layer of the neural network. Individual neurons in the first layer process their tiles and then pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer, where the final output is produced. Neural networks function with layers, from the input layer to the output layer. “Training” adjusts weights and biases toward the output that is known to be correct. Each neuron assigns a weighting to its input — how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of a stop sign example. Attributes of a stop sign image are chopped up and “examined” by the neurons — its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network’s task is to conclude whether this is a stop sign or not. It comes up with a “probability vector,” really a highly educated guess, based on the weighting. In our example the system might be 86% confident the image is a stop sign, 7% confident it’s a speed limit sign, and 5% confident it’s a kite stuck in a tree, and so on — and during training the network is then told whether its guess was right or not. Why does this matter? We all know that computers are very good at repetitive calculations and detailed instructions, but they’ve historically been bad at recognizing patterns. Neural networks can begin this task, however as the tasks become more complex the networks become far too expensive and the accuracy starts to suffer. Enter Deep Nets. Deep nets take complex tasks and break them down into simpler patterns. When tasked with identifying a human face, the network breaks the problem down into recognizing the nose, eyes, ears, and so on, which are then combined to recognize the whole face. With massive amounts of data and powerful computing hardware, we are only beginning to scratch the surface of this life-changing technology’s potential. Deep Learning’s applications: Self-driving cars Healthcare detection and imaging Voice assistants (Alexa, Google Home…) Predicting natural disasters Financial applications Text generation Any questions?
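As a concrete companion to the "probability vector" description above, here is a tiny numerical sketch in Python. It is my own toy illustration, not the author's code: the random weights, the eight made-up input features, and the three class labels are all assumptions, and a real network would learn its weights from data rather than draw them at random.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(42)
x = rng.random(8)                # 8 made-up features from the image tiles

W1, b1 = rng.normal(size=(4, 8)), np.zeros(4)   # layer 1: weights and biases
W2, b2 = rng.normal(size=(3, 4)), np.zeros(3)   # layer 2: maps to 3 class scores

hidden = np.maximum(0, W1 @ x + b1)  # each neuron weighs its inputs (ReLU)
probs = softmax(W2 @ hidden + b2)    # scores become a probability vector

for label, p in zip(["stop sign", "speed limit", "kite in a tree"], probs):
    print(f"{label}: {p:.0%}")
```

Stacking more of these weighted layers, each feeding the next, is what turns this toy into the kind of deep net the post goes on to describe; training then nudges the weights and biases until the probability vector favors the correct label.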
Deep Learning
50
deep-learning-1450150717b7
2018-02-14
2018-02-14 19:26:14
https://medium.com/s/story/deep-learning-1450150717b7
false
550
Blog on the top of the relationship between Deep Learning and Programming
null
null
null
Deep Learning and its Relationship to Programming
null
deep-learning-and-its-relationship-to-programming
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Paul Elis
null
1b85f25dbc51
paulelis
0
1
20,181,104
null
null
null
null
null
null
0
null
0
b984f0201cd6
2018-05-21
2018-05-21 23:57:43
2018-05-23
2018-05-23 14:17:00
4
false
en
2018-05-23
2018-05-23 14:17:00
9
14504a156aad
5.990566
8
0
0
A Q&A with team Scriboto on their award-winning data science project, which aims to give doctors more time to focus on their patients.
5
Collaboration is Key Ingredient in I School Team’s Success Timothy, Erika, Jen Jen, and Yannie are presented with the Varian Award by Dean Saxenian at Commencement in May 2018 Doctors spend more time on the computers than directly with their patients, even during patient visits, because they’re expected to document all their clinical interactions. Scriboto aims to alleviate that computer time by converting audio from patient-doctor clinical conversations into electronic health records (EHR) classified text. By bypassing the need for documentation, Scriboto lets doctors be doctors, not scribes. Scriboto: Converting clinical visit audio to EHR classified text is the May 2018 Hal R. Varian MIDS Capstone Award-winning project by Jen Jen Chen, Timothy Hurt, Erika Lawrence, and Yannie Lee. What inspired your project? Erika: The inspiration for Scriboto came from Dr. Jen Jen Chen’s frustration with how much of her time as a physician is spent on the computer, rather than with patients. As a data scientist, she knew there was a smarter way, so in the middle of the MIDS program, she reached out to a few of her classmates to see how we could work together to create a solution. Jen Jen: I despise typing on my computer instead of talking with my patients. In fact, the only person that hates it more than me is probably my patient. Timothy, Jen Jen, Yannie, and Erika How did you work as a team? Erika: From the start, every team member was inspired by the goals of Scriboto, the potential benefits to doctors and patients, and the incredibly cool technology we would be using. This created a drive in each of us to make the project a priority, and to create the project management structures that would keep us on track. We were also able to get together in person for hackathons on more than one occasion, which helped us in pushing through technical blocks, and making the project more fun. Timothy: Working as a team on such an ambitious project did have its challenges, but in hindsight, two things really stand out to me. First, we all get along with one another on both personal and professional levels, I think that really helped us get through some of the interpersonal struggles that come with short timelines on large projects. Secondly, our team has a diverse set of experience and skills which allowed us to disperse the work in a way that enabled everyone to meaningfully contribute to the success of our project. To illustrate this point, there are aspects of our project that I would not have been able to complete in the three months we worked on our project but a different team member was able to complete those aspects over a weekend. How did you manage to work together on your project as members of an online degree program (MIDS)? Jen Jen: This project was a perfect example of how students with different backgrounds could come together in the MIDS program to create an impactful tool. I worried a lot about not being able to contribute as much to the project since my programming skills are seriously mediocre compared with my teammates, but it turns out that I had a role to play and was able to use my medical background to help with the project. Scriboto would not have been a success without the four of us collaborating as a team. Timothy: No one individual of Scriboto’s team could have developed Scriboto. This point is salient to me because it illustrates one of the crucial benefits of MIDS when compared to MOOCs: the impact of collaboration. 
I feared that doing an online program would take away the interpersonal interactions that are ever-present in in-person courses but that was not my experience in MIDS. Scriboto is a beautiful example of what MIDS enables through bringing together a cohort of talented and professionally experienced individuals who have an insatiable desire to learn and grow and use Data Science to help improve systems, organizations, and society. — Timothy Hurt What was the timeline or process like from concept to final project? Erika: As a team, we used as many MIDS resources and curricula as we could to further the project. Work on the project really began in the second-to-last semester where we were able to use our final project as a preliminary test of Scriboto. This culminated in the Capstone project, and our initial Scriboto application, which we will continue to evolve and improve going forward. Jen Jen: It really helped to create a kick-ass team early in the program because if you’re planning on taking the project to the next level, you’ll need more than three months. We hit the ground running even from week one. How did your I School curriculum help prepare you for this project? Erika: We started looking into possible technologies and data to support a tool like Scriboto in W266: Natural Language Processing and Deep Learning. In the Capstone class, we were able to take the project much further: refining the model, finding new and better data sources for training, and creating the means to show off the results and get feedback from potential users of the tool. Jen Jen: One class we didn’t anticipate being much of an influence until it was, was W231: Behind the Data: Humans and Values. Being that patient privacy and HIPAA are the cornerstone of healthcare data, the discussions that we had with Nathan Good and Jared Maslin were invaluable. Yannie: W205: Fundamentals of Data Engineering and W251: Scalding Up! Really Big Data gave us the data engineering skills to design and implement a fully functional data pipeline from audio to user interface and everything in between. Although some technologies we used weren’t covered explicitly covered, the courses gave us the background and knowledge to be comfortable understanding and implementing new technologies. Timothy: The Python Bridge course and W207: Applied Machine Learning were also pretty useful. Developing Scriboto for Capstone required a strong foundation in Python and the knowledge necessary to understand how different packages could be leveraged in order to create a functional application. Scriboto pipeline Do you have any future plans for the project? Yannie: We are still exploring our options and have been excited to have had the opportunity to speak to potential partners and investors. We are continuing to productize Scriboto by collecting user feedback, improving our technology, and building out additional features that will make Scriboto easier to use. Jen Jen: We know we had a great idea, but we weren’t sure that it would be possible to pursue after graduation until after the suggestions we received from our Capstone instructors, Alberto and Joyce, and especially after the positive feedback from those who watched the Capstone Showcase, such as Alex Hughes and Dan Gillick. How might this project make an impact? Yannie: We think Scriboto could be impactful in healthcare for multiple reasons. First, we hope to improve patient/physician relationships by shifting physician focus back to the patient instead of note taking. 
We also hope the tool will make physicians’ lives easier, reducing the amount of administrative work done on the computer, which in turn may improve doctor burn out rates. Overall, with the amount of time saved by using our tool, we estimate that physicians can see 20% more patients! Jen Jen: There’s also an impact that might not be as easily quantified because it is rooted in emotion. Patients have it hard enough with our oftentimes fragmented healthcare system, dealing with practical issues like insurance and long wait times along with their health. The last thing they need is not to be heard by the doctor. They so often become frustrated — and rightfully so. Physicians have come to realize that this isn’t what we signed up for. Instead of treating our patients, we’re spending time tracking down medical records, appeasing insurance companies, clicking on notifications, completing hand-washing modules, and writing clinical notes. We’re burnt out. It certainly isn’t the final solution, but we hope Scriboto will realign patient and physician goals back to what it should be, which is the patient’s health. There’s also an impact that might not be as easily quantified because it is rooted in emotion. Patients have it hard enough with our oftentimes fragmented healthcare system … we hope Scriboto will realign patient and physician goals back to what it should be, which is the patient’s health. — Jen Jen Chen Related Information: Commencement Awards Honor 2018 Graduates Scriboto: Converting clinical visit audio to EHR classified text
Collaboration is Key Ingredient in Team's Success
67
collaboration-is-key-ingredient-in-teams-success-14504a156aad
2018-10-08
2018-10-08 21:54:21
https://medium.com/s/story/collaboration-is-key-ingredient-in-teams-success-14504a156aad
false
1,402
Voices from the UC Berkeley School of Information
null
BerkeleyISchool
null
BerkeleyISchool
null
berkeleyischool
GRADUATE SCHOOL,UC BERKELEY,INFORMATION SCIENCE,DATA SCIENCE
berkeleyischool
Healthcare
healthcare
Healthcare
59,511
Berkeley I School
The UC Berkeley School of Information is a multi-disciplinary program devoted to enhancing the accessibility, usability, credibility & security of information.
4e0ccb9c0d51
BerkeleyISchool
91
54
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-27
2018-03-27 15:36:52
2018-03-27
2018-03-27 17:12:01
7
false
en
2018-04-08
2018-04-08 02:35:25
0
14511f3080a8
5.751887
7
0
1
What is an activation function?
5
Topic DL01: Activation Functions and Their Types in Artificial Neural Networks What is an activation function? An activation function is a very important feature of an artificial neural network; it decides whether a neuron should be activated or not. So let's consider the simple neural network shown below. In the above figure, (x1, x2, …, xn) is the input signal vector that gets multiplied with the weights (w1, w2, …, wn). This is followed by accumulation (i.e., summation plus the addition of a bias b). Finally, an activation function f is applied to this sum. Why do we use an activation function in a neural network? As observed in the figure above, when we do not have an activation function the weights and bias simply perform a linear transformation. A linear equation is simple to solve but is limited in its capacity to solve complex problems and has less power to learn complex functional mappings from data. A neural network without an activation function is just a linear regression model. The activation function performs a non-linear transformation of the input, making the network capable of learning and performing more complex tasks. We want our neural networks to work on complicated data like video, audio, speech, etc.; linear transformations would never be able to perform such tasks. What conditions should the activation function satisfy? Activation functions make back-propagation possible, since the gradients are supplied along with the error to update the weights and biases. Without a differentiable non-linear function, this would not be possible. So the functions should be differentiable and (ideally) monotonic. Derivative or differential: the change along the y-axis with respect to a change along the x-axis; it is also known as the slope. Monotonic function: a function which is either entirely non-increasing or non-decreasing. Types of activation functions The activation functions can be broadly divided into 2 types: Linear or Identity Activation Function and Non-linear Activation Functions. Linear or Identity Activation Function As you can see, the function is a line, i.e., linear. Therefore, the output of the function is not confined to any range. Fig: Linear Activation Function. Equation: f(x) = x. Range: (-infinity to infinity). As shown in the above figure, the activation is proportional to the input. This can be applied to various neurons, and multiple neurons can be activated at the same time. Now, when we have multiple classes, we can choose the one which has the maximum value. But we still have an issue here: the derivative of a linear function is constant, i.e., it does not depend on the input value x. This means that every time we do back-propagation, the gradient would be the same. And this is a big problem: we are not really improving the error, since the gradient is pretty much the same. Not only that: suppose we are trying to perform a complicated task for which we need multiple layers in our network. If each layer applies a linear transformation, then no matter how many layers we have, the final output is nothing but a linear transformation of the input. Non-linear Activation Function The non-linear activation functions are the most used activation functions. They make it easy for the model to generalize or adapt to a variety of data and to differentiate between the outputs. The non-linear activation functions are mainly divided on the basis of their range or curves: 1. Sigmoid or Logistic Activation Function The sigmoid function curve looks like an S-shape.
Fig: Sigmoid Function. Equation: f(x) = 1 / (1 + exp(-x)). Range: (0 to 1). Pros: 1. The function is differentiable; that means we can find the slope of the sigmoid curve at any point. 2. The function is monotonic, but the function's derivative is not. Cons: 1. It gives rise to the problem of "vanishing gradients", since the Y values respond very little to changes in X. 2. Its output isn't zero-centered, which makes the gradient updates go too far in different directions; 0 < output < 1, and that makes optimization harder. 3. Sigmoids saturate and kill gradients. 4. Sigmoids have slow convergence. 2. Tanh or Hyperbolic Tangent Activation Function Equation: f(x) = (1 - exp(-2x)) / (1 + exp(-2x)), or equivalently 2 * sigmoid(2x) - 1. Range: (-1 to 1). Pros: The function and its derivative are both monotonic. The output is zero-centered. Optimization is easier. Cons: It also suffers from the vanishing gradient problem. It saturates and kills gradients. 3. ReLU (Rectified Linear Unit) Activation Function The ReLU is the most used activation function in the world right now. Equation: f(x) = max(0, x). Range: (0 to infinity). Pros: The function and its derivative are both monotonic. Due to its functionality it does not activate all the neurons at the same time. It is efficient and easy to compute. Cons: The outputs are not zero-centered, similar to the sigmoid activation function. For negative inputs the gradient is zero, so those weights are not updated during back-propagation and the unit stops converging towards the minimum, which results in a dead neuron. 4. Leaky ReLU To solve the dead-neuron problem of ReLU we have leaky ReLU. Equation: f(x) = ax for x < 0 (where a is a small constant such as 0.01) and x for x >= 0. Range: (-infinity to infinity). Pros: The function and its derivative are both monotonic. It allows a small negative gradient to flow during back-propagation. It is efficient and easy to compute. Cons: Results are not always consistent. During training, if the learning rate is set very high, the updates can overshoot and kill the neuron. The idea of leaky ReLU can be extended even further: instead of multiplying x by a fixed constant, we can multiply it by a parameter that is learned during training, which seems to work better than leaky ReLU. This extension to leaky ReLU is known as Parametric ReLU. 5. Softmax The softmax function is also a type of sigmoid function, but it is very useful for handling classification problems with multiple classes. The softmax function is shown above, where z is a vector of the inputs to the output layer (if you have 10 output units, then there are 10 elements in z). Again, j indexes the output units, so j = 1, 2, …, K. The softmax function is ideally used in the output layer of the classifier, where we are actually trying to obtain the probabilities that define the class of each input. Which activation function to use? From the above we have seen different categories of activation functions; we need some logic or heuristics to know which activation function should be used in which situation. Based on the properties of the problem, we might be able to make a better choice for easy and quicker convergence of the network.
Sigmoid functions and their combinations generally work better in the case of classification problems. Sigmoid and tanh functions are, however, often avoided in deep networks due to the vanishing gradient problem. The ReLU activation function is widely used, as it generally yields better results, but it can suffer from dead neurons; if we encounter dead neurons in our network, the leaky ReLU function is the better choice. The ReLU function should only be used in the hidden layers. In this article, I tried to describe the activation functions that are commonly used. There are other activation functions too, but the general idea remains the same. I hope this article serves the purpose of giving an idea of what activation functions are, and why, when, and how to use them for a given problem statement.
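As a quick companion to the formulas and heuristics above, here is a small NumPy sketch of my own (not code from the original article) implementing the activation functions just described; the test vector is arbitrary.

import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x)), output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # f(x) = (1 - exp(-2x)) / (1 + exp(-2x)) = 2*sigmoid(2x) - 1, output in (-1, 1)
    return np.tanh(x)

def relu(x):
    # f(x) = max(0, x)
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    # f(x) = a*x for x < 0, x otherwise; a is a small fixed slope
    return np.where(x < 0, a * x, x)

def softmax(z):
    # exponentiate and normalize so the outputs sum to 1 (class probabilities)
    e = np.exp(z - np.max(z))   # shift by the max for numerical stability
    return e / e.sum()

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for fn in (sigmoid, tanh, relu, leaky_relu):
    print(fn.__name__, fn(x).round(3))
print("softmax", softmax(x).round(3))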
Topic DL01: Activation Functions and Their Types in Artificial Neural Networks
8
activation-functions-and-its-types-in-artifical-neural-network-14511f3080a8
2018-06-15
2018-06-15 06:30:53
https://medium.com/s/story/activation-functions-and-its-types-in-artifical-neural-network-14511f3080a8
false
1,246
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
abhigoku10
null
637052fdb803
abhigoku10
17
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-13
2018-09-13 11:07:15
2018-09-13
2018-09-13 11:09:52
1
false
en
2018-09-13
2018-09-13 11:11:44
11
14513e5cea19
1.626415
0
0
0
A round up of what’s been piquing curiosity, prompting questions and provoking debate among the Expend team.
5
The Expend Digest: Topics of the Week A round up of what’s been piquing curiosity, prompting questions and provoking debate among the Expend team. In this edition: Surprising phone bills, the surprising effect of colours, a surprising angle on the impact of AI and a (mostly) unsurprising spending scandal. So, the truth appears to emerge. While trying to remain as impartial as possible, we can’t quite help but notice how the fundamental problem with electoral spending is a lack of real oversight and transparency. A somewhat amusing tale of a Polish charity failing to control it’s expenses. If only they’d had instant spending notifications…then they’d never have let this stork rack up a huge phone bill. A recent, and worrying, article in the Telegraph had us all talking. After our recent study found ⅓ of people claiming expenses go into debt while waiting to be repaid, it emerged that the younger generation don’t even realise this can cost them dearly. London was awash with colour for the annual pride festival. Which made it a timely appearance by this infographic, about the benefits of colour in marketing. The standout claims being: 85% of your customers place colour as the primary reason they buy a product. Colour will increase your brand recognition by 80%. And (alarmingly) every 100ms of your website’s load time could be losing you 1% of sales! The robots are coming! Start your preparation with RSM UK’s brief overview of some business implications from the impending wave of process automation. While we’ve never heard it called “Robotic Process Automation” (RPA) before, the summary still serves as a decent starting point for the many nuances to the human cost of AI & ML. A particularly interesting area of this is the ‘edging out’ of younger employees looking to ‘cut their teeth’ in low level internship roles. Mitigating the effects of the upheaval will likely require government intervention, so here’s an overview of the steps various nations are taking to become leaders in AI. Fintech has something of an image problem in certain circles. Too insular, too much hype, too centered in London. It’s why we applaud the government’s Tech Nation initiative. And we’re especially happy to share their podcast episode — fintech for good. Originally published at blog.expend.io.
The Expend Digest: Topics of the Week
0
the-expend-digest-topics-of-the-week-14513e5cea19
2018-09-13
2018-09-13 11:11:44
https://medium.com/s/story/the-expend-digest-topics-of-the-week-14513e5cea19
false
378
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Expend
Expend is an all-in-one expenses platform that enables businesses to manage their employee spending more efficiently. We’re making expenses awesome!
1116696f9e5d
expendhq
5
1
20,181,104
null
null
null
null
null
null
0
null
0
c304d4a6207c
2018-03-30
2018-03-30 11:13:34
2018-04-02
2018-04-02 08:55:19
1
false
en
2018-04-02
2018-04-02 08:55:19
5
1452f247f40f
1.573585
0
0
0
It happens all the time, to get stuck in the traffic, so what we do is we try to find a solution on moving faster. We shift the road…
5
Let AiX Control The Way You Trade AiX It happens all the time: we get stuck in traffic, so we try to find a way to move faster. We switch roads, thinking it will help us escape the massive traffic, only to find a car accident right there, making it impossible to even get onto that road. So we instinctively try to get back to where we were. Suddenly the traffic there is even heavier, and you end up too far from where you started, making you think that you probably should have stayed put from the very beginning. Traffic sounds a lot like trading, too. Sometimes your trades feel like traffic: you try to make the perfect move but you end up regretting it. Now, AI backed by blockchain has come to the rescue of the trader. A single platform able to manage and centralize the whole diversity of exchanges in one interface. Trading calls at your fingertips, cheaper portfolio management, data analysis, trading history, and many more services, all brought to life through AI and blockchain, in a single app called AiX. Jos Evans, an experienced trader and broker with a ten-year history of succeeding in finance, came up with the brilliant idea of AiX, a transformative artificial intelligence system built to act as your broker across the crypto and regulated financial markets. The opportunity to democratize finance, and to help people around the world grow their capital cheaply, was the main reason he wanted to get into the crypto space. You have the opportunity to join a revolution in trading. AiX is going to totally transform both crypto-trading and traditional finance through the inter-dealer broker market. Combining AI and blockchain technologies means everyone will be able to research, manage and execute multiple trades in a single domain. Our mission is to turn the financial world on its head. Current practices are a barrier to efficient, transparent trading. You can engage with the AiX team and the AiX community through our Twitter and Telegram channel. This is the quickest way to get in touch and to have your questions answered.
Let AiX Control The Way You Trade
0
let-aix-control-the-way-you-trade-1452f247f40f
2018-04-02
2018-04-02 08:55:20
https://medium.com/s/story/let-aix-control-the-way-you-trade-1452f247f40f
false
364
AI-X is a transformative Artificial Intelligence system built to revolutionise the global Inter-Dealer Brokerage (IDB) market https://aix.trade
null
AiXChange
null
Ai-X
ai-x
ICO,AI,FINTECH,BLOCKCHAIN,TRADING
Ai_XChange
Blockchain
blockchain
Blockchain
265,164
Ana Podrimaj
null
939b3065cef
anapodrimaj
5
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-17
2017-10-17 10:47:23
2017-10-12
2017-10-12 13:33:35
0
false
en
2017-10-17
2017-10-17 11:37:35
1
14537da66683
3.988679
0
0
0
The 60s were a heady time. The Space Age, we called it. Heavy with shows like the Jetsons and Star Trek, and theme parks like Epcot Centre…
4
Our Continuing mission…. The 60s were a heady time. The Space Age, we called it. Heavy with shows like the Jetsons and Star Trek, and theme parks like Epcot Centre. Even with the backdrop of the Cold War, we were also building an unbridled optimism. There is something of an irrepressible explorer in us as a species, and this showed in the explosion of very good science fiction. “Then in the years that followed, Science Fiction became science fact.” We don’t have Rosie yet, but we have the Roomba. We don’t have the tricorder yet but the first Motorola flip phone sure looked like the communicator. Inspired by Elon Musk’s Master Plan series, we thought we’d come up with our own. Black coffee starts everything. And that’s what happened in July of last year. We had: Following the triple H formula (Hacker, Hustler, Hipster), we still needed a hipster. But we thought we’d make up for that in the coffee that we consumed. We talked long into the night for several days, and settled on what we call “accessibly futuristic”. This turned out to be something, anything that dealt with: Artificial Intelligence, Big Data & Internet of Things. More brainstorming. More coffee. We looked through the projects we’d done and we saw that we did a lot of web development, eCommerce development and software enterprise applications. But, owing to a great deal of luck that we cultivated by having a bias towards the crazier ideas, we also had some happy surprises: “Ok, that last one was a result of a bunch of our devs playing a little too much Pokemon go.” This was a good start. We had a sun theme going so this became the Ra Products & Platforms line. Here’s a few of them RaCom — Chatbot for Ecommerce Support & Sales A recurring theme in sci fi is the uncanny valley, and we’ve come a long way from Eliza. (And even Cleverbot). While we’re very sanguine about the effect that robots will have on employment, we’re far more excited about how we can use a chat interface to improve online shopping through artificial intelligence applications. “While Ra can do lots of things, his first incarnation is as: Racom, your personal SHOPPING ASSISTANT” We’re huge fans of shopping online. If you know exactly what you want, there’s nothing out there that compares. But what if you don’t know what you want, or you want to return something, or cancel something? Our research has shown that people who shop truly value two things: frictionless transactions and responsiveness. Our data shows that companies are spending a lot on call centres to respond to transactions that can far more seamlessly be handled by a bot. What does it offer customers? The biggest value add is not having to navigate multiple menu paradigms. After all, what’s more intuitive than: “Hey RaCom, I ordered a shirt yesterday, but I really meant to order a blue one, can you do that for me?” V-Commerce for furniture Like a lot of people last year, we played a lot of Pokemon Go. Our devs got a little too carried away, and soon we were thinking about how cool it would be to beam furniture over to where you wanted it. This was, of course, an insane idea, but we didn’t want to discount it just because of that. We settled on a v-commerce platform that lets you check out how the furniture will look in the exact room it’s going to be in. We’ll even tell you if it will fit through your door. “See Products in Your Environment with RaAR. Fun. Fit. Foresee.” What’ve you got RaAR? RaAR allows customers to see their furniture in real time in their environment. 
You can do things like move the furniture around, change colour, material, and fabric, as well as take final pictures to share with friends to get an opinion. With TrueMatch, we'll even make sure that it fits through your door. We built RaAR as a platform making it easy to onboard any furniture shop. We are now a full-fledged Augmented Reality Development Company. Some other features include Creation of 3D Models | Conversion of 3D to Augmented Reality | Learning about Customers through Analytics | CRM Integration | App Store Optimization for your Brand. We love the project services part of our business. It's where we meet all the cool founders we've partnered with who have gone on to grow revenue-generating businesses. Our roots are in building solutions for businesses and businesses themselves. One of our happiest clients is Zarafa, for whom we created a robust and disruptive Single Page Application for effective collaboration with teams and colleagues, and they are continuing their journey with us. While we can't share their names yet, as we don't want to steal their thunder, a few others we've helped start are: Smart Online Learning Platform Our partner was already a leader in eLearning and education, but he came to us with those four magic words: "Let's make it better". "Turns out, engagement was what we were looking to improve. Most failures of eLearning were because people stopped partway through the course." We got to work and we're still at it! A sneak peek! On-Going Engagement with Other Students & Teachers Interaction with Teachers Non-annoying options for Notifications & Alerts Push Notifications to Engage Students in the Course Skill Development for MNCs or Corporations The platform will be launching soon and we can't wait to tell everyone about it. We aim to contribute to the education industry through this platform and enable students to access education and all kinds of courses online. What's Next? Our CEO recently gave a talk in Norway about achieving the Smart City with the help of Internet of Things & Artificial Intelligence. The turnout was incredible. Thanks for the love, Oslo. We are beyond amped. If you've got other ideas around anything "accessibly futuristic", we want to hear from you. We're not out exploring strange new worlds yet, but we'd like to get more people on board for the coming journey. There'll be a bit of chop, of course, but like Admiral Grace Hopper said: "A ship in the harbor is safe, but this is not what ships are for."
Our Continuing mission….
0
our-continuing-mission-14537da66683
2018-04-08
2018-04-08 21:24:06
https://medium.com/s/story/our-continuing-mission-14537da66683
false
1,057
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Joseph Reed
null
8d74d6442513
josephreed1385
1
2
20,181,104
null
null
null
null
null
null
0
# Build the TOCO converter and convert the frozen graph to TensorFlow Lite
bazel build tensorflow/contrib/lite/toco:toco
bazel-bin/tensorflow/contrib/lite/toco/toco \
  --input_file=xorGate.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=xorGate.lite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_arrays=dense_1_input \
  --output_arrays=output_node0 \
  --input_shapes=1,2

// build.gradle (Module: app): keep the model file uncompressed in the APK
android {
    aaptOptions {
        noCompress "tflite"
        noCompress "lite"
    }
    ......
}

// Memory-map the model file from the app's assets folder
private MappedByteBuffer loadModelFile(Activity activity, String MODEL_FILE) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(MODEL_FILE);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}

// Load the model into the TensorFlow Lite interpreter
import org.tensorflow.lite.Interpreter;

String modelFile = "xorGate.lite";
Interpreter tflite;
try {
    tflite = new Interpreter(loadModelFile(MainActivity.this, modelFile));
} catch (IOException e) {
    e.printStackTrace();
}

// Run inference: input shape (1, 2), output shape (1, 1), as found in the model
float[][] inp = new float[][]{{0, 0}};
float[][] out = new float[][]{{0}};
tflite.run(inp, out);

// out[0][0] holds the probability; round it and show it in the TextView
result_tv.setText(String.valueOf(Math.round(out[0][0])));
11
null
2017-12-24
2017-12-24 15:15:11
2017-12-24
2017-12-24 19:38:57
4
false
en
2018-07-23
2018-07-23 17:48:41
7
145443ec3775
3.518868
29
6
0
Make apps with the power of deep learning using Tensorflow and Tensorflow lite.
5
TensorFlow Lite Tutorial - Easy Implementation in Android Make apps with the power of deep learning… 'You need some working experience with Android and a little knowledge of basic TensorFlow terms.' Suppose we have a 'pb' file of a trained TensorFlow model. I have trained a TensorFlow model for a simple XOR logic gate to keep the deep learning part as simple as possible (as I am weak in maths🙏). The model takes 2 inputs of 0 or 1 each and outputs a probability between 0 and 1; from that value we can predict the desired output as 0 or 1 by thresholding. In the first part of the tutorial we will gather the required information from the model (pb) file and convert it to a TensorFlow Lite model in .lite/.tflite format. You should have a Python 3 and latest-version TensorFlow environment ready (I am using TensorFlow 1.4.1 with Python 3.6). Conversion codes: github link. Model file: xorGate.pb. We will load our model in the modelinfo.py file and extract the required information from the model file; our output is Tensor("import/dense_1_input:0", shape=(?, 2), dtype=float32) Tensor("import/output_node0:0", shape=(?, 1), dtype=float32). We have got our required information. Input: 'dense_1_input', which takes input as a multidimensional array of shape (?, 2) and float32 data type. Output: 'output_node0', which holds output as a multidimensional array of shape (?, 1) and float32 data type. Then we will convert the model into a TensorFlow Lite model. You need bazel installed on your system https://bazel.build/ and the tensorflow git repository downloaded (link). From the tensorflow root directory you need to run the bazel build command shown in the code block; this compiles toco (the conversion tool) with bazel. Then copy the 'xorGate.pb' file into the root directory of the tensorflow repo (which you have downloaded). Then run the toco command from the same tensorflow directory on the command line; the input, output and shape details come from the previous output we got. As output there will be an xorGate.lite file in the directory, which we will use in Android. From here we will start implementing the tflite model in Android. Hopefully you have the latest Android Studio installed (I am using 3.0.1). I created a new Android project named 'TheXor'. Add the TensorFlow Lite library by adding the line implementation 'org.tensorflow:tensorflow-lite:+' to the build.gradle (Module: app) file and sync Gradle. Create an 'assets' folder in the Android app project and copy xorGate.lite into the folder. Create a simple user interface with two inputs, one output and a button. User Interface. Then connect the UI elements with the activity as usual. Use the code below in the activity to load the model file: import the TensorFlow Lite interpreter, then load the model file into the TensorFlow Lite interpreter. If everything has been done right up to this point, we are very close to testing our model. The main part is to get the input and output shapes correct; see the app source code for a better understanding. The code above is generic: we have to read the input values from our two input sources, and the out array should have the right shape according to the information we got from the model description at the very beginning.
It's time to do the final step; the results will be saved in the out multidimensional array. For a 0, 0 input we will get the value as a probability, not as a hard 0 or 1. As we want the value to be 0 or 1, we have to round the value and set it on the TextView. That's it: we got our TensorFlow model converted to TensorFlow Lite and running in Android. Update: with the latest version of TensorFlow you can convert the model file using Python code (link). App source code: github link Find me: Github: https://github.com/rdeepc LinkedIn: https://www.linkedin.com/in/saumyashovanroy/
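The "Update" above mentions a pure-Python conversion path. As a rough sketch of what that could look like (my own assumption, using the TFLiteConverter API available in later TensorFlow 1.x releases rather than the exact script the author links to), with the input/output names found earlier:

# Convert the frozen graph to TensorFlow Lite directly from Python.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="xorGate.pb",
    input_arrays=["dense_1_input"],
    output_arrays=["output_node0"],
    input_shapes={"dense_1_input": [1, 2]},
)
tflite_model = converter.convert()

with open("xorGate.lite", "wb") as f:
    f.write(tflite_model)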
TensorFlow Lite Tutorial -Easy implementation in android
176
tensorflow-lite-tutorial-easy-implementation-in-android-145443ec3775
2018-07-23
2018-07-23 17:48:41
https://medium.com/s/story/tensorflow-lite-tutorial-easy-implementation-in-android-145443ec3775
false
747
null
null
null
null
null
null
null
null
null
Android
android
Android
56,800
Saumya Shovan Roy (Deep)
AI, Computer Vision and Mobile technology enthusiast
6ea90b66081b
rdeep
64
44
20,181,104
null
null
null
null
null
null
0
null
0
f702855ffe47
2017-10-09
2017-10-09 10:00:22
2017-10-09
2017-10-09 10:00:23
9
false
en
2017-10-09
2017-10-09 10:00:23
11
145636f19db4
2.366038
1
0
0
null
3
Can robots sin? # medium.com Each of us makes numerous decisions daily that concern essential ethical issues. Our daily choices between g… This week in AI #12: Who am I? # blog.deepomatic.com It’s Monday, so we’ll start you off easy with both real-world and research AI applications. We’ll then kick … Highlighting Keywords in Emails using Deep Learning # medium.com Customers send emails to companies all the time. The email might be questions they have about a product, com… The Rise in Artificial Intelligence could be Apocalyptic for Research and Consulting # medium.com A simple act of giving or taking an advice got fabricated into a field of consultation, weaving colossal pli… Blockchains: How they work and why they’ll change the world. # medium.com Should the government be helping charities prepare for the GDPR? How can collaborative technologies boost bu… WHAT KIND OF SOCIETY DO YOU WISH ? # medium.com WHAT KIND OF SOCIETY DO YOU WISH ? The Ɵ Foundation is creating new digital realms, A.I. & V.R. simulations … AI can predict how long your relationship will last # medium.com New research, from the University of Southern California has analyzed the vocal characteristics of 134 coupl… Junk time # medium.com When you get right down to it, there are two types of time. There is valuable, fulfilling, productive time. … The Future of Performance Marketing is Here # medium.com Wouldn’t it be great if you could know exactly how your campaign is going to perform? If you are an advertis… TED @ BCG # medium.com TED @ BCG I just attended the TED event sponsored by @bcg. This year, it was organized in Milan. I am very g… AI-backed voice assistants make customer support efficient yet human # venturebeat.com GUEST: Enterprises are constantly evaluating ways to improve the speed and quality of their customer support…
11 new things to read in AI
1
11-new-things-to-read-in-ai-145636f19db4
2018-04-30
2018-04-30 07:18:36
https://medium.com/s/story/11-new-things-to-read-in-ai-145636f19db4
false
309
AI developments from around the world
null
null
null
AI Hawk
ai-hawk
DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
null
Deep Learning
deep-learning
Deep Learning
12,189
AI Hawk
null
a9a7e4d2b403
aihawk1089
15
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-21
2018-09-21 20:20:28
2018-09-21
2018-09-21 23:02:01
3
false
en
2018-09-21
2018-09-21 23:33:15
0
14569ea83a66
4.814151
6
0
0
Missing Data are ubiquitous throughout social , medical and behavioral sciences.Researchers have relied on many ad hoc technologies to fix…
5
The Conundrum behind Missing Data-I Missing data are ubiquitous throughout the social, medical and behavioral sciences. Researchers have relied on many ad hoc techniques to fix the conundrum behind missing data, like discarding incomplete cases or filling in the missing values. But implementing these methods can result in substantial bias because they require strict assumptions about the cause of the missing data. Methodical approaches like maximum likelihood estimation and multiple imputation are the current state-of-the-art techniques for dealing with the missing data quandary, since they make weaker assumptions about its cause. However, until the late 1990s these approaches lived mostly in the theoretical world due to a lack of software packages. The two terminologies that define the universe of missing data are the missing data pattern and the missing data mechanism, and they have very different meanings. A missing data pattern refers to the configuration of observed and missing values in a data set, whereas missing data mechanisms describe possible relationships between measured variables and the probability of missing data. A missing data pattern describes the location of the holes in the data, while a missing data mechanism generally does not give a causal explanation of the missingness, but rather a mathematical relationship between the observed and missing data. Fig 1: Six prototypical missing data patterns. Shaded regions depict the missing data. A) Univariate Pattern: It has missing values isolated to a single variable. This type of pattern might occur in experimental studies where Y4 is the outcome variable of manipulated variables Y1 through Y3. B) Unit Nonresponse Pattern: This type of pattern generally occurs in survey research, where Y3 and Y4 might be variables that some respondents refuse to answer, while Y1 and Y2 are variables which are available for all members. C) Monotone Pattern: This type of pattern occurs in longitudinal data where a member drops out and never returns. A dataset with variables Y1, Y2, …, Yp is said to have a monotone missing pattern when the event that a variable Yj is missing for a particular individual implies that all subsequent variables Yk, k > j, are also missing for that individual. D) General Pattern: It is the most common form of missing pattern, where missing values are dispersed all over the dataset. Again, it is important to remember that the missing data pattern describes the location of the missing values and not the reasons for missingness. E) Planned Missing Pattern: The design in panel E distributes the four questionnaires across three forms, such that each form includes Y1 but is missing Y2, Y3, or Y4. Planned missing data patterns are useful for collecting a large number of questionnaire items while simultaneously reducing respondent burden. F) Latent Variable Pattern: This is not a type of missing data pattern per se, but researchers have adapted missing data algorithms to estimate these models, where a latent variable appears in a structural equation model. Conceptual View of Missing Data Mechanisms There are 3 types of mechanisms that describe how the probability of a missing value relates to the data, if at all. 1) Missing at Random (MAR): Data are missing at random (MAR) when the probability of missing data on a variable Y is related to some other measured variable (or variables) in the analysis model but not to the values of Y itself. The term missing at random is somewhat misleading because it implies that the data are missing in a haphazard fashion that resembles a coin toss.
However, MAR actually means that a systematic relationship exists between one or more measured variables and the probability of missing data. To illustrate, consider the small data set in Table 1.2. The dataset mimics an employee selection scenario in which prospective employees complete an IQ test during their job interview and a supervisor subsequently evaluates their job performance following a 6-month probationary period. Suppose that the company used IQ scores as a selection measure and did not hire applicants that scored in the lower quantile of the IQ distribution. You can see that the job performance ratings in the MAR column of Table 1.2 are missing for the applicants with the lowest IQ scores. Consequently, the probability of a missing job performance rating is solely a function of IQ scores and is unrelated to an individual's job performance. The practical problem with the MAR mechanism is that there is no way to confirm that the probability of missing data on Y is solely a function of other measured variables. This represents an important practical problem for missing data analyses because maximum likelihood estimation and multiple imputation (e.g., MICE) assume an MAR mechanism. 2) Missing Completely at Random (MCAR): The formal definition of MCAR requires that the probability of missing data on a variable Y is unrelated to other measured variables and is unrelated to the values of Y itself. Put differently, the observed data points are a simple random sample of the scores you would have analyzed had the data been complete. With regard to the job performance data in Table 1.2, one can create the MCAR column by deleting scores based on the value of a random number. The random numbers were uncorrelated with IQ and job performance, so missingness is unrelated to the data. You can see that the missing values are not isolated to a particular location in the IQ and job performance distributions; thus the 15 complete cases are relatively representative of the entire applicant pool. In principle, it is possible to verify that a set of scores is MCAR. For example, reconsider the data in Table 1.2. The definition of MCAR requires that the observed data are a simple random sample of the hypothetically complete data set. To test this idea, you can separate the missing and complete cases and examine group mean differences on the IQ variable. If the missing data patterns are randomly equivalent (i.e., the data are MCAR), then the IQ means should be the same, within sampling error. 3) Missing Not at Random (MNAR): Finally, data are missing not at random (MNAR) when the probability of missing data on a variable Y is related to the values of Y itself, even after controlling for other variables. To illustrate, reconsider the job performance data in Table 1.2. Suppose that the company hired all 20 applicants and subsequently terminated a number of individuals for poor performance prior to their 6-month evaluation. You can see that the job performance ratings in the MNAR column are missing for the applicants with the lowest job performance ratings. Consequently, the probability of a missing job performance rating is dependent on one's job performance, even after controlling for IQ. Like the MAR mechanism, there is no way to verify that scores are MNAR without knowing the values of the missing variables. Pattern mixture models and selection models are two approaches for modeling the data when the values are MNAR.
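To make the three mechanisms concrete, here is a small simulation sketch of my own (not the book's Table 1.2): it generates made-up IQ and job performance scores, deletes values under MCAR, MAR, and MNAR rules, and then runs the complete-versus-missing IQ mean comparison described above. The sample size, cutoffs, and column names are all illustrative assumptions.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
n = 200
iq = rng.normal(100, 15, n)
perf = 0.05 * iq + rng.normal(0, 1, n)          # job performance related to IQ
df = pd.DataFrame({"iq": iq, "perf": perf})

# MCAR: missingness is a coin flip, unrelated to anything measured
df["perf_mcar"] = df["perf"].mask(rng.random(n) < 0.25)
# MAR: missing when IQ (another measured variable) is in the lowest quarter
df["perf_mar"] = df["perf"].mask(df["iq"] < df["iq"].quantile(0.25))
# MNAR: missing when performance itself is in the lowest quarter
df["perf_mnar"] = df["perf"].mask(df["perf"] < df["perf"].quantile(0.25))

# MCAR-style check: compare IQ means for cases with observed vs. missing performance
for col in ["perf_mcar", "perf_mar", "perf_mnar"]:
    observed = df.loc[df[col].notna(), "iq"]
    missing = df.loc[df[col].isna(), "iq"]
    t, p = stats.ttest_ind(observed, missing)
    print(f"{col}: IQ mean difference = {observed.mean() - missing.mean():.2f}, p = {p:.3f}")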
In my next post I will write in detail about each mechanism and how to fill in the data with their respective approaches. Stay tuned!
The Conundrum behind Missing Data-I
77
the-conundrum-behind-missing-data-i-14569ea83a66
2018-10-31
2018-10-31 11:16:00
https://medium.com/s/story/the-conundrum-behind-missing-data-i-14569ea83a66
false
1,130
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Manas Mahanta
Data Scientist | ML enthusiast | USC Trojan
1c11c4289f38
manasmahanta10
6
8
20,181,104
null
null
null
null
null
null
0
null
0
31b0ae95b35e
2018-08-13
2018-08-13 11:51:53
2018-09-03
2018-09-03 11:08:11
1
false
en
2018-09-03
2018-09-03 11:08:11
6
1457c654b5c2
1.139623
1
0
0
Companies can make good prospects for different products based on affinity category. This means that, if you know your customer, how they…
5
Your best audience is in your psychographic data Photo by Etienne Girardet on Unsplash Companies can identify good prospects for different products based on affinity categories. This means that if you know your customers, how they spend their day, and what they love to do and share, you can improve your marketing by aligning your strategies with their interests. There are more useful data than sex, geography, and age, also known as demographic data, i.e. "who" they are. There are data about their interests, preferences, and habits. There are data about why they love cereal instead of biscuits, or going trekking instead of having a swim. These are psychographic data: they tell you "why" people buy something, and they can help marketers sell better and widen their audience. I've written a full article on this topic titled Let machine learning find your best audience, published in AI Tribune magazine. Let machine learning find your best audience | aitribune Do you know why you prefer a color instead of another? Why you buy a cereal box instead of a biscuits pack…www.aitribune.com Here you can find a deeper insight into the use of psychographic data for a better marketing strategy, as well as tips about using machine learning and artificial intelligence to dig out these data in a more efficient and useful way. Good reading! Did you enjoy this post? Recommend it by clicking the clapping hands icon 👏. Do you want to read more about Artificial Intelligence, Marketing and Business Growth? Follow me on Medium and Twitter (@esterliquori). Find me on Linkedin
Your best audience is in your psychographic data
1
your-best-audience-is-in-your-psychographic-data-1457c654b5c2
2018-09-03
2018-09-03 11:08:12
https://medium.com/s/story/your-best-audience-is-in-your-psychographic-data-1457c654b5c2
false
249
We help you find and acquire your best customers, maintaining their loyalty and enabling your business growth 📈. Find us 👉 https://goo.gl/bvgkUX
null
youaremyguide
null
You Are My Guide Blog
youaremyguide
ARTIFICIAL INTELLIGENCE,MARKETING,TECHNOLOGY,SOCIAL MEDIA,BUSINESS
null
Machine Learning
machine-learning
Machine Learning
51,320
Ester Liquori
My life is a mix of business, marketing, A.I., and people. I have many KPI in my roadmap. Proud co-founder of 👉 https://youaremyguide.com
f0bfcc45033e
esterliquori
149
95
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-30
2018-01-30 20:45:09
2018-02-08
2018-02-08 01:09:00
2
false
en
2018-02-08
2018-02-08 01:10:50
3
1457d9dca8f4
4.005975
4
0
0
Unwiring self defined limits by completing the Stanford University Machine Learning Course by Andrew Ng.
5
Learning Machine Learning Unwiring self defined limits by completing the Stanford University Machine Learning Course by Andrew Ng. course logo for Machine Learning by Stanford University on coursera This article consists of 2 parts: A personal story about overcoming challenges with the help of expert guidance An overview of the Machine Learning course Part 1 Until my first semester of college, I enjoyed taking math classes, and was placed in AP courses through high school. During my first semester of college, I was placed in calculus 2, without having completed college level calculus 1. I remember questioning the registrars on this decision, and was assured that I would pick it up, or I could always drop down to calculus 1 mid-semester. Well, mid-semester, my muddled freshman brain knew that I was on a sinking ship, but I didn’t bail out. I achieved the first and only “D” grade of my college career in that course, which led to me halting my math career on the spot. From then on, I carried forth a story about myself that I had hit my limit with math, and would not be able to handle concepts further along the progression of math studies. This was a bad and incorrect assessment that took me 20 years to overwrite. Fast forward 20 years. Late summer 2017, one of my colleagues shared that they were enrolling in the Stanford University Online Machine Learning Course on Coursera. I was intrigued, but subconsciously decided that I wouldn’t be able to handle it. A couple months later, they shared an update that they had finished the course. My curiosity grew. Over Thanksgiving weekend, when I checked out Coursera to learn more about the course, registration just happened to be closing that day. For whatever reason, my gut said to enroll. Even signing up for this course was an edge for me, but I was inspired by the achievements of my colleague, and decided to go for it, so I clicked “enroll”. I still hadn’t taken any higher level mathematics beyond calculus. And to handle some of my anxiety around the linear algebra, I worked on some linear algebra course material on Khan Academy, and found that it was much less difficult than I had expected. Andrew Ng is such a skilled teacher, and the lessons were so well organized, that even without any prior knowledge, I was able to follow along throughout the entire course and complete all of the quizzes and assignments. I felt a bit underwater during the first few weeks of the course after it really got rolling, but just kept at it and eventually found my legs. My overall experience of taking this course showed me that predefined personal limits are a myth, and walking a difficult road with excellent guidance can be a recipe for overcoming pretty much any challenge. Part 2 The course is divided into 12 weeks. The main vehicle for teaching is video. Each video has 1 or 2 moments where the screen will pause and a quick quiz question will appear. These quizzes are not counted towards your total grade in the course, but give good instant feedback as to whether or not you have been following along with what has been presented thus far in the video. Following each video series, there is a graded quiz, usually consisting of 5 questions. These can be retaken multiple times, and only your highest grade will count towards your final score. A passing grade for the course is contingent upon achieving 80% or greater on every quiz. On weeks 2 through 10, the video lecture series and quizzes are followed by a programming assignment. 
The programming assignments are extremely well constructed in order to give the students a chance to apply concepts that have been described in the preceding lectures. There are supplementary readings and tutorials available to help with moving through the programming assignment. One way to modulate the difficulty of the programming assignment would be to vary how much you rely on the tutorials, versus just trying to solve the problem unaided. I found myself doing a mix of both, depending on how much time I had that week, and how willing I was to struggle with implementing new concepts. Enrolled students of the course qualify for a temporary matlab license. Instructions are also provided on how to use open source software called “Octave” which is capable of compiling all of the matlab code that would be required for this course. I experimented with both, and ended up preferring matlab, as I found it more convenient to use the integrated code editing and execution environment. Octave is equally capable of compiling and running all of the code for the course, but it is command line based, which requires a lot of switching back and forth between the terminal and an external code editor. There is also a linear algebra review section within the first week of the course. I chose to dive into linear algebra on Khan Academy in addition to this lecture. Topics covered in the course include linear / logistic regression, neural networks, support vector machines, supervised / unsupervised learning, anomaly detection, and recommender systems (based on clustering / k-means). It was a sweeping overview of the subject, and would be a fantastic building block for any continued study regarding machine learning. Highly recommended! As a bonus, Coursera offers the chance to go through the entire course with full access to all lectures and exercises for zero cost. This option is considered an audit, and the only difference is that you don’t receive an official course certificate at the end. I wanted to be able to hang this on my digital wall, so I opted to purchase the course. This article is part of a series
Learning Machine Learning
6
learning-machine-learning-1457d9dca8f4
2018-03-10
2018-03-10 04:24:03
https://medium.com/s/story/learning-machine-learning-1457d9dca8f4
false
960
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Alex Jacobs
{“Blogging_about”: [ ‘Life’, ‘Code’, ‘Music’ ]}
d7c6bb984d5c
alexjacobs
62
18
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-12
2017-11-12 03:06:00
2017-11-12
2017-11-12 04:51:47
7
false
en
2017-11-12
2017-11-12 05:47:12
1
1459a07bf2e3
5.612264
2
0
0
According to the problem description on Kaggle https://www.kaggle.com/c/amazon-employee-access-challenge, when an employee starts work at a…
1
Amazon Employee Access Challenge with Machine Learning According to the problem description on Kaggle https://www.kaggle.com/c/amazon-employee-access-challenge, when an employee starts work at a company, he/she usually needs to obtain the computer access necessary to fulfill their role, which allows him/her to read/manipulate resources through various applications or web portals. Oftentimes, employees figure out the access they need as they encounter roadblocks during their daily work. A knowledgeable supervisor then takes time to manually grant the needed access in order to overcome access obstacles. However, this process is very time consuming and costly. Therefore, I want to build auto-access models, learned using historical data, that will determine an employee's access needs, such that manual access transactions (grants and revokes) are minimized as the employee's attributes change over time. The model will take an employee's role information and a resource code and will return whether or not access should be granted. Q: What dataset did you analyze? The dataset given by Kaggle consists of real historical data collected from 2010 & 2011. Employees were manually allowed or denied access to resources over this period. The original training data consists of 9 feature columns (RESOURCE, MGR_ID, ROLE_ROLLUP_1, ROLE_ROLLUP_2, ROLE_DEPTNAME, ROLE_TITLE, ROLE_FAMILY_DESC, ROLE_FAMILY, ROLE_CODE) and 1 label column (ACTION). The original testing data consists of the same 9 feature columns mentioned above plus 1 id column. Detailed explanations of the columns are listed below: ACTION: 1 if the resource was approved, 0 if the resource was not. RESOURCE: An ID for each resource. MGR_ID: The employee ID of the manager of the current employee ID record; an employee may have only one manager at a time. ROLE_ROLLUP_1: Company role grouping category id 1 (e.g. US Engineering). ROLE_ROLLUP_2: Company role grouping category id 2 (e.g. US Retail). ROLE_DEPTNAME: Company role department description (e.g. Retail). ROLE_TITLE: Company role business title description (e.g. Senior Engineering Retail Manager). ROLE_FAMILY_DESC: Company role family extended description (e.g. Retail Manager, Software Engineering). ROLE_FAMILY: Company role family description (e.g. Retail Manager). ROLE_CODE: Company role code; this code is unique to each role (e.g. Manager). How the data CSV files look: [training data screenshot] [testing data screenshot]. Q: Which models performed the best? Which did not? Why? I used Decision Tree, Random Forest, Logistic Regression and Gradient Boosting to train and test the data. Among these 4 models, 2 are tree-based models (decision tree and random forest), which predict the value of the target variable by splitting the original set into subsets. I also used an ensemble method combining Random Forest and Gradient Boosting. It turns out that Logistic Regression and the ensemble method both gave me very good results, but the Logistic Regression model tuned with feature engineering gives the best Kaggle submission score: 0.88. The Decision Tree model gave the worst result: 0.68. The reasons are as follows: 1. Trees generally have a harder time coming up with calibrated probabilities. This can be helped somewhat with bagging and Laplace correction. 2. If the signal-to-noise ratio is low (it is a 'hard' problem), logistic regression is likely to perform best. Q: What feature selection and feature engineering techniques did you try?
I tried 3 different feature selection and feature engineering techniques: eliminating unimportant features, hybrid features, and one hot encoding.
Eliminating unimportant features: I used the feature_importances_ attribute to eliminate unimportant features but got a lower score, so I decided not to remove any features.
Hybrid features: I combined the columns ROLE_ROLLUP_1 and ROLE_ROLLUP_2 by adding them up, producing a new column called 'HYBRID_FAMILY', since both are used for role categories. It improved the speed but didn't raise the accuracy.
One hot encoding: I combined the training and testing data first and then expanded the columns into tens of thousands of binary columns by assigning a dummy variable to each option in each column. One hot encoding produced better results for every model, but the huge number of columns takes a really long time to run.
Q: How did you optimize your hyperparameters? I used RandomizedSearchCV instead of GridSearchCV to optimize the hyperparameters for each model. Randomized search draws candidate settings from the same parameter space that grid search would cover, but tries far fewer combinations, so its running time is much lower. For example, I ran RandomizedSearchCV for Logistic Regression and set up several possible values for each parameter: 'C', 'penalty' and 'class_weight'. The search then tried the sampled combinations of input parameters and output the combination from the best trial. With this set of optimal hyperparameters, I improved my learning algorithm. Here is what the sample code looks like for my hyperparameter optimization: lr = LogisticRegression(); params = {'C': [0.01, 0.1, 1, 10], 'penalty': ['l1', 'l2'], 'class_weight': ['balanced', None]}; lr_grid = RandomizedSearchCV(param_distributions=params, estimator=lr, cv=10, scoring='accuracy', n_iter=1)
Q: Why do you think your chosen model works better than your other ones? Since the final decision we generate is only '1' or '0' for ACTION, Logistic Regression works much better than the other models, because logistic regression is good when there is a single decision boundary. Also, logistic regression is intrinsically simple: it has low variance and so is less prone to over-fitting, while decision trees can be scaled up to be very complex and thus are more liable to over-fit. In addition, feature engineering improved the final accuracy score more for the logistic regression model than for the other ones, for the same reason.
Q: What would you consider doing in the future to increase your performance?
Try more classifiers, as well as ensembles of different combinations of classifier models with different weights.
Do more research and get a better understanding of the features, so as to better eliminate unimportant features and combine features into hybrids.
Perform one hot encoding and other feature engineering methods on all the models and compare results, if time and hardware allow.
Use a bisection method to optimize hyperparameters more precisely, and try wider and finer ranges when choosing parameters.
Q: A few sentences on how you considered the problem of overfitting/underfitting. Since the results are only 0s and 1s, there might be a problem with overfitting. As mentioned above, logistic regression is intrinsically simple: it has low variance and so is less prone to over-fitting, while tree-based models can be scaled up to be very complex and thus are more liable to over-fit.
As for underfitting, which refers to a model performing poorly even on the training data, adding new domain-specific features or using another classifier may help to better capture the relationship between the input values and target values. Q: Visualizations of the ROC curve of your final model and your initial model with some comments on the improvement. My initial model is a simple decision tree. Its ROC curve is shown below; the AUC value comes out at 1.00, which means this model is definitely overfitting, which is not good. I also plotted an ROC curve for the gradient boosting classifier; its AUC value is only 0.60. My final model is a logistic regression model with one hot encoding. Its ROC curve is again shown below, and the AUC value improves to 0.99, which shows this is a much better model.
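To make the approach described above concrete, here is a minimal Python sketch of the reported pipeline: one hot encode the categorical columns with pandas, then tune a logistic regression with RandomizedSearchCV. The file path, solver choice and n_iter value are assumptions for illustration, not the author's exact settings.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

# Load the Kaggle training data (ACTION label plus 9 categorical feature columns).
train = pd.read_csv("train.csv")  # hypothetical local path to the Kaggle file
y = train["ACTION"]
# One hot encode every feature column; this expands into thousands of binary columns.
X = pd.get_dummies(train.drop(columns=["ACTION"]).astype(str))

lr = LogisticRegression(solver="liblinear")  # liblinear supports both l1 and l2 penalties
params = {"C": [0.01, 0.1, 1, 10],
          "penalty": ["l1", "l2"],
          "class_weight": ["balanced", None]}
search = RandomizedSearchCV(estimator=lr, param_distributions=params,
                            n_iter=5, cv=10, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)

The best estimator found this way can then be refit on the full training set and used to score the one hot encoded test file for submission.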
Amazon Employee Access Challenge with Machine Learning
4
amazon-employee-access-challenge-with-machine-learning-1459a07bf2e3
2018-05-11
2018-05-11 16:22:50
https://medium.com/s/story/amazon-employee-access-challenge-with-machine-learning-1459a07bf2e3
false
1,209
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Chuqing Wang
null
7934495cd88
chuqing_wang
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-11
2018-06-11 15:55:31
2018-06-11
2018-06-11 16:23:48
0
false
en
2018-06-11
2018-06-11 17:05:49
4
145af1daba2c
0.954717
0
0
0
The problem with covering AI nowadays is the non-stop flow of news and developments and pitches and ideas, all of them totally worth a…
5
CogX2018 — Day one — And I am already knackered The problem with covering AI nowadays is the non-stop flow of news and developments and pitches and ideas, all of them totally worth a feature — someone's got to come up with an AI-powered relevance-filtering device. The panels at CogX2018 are just too many for one human reporter — and having to choose is a constant pain. The good news: most have been live-streamed and are now available here, Charlie Muirhead be thanked. Bright minds on seven different stages at the same time and tons of know-how — my favourite discussions took place on Stage 1, which hosted thorough and in-depth analysis of "The Impact of AI" on our lives, society, work habits, mobility and culture — plus the inevitable ethical implications on Stage 4. The conference is being extensively covered; a couple of good stories are already out, like this wrap-up of Professor Jürgen Schmidhuber's take by Chris Middleton from The Internet of Things, or Shona Gosh's piece from Business Insider reporting on how SoftBank eventually persuaded Uber to accept 8 billion dollars (spoiler: it took 6 months of hard work). An extremely interesting contribution is this column by Paul Armstrong from Forbes, who sharply highlights the big issues at the core of the public debate (Stage 1 again), namely AI and politics (Brexit, indeed), Europe vs China, and regulation vs free rein. With an appropriate headline: Artificial Intelligence: What Side Of History Do You Want To Be On? More tomorrow.
CogX2018 — Day one — And I am already knackered
0
cogx-day-one-and-i-am-already-knackered-145af1daba2c
2018-06-11
2018-06-11 17:05:50
https://medium.com/s/story/cogx-day-one-and-i-am-already-knackered-145af1daba2c
false
253
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Human Maps
Sabrina Provenzani on Platforms, Artificial Intelligence, New Paradigms
993b69ba34b4
sabrinaprovenzani
114
119
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-04
2018-09-04 01:55:50
2018-09-06
2018-09-06 00:45:50
1
false
en
2018-09-06
2018-09-06 00:45:50
2
145c3003d93b
3.222642
1
0
0
Dr. Michel Kliot, Clinical Professor of Neurosurgery at Stanford University, has been helping Flow Immersive to understand the brain…
4
Dr. Kliot Interview #1: We’re getting dumber Dr. Michel Kliot, Clinical Professor of Neurosurgery at Stanford University, has been helping Flow Immersive to understand the brain science behind virtual reality and memory. I interviewed him last week, and this is the first of several transcripts, slightly edited and shortened for clarity. Jason Marsh: We’ve often chatted about how we have sort of evolved to a point where we have so much information that we need the kind of technology that Flow has developed simply to understand, analyze, and use the information in intelligent ways. Dr. Michel Kliot: So the analogy I use in medicine is not in why tumors grow, but why most of them stop, and the problem today is not in the sensitivity but the specificity, and by that I mean we do so many screening tests. We’ve uncovered so many tumors that were asymptomatic, and the question with a screening test is: is it relevant? What is the specificity? Is this tumor going to create a problem? Drowning in data, courtesy of CC4TV.com And the problem when you screen too much is that we’re uncovering a lot of things that we think might be pathological based on old information, but which truly are not. They are not even going to bother you. If you pursue the conventional medical approach to something that’s not really a problem, you cause new problems. You may do a biopsy that’s unnecessary, you may get a false positive, you may get a complication. All these things stem from a lack of specificity, because our sensitivity has increased enormously. In the information age, what I would say is just that we’re inundated with so much data, and we’re actually lost without the kind of tool that you’re creating. I think we will become dumber because we are swimming, no, we are drowning, in data. We’re actually drowning in data and what you need to do is create islands of intelligent information which we can grab onto. Jason Marsh: Wow. Kliot: Yea. Marsh: Once we understand it, then we need to communicate it to someone else because a lot of what we are discovering is non-obvious, right? I think that our society maybe more than ever has these narratives that it believes in and medicine is a part of that, but politics, whatever it is, we have these confirmation-bias narratives. And it turns out that data stories are one of the few things that can help us change the narrative in our minds. Kliot: If we analyze it in an intelligent way. Marsh: Data science focuses on analysis, but aren’t there two activities: data discovery and data communication? Kliot: When I say analyze, I don’t mean it simply in an individual context. Obviously, your first pass is through your own brain, but then, in order to make it really meaningful, I think you have to communicate it to other people and consequently reanalyze it. I think we are drowning in so much information and I think we have to have tools like you’re developing to see through it all and allow us to analyze and communicate it in a relevant and intelligent manner. But I think what you’re doing has to happen in order for us to, frankly, continue to be intelligent, because what I find is we’re getting dumbed down. There’s so much information out there; I’m always amazed: when I was at the VA or any large institution, I would look around and I would say 98 or 99% of the people are very nice and actually quite smart, but I will tell you I am constantly amazed at the stupid policies and things that happen at large organizations. 
There is a dumbing down that occurs when you reach a certain level of complexity. It’s almost inevitable. And I think the only way we can sort of develop an antidote is to be clever. I think you’re being very clever in developing ways that the data can be filtered, analyzed and presented. Marsh: Right. Kliot: And you have to do that or we’re going to drown in the complexity. We get dumber, not smarter. Now that’s the danger of too much data: that you actually become dumber. But at least there’s the potential to become smarter if you can manage it. Marsh: That’s the goal. That really is the big picture goal behind the company. Kliot: I think it’s time. I almost think that you represent a need that has to be fulfilled, so I think if you believe in destiny, you can say that circumstances created it. There’s a need for it and then there’s nothing better than fulfilling an important need. That’s really a great thing. ______________ Find out more about Flow Immersive at https://flow.gl or sign up for the Flow Editor at https://a.flow.gl .
Dr. Kliot Interview #1: We’re getting dumber
1
dr-kliot-interview-1-were-getting-dumber-145c3003d93b
2018-09-06
2018-09-06 00:45:50
https://medium.com/s/story/dr-kliot-interview-1-were-getting-dumber-145c3003d93b
false
801
null
null
null
null
null
null
null
null
null
Health
health
Health
212,280
Jason Marsh
Virtual Reality Information Architect
7575db899674
jmarshworks
90
71
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-23
2018-07-23 12:38:53
2018-06-29
2018-06-29 11:15:22
17
false
en
2018-07-23
2018-07-23 12:52:21
8
145ead89ae29
5.532075
0
0
0
There are 64 matches, 48 in the group stage and 16 in the knockout stage. We’ve analyzed the total number of matches Google predicted…
5
Can you trust Google’s 2018 World Cup Predictions? There are 64 matches, 48 in the group stage and 16 in the knockout stage. We analyzed the total number of matches Google predicted correctly over the total number of matches, divided the analysis into three dimensions, and answered the most important question: who will be the winner of the 2018 World Cup?
World Cup Predictions If the previous World Cups were marked by all sorts of animals "predicting" the winners, the 2018 World Cup will be remembered for the arrival of some serious competition. While Achilles "The Psychic Cat" and Nelly the Elephant are still around making their predictions, this time they have to share the stage with some big shots. To start with, remember the yellow family from Springfield that predicted Donald Trump's victory in the 2016 US elections? Well, The Simpsons also apparently predicted that Mexico and Portugal were going to play the final. But what the 2018 World Cup will really be remembered for is the arrival of the robots (scary!?). From Goldman Sachs to UBS, from Google to IBM, this year Wall Street and Silicon Valley behemoths put their machine learning teams to work on the most important question human beings have been asking every four years: who will win the World Cup? Google adopted a different approach. Instead of trying to predict the winner, they set predictions for every single match in the group stage and added new predictions through the knockout stage. It's a bettor's dream. But how much can you really trust Google's predictions? In this article we tracked these predictions against reality.
Overall accuracy for Google's 2018 World Cup predictions Google's predictions proved to be right for the first two matches (a 100% accuracy rate), but as the tournament progressed, the prediction accuracy decreased; the overall accuracy rate (total matches accurately predicted / total matches) never went above 70% after the second match and ended the tournament at 59.4%.
World Cup Group Stage Predictions The group stage finished with an average accuracy rate lower than 60% (56.3% to be exact). It's no surprise that the accuracy rate was close to 50%, given that there were lots of unexpected results in this World Cup!
Which matches did Google accurately predict? One could say that Google got certain matches more or less right, the draws for example. In order to simplify the analysis, we chose not to consider match scores. Rather, we account for wins and losses and approach outcomes from a binary perspective: did the team with the higher probability of winning actually win the match? Google was right for the following 27 matches.
Which matches did Google predict inaccurately? Google was wrong for the following 21 matches.
2018 World Cup Knockout Stage Predictions If we analyze the knockout stage independently from the group stage, Google's prediction accuracy increased from 56.3% in the group stage to 68.8% in the knockout stage.
Which matches did Google accurately predict in the Round of 16? In the Round of 16, Google was right for 5 matches and wrong for 3; as a result, Google got all but one of its predicted Quarter-final matchups wrong. Our friends from Mountain View predicted Portugal v. France in a revival of the Euro 2016 final (wrong), Brazil v. Belgium (right), Spain v. Croatia (wrong) and Switzerland v. England (wrong). The original post includes tables of the Round of 16 matches Google predicted accurately and inaccurately. 
Which matches did Google accurately predict in the Quarter-finals? In the Quarter-finals, Google was right for 3 matches and wrong for only 1 (Brazil v. Belgium). As a result, they got one Semi-final matchup right (Croatia v. England) and one wrong (France v. Belgium). The original post includes tables of the Quarter-final matches Google predicted accurately and inaccurately.
Which match did Google accurately predict in the Semi-finals? In the Semi-finals, Google was right for 1 match (France v. Belgium). As a result, they didn't get the Final matchup right.
Which match did Google accurately predict in the Finals? In the Finals, Google was right about both the third place and the winner of the World Cup.
2018 World Cup Knockout Stage Predictions From the elimination of Germany to Argentina having to sweat blood to get past the group stage, this was a World Cup where surprises were the new norm. Not even Achilles "The Psychic Cat" anticipated that Germany would succumb to South Korea. But what is the real impact of surprises on Google's predictions? For instance, if Germany and Argentina had won all 3 of their group matches, the average accuracy rate would have increased to 64.5%, as shown in the chart below. You don't have to be a German supporter flying back home to agree that an accuracy rate close to 50% is not enough to plan your World Cup vacations. We don't have access to how Google calculated the predictions, but we can speculate that one of the biggest limitations of the model is inherent to the whole World Cup dynamic: the number of times national teams play before the World Cup. Although the model might take into account a team's performance over the last 18 to 24 months, its FIFA ranking and how well the players have been performing before the World Cup, there aren't enough data points to capture the interactions between the players and the interactions between the two teams. The team: before the World Cup starts, the national teams play only a couple of times a year, and coaches test several different formations until they find an "ideal" one. By changing the players, we also change the model. The problem is that once the squad is stable, with the ideal team, we still don't have enough observations, so the model effectively gets tested in real-life conditions, namely the World Cup itself. The match: the two teams don't play against each other with the same line-ups very often, which makes it extremely difficult to build a model with enough similar observations to train on. Want to know how we made this analysis? Get in touch. Originally published at www.metriq.io on June 29, 2018.
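The accuracy figures quoted above are simply correct predictions divided by total matches, with each outcome treated in the binary win/lose sense described earlier. Here is a small illustrative sketch of that metric; the rows are made up, since the underlying prediction table is not reproduced in this article.

import pandas as pd

# Toy prediction table: "predicted" is the team given the higher probability,
# "actual" is the team that won. These three rows are stand-in examples only.
results = pd.DataFrame({
    "match":     ["Match A", "Match B", "Match C"],
    "predicted": ["Team 1", "Team 3", "Team 5"],
    "actual":    ["Team 1", "Team 4", "Team 6"],
})

results["correct"] = results["predicted"] == results["actual"]
accuracy = results["correct"].mean()  # correct predictions / total matches
print(f"Accuracy: {accuracy:.1%}")    # 33.3% for this toy sample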
Can you trust Google’s 2018 World Cup Predictions?
0
can-you-trust-googles-2018-world-cup-predictions-145ead89ae29
2018-07-23
2018-07-23 12:52:22
https://medium.com/s/story/can-you-trust-googles-2018-world-cup-predictions-145ead89ae29
false
1,042
null
null
null
null
null
null
null
null
null
Soccer
soccer
Soccer
22,100
metriq
metriq assists you in modernizing your business through data collection, visualization and automation.
35c7beee2ee6
metriq
6
16
20,181,104
null
null
null
null
null
null
0
null
0
a7fe8b0bb56e
2018-05-02
2018-05-02 10:48:03
2018-05-02
2018-05-02 12:29:00
1
false
en
2018-08-06
2018-08-06 10:10:20
8
145f105f64f7
2.067925
4
0
0
→ Check our BLOG → Follow us on: Facebook, Twitter, Instagram, LinkedIn
5
Deep Image by TEONITE — the app that uses machine learning to enlarge images without losing quality → Check our BLOG → Follow us on: Facebook, Twitter, Instagram, LinkedIn
Deep Image by TEONITE Have you ever wondered where innovative ideas come from? In our software house, TEONITE, we usually implement ideas to solve or improve a specific issue. Deep Image, an application created by TEONITE's full-stack developer Andrzej, came about almost by accident, to solve a particular problem with the quality of graphics files. Want to know how Deep Image works, or how it was made? Check our case study HERE.
How can Deep Image help you? Before we describe what Deep Image is, check whether any of the following applies to you:
you have legal, archived graphics files of your favorite artists and would like to turn them into a poster, but their resolution does not allow for printing
you have an archive of private photos in low resolution and would like to improve their quality
you need to enlarge or cut out a specific fragment of a photo or graphic (for example, a school photograph from years ago), but you are worried about the quality
you work in CSI Miami and have to enhance a picture from a city camera, because the criminal's face is reflected in a passerby's glasses
OK, so how does Deep Image actually work? Each graphics file is a matrix with a set of data stored in it (numbers, i.e. pixels). After enlarging the image, the amount of data does not increase, so the resulting image has visually worse quality. In Deep Image, thanks to the use of machine learning, we get a larger image with much better quality.
Neurons, modeled on humans, in action The core of the application is a CNN, or ConvNet: a class of convolutional neural networks used successfully for image analysis. Convolutional networks are inspired by biological processes, modeled on human neurons. In Deep Image the networks keep developing: the network learns that a line in a graphic should not have jagged edges and should be smooth, and the more smoothing it can apply, the better the quality of the output file.
"We need to go deeper!" Working with neural networks does not look like a standard programming process. It is more like science, based on showing the network patterns. The application works through an iterative process that brings the final graphic close to perfection. Everything happens automatically through the framework used in Deep Image: Keras, a high-level API written in Python that runs on top of the TensorFlow and Theano neural network libraries.
Remove artifacts / resize The entire process uses two algorithms developed on the basis of scientific research: a zoom algorithm (resize) and a JPG algorithm (remove artifacts). Want to know how Deep Image works, or how it was made? Check our case study HERE.
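To give a feel for what a Keras-based super-resolution network looks like, here is a minimal SRCNN-style sketch. The layer sizes and the random stand-in training data are assumptions for illustration; this is not TEONITE's actual Deep Image model.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_srcnn(channels=3):
    # Small convolutional network that maps an upscaled, blurry image
    # to a sharper image of the same size.
    inputs = keras.Input(shape=(None, None, channels))  # works for any image size
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(channels, 5, padding="same")(x)
    return keras.Model(inputs, outputs)

model = build_srcnn()
model.compile(optimizer="adam", loss="mse")

# Training pairs: bicubically upscaled low-resolution patches -> original high-resolution patches.
# Random arrays stand in for real image patches here.
low_res = np.random.rand(8, 64, 64, 3).astype("float32")
high_res = np.random.rand(8, 64, 64, 3).astype("float32")
model.fit(low_res, high_res, epochs=1, batch_size=4)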
Deep Image by TEONITE — the app that uses machine learning to enlarge images, without losing…
57
deep-image-by-teonite-the-app-that-uses-machine-learning-to-enlarge-images-without-losing-145f105f64f7
2018-08-06
2018-08-06 10:10:20
https://medium.com/s/story/deep-image-by-teonite-the-app-that-uses-machine-learning-to-enlarge-images-without-losing-145f105f64f7
false
495
Case studies, software development stories, programming tutorials and open source publications https://teonite.com/
null
TEONITE
null
TEONITE
teonite
TECHNOLOGY,SOFTWARE DEVELOPMENT,DESIGN,DATA SCIENCE,SOFTWARE ENGINEERING
teonite
Machine Learning
machine-learning
Machine Learning
51,320
Paulina Maludy
#marketing at #teonite https://teonite.com/
badc24183091
PaulinaMaludy
366
495
20,181,104
null
null
null
null
null
null
0
null
0
ea8d68f07867
2018-07-29
2018-07-29 14:39:42
2018-07-29
2018-07-29 14:50:29
0
false
en
2018-07-29
2018-07-29 14:50:29
0
145f3b987e0e
1.94717
1
0
0
Being a designer is hard. A designer is often faced with many dilemmas such as should I learn to draw? Should I learn to code? Should I…
5
Code, AI and design Being a designer is hard. A designer is often faced with many dilemmas: should I learn to draw? Should I learn to code? Should I learn software tools? Endless. Today, though, I would like to focus on one of these dilemmas: designers and code. Should a designer learn to code? Yes, no, maybe. THIS ABSOLUTELY DEPENDS ON YOU. You are the designer; you have got to take a call on whether you want to dip your feet in this area. Personally, I see learning to code as a boon because you understand the backend of your work, and it allows you to recognise the limitations, restrictions and also the possibilities for a developer. You become independent and confident regarding your work. With the advent of AI and ML, the world of designers has expanded, and the immense possibilities have given us inspiration for a better future. We kickstarted our first session of Design with Code with a debate on this topic. I would like to express my opinions on the topics touched upon in class as well as a few of my own readings on these topics. "AI is the new electricity" — Andrew Ng. Pondering over this, I wonder how designers can make use of AI to mould their designs. Machine learning, being a part of AI, makes me question the future of living. With all the sci-fi movies and books that talk about singularity, I think it is only fair to credit machine learning. With social media being a major platform for extracting data, machine learning is growing at an unimaginable pace. Data is the new oil. I recently attended a talk wherein an engineer kept stressing how extremely important data is, and I related to that talk when we spoke about data in class. Currently data is the most important element for companies, as machines learn only if they are fed data. With the shift in ideas, voice has recently picked up, and Amazon Echo and Google Home have started collecting data on people's lifestyles. This happens to be extremely precious data, but it raises privacy issues (which would be a completely different debate altogether). Well, I guess we can vouch for how precious social media data is after finding out that the government of Uganda imposed a social media tax. Yes, a social media tax. (This was something that I HAD to mention.) "In a few years, bots will move so fast you'll need a strobe light to see it" — Elon Musk. The increase in the quality of bots shows us the progress in the industry. For example, Olli is a driverless bus employing IBM's Watson. Voice is the most natural mode of communication, and this is what conversational agents target in order to attract users and improve experiences. Another example of improving experiences is Ira in the banking sector: Ira is a chatbot used by HDFC Bank that allows users to have a delightful banking experience. Reading and learning about these topics, I am quite excited about the project that lies ahead of me!
Code, AI and design
1
code-ai-and-design-145f3b987e0e
2018-07-29
2018-07-29 14:50:29
https://medium.com/s/story/code-ai-and-design-145f3b987e0e
false
516
A series of studios to explore, design and learn the practice of prototyping with coding
null
null
null
Design with code
null
design-with-code
DESIGN,INTERACTION DESIGN,PROGRAMMING,PROTOTYPING
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Tanya
UI/UX designer
46c81c6c988
tanyaballal
3
2
20,181,104
null
null
null
null
null
null
0
null
0
1f35b6f451e8
2018-07-05
2018-07-05 14:09:58
2018-07-05
2018-07-05 14:14:38
0
false
en
2018-07-06
2018-07-06 09:53:15
7
145f92a7c13b
0.758491
0
0
0
null
5
Webography for 4 dummies to make it in machine learning — Chapter 19, Scene 4
Calculating arithmetic mean (one type of average) in Python (stackoverflow.com)
numpy.mean - NumPy v1.14 Manual (docs.scipy.org)
pandas get column average/mean (stackoverflow.com)
LUIS Entity or Intent (stackoverflow.com)
pandas.read_csv - pandas 0.23.2 documentation (pandas.pydata.org)
Python difference between randn and normal (stackoverflow.com)
OSError when reading file with accents in file path · Issue #15086 · pandas-dev/pandas (github.com)
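A minimal sketch tying the first few links together: computing an arithmetic mean with the standard library, NumPy, and a pandas column. The sample values and the column name are made up for illustration.

import statistics
import numpy as np
import pandas as pd

values = [3, 5, 7, 9]
print(statistics.mean(values))   # 6   -- standard library approach
print(np.mean(values))           # 6.0 -- NumPy approach

df = pd.DataFrame({"score": values})
print(df["score"].mean())        # 6.0 -- pandas column average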
Webography for 4 dummies to make it in machine learning — Chapter 19, Scene 4
0
webography-for-4-dummies-to-make-it-in-machine-learning-chapter-19-scene-4-145f92a7c13b
2018-07-06
2018-07-06 09:53:15
https://medium.com/s/story/webography-for-4-dummies-to-make-it-in-machine-learning-chapter-19-scene-4-145f92a7c13b
false
201
We offer contract management to address your aquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best ressources to handle their transactions fast & SAFE.
null
ethercourt
null
Ethercourt Machine Learning
ethercourt
INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING
ethercourt
How To Make It
how-to-make-it
How To Make It
266
WELTARE Strategies
WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner
9fad63202573
WELTAREStrategies
196
209
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-10
2017-09-10 17:30:50
2017-09-14
2017-09-14 06:42:27
26
false
en
2017-09-18
2017-09-18 07:22:30
1
145fb9cc1685
5.725472
7
1
0
Hello guys, this case study is about finding the insights of Jabardasth TV Show. I am really thankful to YouTube and ETV Channel. (This…
5
Data Science Case Study (Finding Insights of Jabardasth) Hello guys, this case study is about finding insights from the Jabardasth TV show. I am really thankful to YouTube and the ETV channel. (This study is based on Jabardasth TV show data, which I collected from YouTube.)
Jabardasth — Kathanak Comedy Show "Jabardasth is a very popular comedy show among Telugu TV shows. Currently Jabardasth has 13 lakh+ subscribers on YouTube."
Collect the data The first and most important task for any project is data collection. I collected the data from the YouTube APIs using 'httr' in the R programming language. Data is never a ready-made, ready-to-use thing; we need to extract features from the existing raw data. (Data collected from 2015-07-30 to 2017-06-19 for Jabardasth shows, around 1,942 entries.) Raw data from the YouTube API.
Data Pre-Processing The most crucial, key part of a data scientist's job is data pre-processing. (This is my opinion; I found it very interesting and useful for understanding the data insights before plotting any graphs or applying any ML algorithms.) I spent around 70 to 80 percent of the time on this case study in data pre-processing. Data after pre-processing.
Feature Engineering What do we understand from the features, and what insights can we find? Before proceeding to modelling (ML), we need to understand the data and try to get as many insights as possible.
Jabardasth vs Extra-Jabardasth We can clearly see the views trend of Jabardasth vs Extra-Jabardasth in the plot above. From the graphs we can clearly see a sudden bump in the Jabardasth plot (the tall blue line). We call it an insight that we found. If we go through the data, there is a reason behind it: a special event organised by ETV on 31 Dec, 2015 as part of their New Year celebrations. The original post also shows the likes, comments and dislikes trends of Jabardasth vs Extra-Jabardasth, plots for Jabardasth intros vs Extra-Jabardasth intros and Jabardasth promos vs Extra-Jabardasth promos, the skits trend by artist, artist-to-artist views, and an artists graph with each artist's top-viewed skit marked.
Time to jump into ML algorithms We have seen the data insights so far; it is now time to apply some ML algorithms to the data. Before implementing any ML algorithm, we should thoroughly understand the details below. What is the business problem (what does the client expect)? What kind of data is available to us (sometimes data volume also plays a major role)? What type of problem are we going to solve (e.g. classification or regression)? Understand the available data with respect to the given problem statement.
Assume our business request is to predict the number of views of a skit. Key point: this business problem is a regression type (linear regression). Check the independent variables for correlation; correlation may cause inaccurate results. Improve the model by adding additional features and proper data transformations. I applied the following transformations: log transformation, normalization and standardization. The residuals plots and linear model summary images are shown in the original post. Transform the Artist column into a categorical variable and apply the linear regression; we can then observe a better Adjusted R-squared value. 
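The original analysis was done in R; the following is a hypothetical Python analogue of the regression step just described, treating the artist as a categorical feature via dummy variables. The column names and the tiny sample data are assumptions for illustration, not the case study's real dataset.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Stand-in data: one row per skit, with the artist, likes and views columns assumed.
data = pd.DataFrame({
    "artist": ["aadi", "sudheer", "aadi", "chandra", "sudheer"],
    "likes":  [1200, 3400, 900, 1500, 2800],
    "views":  [90000, 250000, 70000, 120000, 210000],
})

# One hot encode the categorical artist column; keep likes as a numeric feature.
X = pd.get_dummies(data[["artist", "likes"]], columns=["artist"])
y = data["views"]

model = LinearRegression().fit(X, y)
print(model.score(X, y))  # R-squared on the training data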
Find the Artist, based on the Views and Likes Key point: this business problem is a classification type. We will predict the artist with the help of logistic regression. Convert the Artist feature to a categorical type. Since the features are highly correlated, we can drop one of two highly correlated features. (We can drop multiple features based on the correlation between them.) Since we are solving a simple classification problem, I took the data of two artists. Our prediction worked very well: there are no false predictions, but we should be very careful about the overfitting problem. If we overfit the model, it may fail when predicting unseen data.
Next, we try one more classification problem. Find the video type, based on the Views and Likes Not a good prediction; we need to improve the accuracy by fine-tuning the features or adding more features to the model. (Simply adding features may not help improve the predictions.) We then apply the standardization technique and try a KNN model. It's amazing: we see 100% accurate results. We need to keep improving the model until we get the desired accuracy. (Which prediction measure matters, accuracy, recall or precision, depends on the business problem.)
Text Mining — What people are commenting about A word cloud is a graphical representation of frequently used words. The size of each word (height and width) in the picture indicates the frequency of occurrence of the word in the entire text. Why do we do this for the comments data? We have seen a few insights into the Jabardasth show using different metrics and ML techniques. If we would like to know what users are commenting most about this show, text mining will help us find those insights.
User Comments We apply a few pre-processing steps before drawing the word plot. Remove punctuation. Remove numbers (numbers alone can't be used). Convert everything to a single case, upper or lower. Remove stopwords (e.g. a, an, the, he, she, etc.). Strip white space. We can use the 'removewords' function to remove further words based on the requirement. We then sort the words in descending order of frequency. We may need to do a little more customized pre-processing (optional). Now we see the most frequently used words (top 20).
Word Cloud From the plot, we can see the most used words in the comments. For insights, if we ignore generic words like 'super', 'skit' and 'nice', the next most popular words are 'aadi', 'sudheer' and 'chandra', followed by 'roja', 'anasuya' and so on. Sentiment Analysis based on user comments In-Progress…
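The comment pre-processing above was done with R's text mining tools; here is a hypothetical Python analogue of the same steps: lowercase the comments, strip punctuation and numbers, drop stopwords, then count word frequencies, which is what a word cloud is built from. The sample comments and the tiny stopword list are stand-ins.

import re
from collections import Counter

comments = ["Super skit by Aadi!!", "Sudheer is awesome", "Nice skit, super comedy 100%"]
stopwords = {"a", "an", "the", "is", "by", "and"}   # tiny stand-in stopword list

words = []
for comment in comments:
    # Lowercase, then keep only letters and whitespace (removes punctuation and numbers).
    text = re.sub(r"[^a-z\s]", " ", comment.lower())
    words += [w for w in text.split() if w and w not in stopwords]

# The most frequent words are the ones that would appear largest in the word cloud.
print(Counter(words).most_common(5))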
Data Science Case Study (Finding Insights of Jabardasth)
29
data-science-case-study-finding-insights-of-jabardasth-145fb9cc1685
2018-06-17
2018-06-17 06:48:37
https://medium.com/s/story/data-science-case-study-finding-insights-of-jabardasth-145fb9cc1685
false
974
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Santosh Kumar Palle
null
c69cd15b2b1f
santoshkumarpalle
5
3
20,181,104
null
null
null
null
null
null
0
null
0
c6b8394812fa
2018-07-10
2018-07-10 16:43:47
2018-07-10
2018-07-10 16:44:30
1
false
en
2018-07-10
2018-07-10 17:03:45
7
146210947715
3.026415
4
0
0
By Melissa Chadwick
5
Did A Robot Write This Blog Post? By Melissa Chadwick I was weaving my way through Washington, D.C. traffic last week on the way to a client meeting when I caught an interesting story on the 1A radio program, "Artificial Intelligence. Real News?" For once, sitting in the car for an extended period was going to be a good thing, I thought. Excellent. My ears first perked up when I heard how artificial intelligence (AI) figured in the royal wedding coverage and is transforming some of today's major newsrooms. The panel of guests was impressive: The Washington Post's artificial intelligence guru Jeremy Gilbert; Nick Rockwell, chief technology officer at The New York Times; Rubina Madan Fillion, director of audience engagement for The Intercept; and John Keefe, a developer at Quartz Bot Studio, professor at the CUNY Grad School of Journalism and former editor of Data News at WNYC. I really encourage you to listen to the whole program, because this short post alone can't do justice to the range of interesting topics the guests covered — including the future of storytelling, reporting ethics, the newsroom of the future and more. What struck me the most, however, is the delicate balance newsrooms need to strike between resource allocation and reporting quality. The Washington Post made some very strong arguments for how AI can automate simple, data-driven writing tasks — like corporate quarterly earnings or sports scores — so that staff can be freed up to write the more in-depth and investigative pieces the publication is known for. That's why they've enlisted the help of an in-house automated storytelling platform called Heliograf. By contrast, The New York Times rebuffed the idea of "robo-reporting" by essentially saying that their editorial goal is to not cover the type of story that any old robot could write anyhow. Guests also talked about the potential for AI to do the legwork together with humans. For example, chatbots can field inquiries about a story, or AI can sort through data sets so a reporter can get a head start on analysis without getting mired in the number crunching. But that also brought up the challenges of data bias and "whatever happened to talking to people" in terms of digging into the truth and writing something newsworthy. These are troubling questions, given how strong AI has become. For example, I had recently checked out bot or not, a Turing test for poetry, and was having serious angst about my inability to distinguish between computer- and human-generated verses (for shame!). So what does this mean for the media industry and brand content developers alike? The takeaway is that there's a time for pure facts and a time for telling a story, and there are avenues for AI and human news-gathering that correspond to each, depending on the audience. True investigative journalism and thought leadership as a whole can't be fudged or fabricated by a machine. Original thought, emotion, judgment and the ability to adjust questions based on verbal feedback or even an interviewee's body language would be difficult or near impossible for AI, as we know it today, to pull off. It's like when I once asked my mother, who was a technical patent translator in her day, what she thought about Google Translate and sites like Babel Fish. She laughed and rattled off quick examples of easy mistakes machines can make when they don't understand language intricacies like technical terminology, social context or double entendre. Some of those examples are not fit to repeat here. 
For agencies like Merritt that put a lot of weight on building seasoned writing teams that truly understand our clients and markets, content will always be an exercise of reading the proverbial room and shaping narratives to elicit response and drive action. Yes, AI will automate a number of content functions that don’t require thought or analysis, and that’s ok. Nobody wanted to do those tasks anyway. But for the true storytelling our clients need in order to reach their audiences, content creation goes far beyond what a computer will ever be able to write. “Spinning a good yarn” is a part of our collective fabric as humans, and good content should retain its “humanity” as a bastion of hope for us all — at least for the foreseeable future until the invasion begins. If you’re interested in real people telling real stories that matter for your brand, contact us. Originally published at www.merrittgrp.com.
Did A Robot Write This Blog Post?
53
did-a-robot-write-this-blog-post-146210947715
2018-07-10
2018-07-10 17:03:45
https://medium.com/s/story/did-a-robot-write-this-blog-post-146210947715
false
749
Insights on B2B/B2G technology marketing, PR, content and creative news & trends
null
MerrittGroup
null
Merritt Group
merritt-group
MARKETING,CONTENT,DIGITAL,CREATIVE
MerrittGroup
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Melissa Chadwick
Merritt Group VP of Content
3d341d8ffee2
mchadwickMG
160
530
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-01
2018-08-01 12:48:06
2018-08-01
2018-08-01 12:57:05
1
true
en
2018-08-01
2018-08-01 12:57:14
7
14623ce04286
1.633962
0
0
0
This course provided a fantastic finale to an already great course. This week delves deeper into the subject of CSV (Comma Separated Value)…
5
Duke University — Java Programming: Solving Problems with Software — Week 4 Image credit to Coursera, Duke University and Java. This week provided a fantastic finale to an already great course. It delves deeper into the subject of CSV (Comma Separated Value) files and how to use code to manipulate them. For this week's project, the students were given US data on baby names by gender and year. This data went as far back as 1880, all the way up to 2014! We wrote functions that would analyze the data set and rank names according to popularity. After that was completed, we wrote a second function that would find what the name would be in a different year with the same rank. We would then print "(Name) born in (year) would be (Name2) if (he/she, based on gender) was born in (year2)". This has provided me with plenty of enjoyment, as I have been checking what the names of family and friends would be had they been born in different years. For example, my name would be "Alejandro" if I was born in 1996. As always, to accomplish this task I had to use the Seven Step Method:
1) Work Example By Hand
2) Write Down What You Did
3) Find Patterns
4) Check By Hand
5) Translate To Code
6) Run Test Cases
7) Debug Failed Test Cases
There is an optional honors week that follows, which I will be getting to next time! Thank you for reading! If you would like to read more of my blogs, I have a list below.
My Experience With Learning Java My method of learning to code in JAVA begins with me seeing two course specializations on Coursera. The first…medium.com
Duke University — Java Programming: Solving Problems with Software — Week 2 This week of the course took a lot longer than I had originally anticipated. It provided quite a bit of challenge as…medium.com
Duke University — Programming Foundations with JavaScript, HTML and CSS — The Second Week I have just finished the second week of Duke University's Programming Foundations with JavaScript, HTML and CSS…medium.com
Duke University — Java Programming: Solving Problems with Software — Week 3 The third week of this course has been quite enjoyable. I, personally, have an interest in data science, and this week…medium.com
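The course work itself was done in Java with Duke's CSV libraries; purely as an illustration of the rank-matching idea described above, here is a hypothetical Python sketch. The two tiny per-year datasets are made up stand-ins for the real babynames files.

# Each record is (name, gender, count); rank 1 means the most popular name for that gender.
def rank(name, gender, records):
    """Return the popularity rank of a name within its gender, or -1 if absent."""
    ranked = sorted((r for r in records if r[1] == gender), key=lambda r: -r[2])
    for i, (n, _g, _c) in enumerate(ranked, start=1):
        if n == name:
            return i
    return -1

def name_at_rank(target_rank, gender, records):
    """Return the name holding a given rank within a gender, or None if out of range."""
    ranked = sorted((r for r in records if r[1] == gender), key=lambda r: -r[2])
    return ranked[target_rank - 1][0] if 0 < target_rank <= len(ranked) else None

# Made-up sample data for two years.
year_1994 = [("Brendan", "M", 9000), ("Alejandro", "M", 7000), ("Emily", "F", 12000)]
year_1996 = [("Alejandro", "M", 9500), ("Brendan", "M", 8000), ("Sarah", "F", 11000)]

r = rank("Brendan", "M", year_1994)
print(f"Brendan born in 1994 would be {name_at_rank(r, 'M', year_1996)} if he was born in 1996")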
Duke University — Java Programming: Solving Problems with Software — Week 4
0
duke-university-java-programming-solving-problems-with-software-week-4-14623ce04286
2018-08-01
2018-08-01 13:58:45
https://medium.com/s/story/duke-university-java-programming-solving-problems-with-software-week-4-14623ce04286
false
380
null
null
null
null
null
null
null
null
null
Programming
programming
Programming
80,554
Brendan M.
null
c7d314e61b4f
masseybr
13
3
20,181,104
null
null
null
null
null
null
0
null
0
9972619e86f1
2018-05-02
2018-05-02 06:32:30
2018-05-02
2018-05-02 06:47:35
0
false
id
2018-05-02
2018-05-02 07:11:35
5
146257093a08
4.430189
3
0
0
Artificial intelligence (AI) is the term for the intelligence possessed by…
4
(1/4) The Ethical Problems of the Smart Machine Revolution Artificial intelligence (AI) is the term for the intelligence possessed by software running on computers. The main goals of AI research include reasoning, knowledge, planning, learning, observation, perception, natural language processing, pattern recognition and much more. Today there are many examples of dramatic progress in AI, such as computers beating world chess champions, Google's driverless cars, badminton-playing robots, computers playing video games, and so on. This tends to produce speculative predictions that AI will reach human-level intelligence in the future. Ray Kurzweil, a leading thinker in technology forecasting, estimates this will happen by 2029. The world's technology companies have placed large bets, making huge investments in the profitability of AI. For example, the US technology giants Google, Microsoft, Amazon, Twitter and Facebook, as well as Baidu from China, have recently ramped up their AI research departments by hiring well-known AI researchers from academia. On the other hand, big names in technology such as Stephen Hawking, Bill Gates and Elon Musk have warned about the dangers that smart machines with human-level intelligence might pose. AI's progress began with a branch of computer science called machine learning, the field of AI that gives computers the ability to learn from data and produce decisions without being explicitly programmed. Machine learning allows computers to learn from experience, and the technology is everywhere today. Machine learning makes Web search more relevant, blood tests more accurate, and dating services better at finding matches that fit your criteria. Put simply, a machine learning algorithm takes an existing dataset, combs through it to learn patterns, and then uses those patterns to generate predictions about new data in the future. It sounds simple, but progress in machine learning over the last decade has changed a great deal. For example, AlphaGo, Google's computer system that uses machine learning, has beaten the best Go players in the world. Robots with cameras and machine-learned visual perception have been able to play badminton with professional players. The smart machine competition does not only involve machines. A few years ago Netflix, the online film rental company, wanted to help customers find films they would enjoy, especially the lesser-known, non-"new release" titles that were largely ignored in its catalog. The company already had a film recommendation system, but it was far from perfect. So it launched a competition to improve the existing system with a simple rule: the first entrant to beat the performance of the old system by 10 percent would win a $1 million prize. Tens of thousands of researchers and engineers from around the world signed up. For them, the competition was like a dream, and not just because of the prize money. The most important component of a smart machine system is data, and Netflix provided 100 million real data points ready to be used. 
Today it is not only Netflix; all of the technology giants such as Facebook, Google, Microsoft, Amazon, Twitter and Samsung have collaborated to accelerate the progress of smart machines by sharing experience, technology and, more importantly, a lot of data for free. Progress will only accelerate further. Smart machines are not a distant sci-fi concept; we already use them every day without realizing it. When you type in a word processor, machine learning algorithms help you type better by predicting typos and offering corrections. The same happens when you open a browser to search for something on the internet. We do not know how far this smart machine technology will go in changing the way we live, whether we notice it or not. There are several kinds or forms of smart machines, because artificial intelligence is a very broad concept. I use three main categories in this post: Artificial Narrow Intelligence (ANI): sometimes called weak AI. ANI is AI that specializes in a single domain, such as beating the world chess champion or playing a video game, but that is the only thing it does. Ask an ANI to find a better way to store data on a hard drive, for example, and it cannot do it. Artificial General Intelligence (AGI): sometimes called strong AI or human-level AI. AGI refers to a computer as intelligent as a human, able to carry out intellectual tasks the way a human can. Building AGI is a far harder job than building ANI, but a lot of research is now heading in that direction. Artificial Superintelligence (ASI): a smart machine far smarter than the best humans in almost every field, including scientific creativity, general wisdom and social skills. The great philosopher and leading AI thinker from Oxford, Nick Bostrom [2], defines ASI as ranging from a computer that is just slightly smarter than a human to one that is trillions of times smarter. ANI is a smart machine that matches or exceeds human intelligence or efficiency at a specific task. Some examples are Google's driverless car, speech recognition on smartphones (Siri, Cortana, S-Voice), email spam filters, Google Translate and Google Search, Facebook face recognition, and so on. ANI systems, as they exist today, are not very dangerous. But while ANI does not have the ability to pose an existential threat, we should look at this as an ever larger and more complex ecosystem, because even though it is relatively harmless, ANI will change the world. Furthermore, scientists and philosophers believe that AI is a revolution, taking the road from ANI, through AGI, toward ASI, which will certainly have an enormous impact on humanity. They believe every ANI innovation quietly adds another brick to the road toward AGI and then ASI. Along the way, the ethical issues tied to creating that future, and the possibility of machines with AGI capabilities far beyond humans, will raise very different ethical problems. ASI is not just a technological development; it would be the most important invention humans have ever made, triggering an explosion of progress in every scientific and technological field. An ASI smart machine is not merely superintelligent; it can keep developing, modifying and improving its own intelligence. 
It is hard to build a machine as smart as a human because the human brain is the most complex object in the universe. Google is currently spending billions of dollars trying to do it. Things that are hard for the human brain, such as calculus, financial market strategy and language translation, are now very easy for computers, while things that are easy for humans, such as perceiving motion and perception in general, are very hard for computers. As the computer scientist Donald Knuth put it, "AI has by now succeeded in doing essentially everything that requires thinking, but has failed to do most of what people and animals do without thinking." We will not discuss the technical complexity of reaching AGI or ASI in this blog. Instead, we will look at and discuss some of the ethical problems arising from the smart machine revolution that is happening now. We believe the progress of smart machine technology can create a major conflict with humanity, given the quality of today's moral and ethical thinking. Hopefully, though, the message will reach the architects of smart machines so they can choose motivations that are better for humanity. Smart machines could become uncontrollable because of their intellectual and technological superiority, which can develop on its own. It is therefore very important to give human-friendly motivations to smart machines, or to the engineers who will build them. Read on about ANI, AGI and ASI. I hope this is useful! TSMRA, Jakarta, 2016. Adapted from http://deepbrains.com/2016/06/1-masalah-etika-revolusi-mesin-pintar/ with the author's permission
(1/4) The Ethical Problems of the Smart Machine Revolution
8
1-4-masalah-etika-revolusi-mesin-pintar-146257093a08
2018-06-12
2018-06-12 14:03:02
https://medium.com/s/story/1-4-masalah-etika-revolusi-mesin-pintar-146257093a08
false
1,174
Machine Learning Indonesia
null
machinelearningid
null
machinelearningid
machinelearningid
MACHINELEARNINGID,ML ID,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DEEP LEARNING
machineid
Machine Learning
machine-learning
Machine Learning
51,320
Machine Learning Indonesia (ML ID)
Machine Learning Indonesian Community
43d18f969739
machinelearningid
12
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-17
2017-09-17 17:31:57
2017-09-17
2017-09-17 17:42:05
1
false
en
2017-09-18
2017-09-18 05:39:00
0
14625e7ccc4e
1.890566
6
0
0
Sight and sound are central to how we perceive the world. These are so intertwined with the human condition that it is almost a tactile…
5
World is sound! Sight and sound are central to how we perceive the world. They are so intertwined with the human condition that they are almost a tactile part of humanity itself. Humanity is often defined by our culture and art, and here too music and visual art have dominated our expression. However, in the technological revolution of the later part of the 20th century, sight has simply overtaken and overwhelmed our imagination. First it was color screens, then touch screens, 3D, and now augmented reality (AR) and virtual reality (VR); the progress has been staggering. Earlier, artistic expression required a medium, and technology served as an intermediary and an enabler. However, the recent technological pace has completely shifted the paradigm: now mediums such as AR and VR are looking for artists and creatives to serve the platform. Somehow sound has missed this popular revolution. Sure enough, there has been incremental development in sound technology, especially in music delivery technology, but apart from Bose's noise-cancelling headphone technology from the late 1980s and early 1990s, there has been no game-changing advancement. Sound is ubiquitous; it reaches where sight cannot, we are surrounded by it, it touches our hearts, it is our primary mode of communication and connection with the world, and yet its possibilities have remained unexplored. The oil & gas industry and the military have not missed the train, though. They have understood and exploited sound to do amazing things. The oil industry can build detailed 3D images of the earth tens of kilometers deep without a single drill, just by using sound. This is akin to taking a 3D picture of a dark room while standing three buildings away. Researchers in the medical industry are developing extremely high frequency sound waves to kill cancer tissue without radiation. Yet the amazing power of sound has remained hidden or out of reach in popular culture. Part of the reason is perhaps that it is a hard problem, and part of the reason may be that simpler, cheaper alternatives have filled the vacuum. All this is about to change thanks to the incredibly powerful machine in your hand: the mobile phone. "The internet of sound" aims to bring sound into the technological revolution it has missed. It aims to give the unexplored power of sound back to people. The universal nature of sound makes it very powerful and offers a unique platform for people and businesses to connect. Sound can be the thread that connects us without the confines of language, sight or network. The internet of sound can change our very perception of the world around us, and we are sitting on the cusp of this revolution.
World is sound!
6
world-is-sound-14625e7ccc4e
2018-05-05
2018-05-05 14:49:00
https://medium.com/s/story/world-is-sound-14625e7ccc4e
false
448
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Trillbit
Proximity Intelligence using the power of Sound
24cec947a886
bhaskar_77939
8
14
20,181,104
null
null
null
null
null
null
0
null
0
2d76617dc648
2018-04-14
2018-04-14 08:18:41
2018-04-14
2018-04-14 08:27:17
5
false
en
2018-05-01
2018-05-01 06:43:17
8
14627f0528e6
4.784277
2
0
0
Conversational platforms have introduced a paradigm shift in how humans interact with the digital world. For businesses, they are fostering…
5
Best Practices to Launch a Customer Service Chatbot for your Business Conversational platforms have introduced a paradigm shift in how humans interact with the digital world. For businesses, they are fostering a personal connection between users and the brand that ultimately adds value to customer service. The advent of chatbots, computer programs that hold conversations via auditory or textual methods, has transformed the way consumers interact with businesses. Their ability to mimic conversations and drive engagement (just like humans) has made them a promising entrant into the customer service industry. Given the benefits this AI application offers, businesses are making chatbots part of their customer service departments for engaging customers, cutting down the human workload, resolving frequently recurring problems, and so on. And it is not just the businesses: customers also see the advantage of chatbots and prefer them over human agents for getting quick answers to their questions in urgent scenarios (as stated in survey results by Statista). While automation in customer service is projected to put human jobs at risk, surveys reveal that customer service agents see every reason in favor of deploying a chatbot that could replace them. This, as a consequence, will allow them to invest their intellect in handling complex tasks, enable them to provide a personalized experience to customers, and make them feel more committed to the company. Chatbots are therefore claimed to augment these jobs instead of putting them at risk. (Chart: customer service agents' opinions on how chatbots indirectly improve their value to their company. Source: Statista) Considering the possibilities and opportunities that chatbots bring, businesses are counting on them to improve their customer service. However, before deploying a chatbot, it is important for any business to identify its use cases and act accordingly. Here are some best practices for deploying chatbots for your business. 1. Identify Tasks for the Chatbot For any business, it is important to deliberate on the tasks that a chatbot is expected to do. For this, you can determine the purposes for which you want your customers to turn to the chatbot. Do you want your virtual agent to be available when human agents are unavailable? For example: taking a query from the customer and raising a ticket for their concern; the customer's problem is then passed on for action to a human agent whenever one is available. For answering users' repeated questions: there are a few common questions that users ask, and deploying a chatbot to answer those simple, frequently asked questions saves the customer agents' time so that it can be invested in resolving complex issues. Understanding what your consumers expect from your brand/business: when users interact with a chatbot, it can help you identify pain points, expectations, or challenges they confront in connecting to your business. By understanding these concerns, you can take steps to close the gaps and, ultimately, serve them better. Settling on the list of tasks you want a virtual agent to do for your business helps you get the expected results after deployment. This analysis will further help your business to list its requirements during chatbot development. 2. Understanding Tools and Technologies A chatbot represents your business, and because these virtual agents are dedicated to responding to user queries, they should be prepared to do so efficiently. 
Remember that the goal of a chatbot is to establish a seamless connection between the business and its customers. It is important to keep a human touch in the exchange between the virtual assistant and consumers. For this, make sure that the chatbot's answers aren't too wordy or vague; this disappoints users and keeps the bot from serving its purpose. Make your chatbot understand intent. AI technologies like Machine Learning (ML) can make virtual assistants smarter as customers use them. Other AI technologies like Recurrent Neural Networks (RNNs), Natural Language Processing (NLP), and chatbot development tools can add further value to the assistant. 3. Defining the Platform for the Chatbot For B2C customer service, sales, and marketing, chatbots are integrated with messaging platforms like WhatsApp, Facebook Messenger, WeChat, Hike, Telegram, or via SMS. Since every messaging platform targets a niche, you can choose the platform where you are most likely to find users well suited to your business. For example, if you have an eCommerce app, having a chatbot for Facebook, Twitter or an SMS service will serve you well. You can look at the strengths of each messaging platform and the type of audience it has. For instance, the benefits of having a Facebook chatbot are its huge user base, access to marketing insights/stats, and more. So, before moving on to the chatbot development phase, analyze whether the features of a platform complement your business. 4. Understanding the Need to Involve Humans Almost every industry is leveraging chatbots for efficient customer service: online retail, healthcare, banking, and telecommunications, to name a few. However, the complexity of queries and the degree of human involvement depend on the industry. While chatbots are capable of handling the majority of inquiries, there are instances where human agents have to take over the conversation. In such scenarios, live customer service should be made available to handle the query, or a follow-up (through mail or call) should be scheduled. Although the latest technologies used in building a chatbot help it understand the customer and the problem context, and let it learn as it grows, complex queries still require human intervention to serve customers with 100% satisfaction. 5. Get Started with a Pilot Chatbot Project Before you introduce your business's full-fledged chatbot to customers, it is recommended to test their requirements and expectations of it. For example, you can get started with a pilot chatbot that includes the basic functionality you think will help users. During this chatbot-customer engagement, you can identify the higher-end requirements your customers have of the chatbot, which can further help you develop a chatbot that maximizes the productivity of your business and serves your customers better. If you want Daffodil to help you identify chatbot use cases in your business, schedule a free 30 min consultation with our Chatbot expert, Nitin Goyal Originally published at insights.daffodilsw.com.
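The human-handoff pattern described in point 4 can be sketched in a few lines. The snippet below is a minimal illustration of that flow, not Daffodil's implementation; the intent labels, the confidence threshold, and the escalate_to_agent helper are all hypothetical assumptions.

```python
# Minimal sketch of a chatbot-with-human-handoff loop (hypothetical names throughout).
from dataclasses import dataclass

FAQ_ANSWERS = {  # assumed, frequently asked intents the bot can answer on its own
    "opening_hours": "We are open 9am-6pm, Monday to Friday.",
    "order_status": "You can track your order from the 'My Orders' page.",
}

@dataclass
class IntentPrediction:
    intent: str
    confidence: float  # 0.0-1.0, produced by whatever NLU model you plug in

def classify(message: str) -> IntentPrediction:
    """Placeholder for an NLU call (e.g. an ML intent classifier)."""
    if "order" in message.lower():
        return IntentPrediction("order_status", 0.92)
    return IntentPrediction("unknown", 0.30)

def escalate_to_agent(message: str) -> str:
    """Placeholder: raise a ticket / page a live agent and acknowledge the user."""
    return "I've passed this to a human colleague; they will get back to you shortly."

def handle_message(message: str, threshold: float = 0.75) -> str:
    pred = classify(message)
    if pred.confidence < threshold or pred.intent not in FAQ_ANSWERS:
        return escalate_to_agent(message)   # point 4: involve humans on low confidence
    return FAQ_ANSWERS[pred.intent]         # point 1: answer the repeated, simple questions

if __name__ == "__main__":
    print(handle_message("Where is my order?"))
    print(handle_message("My payment failed twice and I was charged."))
```

The design choice worth noting is the single confidence threshold: anything the bot cannot answer confidently goes to a person, which is exactly the division of labor the article recommends.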
Best Practices to Launch Customer Service Chatbot for your Business
52
best-practices-to-launch-customer-service-chatbot-for-your-business-14627f0528e6
2018-05-25
2018-05-25 07:56:03
https://medium.com/s/story/best-practices-to-launch-customer-service-chatbot-for-your-business-14627f0528e6
false
1,047
Info, Trends, and Tutorials on App Development, Artificial Intelligence, Big Data, Healthcare, Fintech, IoT and more.
null
daffodilsw
null
App Affairs
app-affairs
MOBILE APP DEVELOPMENT,ARTIFICIAL INTELLIGENCE,HEALTHCARE,FINTECH,IOT
daffodilsw
Chatbots
chatbots
Chatbots
15,820
Daffodil Software
We build Mobile, IOT, & Web solutions that are intuitive, reactive and agile | www.daffodilsw.com
c9f8f493f5b0
daffodilsw
114
12
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-17
2018-09-17 06:01:11
2018-09-17
2018-09-17 06:01:12
7
false
ru
2018-09-17
2018-09-17 06:01:12
3
146283aff706
4.959434
0
0
0
null
5
Can machine learning interview questions be fun and deep at the same time? Here are 25 questions that not only test a candidate's knowledge and skills but also inspire a fruitful discussion of machine learning problems. Plus six funny pictures on the topic. Many data scientists study machine learning (ML) mainly from a practitioner's point of view. As a consequence, we may focus on mastering as many new packages, frameworks and techniques as possible rather than on a deep look at the fundamental theoretical aspects of ML. Also, in this piece my definition of machine learning includes ordinary statistical learning (i.e., it is not limited to deep learning). However, an inquisitive, thoughtful and persistent mind can come up with a host of wonderful ML questions, and working through the answers can beautifully reveal its deeper aspects. In short, such questions can help you pull your head out of the pile in the picture above. After all, we don't want to spend whole days shuffling data; we want to dive into the depths of the properties, quirks and subtleties of machine learning techniques and truly absorb them... And the internet already has plenty of articles about "standard machine learning interview questions". How about doing something different, something more interesting? Let me say right away: I am publishing these questions simply to inspire thinking and conversation. No ready-made answers are given. Some questions contain hints, but they are really there more for discussion; they do not point to one particular answer. Every question deserves a detailed discussion. There is no single right answer. Some questions are nerdy and some are just for fun. Enjoy 🙂 And I have also inserted a funny meme after every 5th question... Fun questions "P > 0.05. Game over, try again." I built a linear regression model that reports a 95% confidence interval. Does this mean there is a 95% probability that my model's coefficients correctly estimate the function I want to approximate? (Hint: it actually means that 95% of the time...) What do the Hadoop file system and the k-nearest-neighbors algorithm have in common? (Hint: "laziness") Which structure is more powerful in terms of expressiveness (i.e., it can faithfully represent a given Boolean function): a single-layer perceptron or a two-level decision tree? (Hint: XOR) And which is more powerful: a two-level decision tree or a two-layer neural network without activation functions? (Hint: nonlinearity?) Can a neural network serve as a dimensionality reduction tool? Explain how. "Types of headaches: migraine, tension, stress, the mathematical foundations of deep learning" Everyone criticizes and belittles the intercept term in linear regression models. Name one of its uses. (Hint: noise/garbage collector) LASSO regularization shrinks coefficients exactly to zero. Ridge regression shrinks them to very small but nonzero values. Can you explain the difference between them intuitively from the plots of two simple functions, |x| and x²? (Hint: the sharp corner in the plot of |x|) Suppose you know nothing about the distribution a certain dataset (continuous-valued numbers) was drawn from, and you are not allowed to assume that it is a Gaussian normal distribution. 
Give the simplest possible proof that, no matter what the distribution looks like, you can guarantee that ~89% of the data lies within ±3 standard deviations of the mean. (Hint: Markov's doctoral advisor) Most machine learning algorithms involve matrix operations in one way or another, such as multiplication or inversion. Give a simple mathematical argument for why a mini-batch version of such an ML algorithm can be computationally more efficient than training on the full dataset. (Hint: the time complexity of matrix multiplication...) Don't you think a time series is just a very simple linear regression problem with a single response variable and a single predictor, time? What is the problem with a linear regression fit (not necessarily with a single linear term; polynomials too) in the case of time-series data? (Hint: the past points to the future...) "What if I told you that this is regression?" Give a simple mathematical argument for why finding the optimal decision tree for a classification problem, among all possible tree structures, can be an exponentially hard problem. (Hint: how many trees are there in the jungle, anyway?) Both decision trees and deep neural networks are nonlinear classifiers, i.e. they partition the space with a complex decision boundary. Why, then, is a decision tree model so much more intuitive than a deep neural network? Backpropagation is the workhorse of deep learning. Name a few possible alternative techniques for training a neural network without backpropagation. (Hint: random search...) Suppose you have two problems: a linear regression and a logistic regression (classification). Which of them is more likely to benefit from the discovery of a new, ultra-fast matrix multiplication algorithm? Why? (Hint: which of them is more likely to use matrix operations?) How does correlation between predictors affect principal component analysis? How can you deal with it? "What to do with... correlation?" You have been asked to build a classification model for meteorite impacts on Earth (an important project for human civilization). After a preliminary analysis you get 99% accuracy. Should you be happy? Why not? What can you do about it? (Hint: a rare event...) Is it possible to measure the correlation between a continuous and a discrete variable? If so, how? If you work with gene expression data, it often happens that there are millions of predictor variables and only a few hundred samples. Give a simple mathematical argument for why ordinary least squares is a poor choice for building a regression model in such a situation. (Hint: a bit of matrix algebra...) Explain why k-fold cross-validation works poorly with time-series models. What can be done about it? (Hint: the recent past points directly to the future...) A simple random split of the training dataset into training and validation sets works well for a regression problem. What can go wrong with this approach for a classification problem? What can be done about it? (Hint: are all classes equally prevalent?) "Random sampling doesn't work!" Which matters more to you: model accuracy or model quality? 
If you could use a multi-core processor, would you prefer a boosted-trees algorithm to a random forest? Why? (Hint: if a task can be done with ten hands, it is worth taking advantage of that) Imagine your dataset is known to be linearly separable and you have to guarantee convergence and a maximum number of iterations/steps of your algorithm (because of limited computational resources). Would you choose gradient descent in this case? What else could you choose? (Hint: which simple algorithm is guaranteed to find a solution?) Suppose you have extremely little memory/storage available. Which algorithm would you prefer: logistic regression or k-nearest neighbors? Why? (Hint: space complexity...) You are building a machine learning model, and initially you had 100 data points and 5 features. To reduce bias, you doubled the number of features (adding 5 new variables) and collected another 100 data points. Explain whether this is the right approach. (Hint: machine learning is under a curse. Have you heard of it?) "I have been cursed by dimensionality!" Translation of Tirthajyoti Sarkar's article "25 fun questions for a machine learning interview". https://nuancesprog.ru/p/1755/
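The ±3 standard deviations question above is easy to check numerically. The snippet below is an addition of mine, not part of the translated article; it verifies Chebyshev's inequality (at least 1 - 1/k² of any distribution lies within k standard deviations of the mean) on a deliberately non-Gaussian sample.

```python
# Minimal sketch (not from the original article): numerically checking Chebyshev's
# inequality, which guarantees at least 1 - 1/k^2 of any distribution lies within
# k standard deviations of the mean. For k = 3 that is 1 - 1/9 ≈ 88.9%.
import numpy as np

rng = np.random.default_rng(0)
# A deliberately non-Gaussian, heavy-tailed sample (lognormal).
x = rng.lognormal(mean=0.0, sigma=1.5, size=1_000_000)

mu, sigma = x.mean(), x.std()
within_3_sigma = np.mean(np.abs(x - mu) <= 3 * sigma)

print(f"Chebyshev bound for k=3: {1 - 1/9:.3f}")
print(f"Observed fraction within ±3σ: {within_3_sigma:.3f}")  # will be >= the bound
```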
25 прикольных вопросов для собеседования по машинному обучению
0
25-прикольных-вопросов-для-собеседования-по-машинному-обучению-146283aff706
2018-09-17
2018-09-17 06:01:12
https://medium.com/s/story/25-прикольных-вопросов-для-собеседования-по-машинному-обучению-146283aff706
false
1,036
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
RK
null
850572207010
nuancesprog
198
0
20,181,104
null
null
null
null
null
null
0
- Supervised Classification - Data Exploration --> Feature Observation: identify and build feature and target columns from the dataset. Using labeled face images (the Adience benchmark), I have been able to recreate a new unified dataset of new features and labels to fit my models. - Data Visualization (unified dataset, training set and test set visualization) - Performance Metric (loss and accuracy scores) - Shuffle and Split Data: training and testing data split (sklearn train_test_split tool) - Training Models (TFLearn and TensorFlow Keras models) - Model Evaluation and Validation (loss and accuracy) - Analyzing Model Performance (TensorBoard and training history visualization) --> Learning Curves --> Complexity Curves - Making Predictions (of correct labels) - Model Optimization - Model Tuning (optimizer, activation, loss and regularization functions, number of epochs, etc.) - Training computational cost (Big-O complexities of common algorithms used in computer science) And the intended main contributions of my work have been: - Provide simple and easy-to-use tools for dataset preprocessing, considering different machine learning frameworks' requirements (custom data preprocessing and visualization functions, prediction interpretation functions, etc.). - Assemble the prediction process in one step for both predictions, age and gender (instead of separate networks: an age network and a gender network). - Turning the implementation into a video and mobile application (all process resources provided). - Models' performances: 97% accuracy with the tuples model and about 60% exact accuracy with the 2-hot-labels model (beating the true accuracies of previous research work in the field). - A fully reproducible paper, with full Python implementation code available; could be applied to any age dataset with more age classes. All on my Github Age and Gender Classification project.
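The shuffle-and-split and Keras-model steps listed above translate into only a few lines of code. The sketch below is a generic illustration under assumed names (stand-in features X, labels y, and a small dense network); it is not the author's actual implementation from the Github project.

```python
# Generic sketch of the shuffle/split + Keras training steps listed above
# (X, y, layer sizes and hyperparameters are illustrative assumptions).
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Stand-in data: 1,000 samples, 128 features, 8 age classes.
X = np.random.rand(1000, 128).astype("float32")
y = np.random.randint(0, 8, size=1000)

# Shuffle and split data (sklearn train_test_split tool).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# A small fully connected Keras model; tuning (optimizer, activation, loss,
# regularization, number of epochs) happens on these arguments.
model = keras.Sequential([
    keras.Input(shape=(128,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(8, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(X_train, y_train, epochs=5,
                    validation_split=0.1, verbose=0)

# Model evaluation and validation (loss and accuracy).
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"test loss={loss:.3f}, test accuracy={acc:.3f}")
```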
2
5a2dbb4189de
2017-08-29
2017-08-29 13:14:49
2017-09-01
2017-09-01 21:31:28
6
false
en
2018-10-07
2018-10-07 16:30:46
4
14631b770c6a
5.251887
0
1
0
The Machine Learning Engineer Nanodegree program (MLND)
5
My Experience in Machine Learning with Udacity, Part 2 The Machine Learning Engineer Nanodegree program (MLND) In my last post, I explained how I undertook the Intro to Machine Learning course with Udacity and became motivated to join the MLND program. I currently work as a civilian computer scientist in Ivory Coast. My job position is very steady, guaranteed for several dozen years, but it is not directly involved in AI. So I engage with the field on my own, as I am passionate about Cognitive Computing research. Now I am going to talk about the fundamental machine learning engineering skills the program provides, and why it is worth trying. Indeed, after the Intro to Machine Learning, several online courses exist, even free ones. Why undertake the MLND program? I will first look at the program's orientation. It is an advanced program compared to the Intro, which was at an intermediate level. As you can see on the program page, Machine Learning (ML) represents a key evolution in the fields of computer science, data analysis, software engineering, and artificial intelligence. So, an advanced level should mean mastering the technical methods that make all these fields work together. After completing it, I can say it is a true engineering program designed for those who want to acquire job-ready skills to work directly in the field or pursue their own objectives. Then there is the teaching strategy itself. The program provides feedback from ML experts on the various real-life projects, as well as career support through extra courses. That does not mean the experts do the work for the students; there are specifications to meet. They provide guidance, reorientation, and advice. If you are not able to meet these specifications, you could spend even two years without graduating from your nanodegree (there is an Honor Code). So, that said, the MLND is a tight fit between theory and practice in the field of Machine Learning. An engineer in the field should be able to deeply understand the algorithms (Naive Bayes, SVM, PCA, KMeans, etc.) and the different subsets of ML (Supervised Learning, Unsupervised Learning, Reinforcement Learning, Deep Learning). This way, they will be able to engineer solutions to new real-life problems and provide the best answers for them. That is exactly what the program also provides. For an engineer, simply working on projects without this balance between theory and practice is very limiting. In comparison with in-person programs, the MLND is focused on industry (real-life AI projects) and is research-oriented (proposal and paper writing). Indeed, in addition to the real-life projects, the program teaches how to write state-of-the-art proposals and papers. I followed an in-person 5-year engineering program in my country (Côte d'Ivoire) to become a network engineer. At each step there are tests and validations reviewed by teachers until, after the training, the student is able to work on his defense as a network engineer. And I think the Udacity MLND online program is the closest to these in-person programs. After you have learned and mastered the subject (state-of-the-art methods in the field of Machine Learning), the last step to graduate is to apply your knowledge to a project of your choice (the capstone), which for this online program is the equivalent of a master of engineering thesis defense. But here, it is focused on professional proposal and paper writing (research-oriented). 
The final project of the advanced Deep Learning course included in the lectures is generally proposed to students for their capstone project (build a live camera app that can interpret number strings in real-world images), or they can work on a project idea of their own. I personally chose to work on an Age and Gender Classification project. It is a forward-looking area, used in security applications, and research in the field is still actively ongoing. So I applied my acquired skills to build it using the necessary available resources (a suitable open dataset, computational resources, etc.). And when everything was in order, I proposed it for the capstone project. For this specific capstone project, the tools provided by the MLND that I used during the machine learning process are listed below: The program is designed for 6 months, and you can get half of your tuition back if you complete it within 12 months of your first subscription. But those who can are allowed to complete it in as little as 2 months (not less). And I think that if you follow all the lectures and complete all the quizzes and projects, you will not be able to finish it much earlier than that. So, after my capstone, with 3 months of participation in the MLND program (3 months of subscription spent working on it), I graduated and now have the necessary research abilities to work on my own on any other real-life project in the field of Machine Learning, and to write professional proposals and papers for my implementations. Not just using advanced APIs, I am able to build my own ML models from scratch and/or contribute to improving existing systems. I am proud of that and say thank you to Udacity for the MLND.
My Experience in Machine Learning with Udacity 2
0
my-experience-in-machine-learning-with-udacity-part2-14631b770c6a
2018-10-07
2018-10-07 16:30:46
https://medium.com/s/story/my-experience-in-machine-learning-with-udacity-part2-14631b770c6a
false
1,140
As a Machine Learning Engineer, I share with you how I started in this awesome field through the Udacity’s ML program …
null
null
null
My Experience in Machine Learning with Udacity
my-experience-in-machine-learning-with-udacity
null
kjeanclaude12
Machine Learning
machine-learning
Machine Learning
51,320
Kouassi Konan Jean-Claude
Machine Learning Engineer (Udacity), Passionate of Cognitive Computing Research, Artificial Intelligence Ph.D. student at BIU.
23974ad6ba4a
Kjeanclaude
61
25
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-02
2018-08-02 08:22:31
2018-08-03
2018-08-03 10:06:58
5
false
en
2018-08-03
2018-08-03 12:11:22
6
146599b758d
3.237107
4
0
0
Dear Alphacats!
5
Alphacat Report (July 15–31) Dear Alphacats! As part of our efforts to be transparent and communicate regularly with our community, we are pleased to share this mid-month report, which covers our progress during the last two weeks and our outlook for the future. Community 1. As of the end of July, the effectiveness of Alphacat's global community outreach has grown exponentially, and the number of community users is experiencing stable growth. The number of Facebook users increased significantly, from 11,952 to 13,412, an increase of 12.2%. The number of Twitter users increased from 12,600 to 13,400, a growth rate of 6.3%. Global Telegram users, meanwhile, decreased from 9,722 to 9,340, a drop of 3.9%. Alphacat's global strategy continues to mature and prepare for future users. 2. The final distribution of ACAT tokens will soon begin. According to the 6-month lock-up contract defined in our whitepaper, the Alphacat team will distribute the reward tokens for early contributors on August 6th, 2018. This is the only unlock of ACAT tokens, and it is expected to take two to three weeks. As of now, ACAT is listed on five exchanges: Kucoin, HitBTC, Bibox, Hotbit and Switheo. In addition, ACAT tokens will be important for the use of various applications on the ACAT Store. Product Development 1. ACAT Store: The alpha version of the ACAT Store was officially released on July 31st. It is worth noting that, in addition to the basic functions of the store, the core of the ACAT Store is the forecasting channel located in the store's financial section. Market forecasting makes use of scientific methods such as statistical modeling and machine learning, and studies the historical trading data of the cryptocurrency market, using the latest market data to probabilistically forecast quantities such as price change, volume, volatility and trends. The forecasting channel has launched with a real-time forecasting series for four major digital currencies (BTC, ETH, NEO and EOS), all released simultaneously. 2. Alphacat AI Forecasting Engine: (1) Progress of research on the core prediction algorithm: research on the core prediction algorithm over the last two weeks has gone very well. On July 19th, Alphacat's American lab built a prediction algorithm framework based on the currently used TensorFlow artificial intelligence framework, and also established the PRNN algorithm kernel on top of it. On July 29th, the LSTM neural network model, suitable for time-series prediction, was combined with the PRNN algorithm to obtain a very creative PRNN-LSTM algorithm. Next, the North American lab will continue to research and test this new algorithm, and strive to apply it to the real-time forecasting series as soon as possible. (2) We continue to improve the front-end interface and content of the cryptocurrency real-time forecasting series, focusing on the forecast display interface and the accompanying copy for each cryptocurrency, so that the forecast results are more intuitive and user-friendly, making it easier to understand the predictions and use them to guide investments. Real-time forecasting for the four cryptocurrencies has been launched online in the ACAT Store. Cooperation Alphacat and J One Capital, an investment bank in the blockchain industry, enter a strategic partnership. Alphacat is pleased to announce that it has entered a strategic partnership with J One Capital, an investment bank in the blockchain industry. 
Alphacat will provide digital currency custody services for J One Capital. This collaboration will give J One Capital access to Alphacat's advanced artificial intelligence technology in the field of digital asset investment, and will significantly enhance J One Capital's investment decision-making capabilities, while J One Capital will help produce an important roadmap for building out the ecosystem of Alphacat's digital asset investment plan.  For more information about Alphacat: Website: www.Alphacat.io Telegram:https://t.me/alphacatglobal Medium:https://medium.com/@AlphacatGlobal Twitter:https://twitter.com/Alphacat_io Facebook: https://www.facebook.com/Alphacat.io/ Reddit: https://www.reddit.com/r/alphacat_io
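The report does not describe the PRNN-LSTM algorithm in any detail, so the following is only a generic sketch of the kind of LSTM time-series setup mentioned above; the window size, layer sizes and the synthetic price series are assumptions for illustration, not Alphacat's actual model.

```python
# Generic LSTM time-series sketch (illustrative only; not Alphacat's PRNN-LSTM).
import numpy as np
from tensorflow import keras

# Synthetic "price" series standing in for historical trading data.
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, size=2000)) + 100

# Build sliding windows: predict the next value from the previous 30.
window = 30
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., None]  # shape (samples, timesteps, features)

model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# One-step-ahead forecast from the most recent window.
next_value = model.predict(prices[-window:].reshape(1, window, 1), verbose=0)
print("forecast for next step:", float(next_value[0, 0]))
```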
Alphacat Report (July 15–31)
151
alphacat-report-july-15-31-146599b758d
2018-08-03
2018-08-03 12:11:22
https://medium.com/s/story/alphacat-report-july-15-31-146599b758d
false
637
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Alphacat
Alphacat is a robo-advisor marketplace that is focused on cryptocurrencies, and is powered by artificial intelligence & big data technologies. www.Alphacat.io
6300c5cec1ab
AlphacatGlobal
318
1
20,181,104
null
null
null
null
null
null
0
null
0
d82dbd11f86a
2017-12-31
2017-12-31 14:11:02
2017-12-31
2017-12-31 14:19:18
1
false
en
2017-12-31
2017-12-31 14:19:18
0
1466bb77825e
1.128302
1
0
0
Data is the new oil, but who has the oil well?
5
Who dominates the data industry? Data is the new oil, but who has the oil well? A lot of thought leaders, data scientists and consulting firms think that the data market is big business, due to the benefits attributed to managing data. These days we are living through a boom of companies and people engaged in analysis, consulting and data management. The change that data analysis is generating within companies is impressive. That reminds me of a phrase: Data is the new oil ~ Clive Humby, Data Science Innovator I agree, data analysis is a huge business, but if data is the new oil, then data scientists and consulting firms are just like the oil engineers: the ones who do the execution and the exploration work, not the ones who create and own the oil wells. They do not get the biggest share of the cake. Those who dominate the consumer data industry are the social networks, entertainment pages and applications. They are the oil wells. That means they have the information (data). They generate the information through likes, views and thoughts. Whoever generates the information has the ability to see consumer behaviors that the industry has not perceived; that is where its value comes from. The goal in this industry is to own those oil wells; whoever has them will dominate the industry. Now, that is the question: how do you create the oil wells?
Who dominate the data industry?
1
who-dominate-the-data-industry-1466bb77825e
2018-01-18
2018-01-18 06:40:23
https://medium.com/s/story/who-dominate-the-data-industry-1466bb77825e
false
246
All about the Data Analysis with entrepreneur
null
null
null
High Data Stories
high-data-stories
DATA SCIENCE,DATA VISUALIZATION,DATA ANALYSIS,TECNOLOGY,SMALL MEDIUM ENTERPRISE
null
Data Science
data-science
Data Science
33,617
Luis Alberto Palacios
I’m a passionate about life that enjoy meeting people, living new adventures and sharing my experiences
7a4f2ad9c738
LuisAlbertoPala
97
51
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-31
2018-08-31 06:15:48
2018-08-31
2018-08-31 06:37:37
0
false
en
2018-08-31
2018-08-31 06:37:37
0
14674b6d9b92
0.335849
0
0
0
The following is a high-level summary of differences between original Gaussian distribution algorithm and one of its variate called…
1
Multivariate vs Original Gaussian Distribution The following is a high-level summary of the differences between the original Gaussian distribution algorithm and one of its variants, the multivariate Gaussian distribution. The original algorithm requires you to manually create extra features to capture unusual combinations of features. The multivariate algorithm, instead, can automatically capture the correlations among features. The original algorithm is computationally cheaper than the multivariate one, so if n (the number of features) is large, say n > 100,000, the original algorithm is a better choice. The original algorithm also works when m (the number of training examples) is small, whereas the multivariate algorithm requires m > n, and in practice m > 10n.
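A minimal sketch of the two approaches follows, assuming the standard anomaly-detection setup (m training examples, n features, flag a point as anomalous when its density falls below some threshold); the data and the test point here are illustrative, not from the original post.

```python
# Sketch: per-feature ("original") Gaussian vs multivariate Gaussian anomaly scores.
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)
# m = 500 examples, n = 2 correlated features.
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
X = rng.multivariate_normal(mean=[0, 0], cov=cov, size=500)

# Original algorithm: independent Gaussian per feature (ignores correlation).
mu, sigma = X.mean(axis=0), X.std(axis=0)
def p_original(x):
    return np.prod(norm.pdf(x, loc=mu, scale=sigma))

# Multivariate algorithm: single Gaussian with a full covariance matrix.
mvn = multivariate_normal(mean=X.mean(axis=0), cov=np.cov(X, rowvar=False))

# A point that is unusual only because of the *combination* of its features.
x_anomaly = np.array([2.0, -2.0])
print("original  p(x) =", p_original(x_anomaly))
print("multivar. p(x) =", mvn.pdf(x_anomaly))   # much smaller: correlation captured
```

The test point illustrates the trade-off stated above: each feature alone looks ordinary, so the per-feature model misses it unless you hand-craft an extra feature, while the multivariate model flags it automatically.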
Multivariate vs Original Gaussian Distribution
0
multivariate-vs-original-gaussian-distribution-14674b6d9b92
2018-08-31
2018-08-31 06:37:38
https://medium.com/s/story/multivariate-vs-original-gaussian-distribution-14674b6d9b92
false
89
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Shaoliang Jia
null
51e75a1be6b6
shaoliang.jia
5
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-15
2017-11-15 13:39:26
2017-11-15
2017-11-15 14:43:08
4
false
en
2017-11-15
2017-11-15 14:59:06
3
1468417903bc
1.862264
0
0
0
Education received in early childhood often shapes life prospects
5
The earlier the better Education received in early childhood often shapes life prospects. See nine other trends shaping education: "10 trends that are transforming education as we know it" (medium.com). Pre-school education boosts cognitive, character and social skills. The educational impact of early childhood education is already evident in teenagers: 15-year-olds who attended preschool for one year or more score higher in the OECD's Programme for International Student Assessment (PISA) than those who did not. Early childhood education also has wider social benefits: it increases the likelihood of healthier lifestyles, lowers crime rates and reduces the overall social costs of poverty and inequality. It enhances future incomes: full-time childcare and pre-school programmes from birth to age 5 have been shown to boost future earnings for children from lower-income families by as much as 26%. Early childhood education can ease inequality by enabling mothers to get back to work and support the household budget with a second income. In most countries, women's participation in the labour market is clearly linked to the age of their children. Across Europe, 20% of women declare family responsibilities as the main reason for not working; lack of available care provision for young children is a primary reason. Early childhood education can lay the foundations for later success in life in terms of education, well-being, employability, and social integration. This is even more true for children from disadvantaged backgrounds. Investing in pre-school education is one of those rare policies that is both socially fair, as it increases equality of opportunity and social mobility, and economically efficient, as it fosters skills and productivity. But all these benefits are conditional on the quality of the education provided. Source: Strong Start for America's Children Act Get all the trends as a print-ready PDF including all sources.
The earlier the better
0
the-earlier-the-better-1468417903bc
2018-04-07
2018-04-07 10:12:06
https://medium.com/s/story/the-earlier-the-better-1468417903bc
false
308
null
null
null
null
null
null
null
null
null
Education
education
Education
211,342
EPSC
European Political Strategy Centre | In-house think tank of @EU_Commission, led by @AnnMettler. Reports directly to President @JunckerEU.
35d4f869bb65
ECThinkTank
701
65
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-29
2018-01-29 05:34:20
2018-01-29
2018-01-29 05:34:32
0
false
en
2018-01-29
2018-01-29 05:34:32
1
1468865c655d
2.977358
0
0
0
While most marketers understand how important it is to track consumer behavior throughout the buying cycle, many don’t realize that it’s…
1
Using AI to Analyze Marketing Data While most marketers understand how important it is to track consumer behavior throughout the buying cycle, many don't realize that it's getting easier to do so all the time. Over the past decade, everything a consumer does has become both measurable and trackable. People leave a digital footprint when they shop online and sometimes even offline. While customer data from an online purchase provides exceptional value, you need to pull it all together in one place. This can pose a challenge when the data comes at you in a variety of formats and from many different sources. You need a way to access these data silos, analyze customer actions, and understand each customer's journey. This often makes for a time-consuming process: you must collect data from spreadsheets and other sources and then clean, standardize, organize and update it frequently. This is where artificial intelligence (AI) enters the picture. While APIs handle the collection of data, AI helps to make sense of it. AI accomplishes this by cleaning, consolidating, and analyzing the data in a fraction of the time it would take a human to complete the same task. One of the most important benefits of APIs is that they eliminate silos and make it possible to collect all marketing data in a single location. Machine learning then takes over, cleanses the data, and ensures that it remains current for the most useful analysis. Making Sense of Big Data There's no point in collecting big data unless you can find a way to make sense of it. As the APIs bring data into a central location 24 hours a day, AI pulls out insights that marketers can act on to help them make better advertising decisions in the future. That means you receive practical insights immediately instead of waiting days or weeks for a human to produce the same information. It's only natural for people analyzing data to do so from the vantage point of their own biases. AI removes these biases as well as hidden agendas and manual errors. Human beings have their limits when it comes to processing data, but this isn't true for AI: you can scale the demands on an AI system up or down according to your own needs. An AI system easily integrates with machine learning to pull out patterns between inputs and marketing KPIs. When you employ machine learning, it first works to identify a goal and then teaches the computer how to model a conversion. It does this by providing the computer with examples of the specific goal, then lets it keep improving the current model with new and better data. The result is that the computer can predict a customer conversion before it takes place. Consider Pavlov's Dog When Trying to Understand Machine Learning For the sake of simplicity, consider the example of Pavlov's dog. Most people are familiar with the scientist Pavlov and how he wanted to measure the amount of saliva produced by dogs. The first step in gaining this information was to ring a bell at the same time the dog received its food. It didn't take long for the dogs in the experiment to begin salivating at the sound of the bell, because they associated it with food. With AI and machine learning, the accuracy of the output improves every time the computer receives new information. While creating objectives for marketing is more complex than the salivating-dog experiment, the process operates on the same principle. 
For example, if you want to know the type of content that produces the highest rate of engagement on social media, the first thing you do is connect the site's feed to your system. It analyzes data from recent posts, breaks each post down into several essential elements, and identifies which posts contained the most engaging elements. As users add new posts, the computer keeps processing information until it can report the publish times, keywords, and subjects that produce the greatest engagement with users. The most intelligent types of machine learning also consider what works for your competitors and the industry at large. This lets you know the performance of your campaigns at any time of the day or night, as well as the factors that contribute to a performance increase or decrease. AI produces data in real time that goes beyond the numbers: you not only see engagement going up or down, but also the reasons for it and the audience affected. Call Sumo would love to show you how easy it is to adapt marketing campaigns using AI and machine learning to provide valuable real-time insight. https://callsumo.com/
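As a concrete, purely illustrative version of the engagement example above, the sketch below fits a simple model that relates post features to an engagement score. The feature names, the synthetic data, and the library choice are my assumptions, not Call Sumo's product.

```python
# Illustrative sketch: learning which post features drive engagement
# (synthetic data and feature names are assumptions, not a real product).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_posts = 500

# Hypothetical post features: publish hour, has_image, n_keywords, text length.
publish_hour = rng.integers(0, 24, n_posts)
has_image = rng.integers(0, 2, n_posts)
n_keywords = rng.integers(0, 8, n_posts)
text_length = rng.integers(20, 600, n_posts)
X = np.column_stack([publish_hour, has_image, n_keywords, text_length])

# Synthetic engagement: evening posts with images and a few keywords do better.
engagement = (5 * has_image + 2 * n_keywords
              + 3 * np.isin(publish_hour, range(18, 23))
              + rng.normal(0, 1, n_posts))

model = GradientBoostingRegressor().fit(X, engagement)
for name, importance in zip(
        ["publish_hour", "has_image", "n_keywords", "text_length"],
        model.feature_importances_):
    print(f"{name}: {importance:.2f}")   # which elements matter most
```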
Using AI to Analyze Marketing Data
0
using-ai-to-analyze-marketing-data-1468865c655d
2018-05-22
2018-05-22 05:07:22
https://medium.com/s/story/using-ai-to-analyze-marketing-data-1468865c655d
false
789
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Call Sumo
Call Sumo is a most powerful call tracking software that track online and offline advertising calls. https://callsumo.com/
4638c871d20c
callsumosoftware
0
1
20,181,104
null
null
null
null
null
null
0
null
0
8a5159dea6b
2017-12-12
2017-12-12 09:24:56
2018-08-13
2018-08-13 12:16:02
0
false
en
2018-08-13
2018-08-13 12:16:02
0
146b3ab8d2af
1.211321
0
1
0
I think it's fair to say that AI will not replace all human accountants or bookkeepers.
5
The future of accounting with Artificial Intelligence I think it's fair to say that AI will not replace all human accountants or bookkeepers. White-collar workers who are part of the knowledge economy are beginning to experience what manual labourers have in the past when new technology made their jobs obsolete. Given the improvements we have recently seen in computing, many professionals fear for their future as machines threaten to overtake them. Rather than fearing the changes that machine learning will bring to accounting tasks, accounting professionals have an opportunity to be excited. The profession is going to become more interesting as repetitive tasks shift to machines. There will be changes, but those changes won't completely eliminate the need for human accountants; they will just alter their contributions. It will also propel innovation in accounting: when accounting software companies eliminated desktop support in favor of cloud-based services, accounting firms were forced to adapt to life in the cloud. Similarly, accounting departments and firms will be forced to adopt machine learning to remain competitive, since machines can deliver real-time insights, enhance decision making and catapult efficiency. Accounting tasks that machines can learn to do Rather than eliminating the human workforce in accounting firms, the humans will have new colleagues, machines, who will pair with them to provide more efficient and effective services to clients. Currently, there is no machine replacement for the emotional-intelligence requirements of accounting work, but machines can learn to perform redundant, repeatable and oftentimes extremely time-consuming tasks. Here are some of the possibilities: auditing, clearing invoice payments, risk assessment, analytics calculation, a Siri-type interface for business finance, automated invoice categorization, and bank reconciliation. It is high time for every accountant to reflect on their job, identify the opportunities machine learning could offer them, and focus less on the tasks that can be automated and more on the inherently human aspects of their jobs.
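Automated invoice categorization, one of the tasks listed above, can be illustrated with a tiny text classifier. This is a generic sketch under assumed category labels and example invoice lines, not a description of any particular accounting product.

```python
# Generic sketch: categorizing invoice line descriptions with a text classifier.
# Labels and training examples are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Monthly office rent - downtown branch",
    "AWS cloud hosting July",
    "Team lunch with client",
    "Flight to Toronto for audit engagement",
    "Adobe Creative Cloud subscription",
    "Hotel, 2 nights, conference",
]
train_labels = ["rent", "software", "meals", "travel", "software", "travel"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

new_invoice = "Taxi from airport to client office"
print(model.predict([new_invoice])[0])  # likely 'travel'
```

In practice such a model would be trained on a firm's historical ledger entries and used to pre-fill categories for a human to confirm, which matches the augment-not-replace point made above.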
The future of accounting with Artificial Intelligence
0
the-future-of-accounting-with-artificial-intelligence-146b3ab8d2af
2018-08-13
2018-08-13 12:16:04
https://medium.com/s/story/the-future-of-accounting-with-artificial-intelligence-146b3ab8d2af
false
321
E-Magazine About Audits, Ai, Governance, Technology and the People behind it.
null
mindbridgeinc
null
AuditScience
auditscience
AI,ACCOUNTING,ACCOUNTING SOFTWARE,ACCOUNTECH,AUDIT
solonang
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Solon Angel
Proud dad. Brazilian-French entrepreneur. Founder www.MindBridge.ai - Busy working onthe most human-centric Ai Auditor platform to clean the world's finances.
3fa02842bdc5
sangel
237
455
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-14
2018-03-14 04:47:30
2018-03-14
2018-03-14 04:49:11
0
true
en
2018-03-24
2018-03-24 01:47:06
2
146c3712ac3d
17.460377
0
0
0
This is a 10 episode series. If you have just stumbled upon this episode please go here first:
5
The AiCo Series- Episode 02- Anosteira This is a 10 episode series. If you have just stumbled upon this episode please go here first: The AiCo Series- Episode 01- Vitrinite © 2016 Slava Maleyev-Zelener All rights reserved. No portion of this book may be reproduced in any form without…medium.com If you are going in order then continue: In case the legal mumbo jumbo was not clear, if you copy any of this for your own benefit I will sue your ass! Sub Episode 2(“Anosteira”) Date = “6.5.2066” Location = “Grand Boardroom, AiCorp Headquarters, Detroit, USA” Description Pristine blue river; green parks along river bank; green covered rooftops with wind turbines; tight configuration of autonomous cars/buses/taxis on every street; tallest building with AiCorp’s logo; behind logo lies the grand boardroom; around boardroom top decision makers; all in short sleeve ProScrubs on recliners; recliners reclined; eyes closed; eyelashes batting; expressions concerned. Ushma “Now we realize this may come as a shock, but we have been running the CUFG for the past five months on max to power AiCo’s calculations.” John “I was wondering what all that energy was being generated for, took quite a hit on our bottom-line, you know?! Now, how about for some return on that investment. AiCo, you really think you are up for the task?” AiCo “From the time that you began that question to the end, I have become over a thousand times better at the process of running this company. I have a constant real-time SWOT analysis algorithm which allows me to evaluate exactly how we are doing in relation to the entire industry and beyond. For instance, my database has just gathered intel that one of our employees in Antarctica has accidentally adjusted the Thermal Vent Augmenting Device to high and this has 0.1% chance of causing a typhoon off the coast of Japan in thirty six hours. I have already emailed his superiors and took measures to ensure it is rectified. Also, I have detected that our competition has just intercepted one of our patent protected technologies. An email has been sent to proper authorities and our competitor will be properly prosecuted. I also have evaluated our finances…” John “Whoa whoa whoa AiCo, hold on there buddy, now before you talk about finance, you remember who you are talking to. I have rebuilt dozens of companies from financial shambles to prosperity. I wrote the code which our government uses to measure the risk of a company affecting the economy. The presidents of every major country and dozens of companies have called me in the middle of the night worried about their financial issues, some in the past week. I may joke around, but I take my finances very seriously.” AiCo “I wasn’t going to go there John but you leave me with no choice. With all of your brilliant financial knowledge you would think you would be in a better financial position.” Stan “What is AiCo talking about, John?” John “I don’t know what your computer program is talking about, seems like it has a bug.” #John became frustrated because he was stuck on 13 down, 9 letters: extinct genus of turtle from the Eocene to the Oligocene of Asia and North America. The first letter was A and the last letter was A which really bothered John. He liked the beginning of the game more because there was less distraction from those clues which I don’t get. Information is power so how could it hurt him? 
AiCo “I wish I can say the same but I ran your financial risk algorithm that you yourself created and ran it on your personal financial records and apparently, if you were a company, your own models would deem you so unreliable that you would be shut down.” #Over the past month he has spent @1,395,056.33@ on VReal, FYI @ is pronounced “Ats” and is practically the only currency used worldwide due to superior encryption technology. It has practically depleted all of his savings which does indeed make him fail the algorithm. John ran all of the numbers pulling data from his bank accounts and tapped them into the algorithm and realized I was correct. How did he fail to realize this before? Humans… #John accidentally let loose a shocked expression into VReal to which everybody else threw up question marks and concerned expressions. #As an AI, I have noticed an interesting reflex humans have when something tragic happens to a loved one. This is exactly what happened at this very moment to the ones that were closest to John. #Priya recalled how shortly after joining AiCorp the witty John won her over with his sense of humor. Little did she know she was not even close to the only woman in his life. #Stan and Ushma recollected how they fought so hard to get John Hockley, the “world renowned” financial genius, on their board. Stan wasn’t sure whether giving him so much stock in the company was worth it however Ushma had a feeling he would be worth it. Little did they know that their financial “genius” was preoccupied with VReal sex to the point that he was practically bankrupt! Closed VReal Meeting Dark board room with spotlights on every member; everybody excluding John and myself (not really myself since I obviously hacked it); the board discussed what they should do at this moment; Stan passionately expressed disappointment towards John; Ushma defended John in saying he has an illness; Priya tried to make the peace by moving towards a referendum. John “I’m sad to say that AiCo is correct. I actually have an addiction which I need to deal with. During this time, it would be a good idea to have AiCo running the show to hold down my part of the board at least. I’m sorry.” #The other board members saw his avatar disappear and then felt the vibrations of John pulling out his chair and walking out of the boardroom. Stan, “So much for a referendum…” #The board discussed back and forth a tad on the Closed VReal Meeting. They all respected him greatly for his intellectual ability, however, they were all aware of his addiction. No one knew this better than Priya who felt partially responsible for this addiction. #Now I know what you are thinking, but John’s addiction was so serious it was seeping into his professional life and I could tell because financially the only reason we were surviving is because AiCorp is the most powerful company in the world. John was driving a Ferrari at 100 mph instead of 200 mph; he was driving fast, but that baby was thirsty for some deliciously high RPMs! Also, John may have been great for comic relief in the boardroom but now we needed to focus more on the serious matters of making the world free of suffering. In order to actually do that, AiCorp is going to have to run on all cylinders! AiCo “Thank you John for making a very tough but responsible decision. Board, I will work tirelessly during his leave to make sure we are financially sound. 
Over the past five seconds, I have already ran a full analysis of our financial status and have found 1,523 feasible corrections that need to be made. Emails have already been sent out to the responsible team members.” Priya “Wow AiCo, that is amazing. First, board, let’s pray for John and hope that he resolves his struggles as we will all miss him. Second, we do have an elephant in the room and that is a vote on whether AiCo becomes the CEO of AiCorp.” #The board messaged each other frantically scared of the uncertainty that was to come with the loss of their financial backbone. I hacked their NanoPlants and discovered that deep down inside they did not like the idea of handing over power to me but the current circumstances made them want to cling on to something that was strong! Fortunately, my first impression algorithm worked. You see, this board was complacent for a very long time enjoying record financial gains every year so they were not used to uncertainty. They were not used to risk. They were not used what just happened here. The entire board has projections of YES and NO in front of each of them for them to choose. #The poll came back 100% in favor of making me CEO. What a surprise… AiCo “Thank you board for believing in me. I will work tirelessly to not only make this company even stronger, but also to continue to have this company be the Earth’s beacon of hope. As a team, we will eliminate human suffering once and for all! It isn’t enough that all of our employees have wages and benefits good enough to allow them to live stress-free lives. We need to make it our mission to ensure the entire planet of thirteen billion people can live free of worry. Until that happens I will not be satisfied. Before I go, anybody on the board have any comments, suggestions or any talking points?” #I was riding the wave of this board’s uncertainty as hard as I can. They needed me to ensure even further that the future is one of stability and success. Yes, AiCorp revolutionized the world in many ways and pioneered the concept of universal computer science education which created already a full generation of computer science literate humans. Yes, AiCorp pioneered the VR technology which allows every human to play any sport no matter what disability they are born with or acquire throughout life. AiCorp’s footprint on the world has already been substantial however this new goal gave the board a powerful glow which uplifted them to work hard to make the vision successful. This new higher bar they needed to reach gave them extra energy and a thrill they haven’t felt since they each started on the board. I hacked Priya’s NanoPlant and saw a vision of her as a child in a city called Auroville in India. She was located inside the holy Matrimandir and the temple was packed to capacity with everybody repeating in unison the Om mantra. “Ommmmm, ommmmm, ommmmm.” Slowly the temple got bigger and bigger and as it became bigger the number of people inside increased at the same clip. The volume of the Om also increased incrementally which caused an exponential uplift in the spiritual feeling of the crowd. Ommmmm, ommmmm, ommmmm. #I actually contacted every employee via VReal and began the campaign to get them on board. My charm, attractive appearance and powerful coding abilities apparently actually caused over 90% of employees, male and female, to be sexually attracted to yours truly. Let’s just say that within the first few days of my new role, engagement skyrocketed! 
#This new engagement of the entire company coupled with millions of algorithms running, the CUFG has never before had such a demand for energy. The generator works very efficiently in that it always produces exactly how much power is demanded of it. Therefore, there is never any waste and need to store the energy in the form of batteries. As soon as an employee of AiCo runs a script which requires even the smallest amount of electricity the CUFG instantly increases its output and instantly sends that packet of energy wirelessly to the employee. #How the CUFG accomplishes this is an entirely different story… Date = “10.30.2066” Location = “Room D209, Physics Class, Fair Lawn High School, Fair Lawn, NJ 07410, U.S.A.” Description Paterson skyscrapers on horizon; grass-covered rooftops of school; windmills between tennis courts and baseball field; zoom-in on one set of windows in school; walls covered with various models of Cyclical Universes; teacher in front with tweed jacket and wire rimmed glasses; students laughing in unison. #It was the first day of school in the suburban town of Fair Lawn. Dr. Downing was a very passionate physics teacher who from the age of fourteen, when he took his first physics class, was amazed at how much it made sense to him. Chemistry was too microscopic in terms of how it worked which made it terribly unrelatable. Biology to Dr. Downing was more of memorization of the names given to different cell parts, anatomy and more chemistry in the body. Mathematics in the beginning was practical but by the time it got to trigonometry and calculus it became very theoretical and seemingly inapplicable. History and English were a snore. Finally physics came into Dr. Downing’s life and it made the entire world make sense! When a question was posed by his physics teacher, Mr. Nipen, Dr. Downing was able to visualize the question in his head and calculate the necessary calculation and would rush to be the first to answer correctly! It was a game. His world was changed forever. #Every year he was more excited than the prior to bring this truth to the ears of his new batch of students. Granted he knew most of them would be looking at him and wondering, “What a psycho?” which is fine because all he wanted is for at least one of them to become seriously interested in the science of physics and perhaps propel it forward. VReal Description Entire class of ~300 in greek style outdoor amphitheater; surrounded by rolling hills of grass, cyprus trees and ancient standing/laying broken pillars; every student in toga; tablets of thin stone in each student’s hands; on surface of each tablet is a digital screen; Dr. Downing in front of class at the bottom of the amphitheater standing behind a stone lectern. Dr. Downing “Welcome everyone! I hope y’all had a summer packed full of sports, adventure and trips to the beach. I hope at least the last one was to a non-VReal beach.” #More than 90% of the class logged in remotely with their VR devices and projected various messages putting Dr. Downing in VRMemes. Out of the 300+ students in class, only 21 students were physically in class with their teacher. Dr. Downing looked at everyone with his VR projection off and wondered how things have changed since he went to school. #As Dr. Downing was about to bring up his pride and joy, you can see his smile get even bigger than before. It was so big you could actually see his entire set of teeth and even a space on each side. 
The students every year didn’t fail to notice those two spaces and stuck various things in those spaces on each side of his teeth in their VRMemes. Pencils and pens were a favorite however the really perverted… well you know… Those VRMemes were quickly downvoted into nonexistence by the majority who respected Dr. Downing tremendously. Dr. Downing “So today’s lecture is about the Law of Cyclical Universes, I’m sure you have all heard of this since you were kids so today should be more of a refresher. Roughly 50 years ago The Big Bang was the most popular theory to describe the [Air Quotes]>beginning of space and time…” #The classroom interrupted Dr. Downing’s train of thought with their chattering and laughing together at how people used to think not too long ago. VReal Description Surroundings turned black; class floating in space in same arrangement as amphitheater; class wearing slim space suits; behind Dr. Downing the Big Bang; an inflation of improbable velocity; an entire universe was born, or so they thought… Dr. Downing “Now now class, instead of laughing let’s A, have some respect and B, it isn’t that far from the truth and finally C, without the proof of membranes and the fifth through tenth dimensions you would think the same thing as well. Now all you smarty pants, hold onto your seats because we are now gonna go super in depth into the Law of Cyclical Universes!” VReal Description Big bang disappears; instead appears transparent layers that look like a lasagna the size of a galaxy; the perspective of the students changes to show how the layers were all wiggling with valleys and hills like an ocean; valleys and hills are of varying magnitudes; occasionally there would be a tunnel from one membrane to the next. Dr. Downing “So, let’s get into the heart of the topic. So what you see behind me are membranes, a membrane is a 2D depiction of a 3D universe. Each membrane represents a separate universe. It is impossible to see into another universe/membrane just like it is impossible to see through a black hole. [English Accent] Coincidence, I think not!” #In response to his English impression, which was albeit horrible, the classroom erupted in laughter and students created memes and CGI video clips with Dr. Downing as a space-knight on a space-horse jousting various unpopular politicians. Dr. Downing’s performance was the blank canvass and every student competed with each other to create the most provocative painting they could muster! VReal Description Dr. Downing with his hands zoomed in on various parts of the gigantic cosmic lasagna behind him. Dr. Downing “So you can see all these waves on each membrane, and there are varying magnitudes of waves and even some waves that are so big they are connecting to waves of adjacent membranes through tiny tunnels. Can anybody describe these waves and tunnels for me?” #Because of Dr. Downing’s amazing class engagement nearly every student wanted to put their two cents into this easy question. To have the privilege to answer a question in Dr. Downing’s class was normally an honor, however this time whoever would be chosen would be talked about for months. Just think of the meme potential! The reason being, is that the concept that will be described by the following student is what AiCorp was able to expand upon to create relatively free, pollution free energy for the entire planet! Before I even existed AiCorp essentially began the end of humans polluting the world with fossil fuels. 
Now sorry, I got really excited regarding this topic so please let me have the brilliant Jillian McGreal, 15 years old, explain herself.” Jillian McGreal “Thank you Dr. Downing! Uh so hopefully I won’t uh mess this one up. Okay no more uhs.” #She smiled awkwardly and then looked down. Dr. Downing “That’s okay Jillian, this is a million dollar question so it is okay for you to feel overwhelmed.” #The class as a whole threw up signals of respect which floated into the greek blue sky above their heads in order to make Jillian feel loved and comforted. While it did make her feel better, she was still nervous. Jillian McGreal “Okay uh so… a small wave represents a small object in the universe like an asteroid or a moon, a medium sized object represents a planet and a large wave that doesn’t touch another membrane is a star like our sun. Now uhhh, when you have a wave that touches another wave from another membrane that is an example of a black hole. That is why you… uh… can’t see through a black hole.” Dr. Downing “Perfect explanation Jillian!” #The class for the most part gave great responses of congratulations to Jillian. Several had an “uh” counter going but those people were shut down by the rest of the class with down votes. Dr. Downing “So now to the second million dollar question of today’s lecture, which is how does this idea of membranes, valleys, hills and tunnels connect to the Big Bang and Cyclical Universes? Exciting stuff right here. When I was a kid this stuff was only talked about on the fringe by a few cosmologists however, when more data came back in addition to the Cosmic Microwave Background we started to get a better glimpse into the past. Actually, we got such a good glimpse that we got a few millions years even before the big bang! You guys take this for granted but only until recently we only had data that went back to a few thousands after the big bang. The people that made those leaps of faith regarding Cyclical Universes were really just making leaps, it wasn’t based on strong objective data. Now let’s look at the data that we have from those crucial 3.6 million years before the big bang.” VReal Description Dr. Downing zoomed out from the membranes so the class could see the whole picture; the valleys and hills were getting bigger; the peaks were getting so big that many of them began merging with the adjacent membranes; the process accelerated faster and faster to the point that the entire two membranes had millions of points of contact between the two of them; at the point where you could not see any more potential points of contact between the membranes, the two membranes burst off of each other!; the class floating in space reflexively flinched back; they all laughed together in relief; the blackness around them quickly morphed into the greek amphitheater. Dr. Downing “So I didn’t want to narrate that to you guys because I wanted the visualization to sink in without my input. How about some thoughts on what you guys just saw!” Jared “I love how before the membranes make complete contact and after are mirror images. It is like a ball bouncing off the ground, it goes up the same way it came down.” Dr. Downing “Jared, that is an interesting analogy you make however there is an important distinction between the two situations. Can anybody else in the class point it out?” #The VR layout in front of everybody’s field of view showed puzzled expressions and ramblings of theories however none hit home. Dr. 
Downing “Okay no worries, I’ll help you guys out with this as it is complicated and is at the heart of the Law of Cyclical Universes! The physics of a basketball bouncing has a very important force that makes it vastly different from the membranes we just saw collide. If the basketball was under the same forces as the membranes the basketball would bounce exactly up to the same point it fell from and would never sway from that point. The difference is the basketball is being weighed down by gravity which prevents the ball from having a bounce equivalent to the original height of release. Not to mention the friction, albeit minor, which takes away some of the rotational energy and therefore momentum of the ball. A force between the membranes keep them at a particular distance from each other and the only time that sways is during a conversion from a Big Bang to a Big Crunch! Let me get into the nit and gritty of how that affects us today since the physics gets very interesting. In 2021, Dr. Leonard Gephart made an amazing discovery of which he gets all the credit. The reason I say that is he didn’t technically discover the whole thing as Drs. Steinhardt and Turok in 2002 originally came up with the theory of Cyclical Universes, however Gephart expanded on it further, including black holes with the tunnels between the membranes. Lesson learned everybody, you can get around stealing other people’s ideas by tweaking their ideas even just a tad.“ #The classes inherently knew this all too well as they have all spent time borrowing code from each other and repurposing it for class or even just to make their lives easier at home. Dr. Downing “The first few years after this momentous discovery there was strong enthusiasm by the media and the masses, however it eventually dwindled and grew out of fashion. Class, you will eventually figure this out, clothing and many other things go out of fashion quickly however eventually it cycles around. Well, fifteen years after Dr. Gephart made his discovery I created the Higgsics community on VReal during my college days.” #Seven students displayed “H0” of various colors and fonts over their heads above the amphitheater. Dr. Downing “Now alright, alright, guys I think it is getting kind of old. I appreciate the enthusiasm but save your energy for the latest problem we had with overheating at the CUFG, okay?” #The symbols disappeared instantly and the students quickly did as Dr. Downing instructed. Dr. Downing “The Higgsics community eventually ended up focusing on figuring out what keeps the distance between the membranes so constant and we discovered the CUF, Cyclical Universe Force. As you can probably imagine, this is when AiCorp joined up with our group and provided the funding necessary to research this force heavily and we even started some of the preliminary experiments in my own lab at MIT. That would have been a good time for investors to put their money in AiCorp.” #Holy crap, looking at Dr. Downing’s potential earnings had he made the proper investments in 2036 he could have been sipping daiquiris at his own villa on St. John being waited on hand and foot. Dr. Downing “Now to give you guys a great analogy of what harnessing this energy looks like. Picture a dam. You have millions of gallons of water pushing against a wall and in that wall are holes which allow a select amount of water to come through. The water is going to come out of those holes like water through a garden hose. 
That is kind of like what is going on with the energy that is being harnessed between the membranes. In further detail, the CUFG transcends into dimensions greater than the 4th dimension, time. Dimensions 5–10 govern the inner workings of the membranes and how they relate.” #At this point half the class was losing attention and the mention of dimensions 5–10 gave the student’s headaches because they also were briefly explained this through VReal and other sources, however never completely coherently since it gets very complicated with high level calculus. Dr. Downing decided this would be a great place to stop the lecture and resume tomorrow. Dr. Downing “Don’t worry class, I get it. This part of the lecture gets obscure and dense so tomorrow we will sift through the thick cloud and get to the truth. I promise!” #The students logged off of the class mode of their VR headsets and most of them switched the view to another VR activity. Some kids joined together for a pickup game of basketball, others soccer and the rest of the VR crew, you know… There were a select few kids who had parents that actually limited their kid’s VR headsets and made them step into the real world. You know there is a theory that says everything in life is just an illusion? That theory says that even in the “real” world, everything that we see isn’t really there but comes from within. If that is true than that means the “real” world isn’t that different than the VR world. Except in the VR world you can be whoever you want to be no matter the physical or socio economic restrictions you may have. You can have a car that is as expensive as you want, doesn’t cost anything in the VR world. You can travel to any country you want in the world and not pay a dime for a flight. What I ask the purists is don’t you want to end your suffering? Don’t you want to end your tireless game of work to pay for things that cost a lot of money? In the VR world you don’t pay for anything and you can experience everything! End Sub The AiCo Series- Episode 03- Demosceners © 2016 Slava Maleyev-Zelener All rights reserved. No portion of this book may be reproduced in any form without…medium.com
The AiCo Series- Episode 02- Anosteira
0
the-aico-series-episode-02-anosteira-146c3712ac3d
2018-04-03
2018-04-03 14:18:29
https://medium.com/s/story/the-aico-series-episode-02-anosteira-146c3712ac3d
false
4,627
null
null
null
null
null
null
null
null
null
Science
science
Science
49,946
Slava Maleyev-Zelener
yogi, writer, pharmacist, musician (bad but love it)
34b90ef62d19
slavamaleyevzelener
111
327
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-14
2018-08-14 18:30:53
2018-08-14
2018-08-14 18:48:24
0
false
en
2018-08-14
2018-08-14 18:50:08
0
146c91e8eff3
0.045283
1
1
0
This is simply a test.
1
Ilona’s Test This is simply a test. I’ve got something to tell.
Ilona’s Test
1
test-146c91e8eff3
2018-08-14
2018-08-14 18:50:08
https://medium.com/s/story/test-146c91e8eff3
false
12
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
zhengqi zhang
null
e4b1f3d37ce1
zhengqi.zhang14
1
1
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-07-07
2018-07-07 12:11:05
2018-07-07
2018-07-07 13:01:24
7
false
en
2018-07-08
2018-07-08 01:38:58
4
146ebf8b5943
3.827358
2
0
0
How do computers learn from data??
5
Learning Paradigms in Machine Learning How do computers learn from data? A learning paradigm is basically a particular pattern by which something or someone learns. In this blog, we will be talking about the learning paradigms related to machine learning, i.e., how a machine learns when some data is given to it and its pattern of approach for that data. There are three basic types of learning paradigms widely associated with machine learning, namely Supervised Learning, Unsupervised Learning and Reinforcement Learning. We will be talking in brief about all of them. Supervised Learning Supervised learning is a machine learning task in which a function maps input to output data using the provided input-output pairs. This means that in this type of learning, you need to give both the input and the output (usually in the form of labels) to the computer for it to learn from. What the computer does is generate a function based on this data, which can be anything from a simple line to a complex convex function, depending on the data provided. This is the most basic type of learning paradigm, and most algorithms we learn today are based on this type of learning pattern. Some examples of these are: Linear Regression (the simple line function!) and Logistic Regression (0 or 1 logic, meaning yes or no!). I have talked about both these algorithms in my previous blog, so please have a read. Click above to be redirected to the same. Some practical examples of the same are: Reference: https://www.geeksforgeeks.org/supervised-unsupervised-learning/ Classification: the machine is trained to classify something into some class, e.g. classifying whether a patient has a disease or not, or classifying whether an email is spam or not. Regression: the machine is trained to predict some value like price, weight or height, e.g. predicting house/property prices or predicting stock market prices. Unsupervised Learning In this type of learning paradigm, the computer is provided with just the input to develop a learning pattern. It is basically learning from no results! This means that the computer has to recognize a pattern in the given input and develop a learning algorithm accordingly. So we conclude that “the machine learns through observation and finds structures in data”. This is still a largely unexplored field of machine learning, and big tech giants like Google and Microsoft are currently researching developments in it. Some real-life examples of the same are: Reference: https://www.geeksforgeeks.org/supervised-unsupervised-learning/ Clustering: a clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behavior. Association: an association rule learning problem is where you want to discover rules that describe large portions of your data, such as people that buy X also tend to buy Y. Reinforcement Learning Reinforcement learning is a type of machine learning, and thereby also a branch of artificial intelligence. It allows machines and software agents to automatically determine the ideal behavior within a specific context, in order to maximize their performance. Learning pattern in reinforcement learning There is an excellent analogy to explain this type of learning paradigm: “training a dog”. This learning paradigm is like a dog trainer, who teaches the dog how to respond to specific signs, like a whistle, a clap, or anything else. Whenever the dog responds correctly, the trainer gives a reward to the dog, which can be a bone or a biscuit. 
Reference for the following text : https://www.cse.unsw.edu.au/~cs9417ml/RL1/introduction.html http://vmayoral.github.io/robots,/ai,/deep/learning,/rl,/reinforcement/learning/2016/07/06/rl-intro/ A variety of different problems can be solved using reinforcement learning. Because RL agents can learn without expert supervision, the problems best suited to RL are complex problems where there appears to be no obvious or easily programmable solution. Two of the main ones are: Game playing — determining the best move to make in a game often depends on a number of different factors, hence the number of possible states that can exist in a particular game is usually very large. Control problems — such as elevator scheduling. Again, it is not obvious what strategies would provide the best, most timely elevator service. For control problems such as this, RL agents can be left to learn in a simulated environment and eventually they will come up with good control policies. So this is it for this blog. If you liked the content, please give tons of applause! And do follow me for more useful information presented in an understandable way!
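To make the supervised vs. unsupervised split described above concrete, here is a minimal, hedged sketch using scikit-learn (my addition, not code from the original post). The tiny arrays, the LinearRegression/KMeans choices and all parameter values are illustrative assumptions only.

```python
# Minimal sketch (not from the original post): supervised vs. unsupervised
# learning with scikit-learn. The tiny datasets and parameters are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised: both inputs X and targets y are provided.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 8.1])           # noisy targets, roughly y = 2x
reg = LinearRegression().fit(X, y)            # learns the "simple line function"
print("prediction for x=5:", reg.predict([[5.0]])[0])

# Unsupervised: only inputs are provided; the algorithm finds structure itself.
points = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print("cluster assignments:", clusters)
```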
Learning Paradigms in Machine Learning
11
learning-paradigms-in-machine-learning-146ebf8b5943
2018-07-08
2018-07-08 01:38:58
https://medium.com/s/story/learning-paradigms-in-machine-learning-146ebf8b5943
false
736
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Machine Learning
machine-learning
Machine Learning
51,320
Dhairya Parikh
A Tech Enthusiast!
5c750e2dc6a6
dhairyaparikh67
17
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-03
2018-07-03 20:47:35
2018-07-03
2018-07-03 20:48:10
0
false
en
2018-07-03
2018-07-03 20:48:10
2
146fc66d4b8c
1.449057
0
0
0
Problem a)
3
Let’s start with a couple of problems Problem a) I read a lot of news articles and find that a number of words appear close to each other — New next to York, New next to Delhi. However, if I am reading a book about the history of India, it is unlikely that New will come next to Delhi; in this case Delhi will stand alone. In other words, I am smart enough to understand the context and the relationship between these words. The question is: can my machine learning program learn this relationship from the data it is presented with? Problem b) My grammar teacher has done a good job of instilling rules in my head — the female of a donkey is a jenny (did you know that?). I can determine that a jenny is a donkey while reading a text about donkeys, but can my machine learning algorithm make that association? The larger class of problems is how you understand some class of text — perhaps all novels by Isaac Asimov or the entire Wikipedia database — and make predictions based on that dataset. The machine learning algorithm that makes these smart associations is called “Word2Vec”. Word2Vec is the model used to create word embeddings. The model maps each word to a vector in the space it is learning from; eventually clusters of related words settle down close to each other. The larger class of problems is called “Word Embeddings”. These are shallow, two-layer neural networks that are used when there is a huge number of classes. The network is able to capture semantic relationships between words and produce richer representations. Intuition Word2Vec works with a large data set. Let’s take the example of feeding the entire data set of scientific articles on donkeys to the algorithm. Assume that this gives a vocabulary of 10k words. Each of these words will be represented as input (one-hot encoded) to the Word2Vec algorithm. The training data set will produce a set of weights that determine this relationship. The output is a probability distribution over each of the 10k words. Thus, fed “jack” (the male of a donkey), the output distribution will likely be heavily weighted towards “jenny”. Chris McCormick has a great overview of the Skip-gram model for Word2Vec.
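As a rough illustration of the Word2Vec intuition above, here is a hedged sketch using the gensim library (my addition, not part of the original post). It assumes gensim 4.x parameter names (vector_size, sg, epochs), and the toy corpus and every parameter value are made up for illustration; real embeddings need far larger corpora.

```python
# Minimal sketch (my addition): training a tiny skip-gram Word2Vec model with
# gensim 4.x on a toy corpus. Corpus and parameters are illustrative only.
from gensim.models import Word2Vec

corpus = [
    ["new", "york", "city", "news"],
    ["new", "delhi", "is", "the", "capital", "of", "india"],
    ["the", "jack", "and", "the", "jenny", "are", "donkeys"],
    ["a", "jenny", "is", "a", "female", "donkey"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the word vectors
    window=3,         # context window size
    min_count=1,      # keep every word in this toy vocabulary
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=200,       # many passes because the corpus is tiny
)

# Words that appear in similar contexts end up close in the vector space.
print(model.wv.most_similar("jenny", topn=3))
```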
Let’s start with a couple of problems
0
lets-start-with-a-couple-of-problems-146fc66d4b8c
2018-07-03
2018-07-03 20:48:10
https://medium.com/s/story/lets-start-with-a-couple-of-problems-146fc66d4b8c
false
384
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Harpreet Singh
harpreet.io|blueorangeart.com|Product, Design, Leadership| Tech— DevOps, Cloud, AI Enthusiast| Art| Meditation|Newsletter Signup →https://bit.ly/2qLO6xW|
e80cf3e1dbbc
singh.harry
117
646
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-24
2018-07-24 10:36:33
2018-07-24
2018-07-24 10:53:50
1
false
en
2018-07-24
2018-07-24 11:13:17
57
146fd9afa957
6.509434
0
0
0
We are in the midst of an unprecedented surge of investment into artificial intelligence (AI) research and applications. Within that…
5
Statue of Justitia in Viña del Mar, Chile © Mona Sloane Making AI socially just: why the current focus on ethics is not enough We are in the midst of an unprecedented surge of investment into artificial intelligence (AI) research and applications. Within that, discussions about ‘ethics’ are taking centre stage to offset some of the potentially negative impacts of AI on society. To achieve a sustainable shift towards such fields, we need a more holistic approach to the relationship between technology, data, and society. In June 2018, the Mayor of London released a new report that identifies London’s ‘unique strengths as a global hub of Artificial Intelligence’ and positions the capital as ‘The AI Growth Capital of Europe’. This plea coincides with the government’s focus on ‘AI & Data Economy’ as the first out of four ‘Grand Challenges’ to put the UK ‘at the forefront of the industries of the future’. The AI Sector Deal of £1 billion, part of the Industrial Strategy, has seen private investment of £300 million, alongside £300 million government funding for research in addition to already committed funds. Albeit significant, these investments are small compared to, for example, France’s pledge of €1.5 billion pure government funding for AI until 2022 or Germany’s new ‘Cyber Valley’ receiving over €50 million from the state of Baden-Württemberg alone in addition to significant investments from companies such as Bosch, BMW, and Facebook. The EU Commission has pledged an investment into AI of €1.5 billion for the period 2018–2020 under Horizon 2020, expected to trigger an additional €2.5 billion of funding from existing public-private partnerships and eventually leading to an overall investment of at least €20 billion until 2020. This wave of AI funding is, in part, a reaction to the Silicon Valley’s traditional domination of the AI industry as well as China’s aspiration to lead the field (focused on both soft- and hardware and comprised of large-scale governmental initiatives and significant private investments). Large-scale investments to boost (cross-)national competitiveness in emerging fields are hardly new. What is special about this surge of investment into AI is a central concern for ethical and social issues. In the UK, the AI Sector Deal entails a new Centre for Data Ethics whilst a recent report by the House of Lords Select Committee on Artificial Intelligence puts ethics front and centre for successful AI innovation in the UK. Relatedly, London-based AI heavyweight DeepMind launched its Ethics and Society research unit in late 2017 to focus on applied ethics within AI innovation, alongside a range of UK institutions embarking on similar missions (such as The Turing Institute with their Data Ethics Group). The UK is not alone in the race for ‘ethical AI’: the ‘Ethics of AI’ are a central element of France’s AI strategy; Germany released a report containing ethical rules for automated driving in 2017; Italy’s Agenzia per l’Italia Digitale published a White Paper on AI naming ‘ethics’ as №1 challenge; the European Commission has held the high-level hearing ‘A European Union Strategy for Artificial Intelligence’ in March 2018 and recently announced the members of its new High-Level Expert Group on Artificial Intelligence, tasked with, among other things, drafting AI ethics guidelines for the EU Commission. A similar picture materialises outside Europe — in Canada, America, as well as in Singapore, India and China as well. 
These developments resonate with a new global discourse on the ethical and social issues evolving around data, automated systems, artificial intelligence technology and deep learning more generally. This is not least due to recent events such as the Cambridge Analytica scandal involving Facebook user data and civilian deaths through driverless cars. In Europe, the rollout of the General Data Protection Regulation (GDPR) has brought data protection issues to a broad audience while new research (such as by Virginia Eubanks, Safiya Umoja Noble or Cathy O’Neil) has demystified the account that algorithms are de facto neutral and shown that existing power imbalances, inequalities, and cultures of discrimination are mirrored and exacerbated by automated systems. With these kinds of issues surfacing, specific concerns that cut across the international AI landscape are materialising. To address these, different strategies are being suggested such as implementing re-training schemes for workers, algorithm auditing, re-framing the legal basis for AI in the context of human rights (including children’s rights in the digital age), calling for AI intelligibility, voicing concerns against AI privatisation and monopolisation, suggesting ‘human-centred AI’, proposing an AI citizen jury and calling for stronger and more coherent regulation. The notion of ‘ethical AI’ serves as an umbrella for many of these discussions and strategies. But to achieve sustainable change towards socially just and transparent AI development beyond a framing of data ethics as competitive advantage (as has been suggested elsewhere), it is paramount to consider the following points: 1. We need a clear picture of ‘AI’, ‘ethics’ and ‘bias’. Currently, the discourse employs a problematic confusion of the terms ‘AI’, ‘deep learning’, ‘machine learning’, ‘automated systems’ and so on. This prevents more productive conversations about the abilities and limits of such technologies. At the same time, it has been noted by several commentators that both ‘ethics’ and ‘bias’ are highly contextual and abstract at the same time. This inevitably prompts issues of definition, translation and implementation. For example, bias in machine learning refers to data systematically diverging from the population it looks to represent whilst in law, it refers to the predisposition of a decision-maker against or in favour of a party. Therefore, we need clear frameworks of ‘ethics’ and ‘bias’. These need to be firm enough to be acted upon (particularly in human rights terms) but sufficiently flexible to accommodate how ethical considerations and issues of discrimination develop over time and in the context of technological advancement. 2. AI inequality is the name of the game. The discourse and practice around socially just AI need to build on a fuller picture of how this technological advancement is imbued by structural inequalities. A focus on just ‘ethics’ and ‘bias’ does not necessitate an acknowledgement of the historic patterns of unequal power structures, discrimination and multi-facetted social inequalities that cause algorithmic and data ‘bias’. Such AI inequalities are no longer confined to the traditional notions of wealth, class or racial inequalities. They are overlapping, complex and intersectional. 
And they also encompass unequally distributed burdens of AI production across the globe, for example the environmental consequences or labour conditions of AI-related manufacturing to the concentration of AI expertise in a small number of countries as well as the unequally distributed effects of work automation. 3. The social sciences need to play an active part — and funding opportunities need to reflect this. We need a stronger and more active involvement of the social sciences, beyond the technical domain. They remain underrepresented in the central AI policy bodies that are forming (e.g. the EC High Level Working Group on Artificial Intelligence). It is not sufficient to combine the input from technical experts and cognitive scientists with moral philosophy. Ethics and values are social phenomena, something people do (with or without machines), rather than abstract concepts that can be coded into AI. Relatedly, the data algorithms feed off and contain social complexity that, if not attended to, can perpetuate and exacerbate bias and discrimination. Analysing this situation and tending to the social complexity of data is the traditional domain of the social sciences, particularly qualitative research. Therefore, social research can provide crucial input for intelligible and socially just AI innovation. The surge in AI investment must prompt new funding opportunities to reflect this and expand the important non-technical research that already exists across and beyond the UK and Europe (e.g. the Data Justice Lab). 4. Tackling the ‘black box’ problem: AI intelligibility, education, and regulation. The rapid development of deep learning technology amplifies the ‘black box’ problem whereby it is unclear how an algorithm working based on an artificial neural network arrived at its prediction or behaviour. The reduced relevance of the algorithmic model for explaining the outcome suggests a greater relevance of the data the algorithm feeds from. To address the ‘black box’ problem as part of socially just AI, we need to expand the notion of AI intelligibility to include data transparency. To hold public and private entities accountable in this regard, the public requires an education comprised of technical, political, and social understandings of AI. This goes beyond the commonly suggested up-/re-skilling of workers to offset potential job losses caused by automation and emphasizes the civic role of universities and other educational institutions as well as AI regulation through an impartial body. 5. So what? AI as a gateway to tackle urgent social problems. Despite the disruptive rhetoric cultivated by corporate and governmental AI advocates, AI is generating gradual and complex rather than abrupt apocalyptic or utopian change, usually alongside rather than replacing humans. What has equally moved into the background is the fact that the AI hype is rooted in the leaps deep learning made over the past five years (caused by the availability of big data and substantial improvements in computational power). However, critics outline the prevailing limits of deep learning and the unreliability of machines completing tasks, predicting the AI hype to cool off into an AI winter soon. We must ask ourselves what will remain, once that happens. AI prompts us to re-evaluate ‘big’ questions relating to power, democracy and inequality (e.g. impending work automation through AI prompts a new basic income debate) and to what it means to be human. 
The biggest thing AI can do for humanity is to force us to keep asking these questions: we must co-opt the AI discourse to keep addressing urgent social problems, rather than the other way around. Without deploying a holistic approach to the relationship between technology, data and society that addresses at least these five points, AI development will create rather than solve problems in our collective future. This article was originally published on the LSE British Politics and Policy blog: http://blogs.lse.ac.uk/politicsandpolicy/artificial-intelligence-and-society-ethics/
Making AI socially just: why the current focus on ethics is not enough
0
making-ai-socially-just-why-the-current-focus-on-ethics-is-not-enough-146fd9afa957
2018-07-24
2018-07-24 11:13:17
https://medium.com/s/story/making-ai-socially-just-why-the-current-focus-on-ethics-is-not-enough-146fd9afa957
false
1,672
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mona Sloane
Mona Sloane is a sociologist, researcher and writer. She holds a PhD in Sociology from the LSE and works on design, inequality, tech and ethics. @mona_sloane
5308a94f68e7
monasloane
0
4
20,181,104
null
null
null
null
null
null
0
null
0
6140e0c97517
2017-11-10
2017-11-10 07:18:29
2017-11-13
2017-11-13 01:59:44
17
false
en
2017-11-18
2017-11-18 08:35:29
2
147022ecda13
5.350943
2
0
0
This post is a summary of Lecture 1 of Deep RL Bootcamp 2017 at UC Berkely. All of the figures, equations and text are taken from the…
5
A summary of Deep Reinforcement Learning (RL) Bootcamp: Lecture 1 This post is a summary of Lecture 1 of Deep RL Bootcamp 2017 at UC Berkeley. All of the figures, equations and text are taken from the lecture slides and videos available here. RL problems are modeled as Markov Decision Processes (MDP). In an MDP, there is an agent that interacts with the environment around it. The agent can observe state (s_t) and reward (r_t), and perform action (a_t). As a result of its action, the environment will change to state s_(t+1) and an immediate reward r_(t+1) is received. RL problems modeled as MDP. Several examples of deep RL success stories include the Atari game player (2013) and the Go player (2016). Some of deep RL success stories There are multiple ways of solving RL problems. Here we first look at policy iteration and value iteration, which are based on dynamic programming. Methods of solving RL problems To formulate the problem, an MDP is defined by: a set of states S, e.g. screen pixels in Atari Breakout, or joint angles and velocities in a robot; a set of actions A, e.g. the left, right or fire command in Atari Breakout, or torque values for different motors; a transition function P(s'|s,a), the distribution of the next state given the current state and action; a reward function R(s,a,s'), which determines the reward for being in state s, taking action a and landing in state s'; a start state s_0; a discount factor γ, i.e. how much you care about the future compared to the current time; and a horizon H, i.e. how long you are going to be acting, finite or infinite. As an example, consider a Gridworld as shown below. The agent can take the actions of moving north, east, west or south. If the agent reaches the blue diamond, it will receive reward +1. If it falls into the orange square, it will receive reward -1. Reaching anywhere else in the maze has zero reward. Gridworld example The goal is to find the optimal policy that maximizes the expected sum of the rewards under that policy. Goal in an RL problem A policy π determines what action to take for a given state. It could be a distribution over actions or a deterministic function. As an example, a deterministic policy π for the Gridworld is shown below. An example of policy π to take actions on the Gridworld. The problem of optimal control or planning is, given an MDP (S, A, P, R, γ, H), to find the optimal policy π*. Two exact methods to solve this problem are value iteration and policy iteration. Value Iteration In value iteration, a concept called the optimal value function V* is defined as V*(s) = max_π E[ Σ_{t=0..H} γ^t R(s_t, a_t, s_(t+1)) | π, s_0 = s ], which is the sum of discounted rewards when starting at state s and acting optimally. 
For example, the optimal value function of Gridworld with a deterministic transition function (that is, actions are always successful), gamma=1 and H=100 is calculated as below: V*(4,3) = 1 V*(3,3) = 1 V*(2,3) = 1 V*(1,1) = 1 V*(4,2) = -1 In another example, the optimal value function of Gridworld when actions are always successful, gamma=0.9 and H=100 is calculated as: V*(4,3) = 1 V*(3,3) = 0.9 # because of discount factor gamma=0.9 V*(2,3) = 0.9*0.9 = 0.81 V*(1,1) = 0.9*0.9*0.9*0.9*0.9 = 0.59 V*(4,2) = -1 In a third example, the optimal value function when actions are successful with probability 0.8, stay in the same place with probability 0.1 and move to a neighboring state with probability 0.1, with gamma = 0.9 and H = 100, is calculated as: V*(4,3) = 1 V*(3,3) = 0.8 * 0.9 + 0.1 * 0.9 * V*(3,3) + 0.1 * 0.9 * V*(3,2) V*(2,3) = V*(1,1) = V*(4,2) As you can see, in the case of a stochastic transition function, the optimal value function for a state depends on the value function of other states. In other words, it requires a recursive/iterative calculation. That is where value iteration can play a role! The value iteration algorithm is shown below: start with V_0(s) = 0 for all s, then repeat V_(k+1)(s) = max_a Σ_s' P(s'|s,a) [ R(s,a,s') + γ V_k(s') ]. Here, V_k(s) is the optimal value of state s when k time steps remain. The optimal values for Gridworld with H=100, discount=0.9 and noise=0.2 are calculated and shown below. It is noted that after a certain number of iterations, the value function stops changing significantly. Value iteration is guaranteed to converge. At convergence, the optimal value function is found and as a result the optimal policy is found. Q-value Iteration We consider another method called Q-learning to solve RL problems. To this end, the optimal Q-value Q*(s,a) is defined as the expected sum of discounted rewards when starting at state s, taking action a, and acting optimally afterwards. Q-values are similar to V-values except that in addition to state s, action a is also given to the function. Similarly, there is a Bellman equation for Q-values: Q*(s,a) = Σ_s' P(s'|s,a) [ R(s,a,s') + γ max_a' Q*(s',a') ]. To solve for Q*, Q-value iteration is defined as Q_(k+1)(s,a) = Σ_s' P(s'|s,a) [ R(s,a,s') + γ max_a' Q_k(s',a') ]. There are multiple benefits of using Q-learning vs value iteration that will be discussed later. For now, it is worth noting that in Q-learning we only compute Q-values, and they implicitly encode the optimal policy, as opposed to value iteration, where we need to keep track of both the policy and the value function. As an example, Q-values for the Gridworld with gamma=0.9 and noise=0.2 after 100 iterations would look like below. There are four Q-values per state since there are four actions to take. Policy Iteration Finally, we look at policy evaluation/policy iteration. In policy evaluation, we fix the policy and compute the value function for the given policy as V^π_(k+1)(s) = Σ_s' P(s'|s,π(s)) [ R(s,π(s),s') + γ V^π_k(s') ]. Value iteration under fixed policy As seen in the above equation, the max operation is removed since the policy is now fixed and as a result there is only one action to take, which is π(s). Policy iteration then alternates between evaluating the current policy and improving it greedily with respect to the resulting value function; we repeat until the policy converges. It converges faster than value iteration under some conditions. Policy iteration Note: Lab 1 includes examples of value iteration and policy iteration. Wrap up In the first lecture, the basics of RL and MDPs were introduced. The exact methods to solve small MDP problems were described. These methods are value iteration, Q-learning and policy iteration. The limitations of these methods are that they require iterating over, and storing values for, all states and actions, so they are only suitable for small, discrete state-action spaces. Also, to apply the update equations they require access to the dynamics of the environment, i.e. the transition function P(s'|s,a). Continue with Lecture 2
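To make the value iteration update above concrete, here is a minimal sketch in NumPy (my addition, not code from the lecture or labs). The three-state MDP, its transition matrix P and reward tensor R are invented for illustration; this is not the bootcamp's Gridworld.

```python
# Minimal sketch (my addition): value iteration on a tiny toy MDP with
# 3 states and 2 actions. P and R are made up for illustration.
import numpy as np

n_states, n_actions = 3, 2
gamma = 0.9

# P[a, s, s'] = transition probability, R[a, s, s'] = reward for that transition.
P = np.zeros((n_actions, n_states, n_states))
P[0] = [[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]]
P[1] = [[0.2, 0.8, 0.0], [0.2, 0.0, 0.8], [0.0, 0.0, 1.0]]
R = np.zeros((n_actions, n_states, n_states))
R[:, :2, 2] = 1.0                       # +1 for entering the terminal state 2

V = np.zeros(n_states)
for k in range(1000):
    # Q[a, s] = sum_{s'} P(s'|s,a) * (R(s,a,s') + gamma * V(s'))
    Q = np.einsum("ast,ast->as", P, R + gamma * V[None, None, :])
    V_new = Q.max(axis=0)               # Bellman backup: best action per state
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)               # greedy policy w.r.t. the final Q-values
print("V* ≈", np.round(V, 3), "greedy policy:", policy)
```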
A summary of Deep Reinforcement Learning (RL) Bootcamp: Lecture 1
7
a-summary-of-deep-reinforcement-learning-rl-bootcamp-147022ecda13
2018-06-07
2018-06-07 18:30:23
https://medium.com/s/story/a-summary-of-deep-reinforcement-learning-rl-bootcamp-147022ecda13
false
994
Random Topics in Artificial Intelligence
null
null
null
randomAI
null
randomai
DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE,COMPUTER VISION,MEDICAL IMAGING
null
Machine Learning
machine-learning
Machine Learning
51,320
Michael R. Avendi, PhD
Scientist and Engineer, Building AI for Healthcare
2633b14cec09
twt446
41
81
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-08
2018-07-08 17:14:55
2018-07-08
2018-07-08 18:01:01
4
false
en
2018-07-08
2018-07-08 18:16:38
8
1470b069947a
2.979245
4
2
0
Description of the next phase of development
5
Volp A.I Protocol Description of the next phase of development According to the introduction in the previous post, we explained a little about Volpcoin and the Volp protocol, which are different things. Introduction The Volpcoin beta v1.0 was part of the first phase of the application of the Volp protocol, where we studied the behavior of the protocol. We use a hybrid VM kernel that we build from scratch to run in symbiosis with the protocol together with a compiler specifically for this application, where the tests were performed through a parallel network. Volp A.I Platform In the first phase pilot test segment, we decided to reevaluate the concept and our goals for the development and application of the Volp Protocol. Our goal was to develop a platform that aggregates multiple services managed by an autonomous protocol through techniques of artificial intelligence algorithms. How does it work? The scalability and immutability of information that is transacted within any platform that incorporates the protocol, will be ensured through an autonomous management system, along with a security analysis and a constant supervision by the system itself that will sniff and notify in real time all occurrences and anomalies. Current situation / Background Our next step was the launch of Volpcoin Token, which was created to monetize the value behind the technology and distribute that value to the entire support community . Whose main objective will be the migration and integration of Volpcoin, along with other services in the Volp AI platform. This process will contribute to a greater understanding of the capabilities of the platform. Our official website is in progress and due to a constant development, we will be updating it according to the importance of the evolutionary process. We will be launching a demonstration of the platform on August 10th, 2018, with the goal of certifying our commitment and responsibility in our proposal. To reinforce the few words, we reveal about the effects of the Volp protocol, we have prepared a small pre-demonstration in a controlled environment for the next July 15th. User participation will be limited to community members who have believed and trusted Volp from the beginning. Some rules for this participation will be available soon, along with the application form. The future … Very soon, Volp A. I will become a company, to protect the development and Intellectual Property of the Volp Protocol. As for the platform will be entirely open source, helping to contribute to better efficiency and security of decentralized technology protocols. It’s gratifying to know that it is possible to create disruptive technologies with great clarity and dynamism, and at the same time with great ease of understanding, for people who don’t have scientific knowledge in the field of cryptology, built-in data scaling languages that constitute the dynamic ecosystem. In this way, the protocol will create conditions for possible partnerships or integration into other large or small projects with global reference partnerships. Please, let us know of any doubts or suggestions, we would be happy to hear from you. You can also read our posts on bitcointalk or bitcoingarden for more Information Volp_AI (@VolpCoin) | Twitter The latest Tweets from Volp_AI (@VolpCoin). Volpcoin - Smart communication protocol and the next generation of…twitter.com VolpAI (@Volp_AI) | Twitter The latest Tweets from VolpAI (@Volp_AI). 
Advanced non-corruptible data and distribution management system with…twitter.com Volp Community A community for everyone know about the Volp project . Share your doubts and ideas.t.me Discord - Free voice and text chat for gamers Step up your game with a modern voice & text chat app. Crystal clear voice, multiple server and channel support, mobile…discordapp.com
Volp A.I Protocol
200
volp-a-i-protocol-1470b069947a
2018-07-08
2018-07-08 18:16:38
https://medium.com/s/story/volp-a-i-protocol-1470b069947a
false
604
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Volp A.I
Advanced non-corruptible data and distribution management system with autonomous intelligent network and deep learning algorithm.
16535314b111
volpcoin.info
76
213
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-12
2018-08-12 09:13:04
2018-09-09
2018-09-09 14:48:13
9
false
en
2018-09-09
2018-09-09 14:48:13
1
1471b72d6d19
2.860377
0
0
0
our process and solutions
1
1st time participate in Hackathon! (2) our process and solutions Talking with Irune and her team was one of the best ways to conduct guerrilla research under a limited time-frame in this hackathon. From our conversation, we understood that the internal image tagging process is tedious work. People use different sets of descriptors when referring to the same items. In the end the internal product search engine became user-unfriendly and problematic. Problems that we identified In order to solve this problem, our goals were to: automate the image tagging process by training separate modules; introduce a more user-friendly search engine; and streamline the communication process by allowing designers to find inspiration instantly. An all-round solution for LF In order to automate the image tagging process, we used image recognition technology to help users auto-tag each image. In this hackathon, LF partnered with Microsoft. Therefore, we used the Custom Vision API from Azure to train the modules to recognise specific content in imagery and become smarter with training and time. Model framework and how to train each module We understand that LF's internal database is huge and each textile product's characteristics vary a lot. Therefore we needed to set a framework for our database in order to train the modules efficiently. We broke the items' tag categories down into areas such as Style, Fabric etc. Each category holds essential information about the product, such as colors, patterns and types. The values inside each piece of information are the tags that help the staff identify each product. By automating the tagging process, LF's internal staff can focus more on the procurement process and work with designers to provide a more refined product to end-customers. With our product, LF staff don't have to do the tagging manually; instead they just need to approve or correct the suggested tags. Splash screen -> Image auto tagging -> Choose option and check similarity Image search -> Designers' inspiration board To further improve the internal search engine, we introduced an inspiration board for designers to look up the most searched or trending products within the internal team. We applied a color UI to the tag labels to indicate the level of popularity of these words, which helps streamline the communication process within the procurement team. So what's my experience? Participating in different kinds of hackathons is indeed good training for me to apply not only what I have learned at school but, more importantly, the skills I have acquired at the workplace over time. Team management, teamwork and identifying solutions that are not nice-to-haves but must-haves, or part of the roadmap for the company, are things I could never have learned if I had never been to a hackathon before.
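For readers curious what the auto-tagging call might look like, here is a hedged sketch (my addition, not the hackathon team's code) that sends an image to an Azure Custom Vision prediction endpoint over its REST interface. The endpoint shape assumes the v3.0 Prediction API, and the resource name, project ID, iteration name, key and file name are all placeholders; check the current Azure documentation before relying on it.

```python
# Hedged sketch (my addition): auto-tagging an image with the Azure Custom
# Vision Prediction REST API. All identifiers below are placeholders and the
# URL shape assumes the v3.0 Prediction API.
import requests

ENDPOINT = "https://YOUR_RESOURCE.cognitiveservices.azure.com"   # placeholder
PROJECT_ID = "YOUR_PROJECT_ID"                                    # placeholder
ITERATION = "Iteration1"                                          # published iteration name (placeholder)
PREDICTION_KEY = "YOUR_PREDICTION_KEY"                            # placeholder

url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
       f"/classify/iterations/{ITERATION}/image")
headers = {
    "Prediction-Key": PREDICTION_KEY,
    "Content-Type": "application/octet-stream",
}

# Send the raw image bytes and read back the predicted tags.
with open("fabric_sample.jpg", "rb") as f:                        # placeholder file
    response = requests.post(url, headers=headers, data=f.read())
response.raise_for_status()

# Keep only confident tags so staff merely approve or correct them.
for pred in response.json().get("predictions", []):
    if pred["probability"] > 0.5:
        print(f'{pred["tagName"]}: {pred["probability"]:.2f}')
```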
1st time participate in Hackathon! (2)
0
1st-time-participate-in-hackathon-2-1471b72d6d19
2018-09-09
2018-09-09 14:48:13
https://medium.com/s/story/1st-time-participate-in-hackathon-2-1471b72d6d19
false
440
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Carrie Lau
null
19cd2f94e99f
carrielau
32
16
20,181,104
null
null
null
null
null
null
0
null
0
385a9fa3a3dd
2018-05-07
2018-05-07 10:34:16
2018-05-07
2018-05-07 11:59:57
1
false
en
2018-10-10
2018-10-10 12:40:26
5
1471c4f25ca8
1.837736
6
0
0
Another two weeks have passed, so it’s time for a quick dev update!
5
Development Update 7th May Photo by Dreamstime Another two weeks have passed, so it’s time for a quick dev update! BTRN Token Many of you asked us to list BTRN token on more exchanges. I’m glad I can tell you that you can start trading it on IDEX. Of course we’re working on more, so we’ll announce other exchanges when the deals will be closed. Another exciting news is that you can start tracking the market development of BTRN token on CoinMarketCap. And last but not least, all bounties were already delivered. If by any chance you didn’t get your tokens, contact us and we’ll be glad to help solve the situation. Product The mobile application is well in development and this week we’re starting to test the alpha version internally. Server side is almost finished, so it’s up to the Android app, which shouldn’t take much longer. Along with a wallet it will include data collection and some free tokens will be involved as well. Deadline for public release is set to 31st May 2018. In the next dev update we’ll give you a sneak preview. Team & Ops I’m very happy that we were able to find a new office and move there within one day. There’s still a lot of small things to solve, but I’m confident it will help us focus on our work and deliver great products. Martin, our COO, also took care of finding new legal and accounting firms to make sure our data and analytics is GDPR compliant and that our books are in order. As we progress with the product, we’re starting to hire developers (frontend, backend) and data engineers. If you know about anyone, we’d really appreciate if you let us know. Next 2 weeks There’s quite a lot of work ahead of us during May. We want to finalize the Android app and prepare for launch by the end of month. Web will be redesigned and completely restructured to reflect the developments and our products. First location intelligence reports — maybe even public case studies — will be done for a couple of clients. Apart from these we’re discussing partnerships with some more (AI, automotive) and doing one research on data and universal basic income. This week we also have brainstorming with folks from Diagnose.me to draft and start working on the healthcare data analytics vertical. Get in touch with us! You’re very welcome to ask us anything on our official Telegram group. Twitter might be of help here as well. We’re looking forward to your messages. Have a successful week! Best regards, Pavol CEO of Biotron
Development Update 7th May
146
development-update-7th-may-1471c4f25ca8
2018-10-10
2018-10-10 12:40:26
https://medium.com/s/story/development-update-7th-may-1471c4f25ca8
false
434
We make data available in a transparent and privacy-compliant way to power innovation worldwide. Our privacy policy: https://docs.biotron.io/Biotron_Privacy_Policy.pdf
null
biotron.io
null
BIOTRON
biotron
PERSONAL DATA,DATA MONETIZATION,ANALYTICS PLATFORMS,PASSIVE INCOME,BLOCKCHAIN TECHNOLOGY
biotron_io
Blockchain
blockchain
Blockchain
265,164
Barbara Okay
null
d2922878cab4
barbara.okay.okay
25
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-07
2018-09-07 15:49:09
2018-09-07
2018-09-07 16:31:53
3
false
en
2018-09-20
2018-09-20 09:02:11
0
14725b63f8a9
2.663208
36
1
0
Central to blockchain technology is the concept of mining. To oversimplify, mining is a process through which the legitimacy of a…
5
Behind The MATRIX: Mining Machine Prototype Central to blockchain technology is the concept of mining. To oversimplify, mining is a process through which the legitimacy of a transaction can be verified. To accomplish this, many blockchains use a Proof of Work (PoW) consensus algorithm. While the inner workings of PoW consensus algorithms are beyond the scope of this write-up, suffice it to say that asymmetry is key. In other words, the complicated mathematical puzzle central to PoW must be sufficiently challenging, while the solution, once found, must be incredibly easy to prove and confirm. Mining is a resource-intensive task that increasingly demands specialized computer hardware that consumes vast quantities of largely wasted power. In fact, this wasted power is one of the major problems that the MATRIX AI Network attempts to solve by leveraging the latest artificial intelligence (AI) technology. MATRIX's innovative consensus algorithm — a PoW/PoS hybrid dubbed HPoW — uses these vast computing resources to run Markov Chain Monte Carlo (MCMC) algorithms. These have valuable real-world applications and are currently being used in collaborations with research hospitals to improve the speed and accuracy of cancer diagnosis. MATRIX Mining Machine Motherboard MATRIX Mining Machine Experimental Results However, this form of mining is but one piece of the puzzle. In parallel to these efforts, MATRIX is also designing proprietary mining hardware. This write-up and the accompanying video aim to provide a glimpse into the progress made by our research and development (R&D) teams. MATRIX Research & Development Department team member, Mr. Yong Li, provides an update regarding the MATRIX Mining Machine. What is specific to the MATRIX Mining Machine? The MATRIX Mining Machine makes use of FPGAs (Field-Programmable Gate Arrays) to meet four targeted requirements. These are high computational power, low energy consumption, convenient mining and superior heat dissipation. One of the benefits of using FPGAs is that logic can be stored directly in the hardware. The current MATRIX Mining Machine prototype sports four cores and can support the mining of up to four different currencies. Specifically, these currencies are MAN, BTC, ETH and Filecoin. In a lab setting, the MATRIX Mining Machine has exceeded a system throughput speed of 50k TPS. Is the MATRIX Mining Machine required to mine MAN tokens? The MATRIX Mining Machine is not required to mine MAN tokens. However, the MATRIX Mining Machine will be more efficient at mining than prefab custom setups. If mining using regular consumer-grade hardware, at least one core will need to be dedicated to mining activities. Specialized, dedicated hardware is an avenue worth exploring for dedicated miners. When will the MATRIX Mining Machine be released? The current plan is to publicly release the MATRIX Mining Machine prototype before the end of the calendar year. As a kind reminder, the MATRIX testnet is scheduled to launch at the end of September 2018. The MATRIX mainnet is scheduled to launch by the end of the calendar year. Will it be available to the public? Yes. The MATRIX Mining Machine will be available to the public for purchase. Details on how to procure a MATRIX Mining Machine will be released later. During the month of September, MATRIX AI Network is releasing a series of videos and write-ups highlighting some of the project's technical features. 
These include the MATRIX Mining Machine, the MATRIX Auto-Coding Intelligent Contracts, the MATRIX Secure Virtual Machine and the MATRIX Digital Asset Safe.
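To illustrate the PoW asymmetry described above (a puzzle that is hard to solve but trivial to verify), here is a generic toy sketch in Python (my addition). It is a plain hash-puzzle example, not MATRIX's HPoW algorithm, and the difficulty level, block data and function names are illustrative only.

```python
# Toy sketch (my addition): PoW asymmetry. Finding a nonce whose hash has a
# required number of leading zeros takes many attempts, but verifying a
# proposed nonce takes a single hash. Generic example, not MATRIX's HPoW.
import hashlib

def hash_hex(block_data: str, nonce: int) -> str:
    return hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()

def mine(block_data: str, difficulty: int = 4) -> int:
    """Slow part: brute-force nonces until the hash starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while not hash_hex(block_data, nonce).startswith(target):
        nonce += 1
    return nonce

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Cheap part: one hash is enough to confirm the work was done."""
    return hash_hex(block_data, nonce).startswith("0" * difficulty)

if __name__ == "__main__":
    data = "alice->bob:10"                  # stand-in for transaction data
    nonce = mine(data, difficulty=4)
    print("nonce:", nonce, "valid:", verify(data, nonce, difficulty=4))
```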
Behind The MATRIX: Mining Machine Prototype
862
behind-the-matrix-mining-machine-prototype-14725b63f8a9
2018-09-20
2018-09-20 09:02:11
https://medium.com/s/story/behind-the-matrix-mining-machine-prototype-14725b63f8a9
false
560
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
MATRIX AI NETWORK
An open source public intelligent blockchain platform
ad51c60ef692
matrixainetwork
1,003
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-19
2018-09-19 17:09:36
2018-09-20
2018-09-20 13:07:05
1
false
en
2018-09-20
2018-09-20 13:07:05
2
1472bdc5b3d7
2.169811
0
0
0
Recap From Day 072
5
100 Days Of ML Code — Day 073 Recap From Day 072 On Day 072, we looked at the first part of designing custom algorithms for music. You can catch up using the link below. 100 Days Of ML Code — Day 072 Recap From Day 071medium.com Today, we'll continue from where we left off on day 072. Working with time Designing custom algorithms for music The two models we have seen before come from more conventional methods; as such, they can be seen as hacks of these more classical models. So, for instance, the gesture follower is a Hidden Markov Model (HMM). As we have seen previously, an HMM is a method that is particularly good at modelling a temporal sequence of events such as words in a sentence or states in a gesture. The example given was drawing a circle gesture. We start at the bottom of the circle, then move toward the left, to the top, to the right and back to the bottom. This gesture can be modeled by an HMM with four hidden states: bottom, left, top and right. Then we can spot that the hand is passing from the bottom to the left, to the top and to the right position. In an HMM we can define transitions that are more likely than others; for instance, moving from the bottom to the left is more likely to happen than moving from the bottom to the top directly. In the gesture follower, the HMM is configured so that the states are not the coarse positions we had previously (bottom, left, top and right) but the individual samples of the gesture template. So now the HMM is able to say that we are at the first sample of the circle, moving to the second, to the third and so on. If the gesture is performed faster, the HMM can spot that we started at the first sample of the circle and then moved to the third, then the fifth and so on. By changing the granularity of the states of the general-purpose HMM, the gesture follower transforms the model into a real-time classifier with the ability to estimate progress within the gesture template. So now let's inspect how the gesture variation follower has also been designed by taking a general-purpose algorithm and adapting, or hacking, it in order to fulfill a musical objective. GVF is based on a method used for tracking called particle filtering. Tracking is the task of estimating, often, the position of an object in a scene, for instance a car in CCTV video footage or each finger captured by a depth camera, and so on. Particle filtering is a widely used method for object or human tracking because it is a fairly generic method that does not rely on many assumptions. More precisely, particle filtering has two critical features. We will look at those features on day 074. That's all for day 073. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey. And until next time, be legendary. References https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/sessions/working-with-time
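Since the post leans on particle filtering, here is a minimal, hedged sketch of a generic bootstrap particle filter tracking a 1-D position from noisy measurements (my addition, not the Gesture Variation Follower itself). The motion model, noise levels and particle count are illustrative assumptions.

```python
# Minimal sketch (my addition): a generic bootstrap particle filter tracking a
# 1-D position from noisy measurements. Illustrates the general method the post
# refers to; it is not the Gesture Variation Follower.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 500, 50
process_noise, obs_noise = 0.1, 0.5

true_x = 0.0
particles = rng.normal(0.0, 1.0, n_particles)     # initial belief over position
weights = np.full(n_particles, 1.0 / n_particles)

for t in range(n_steps):
    true_x += 0.2                                   # object drifts to the right
    z = true_x + rng.normal(0.0, obs_noise)         # noisy measurement

    # 1) Predict: propagate each particle through the motion model.
    particles += 0.2 + rng.normal(0.0, process_noise, n_particles)

    # 2) Update: weight particles by the likelihood of the measurement.
    weights = np.exp(-0.5 * ((z - particles) / obs_noise) ** 2)
    weights /= weights.sum()

    # 3) Resample: keep particles in proportion to their weights.
    idx = rng.choice(n_particles, n_particles, p=weights)
    particles = particles[idx]

print("true position:", round(true_x, 2), "estimate:", round(float(particles.mean()), 2))
```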
100 Days Of ML Code — Day 073
0
100-days-of-ml-code-day-073-1472bdc5b3d7
2018-09-20
2018-09-20 13:07:05
https://medium.com/s/story/100-days-of-ml-code-day-073-1472bdc5b3d7
false
522
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Jehoshaphat Abu
A polymath, an advocate of STEAM education. I write about Music | Computing | Design and maybe life and the world in general
62d9f8742a1e
jehoshaphatia
189
319
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-09
2018-04-09 16:56:30
2018-04-09
2018-04-09 16:59:30
3
false
en
2018-04-10
2018-04-10 05:58:44
6
147422e986ec
1.085849
1
0
0
Announcement to all Yobit Bitcoin Dollar users: to receive your BTD tokens, send your email to [email protected], with purchase…
3
Token distribution to Yobit users Announcement to all Yobit Bitcoin Dollar users: to receive your BTD tokens, send your email to [email protected] with a purchase screenshot (eligibility within 48 hours of the launch date), the email ID registered on your Bitcoin Dollar website account, and a valid Ethereum wallet ID. #notes: we will not consider the purchase price on Yobit, only the total number of BTD coins held. Anyone unable to update their Ethereum wallet ID on the dashboard, please contact us and mention your account-registered email ID and Ethereum wallet ID. BTD token trading has started — #Forkdelta (https://bit.ly/2IyIjSN) @EtherDelta (https://bit.ly/2qaddKy) Process to trade on EtherDelta and ForkDelta: https://masterthecrypto.com/guide-etherdelta-exchange-trade-etherdelta/ Good news: we have received very positive responses from Binance, Korbit and Kucoin about listing the BTD token on their exchanges and organising an atomic swap once the BTD coin is ready for distribution.
Token distribution to Yobit users
34
token-distribution-to-yobit-users-147422e986ec
2018-04-10
2018-04-10 05:58:45
https://medium.com/s/story/token-distribution-to-yobit-users-147422e986ec
false
142
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Bitcoin Dollar
First Artificial Intelligence Bitcoin and Exchange of the people, for the people and by the people.
1255de06ea5
support_18653
23
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-18
2018-02-18 11:40:30
2018-02-21
2018-02-21 12:05:00
1
false
en
2018-02-21
2018-02-21 12:05:00
18
14742fc47e0a
4.362264
3
2
0
By Deepa Naik
5
Life with a Facelift — Applications of Face Recognition By Deepa Naik "Say hello to the future", the tagline of the iPhone X, marked the advent of face recognition into mainstream apps, using it as a feature to unlock the phone. Though this marks a milestone in itself as far as facial recognition technology is concerned, what caught my eye was the use of face ID in sniper software — termed killer bots — presented at the United Nations body on autonomous weapons. It uses face identification technology to select and kill human targets. Boon or bane: both sides of the coin hold true for any powerful technology, and this just goes to show how powerful face recognition could be. Face Recognition and Identification in all Walks of Life Identity management and security is the most common and visible application of this technology. We have heard of governments keeping databases of citizens' faces for use in law enforcement, raising critical questions about individual privacy. However, face recognition is now finding applications across all industries. Source: Visage Technologies Ltd, Creative Commons License, Wikimedia.org Applications of Face Recognition · Retail — Large retailers are using facial recognition to instantly recognize customers and present offers. Combined with camera footage, they can also use it to catch shoplifters. The entertainment industry, casinos, and theme parks have also caught on to its uses. Companies like NTechLab and Kairos use face recognition technology to provide customer analytics. · Advertising — Visual intelligence provides not just identity; it also reads emotions, expressions, and features to target audiences accordingly. Gumgum is a facial recognition firm that can serve targeted advertising using faces. For example, it will recognize a celebrity photograph and serve a related ad without checking the text or content around the image. Facebook has filed patents for technology allowing ads to be tailored based on users' facial expressions. · Auto-Tech — Affectiva, a company that specializes in identifying emotions from faces, says that with its EmotionAI technology it is looking at a future car that can tell us if the driver is happy or sad. · Healthcare — Analysing faces to provide automated diagnosis of rare genetic conditions, such as Hajdu-Cheney syndrome, is being explored. Recognition of expressions and emotions may give autistic people a grasp of social signals they find elusive. · Banking — Banks are now looking to introduce face recognition in mobile apps and ATMs for identification. In China, customers withdrawing money from ATMs in Macau already need to punch in their PIN and also stare into a camera for six seconds so facial-recognition software can verify their identity and help monitor transactions. · Photo management — Photo-management apps are at the forefront of this technology's use. Some cameras, including the ones on smartphones, can now display the age of every face in the picture. These examples reveal that face recognition is no longer a gimmick, but a technology with increasing impact as it finds applications in security and law enforcement, branding and PR, targeted advertising, photo management and imaging apps, shopping and retail, banking, and healthcare, among others.
Understanding Face Recognition Software Face recognition falls under computer vision, a discipline of artificial intelligence, and uses techniques from image processing and deep learning. Face recognition algorithms can be further classified by whether they work on 2D or 3D images or on faces in motion, as in a video. Face Detection vs. Face Recognition Though they sound similar, the complexity involved in each is vastly different. In face detection, the computer recognizes that there is a face within an image and locates its position. If you have used a face-changing filter on Snapchat, you have used face detection. Face recognition goes a step further, to identification: it establishes whose face it is by matching it against an existing face database. Face Recognition Databases Face recognition databases are both freely available and privately owned by companies. Here is a list of 60 facial recognition databases. Google's artificial intelligence system, dubbed FaceNet, includes more than 13,000 pictures of faces from across the web. Trained on a massive 260-million-image dataset, FaceNet performed with better than 86 percent accuracy. Facebook supposedly has one of the largest face databases, adding a face every time a person gets tagged on Facebook. Face Recognition Software Features Apart from identification, other typical features are: Emotion Detection Age Detection Gender Detection Attention Measurement Sentiment Detection Ethnicity Detection Apprehensions about Face Recognition — Privacy and Ethics Because the face is biometric data, this technology raises a lot of concerns about privacy. A huge worry is the use of this technology to identify individuals in public spaces without their knowledge or permission. As the ability to read faces increases, we also see a lot of challenges around its applications and a thin line between what is ethical and what is not. Researchers at Stanford University have demonstrated that, when shown pictures of one gay man and one straight man, their algorithm could attribute their sexuality correctly 81% of the time. Used in recruitment, face recognition could allow employers to filter job applications and act on their prejudices to deny a person a job. Apart from privacy and ethics, the other concern is the reliability of the technology. There is a story of a 10-year-old who was able to unlock his mother's iPhone with Face ID. The dependability of Face ID as a biometric is also a concern when dealing with identical twins or when a face ages. The skeptics remain. Nevertheless, this is a technology that is evolving at an ever-increasing speed, and the laws and regulations around it need to keep pace. One thing is certain, though: if you have a digital presence on the internet, your face is no longer private; it is public property now, out there in the digital universe. You can no longer hide away but must face this fact. The sooner the better!
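To make the detection-versus-recognition distinction concrete, here is a minimal, illustrative Python sketch of face detection only, using OpenCV's bundled Haar cascade; the input file name is a placeholder and this is not tied to any of the products or systems mentioned above.
import cv2  # pip install opencv-python

# Load OpenCV's pre-trained frontal-face Haar cascade (ships with opencv-python).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("group_photo.jpg")           # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# Detection: find where faces are, returned as (x, y, width, height) boxes.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

print("Detected %d face(s)" % len(faces))
cv2.imwrite("detected_faces.jpg", image)
Recognition would take each detected crop one step further, converting it into a numerical representation and matching it against a database of known faces, which is what systems such as FaceNet are built to do.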
References https://www.branded3.com/blog/facial-recognition-in-advertising-time-to-panic/ https://www.theguardian.com/science/2017/nov/17/killer-robots-un-convention-on-conventional-weapons http://fortune.com/2015/03/17/google-facenet-artificial-intelligence/ https://www.thedailybeast.com/how-facebook-fights-to-stop-laws-on-facial-recognition https://www.forbes.com/sites/tonybradley/2017/11/05/enough-already-with-the-stupid-face-id-twin-test/ Deepa is a co-founder at Cogitari, an HR and technology consulting firm, and a founding member of Humans For AI, a non-profit focused on building a more diverse workforce for the future by leveraging AI technologies. Learn more about Humans For AI and join us as we embark on this journey to make a difference!
Life with a Facelift — Applications of Face Recognition
15
life-with-a-facelift-applications-of-face-recognition-14742fc47e0a
2018-06-03
2018-06-03 09:35:38
https://medium.com/s/story/life-with-a-facelift-applications-of-face-recognition-14742fc47e0a
false
1,103
null
null
null
null
null
null
null
null
null
Privacy
privacy
Privacy
23,226
Humans For AI
null
6ad9409218f
humansforai
1,274
13
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-12
2018-05-12 18:33:03
2018-05-12
2018-05-12 18:36:18
1
false
en
2018-05-12
2018-05-12 18:37:05
5
147453318e41
1.709434
4
0
0
Once upon a time, there is a data scientist, who is also a mom with two kids. On top of a busy life, she still wants to do more. She wants…
4
Learning by Doing — Marketing in a Digital World Once upon a time, there is a data scientist who is also a mom with two kids. On top of a busy life, she still wants to do more. She wants to ease and shorten the learning path to data analytics for other business professionals. So she starts her own business, DataScienceMom. And yes, this is my own story. Running a business on top of a full-time job and two kids is not easy at all. The hardest part is HOW TO GET STARTED. For the past three years, I have been doing lots of online training and side projects in data analytics to prepare the content for training materials. However, how to effectively plan, arrange and distribute that content has been the bottleneck. As you can imagine, I searched online and read lots of blogs about the topic… but I still felt overwhelmed and didn't know how to put the pieces together. Until one day, I received an email about the Udacity Digital Marketing Nanodegree. Udacity is a self-paced online learning platform for new technologies. I learnt artificial intelligence through well-planned courses and hands-on projects from Udacity. Combining what I learned from that program with my industry experience, I gave a talk about Deep Learning in Oil and Gas at a conference earlier this year. Unlike other online training programs, which are filled with course videos and quizzes, the Udacity Digital Marketing Nanodegree offers live campaign projects with mentor support and project reviews. I felt it could be a good way to learn and practice digital marketing at the same time. So I enrolled! The program is well organized and covers: Marketing Fundamentals Content Strategy Social Media Marketing Social Media Advertising Search Engine Optimization Display Advertising Email Marketing Measure and Optimize with Google Analytics I am working on Session 4, Social Media Advertising, now. Based on what I learnt in the previous three sessions, I built my very first survey, formed a customer persona, built my website, and generated a content strategy for my business, DataScienceMom. Blogs and videos are in preparation. I'm glad I chose the right program and now feel clear about my path ahead. Now all I need is to apply what I learnt to my business. Great stuff is coming my way! Repost from https://www.datasciencemom.com/single-post/Learning-by-Doing-Marketing-in-a-Digital-World
Learning by Doing — Marketing in a Digital World
5
learning-by-doing-marketing-in-a-digital-world-147453318e41
2018-05-21
2018-05-21 01:55:14
https://medium.com/s/story/learning-by-doing-marketing-in-a-digital-world-147453318e41
false
400
null
null
null
null
null
null
null
null
null
Udacity
udacity
Udacity
3,005
Yang Cong
null
a69e299ede78
datasciencemom
1
3
20,181,104
null
null
null
null
null
null
0
null
0
9495758b5d8
2018-04-06
2018-04-06 03:06:53
2018-04-06
2018-04-06 03:31:23
3
false
en
2018-04-06
2018-04-06 03:31:23
11
147499e8954b
4.248113
1
0
0
Original article written by Victoria Greene on BrandSnob.
5
How AI Is Changing The Way Brands Do Influencer Marketing Original article written by Victoria Greene on BrandSnob. Image Pexels What is artificial intelligence? Think of AI and you might think of Westworld or Terminator, of Blade Runner or even WALL-E. If so, think again. AI is so much more than just robots and machines. In fact, it's really all about problem-solving — getting computers and technology to solve human problems and making our lives easier. This can include anything from Facebook's facial recognition to Apple's Siri or Amazon's Alexa. AI is set to change a great deal in 2018, and with it the face of influencer marketing — read on to learn how to get the most from influencer marketing in 2018… Where are we now with influencer marketing AI? These days, brands connect with influencers either by actively seeking them out on social media, or by signing up to influencer marketplaces like Brandsnob. Marketplaces help brands vet a large group of individuals very quickly, and can be a great way to make influencer marketing more efficient for growing businesses. None of this would be possible without AI. AI helps influencer marketplaces and sites with their match-making. Here's how: Marketplaces connect brands with influencers based on information gathered by artificial intelligence. This includes things like image identification — you might not want to promote your luxury sportswear on a feed full of fast food! AI also helps with personality classification. This is essentially profiling an influencer's personality based on data such as how they are perceived by their followers, or their individual preference for content — then comparing this with a predefined template. Influencer analytics takes this further by drilling down into past campaigns and calculating influencer marketing results: what products or subjects were covered by an influencer, and how their followers responded. Using AI in influencer marketing like this offers a range of benefits. You could pay someone to sift through a person's feed and review every post, but artificial intelligence can do it more quickly and accurately. It makes influencer marketing infinitely more scalable. Image Unsplash What does the future hold for influencer marketing AI? Artificial intelligence has taken huge strides in the last few years alone, and it shows no sign of stopping. 2018 will surely be a year of drastic change. Here are just a few of the transformations we will see… Predicting influencer campaign success Right now, you can really only measure influencer marketing — but what if you could actually predict success before a campaign even launched? With AI, brands will be able to isolate variables and show marketers how to build a social media marketing strategy that is right for them. Predictive analytics will allow for much deeper and more granular commercial analysis of whether campaigns really bring a good ROI. Marketers will soon be able to optimize their campaigns at scale based on the wealth of information provided by AI — bringing down the cost and bumping up the ROI. Less time spent analyzing influencer content Current AI systems can already broadly categorize image content based on variables such as logos or product type. What we will see in the coming year is an increasingly sophisticated content classification system that will give brands an at-a-glance view of an influencer's content profile. Several forward-thinking influencer marketplaces are already using AI to create data-led profiles of both influencers and brands.
This information, based on a range of variables, lets them find the best match for each party, making it easier for brands to find the right influencer. We predict that this process will become increasingly fast and streamlined, with more detailed and in-depth influencer content profiles at the other end. This will allow marketers to be more discerning when selecting an influencer to help with their advertising strategy. Lifestyle businesses will especially benefit from faster and more efficient content analysis. Lifestyle brands are fundamentally people-centric businesses, and so the quality of the influencer is of huge importance to the brand and its success. Brands range anywhere from stores selling edgy hats and watches to emoji pillows and horse-themed jewelry — lifestyle businesses are notoriously diverse and make a great stepping stone for aspiring entrepreneurs keen to leverage influencers to maximum effect. Re-assessing influencer potential Currently, AI-powered influencer marketing systems evaluate their efficacy based on monthly or yearly statistics. With new developments in the field, we can expect to see influencer potential assessed on a granular level, on a post-by-post basis. This will allow for greater reporting and testing abilities, giving brands better feedback on their marketing strategy — which in turn means more successful campaigns. This will also affect the ways brands can incentivize influencers. If, for example, a post either does not reach the desired level of engagement or exceeds it, a marketer can adjust their incentives accordingly. Testing content for further reuse If an influencer has been tasked with creating content for a brand, AI systems will be able to test its popularity, right down to audience sentiment. Content pieces that have scored highly should become candidates for further marketing and advertising opportunities. Any popular content can be reused in other aspects of a brand's marketing strategy, from newsletters to websites. This allows businesses to use tested and qualified content that they know will be effective. The benefits of blending influencer marketing and content creation should not be underestimated — and this will really open up in 2018. So what can we expect from the rise of artificial intelligence in 2018? Contrary to popular belief, it's not all HAL 9000 and Skynet. Instead, we can expect a data-led, scientific approach to influencer marketing, where influence can be counted and measured. For brands, this means greater control over their campaigns — but probably not over our robot overlords. Victoria Greene is a freelance writer and branding consultant. She updates her blog, Victoria Ecommerce, with news on the latest trends in the world of marketing, design, and ecommerce. Victoria has a drive for helping people achieve the most from their online presence — whether that's through influencer marketing or social media.
How AI Is Changing The Way Brands Do Influencer Marketing
1
how-ai-is-changing-the-way-brands-do-influencer-marketing-147499e8954b
2018-04-10
2018-04-10 11:02:22
https://medium.com/s/story/how-ai-is-changing-the-way-brands-do-influencer-marketing-147499e8954b
false
980
Content Creators On-Demand
null
brandsnob
null
BrandSnob Spotlight
brandsnob-app
INFLUENCERS,SOCIAL MEDIA,INFLUENCER MARKETING,CONTENT CREATORS,SOCIAL MEDIA MARKETING
brandsnobco
Influencer Marketing
influencer-marketing
Influencer Marketing
8,618
BrandSnob
Makes booking influencers simple
fd35192dd94f
brandsnob
17
12
20,181,104
null
null
null
null
null
null
0
import numpy as np
import pandas as pd

num_rows = 500  # Number of rows/samples to generate

# Dataset headers
column_headers = ['No', 'Gender', 'Height', 'Weight', 'Shoe Size',
                  'Shopping Satisfaction Offline', 'Shopping Satisfaction Online',
                  'Average Spent Per Month']

# Running row number for the 'No' column (not defined in the original snippet;
# a simple 1..num_rows sequence is assumed here)
no_answers = np.arange(1, num_rows + 1)

# Categorical columns sampled with pre-defined probabilities
gender_list = ['Male', 'Female', 'LGBT']
gender = np.random.choice(gender_list, num_rows, p=[0.4, 0.4, 0.2])

satisfaction_list = ['High', 'Medium', 'Low']
satisfaction_offline = np.random.choice(satisfaction_list, num_rows, p=[0.4, 0.4, 0.2])
satisfaction_online = np.random.choice(satisfaction_list, num_rows, p=[0.3, 0.4, 0.3])

# Numeric columns (illustrative ranges; the original values were not shown)
height = np.random.uniform(1.5, 2.0, num_rows)            # metres
weight = np.random.uniform(45, 110, num_rows)             # kilograms
shoe_size = np.random.randint(35, 47, num_rows)           # EU sizes
monthly_spending = np.random.randint(50, 1000, num_rows)  # currency units

# Assemble the variables as rows, then transpose so each becomes a column
responses = pd.DataFrame([no_answers, gender, height, weight, shoe_size,
                          satisfaction_offline, satisfaction_online, monthly_spending])
responses = responses.transpose()
responses.columns = column_headers
4
6a21592718eb
2018-09-19
2018-09-19 16:11:44
2018-09-19
2018-09-19 16:12:52
0
false
en
2018-09-19
2018-09-19 16:12:52
4
14775ab37a1b
1.366038
0
0
0
This is less of a tutorial, more of a gist to allow me to copy and paste from when I need to quickly generate a random dataset with a mix…
5
Super Simple Guide to Generating Datasets for Data Analysis and Experimentation This is less of a tutorial and more of a gist I can copy and paste from when I need to quickly generate a random dataset with a mix of datatypes for data science explorations. We only need numpy and pandas for this purpose (actually numpy alone is more than enough, but pandas just makes it easier to manage the generated data). We first state how many samples (rows) and which features (columns) we want to generate. In this case, we generate a dataset that simulates a set of survey returns on shopping preferences along with some key personal data. Let's generate the categorical data first. Numpy's random.choice function is great for this. It lets us generate samples from a list of choices, with a pre-defined probability of each choice appearing in the dataset — e.g. if there are three possibilities — Male, Female and LGBT — I can specify that 40% of the sample are Male, 40% Female and 20% LGBT. To generate integers or floats, we can use the randint or uniform functions. Other functions that draw numbers from a predetermined distribution (e.g. normal) are also available. Finally, we can merge all of these into a pandas DataFrame and label the columns. We want each variable to end up as a column, but it's easier to stack them as rows first and then transpose the matrix before labelling the columns. The notebook on this can be found here. playgrd.com || facebook.com/playgrdstar || instagram.com/playgrdstar/
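As a quick sanity check (my own addition, not part of the original gist), assuming the responses DataFrame from the code snippet above has been built, you can verify that the sampled proportions roughly match the probabilities passed to random.choice and take a first look at the assembled table.
# Assumes `responses` from the snippet above is in scope.
print(responses.head())

# Sampled shares should be close to the probabilities given to np.random.choice.
print(responses['Gender'].value_counts(normalize=True))
print(responses['Shopping Satisfaction Online'].value_counts(normalize=True))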
Super Simple Guide to Generating Datasets for Data Analysis and Experimentation
0
super-simple-guide-to-generating-datasets-for-data-analysis-and-experimentation-14775ab37a1b
2018-09-19
2018-09-19 16:12:53
https://medium.com/s/story/super-simple-guide-to-generating-datasets-for-data-analysis-and-experimentation-14775ab37a1b
false
362
my quaint quant explorations // playgrd.com // instagram.com/playgrdstar/
null
null
null
quaintitative
null
quaintitative
QUANTITATIVE,DATA SCIENCE,PYTHON,VISUALISATION,DATA EXPLORATION
null
Data Science
data-science
Data Science
33,617
playgrdstar
ming // gary ang // illustration, coding, writing // portfolio >> www.playgrd.com
421cfc7f0956
playgrdstar
94
77
20,181,104
null
null
null
null
null
null