| column | dtype | summary |
|---|---|---|
| audioVersionDurationSec | float64 | 0 – 3.27k |
| codeBlock | string | length 3 – 77.5k |
| codeBlockCount | float64 | 0 – 389 |
| collectionId | string | length 9 – 12 |
| createdDate | string | 741 distinct values |
| createdDatetime | string | length 19 |
| firstPublishedDate | string | 610 distinct values |
| firstPublishedDatetime | string | length 19 |
| imageCount | float64 | 0 – 263 |
| isSubscriptionLocked | bool | 2 classes |
| language | string | 52 distinct values |
| latestPublishedDate | string | 577 distinct values |
| latestPublishedDatetime | string | length 19 |
| linksCount | float64 | 0 – 1.18k |
| postId | string | length 8 – 12 |
| readingTime | float64 | 0 – 99.6 |
| recommends | float64 | 0 – 42.3k |
| responsesCreatedCount | float64 | 0 – 3.08k |
| socialRecommendsCount | float64 | 0 – 3 |
| subTitle | string | length 1 – 141 |
| tagsCount | float64 | 1 – 6 |
| text | string | length 1 – 145k |
| title | string | length 1 – 200 |
| totalClapCount | float64 | 0 – 292k |
| uniqueSlug | string | length 12 – 119 |
| updatedDate | string | 431 distinct values |
| updatedDatetime | string | length 19 |
| url | string | length 32 – 829 |
| vote | bool | 2 classes |
| wordCount | float64 | 0 – 25k |
| publicationdescription | string | length 1 – 280 |
| publicationdomain | string | length 6 – 35 |
| publicationfacebookPageName | string | length 2 – 46 |
| publicationfollowerCount | float64 | (range not reported) |
| publicationname | string | length 4 – 139 |
| publicationpublicEmail | string | length 8 – 47 |
| publicationslug | string | length 3 – 50 |
| publicationtags | string | length 2 – 116 |
| publicationtwitterUsername | string | length 1 – 15 |
| tag_name | string | length 1 – 25 |
| slug | string | length 1 – 25 |
| name | string | length 1 – 25 |
| postCount | float64 | 0 – 332k |
| author | string | length 1 – 50 |
| bio | string | length 1 – 185 |
| userId | string | length 8 – 12 |
| userName | string | length 2 – 30 |
| usersFollowedByCount | float64 | 0 – 334k |
| usersFollowedCount | float64 | 0 – 85.9k |
| scrappedDate | float64 | 20.2M – 20.2M |
| claps | string | 163 distinct values |
| reading_time | float64 | 2 – 31 |
| link | string | 230 distinct values |
| authors | string | length 2 – 392 |
| timestamp | string | length 19 – 32 |
| tags | string | length 6 – 263 |
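The per-column summary above can be sanity-checked directly. A minimal sketch, assuming the scrape has been exported to a CSV file named medium_articles.csv (the file name and path are illustrative, not from the source):

```python
import pandas as pd

# Hypothetical file name; the dump above does not state where the data lives.
df = pd.read_csv("medium_articles.csv")

# Reproduce the per-column summary: min/max string length and distinct-value
# count for text columns, min/max value for numeric columns.
for col in df.columns:
    s = df[col]
    if s.dtype == object:
        lengths = s.dropna().astype(str).str.len()
        print(f"{col}: string, length {lengths.min()}-{lengths.max()}, "
              f"{s.nunique()} distinct values")
    else:
        print(f"{col}: {s.dtype}, {s.min()}-{s.max()}")
```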
0
null
0
null
2018-02-23
2018-02-23 17:40:51
2018-02-23
2018-02-23 19:06:22
1
false
en
2018-03-06
2018-03-06 02:14:57
4
1146e4ad5130
3.256604
0
0
0
Steam power revolutionized the world by bringing on an era of rapid growth and expansion for mankind, allowing humans to overcome the…
1
DML: Decentralized Machine-Learning ICO Steam power revolutionized the world by bringing on an era of rapid growth and expansion for mankind, allowing humans to overcome the physical limitations of their natural state through the use of industrial-scale tools and machines for the first time. No longer were people’s imaginations bound by previously insurmountable sizes of things, handing them dominion over the material aspects of modern civilization from skyscrapers to jumbo jets. The twenty-first century is dawning on the next great leap, brought on by the advent of artificial intelligence, which holds the key to surpassing the limitations and capabilities of the human mind’s mental processing output. As such, machine learning has become a hot topic as of late, with many industries ranging from the financial sector to robotics companies looking for ways of unlocking its capabilities and pushing it into mainstream usability. There is no question that the recent rise of blockchain technology, in parallel with these other developments, has mutual applications which can help the two fields mature and evolve together into something greater than the sum of their individual components. The Project Decentralized Machine Learning (DML), the name and self-explanatory designation of a new blockchain technology startup, aims to do just that: creating the protocols which will enable artificial intelligence in a decentralized environment. DML is focusing on creating the conditions and infrastructure for a fertile landscape which will allow machine learning to flourish. As per the company’s whitepaper, the firm is looking to achieve its goals by: utilizing untapped private data for machine learning while protecting data privacy, connecting and leveraging the idle processing power of individual devices for machine learning, encouraging involvement from the periphery by creating a developer community and algorithm marketplace that promotes innovation to build machine learning algorithms that match practical utilities, improving and correcting existing machine learning algorithms and models through crowdsourced fine-tuning by model trainers, and creating a new DML utility token, leveraging blockchain smart contract technology to provide a trustless and middleman-free platform that connects potential contributors to machine learning from all aspects. At its core, the project’s appeal is that it is looking at making an existing idea feasible: harnessing the collective processing power of consumer or private smartphones and other devices while they are idle, turning them into a productive matrix rivaling some of the world’s greatest supercomputers. A research paper from the Technical University of Braunschweig in Germany, for instance, demonstrated that joining six low-powered Android phones into a wireless network could carry out a combined 26.2 megaflops. To give you an idea, the US-based Titan supercomputer is capable of 17,000,000,000 megaflops. “There are about a billion Android devices right now, and their total computing power exceeds that of the largest conventional supercomputers,” said BOINC creator David Anderson, a research scientist at UC Berkeley’s Space Sciences Laboratory. The fact that smartphones are increasing in performance and popularity only improves the potential of this approach to decentralized artificial intelligence, whose main barrier has been the massive amount of computing power it needs to operate. 
The Team The team behind the project is composed of 6 core members, each of whom has a background in software development, engineering or machine learning. They have all had notable IT careers with a variety of companies, including Uber, Goldman Sachs, Cisco Systems and various Hong Kong-based academic institutions, among other organizations. Nevertheless, what stands out most is that they all seem to share a passion for progressive technology, putting them on the cutting edge of what is new. Michael Kwok (the Lead Project Director of the firm), for example, “founded 2 technology companies. He is a seasoned growth lead in early stage startups, especially adept at business development, digital marketing, search engine optimization (SEO), social media and community management”. Jacky Chan (a blockchain and software developer at DML) “co-founded Kyokan Labs and has been focusing on improving the blockchain space. He has substantial involvement in Metamask’s new UI development, as well as the network visualization dashboard design with DFINITY.” On that note, it is the team’s strong technical ability which bodes well for the organization, because the space they are in requires a strong foundation in software and mathematical hard skills. Although the team is seasoned, its advisers (consisting of five blockchain and machine learning experts) add further experience, knowledge and pedigree as well. One such individual is Michael Edesess, who “is currently an Adjunct Associate Professor in Hong Kong Science and Technology University, for which he teaches postgraduate course including cryptocurrency.” The other four members offer similar contributions. A full roster of the company’s members can be found in the whitepaper for more detailed information and scrutiny.
DML: Decentralized Machine-Learning ICO
0
dml-decentralized-machine-learning-ico-1146e4ad5130
2018-03-06
2018-03-06 02:14:57
https://medium.com/s/story/dml-decentralized-machine-learning-ico-1146e4ad5130
false
810
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Pay
Knowledge Enthusiast
8186c9675615
paycrypto
6
1
20,181,104
null
null
null
null
null
null
0
null
0
962fb1c4c47e
2018-03-07
2018-03-07 15:26:38
2018-03-07
2018-03-07 07:22:06
1
true
en
2018-03-07
2018-03-07 18:03:54
1
114778784943
2.4
23
2
0
By Mark Bergen
4
Google AI Used by Pentagon Drone Project in Rare Test By Mark Bergen Google’s artificial intelligence technology is being used by the U.S. Department of Defense to analyze drone footage, a rare and controversial move by a company that’s actively limited its work with the military in the past. A Google spokeswoman said the company provides its TensorFlow application programming interfaces, or APIs, to a pilot project with the Department of Defense to help automatically identify objects in unclassified data. APIs are software-based rules that let computer programs communicate. TensorFlow is a popular set of APIs and other tools for AI capabilities such as machine learning and computer vision. The feature is part of a recent Pentagon contract involving Google’s cloud unit, which is trying to wrest more government spending from cloud-computing leaders Amazon.com Inc. and Microsoft Corp. Alphabet Inc.’s Google bids on federal contracts and supplies some equipment to the military, but it has been sensitive about how its technology is used. “The technology flags images for human review, and is for non-offensive uses only,” the Google spokeswoman said. “Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.” After Google bought AI specialist DeepMind in 2014, the company set up an ethics committee to ensure the technology wasn’t abused. When it bought a series of robotics companies, it pulled one of them, Shaft Inc., from a Pentagon competition. After the acquisition of Skybox, Google cut some of the satellite startup’s defense-related contracts and ultimately sold the business. Information about Google’s pilot project with the Defense Department’s Project Maven was shared on an internal mailing list last week, and some Google employees were outraged that the company would offer resources to the military for surveillance technology involved in drone operations, Gizmodo reported earlier. Google’s attitude toward military work may be changing as its cloud business competes with AWS, Microsoft and other rivals. The U.S. government is already a big cloud customer and the Pentagon is looking to the technology sector for new tools and strategies, including AI. In August, U.S. Defense Secretary James Mattis visited Google headquarters in Mountain View, California, and met with company executives to discuss the best ways to use AI, cloud computing and cybersecurity for the Pentagon. Google executive Milo Medin and former Alphabet Executive Chairman Eric Schmidt are on the Defense Innovation Board, an independent federal committee, and advised the Pentagon on data analysis and potential cloud-based solutions. At a meeting in July, the board recommended that the Defense Department look at ways “to take the vast data that exists in the enterprise and turn it into something that is actionable.” Every piece of data should be stored somewhere no matter what the structure is, because we can always go back and discover the structure and use it in an appropriate way, Schmidt said, according to minutes of the meeting. He remains a technical adviser to Alphabet. Medin warned of a tremendous lost opportunity when Pentagon data isn’t collected. That’s especially true in AI, which requires lots of information to train software algorithms that automatically improve. 
Every fighter plane or destroyer that returns from a mission or deployment and doesn’t provide data it collected represents a loss of capability in machine learning and training that is forever lost, Medin said, according to the meeting minutes.
Google AI Used by Pentagon Drone Project in Rare Test
55
google-ai-used-by-pentagon-drone-project-in-rare-test-114778784943
2018-08-29
2018-08-29 17:53:33
https://medium.com/s/story/google-ai-used-by-pentagon-drone-project-in-rare-test-114778784943
false
583
The first word in business news.
null
bloombergbusiness
null
Bloomberg
null
bloomberg
BUSINESS,TECHNOLOGY,NEWS,FINANCE
business
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Bloomberg
Find more like this at bloomberg.com
3d76181076e6
bloomberg
133,717
225
20,181,104
null
null
null
null
null
null
0
null
0
661161fab0d0
2018-09-22
2018-09-22 20:32:51
2018-09-22
2018-09-22 20:49:44
0
false
en
2018-10-25
2018-10-25 18:21:29
0
114bf05c6d4c
2.003774
1
0
0
It’s that time of year where I get to disappointingly watch my alma mater (NC State) try to muster some wins against other Tier 2 schools…
5
100 Days of ML — Day 6 — Machine Learning Built By The Gridiron or How Can ML Help Save College Football It’s that time of year when I get to disappointingly watch my alma mater (NC State) try to muster some wins against other Tier 2 schools while waiting for some combination of Oklahoma, USC, Ohio State, Clemson, Georgia, and Alabama to fight for the right to be controversial. I first started watching college football in 2004. It was an easy way to make friends for a guy who was painfully awkward around people and whose social circle included the fame-desperate, but way post-college, comedians in Raleigh at Goodnight’s Comedy Club. That year’s controversy was the exclusion of the Utah Utes from the (then) BCS bowl games. It took a while, but college football fans finally got their playoff. Much like the expansion of the NCAA tourney field, it didn’t diminish controversy; the controversy multiplied. It was controversy on steroids. Fast forward to last year’s snub of the UCF Golden Knights, wherein we got the now-justified national champions in Alabama, and the raucous roar of both sides continues to echo louder. The core issue with college football is that it is an economic, data-driven, and political nightmare. Economically, there used to be major bowl spots for six teams, then eight, then ten. Now we do the weird New Year’s 6 and the playoff thing. Anyway, there are a lot of teams vying for little room and the FBS schools refuse to have a 16-team playoff to ease the pressure. On the data side, different teams have different schedules. No one really plays each other, so, on the surface, it seemed math had no place in determining who should qualify for these final games. Without a clear-cut methodology, this meant a lot of convincing of fans and TV networks by coaches. Nothing sullied the sport of college football more than turning what should be the most important games into a popularity contest. But where we used to have all these statistics that we didn’t know what to do with, now we have machine learning to help sort through it all. There are several approaches you can take: a points system based on scoring, yardage, and yard prevention; a ratings draw to keep the networks happy; or Notre Dame always makes it, because no matter what universe we’re in, Notre Dame will never have to play by the same rules as any other college football team ever. I think the output I’d choose is “game interest level”, wherein the match-ups provide the games that are most likely to be close based on each team’s strengths (a toy sketch follows below). There are many applications for ML in the college football space and I hope to hear about them soon. Jimmy Murray is a Florida-based comedian who studied Marketing and Film before finding himself homeless. Resourceful, he taught himself coding, which led to a ton of opportunities in many fields, the most recent of which is coding away his podcast editing. His entrepreneurial skills and love of automation have led to a sheer love of all things related to AI. #100DaysOfML #ArtificialIntelligence #MachineLearning #DeepLearning
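A toy sketch of the "game interest level" idea from the post above. The team-strength numbers are invented for illustration; nothing here comes from the article:

```python
# Toy "game interest level": rate a matchup by how close the predicted
# margin is. Strength ratings below are made up for illustration.
strengths = {"Alabama": 94.0, "Clemson": 91.5, "Georgia": 90.0, "NC State": 78.0}

def interest_level(home: str, away: str) -> float:
    """Higher when the predicted margin is smaller (a closer game)."""
    margin = abs(strengths[home] - strengths[away])
    return 1.0 / (1.0 + margin)

for home, away in [("Alabama", "Clemson"), ("Alabama", "NC State")]:
    print(f"{home} vs {away}: interest {interest_level(home, away):.3f}")
```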
100 Days of ML — Day 6 — Machine Learning Built By The Gridiron or How Can ML Help Save College…
50
100-days-of-ml-day-6-machine-learning-built-by-the-gridiron-or-how-can-ml-help-save-college-114bf05c6d4c
2018-10-25
2018-10-25 18:21:30
https://medium.com/s/story/100-days-of-ml-day-6-machine-learning-built-by-the-gridiron-or-how-can-ml-help-save-college-114bf05c6d4c
false
531
where the future is written
null
null
null
Predict
predict
FUTURE,SINGULARITY,ARTIFICIAL INTELLIGENCE,ROBOTICS,CRYPTOCURRENCY
null
Machine Learning
machine-learning
Machine Learning
51,320
Jimmy Murray
null
5eaf0b0dbdc6
talktojimmymurray
25
12
20,181,104
null
null
null
null
null
null
0
null
0
db8ab17249e
2018-04-26
2018-04-26 19:57:42
2018-01-22
2018-01-22 20:06:54
1
false
pt
2018-11-03
2018-11-03 21:15:09
9
114c99bc9ba8
2.709434
0
0
0
null
3
7 essential books for learning machine learning Throughout this post you will see 7 essential books for anyone who wants to learn machine learning, covering concepts that range from the basic to the advanced level. The year 2016 was already knocking at the door on the day I typed the term "Data science" into Google and landed on a post about artificial intelligence. I soon came across terms that were, until then, completely foreign to my understanding. The text was full of jargon such as "machine learning", "deep learning", "supervised learning" and "overfitting", which my humble vocabulary did not know until that moment. Well, I am very curious, I always have been, and driven by this curiosity I decided to read a little more about the subject. Within a few days I had already decided to learn machine learning. But I soon found myself completely lost, because I did not know what I needed to study or where to find good study materials. The thing is, I was already quite obsessed with the subject and wanted to learn it one way or another. In the absence of any guidance, I ended up taking the longest path and studying a lot of unnecessary things (not that this hurt me; on the contrary). I had to organize myself, and so I thought: "first I need to concentrate on the basics". That was when I decided to buy a book that could give me a more introductory reading on the theme, since I needed to understand it from the most elementary topics. Within a few months, I had acquired half a dozen books that proved essential to my learning. In the video that accompanies this post, you will see which books they are. Why is it worthwhile to study from books? What makes me enjoy studying from books (printed or digital) so much is that they naturally go much deeper into their subjects than other sources, such as scientific articles or some online courses, which tend to compile and summarize the content of books into topics or quotations. Moreover, if your goal is to understand the subject from its roots and keep going deeper, there is hardly any better material from which to extract knowledge. Machine learning is a field that borrows several mathematical concepts used in science and engineering. If I want to go deep enough into machine learning to be able to create my own models with full control and freedom, it is important that I learn these concepts at least at a basic level. A large share of the books dedicated to the subject, I would say most of them, guide us through these prerequisites in the very first chapters. Almost always, this guidance comes in the form of a review or overview of the required prior knowledge. This is quite useful because it lets us know which specific subjects from a given area need to be learned beforehand, so you can study in a more focused way. Also read: Tips for learning Machine Learning; Machine Learning — the mathematics of supervised learning. Check out 7 essential books for learning machine learning The books you will see in the video below were chosen based on a survey of prerequisites that I ended up doing over the course of my study journey, and I decided to share it with you. All of them were bought on Amazon. There are no comments about the books during the video, because I had already recorded another video in which I talk a little about each of them. Where can you find these books? 
Álgebra Linear com Aplicações — Howard Anton: http://amzn.to/2G7NBUY
The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition — Trevor Hastie: http://amzn.to/2DJY2gR
Pattern Recognition and Machine Learning — Christopher M. Bishop: http://amzn.to/2Dtjedd
Redes Neurais — Simon Haykin: http://amzn.to/2DgLo7M
Deep Learning — Ian Goodfellow: http://amzn.to/2DKUfjd
Hands-On Machine Learning with Scikit-Learn and TensorFlow — Aurelien Geron: http://amzn.to/2DsDpbm
What did you think of this post? If you have any questions, or a tip that could help future posts, don't forget to comment below!
7 essential books for learning machine learning
0
7-livros-essenciais-para-aprender-machine-learning-114c99bc9ba8
2018-11-03
2018-11-03 21:15:09
https://medium.com/s/story/7-livros-essenciais-para-aprender-machine-learning-114c99bc9ba8
false
665
Android and back-end developer with a master's degree in computer science (in progress) from the Federal University of Pernambuco, in the area of computational intelligence (Machine Learning and Data Mining)
null
luisfredweb
null
luisfredgs
null
luisfredgs
MACHINE LEARNING,DEEP LEARNING,INTELIGENCIA ARTIFICIAL
luisfredgs
Livros
livros
Livros
5,010
Luís Fred
Android/Back-end Developer | Master's student in Computer Science | Machine Learning | Deep Learning | Natural Language Processing
13fca9f3a3de
luisfredgs
31
12
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-08
2017-12-08 09:41:53
2017-12-12
2017-12-12 08:09:22
0
false
en
2017-12-12
2017-12-12 08:09:22
0
114cbdb5c52c
0.54717
0
0
0
Last week I started working on a project that involved pedestrian detection, I had two options either to build my own model for the same or…
1
Google does it for you! “Customizing the Google Object Detection API for detecting pedestrians” Last week I started working on a project that involved pedestrian detection. I had two options: either build my own model or explore the internet for an existing one. In this world of Siraj, who would go with the first one? After some good intuitive googling I landed on the Google Object Detection API (a minimal inference sketch follows below). It’s been more than six months since Google launched its object detection API. It’s time for testing. While doing some research on the previous work done by people on the Google Object Detection API, it seems to me that not many people have given it a shot. But still, there are a couple of projects which give you a good overview of setting up the API and demonstrating it. The tutorial series done by While
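For readers who want to try the same thing, a minimal inference sketch against a frozen detection graph, in the TF 1.x style the API used at the time. The file name and tensor names follow the API's standard exports but are assumptions here, not taken from the article:

```python
import numpy as np
import tensorflow as tf  # TF 1.x API, matching the era of the article

# Path is illustrative; the API's exported models ship a frozen_inference_graph.pb.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # stand-in for a real frame
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image})
    # Keep detections above a confidence threshold; class 1 is "person"
    # in the COCO label map, which is what pedestrian detection needs.
    keep = scores[0] > 0.5
    print(boxes[0][keep], classes[0][keep])
```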
Google does it for you! “Customizing the google Object Detection API for detecting pedestrians”
0
google-does-it-for-you-customizing-the-google-object-detection-api-for-detecting-pedestrians-114cbdb5c52c
2018-05-22
2018-05-22 20:33:48
https://medium.com/s/story/google-does-it-for-you-customizing-the-google-object-detection-api-for-detecting-pedestrians-114cbdb5c52c
false
145
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
mahavir dwivedi
null
2ad87afc9f5c
mahaviredx
0
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-25
2018-05-25 19:53:22
2018-05-25
2018-05-25 19:54:49
2
false
tr
2018-05-25
2018-05-25 19:54:49
10
114dd448f81b
1.564465
1
0
0
In my previous post, I explained Opiria's token structure and how the sale periods will proceed. In this post, I will explain Opiria's token…
4
3 — OPIRIA PDATA: TOKEN DISTRIBUTION AND USE OF FUNDS In my previous post I explained Opiria's token structure and how the sale periods will proceed. In this post I will explain how Opiria's token distribution will work, how the remainder of this distribution will be used, and where the funds raised will be directed. Token Distribution - Sale (60%): 60% of the tokens produced will be put up for sale. This equals 450 million tokens. - Development (13%): 13% of the tokens produced will be set aside for data purchases and improvements. This amounts to 97.5 million tokens. - Team (20%): 20% of the tokens will be allocated to the team. This amounts to 150 million tokens. Later in this post I will also give information about how the team's share will be used. - Advisors (5%): A 5% share will be transferred to the advisors. This portion amounts to 37.5 million tokens. - Bounty (2%): The share set aside for the bounty is 2%, which amounts to 15 million tokens. I mentioned that the PData token is valued at $0.10, i.e. 10 cents. You can calculate the dollar equivalents by multiplying these figures by 0.10 (see the sketch below). How will the team use its 20% share? 25% of the team's allocation will become available 3 months after the main sale. The remaining 75% will stay locked in 25–25–25 tranches over periods of 6, 12 and 24 months. From this we can understand that the project is planned for the long term. Use of Funds The company plans to use the funds raised for the development of the system and its worldwide expansion. The planned expenses are as follows. - Opiria platform improvement: 30% of the funds will be used to improve the platform's technical infrastructure. - Global expansion: 45% of the funds will be used to enter new markets and reach new customer connections. - Expanding the database: 25% of the funds will be used to expand the database, win new panel customers and carry out marketing activities. In this article I explained the sale distribution of PData tokens, how the team's share of this distribution will be used and how the funds collected will be spent. I will continue to provide information about the Opiria PData platform. Those who want to learn more about the project can reach various resources through the links below. Website: https://opiria.io/ Whitepaper: https://opiria.io/static/docs/Opiria-PDATA-Whitepaper.pdf Telegram: https://t.me/PDATAtoken BTC ANN: https://bitcointalk.org/index.php?topic=3076122.new#new BTC Bounty: https://bitcointalk.org/index.php?topic=3081090 Medium: https://medium.com/pdata-token Twitter: https://twitter.com/PDATA_Token Facebook: https://www.facebook.com/pdatatoken/ Reddit: https://www.reddit.com/r/PDATA/ My BitcoinTalk Profile: https://bitcointalk.org/index.php?action=profile;u=1780407
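To make the arithmetic in the translated post concrete, a small sketch that multiplies each allocation by the stated $0.10 token price. The percentages come from the post; the 750M total supply is implied by 450M tokens being the 60% sale share:

```python
total_supply = 750_000_000  # implied: 450M tokens is the 60% sale share
price_usd = 0.10            # stated token price: 10 cents

allocations = {"sale": 0.60, "development": 0.13, "team": 0.20,
               "advisors": 0.05, "bounty": 0.02}

for name, share in allocations.items():
    tokens = total_supply * share
    print(f"{name}: {tokens:,.0f} tokens = ${tokens * price_usd:,.0f}")
```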
3 — OPIRIA PDATA: TOKEN DAĞILIMI VE FON KULLANIMI
14
3-opiria-pdata-token-dağilimi-ve-fon-kullanimi-114dd448f81b
2018-05-27
2018-05-27 14:14:01
https://medium.com/s/story/3-opiria-pdata-token-dağilimi-ve-fon-kullanimi-114dd448f81b
false
313
null
null
null
null
null
null
null
null
null
Data
data
Data
20,245
Burak Koçyiğit
Industrial Engineer / Cryptocurrency Enthusiast
34eedb2284dc
burakkocyigit1
927
1,203
20,181,104
null
null
null
null
null
null
0
```
# Export the model graph (nodes only, no weights).
tf.train.write_graph(sess.graph_def, directory_to_save_model, model_name, as_text=False)

# Download the inception example graph.
mkdir -p ~/graphs
curl -o ~/graphs/inception5h.zip \
  https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip \
  && unzip ~/graphs/inception5h.zip -d ~/graphs/inception5h

# Dump every node name and op into a text file.
gf = tf.GraphDef()
gf.ParseFromString(open('tensorflow_inception_graph.pb', 'rb').read())
with open(txt_file, 'w') as f:
    for n in gf.node:
        f.write(n.name + ' => ' + n.op + '\n')

# Clone the TensorFlow repo.
git clone https://github.com/tensorflow/tensorflow.git

# Freeze the checkpoint weights into the model graph.
python tensorflow/python/tools/freeze_graph.py \
  --input_graph=path-to-model-graph \
  --output_graph=path-to-frozen-model \
  --input_checkpoint=path-to-checkpoint-file \
  --output_node_names=softmax_1 \
  --input_binary=true

# Quantize the frozen graph (optional).
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=path-to-frozen-model \
  --out_graph=path-to-quantized-model \
  --inputs=input_1 \
  --outputs=softmax_1 \
  --transforms='add_default_attributes strip_unused_nodes(type=float, shape="1,180,180,3") remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true) fold_batch_norms fold_old_batch_norms quantize_weights quantize_nodes strip_unused_nodes sort_by_execution_order'

# Convert to the memory-mapped format for mobile.
bazel build tensorflow/contrib/util/convert_graphdef_memmapped_format
bazel-bin/tensorflow/contrib/util/convert_graphdef_memmapped_format \
  --in_graph=path-to-frozen-model \
  --out_graph=path-to-memmaped-model

# iOS setup.
xcode-select --install
pod install

// Objective-C constants in CameraExampleViewController.mm.
const bool model_uses_memory_mapping = true;
const int wanted_input_width = 180;
const int wanted_input_height = 180;
const int wanted_input_channels = 3;
const float input_r_mean = 123.68f;
const float input_g_mean = 116.78f;
const float input_b_mean = 103.94f;
const std::string input_layer_name = "input_1";
const std::string output_layer_name = "softmax_1";

// Original per-pixel preprocessing:
for (int c = 0; c < wanted_input_channels; ++c) {
  out_pixel[c] = (in_pixel[c] - input_mean) / input_std;
}
// Replaced with VGG-style per-channel mean subtraction (BGR order):
out_pixel[2] = in_pixel[2] - input_r_mean;
out_pixel[1] = in_pixel[1] - input_g_mean;
out_pixel[0] = in_pixel[0] - input_b_mean;

# Open the workspace.
open tf_camera_example.xcworkspace
```
29
e3b7e2ec1966
2018-04-18
2018-04-18 13:54:00
2018-05-09
2018-05-09 07:33:20
13
false
en
2018-05-09
2018-05-09 09:40:50
13
114e334d7aba
12.260377
6
0
0
Abstract
5
Shopping e-commerce products by example through deep learning Abstract In this blog we present our work at DeepLab regarding a mobile-integrated e-commerce application for object classification with deep learning. A user can capture a photo with e-commerce content using a mobile phone and the application will suggest similar products from shopping sites. The implementation is based on a convolutional neural network trained on e-commerce images. The model is implemented with TensorFlow and it is integrated into an iPhone, from which we can recognize e-commerce images through live video. In the next sections, we present the training process with the respective data, practical details for the integration of the CNN into the mobile device and results of the complete application, which can offer an impressive shopping experience. Introduction E-commerce is the activity of buying or selling products online. The aim of this blog is the composition of a complete e-commerce mobile application using deep learning. In particular, we are making use of a training pipeline to acquire a deep learning model for e-commerce product classification and afterwards, we are integrating this model into a mobile application. TensorFlow provides a platform to build a pipeline for training and integrating our CNNs as a TensorFlow application for Android or iOS, as depicted in Fig.1. Several works are related to similar pipelines. Shopping by deep learning exploitation is an appealing field, as seen in recent works (Shopping by example and Matching Street Clothing photos) and powerful applications (E-commerce powerful applications 2017). In the following section, we present the use case for the pipeline of training and integration processes, analyzed further in later parts. Figure 1. Pipeline for integrating a deep learning TensorFlow application into a mobile device. Use case The ultimate goal is the development of an application which recognizes e-commerce images through live video, in order to be used for shopping by example. To this end, we present the end-to-end processes of: training a CNN for e-commerce image classification (first pipeline of Fig.2); integrating the trained model into an iPhone (second pipeline of Fig.2); developing a complete application for shopping by example (third pipeline of Fig.2). Figure 2. End-to-end process for a shopping application. It consists of 3 consecutive pipelines. From top to bottom: 1) Train a CNN model, 2) Import the latter model into mobile, 3) Video application and prompting to an e-commerce site. E-commerce image classification We start with the first pipeline mentioned above, namely product classification for e-commerce sites. The French e-commerce site C-discount already exploited product classification based on text descriptions. In this work, we employ a model which classifies products according to their image content (Fig.3). For this purpose, in what follows, we present: the dataset used for training and evaluation; the selected model; the experimental results. Figure 3. Model capable of recognizing a product's category from image content. Dataset The dataset consists of 12,371,293 training images and 3,095,082 test images with 180x180 resolution and 5270 labels (i.e., product categories), as provided by the Kaggle challenge. Several samples of the e-commerce data are illustrated in Fig. 4. Figure 4. E-commerce samples based on C-discount. Two important details are worth mentioning: a) The provided data are products, each of which consists of 1 to 4 images. 
An example is included in Fig.5 where the product Desktop is related to three images. This information is important because we can exploit the image similarity per product during training. Figure 5. Multiple views that correspond to the same product. We exploit image similarity among examples of each product during training, resulting in increased performance. b) Class hierarchy: A lot of classification problems are based on class hierarchy, which means that a sample corresponds to a range of more general to more specific class labels, as depicted in Fig.6. In our case, the final 5270 classes are based on four levels of hierarchy and the classes per level are 49, 483, 5263 and finally 5270. Figure 6. Example of three levels of class hierarchy (3–6–9 classes). In our case, there are four levels (49–483–5263–5270 classes). Model The model used for training was ResNet-101, but for the final integration into mobile we chose ResNet-50 without losing much accuracy. In Fig. 7, we present the modifications integrated into ResNet for exploiting the class hierarchy. Specifically, we added three auxiliary layers in order to have 3 additional outputs according to the levels of hierarchy. Figure 7. ResNet with three auxiliary layers, one for each level of hierarchy. The modification integrated into ResNet for exploiting the image similarity was the averaging of the feature maps per product before entering the last fully connected layer, as depicted in Fig.8. In the case of a sample product consisting of 3 images, we average the [3, 6, 6, 2048] feature map, resulting in a feature map with dimensions [6, 6, 2048] that enters the final fully connected layer (a minimal sketch of this step follows below). Figure 8. ResNet with feature averaging per product before the final FC layer. Results An overview of the main experiments is presented in Fig.9. The image-by-image methods (columns 1 to 4) are based on models trained with single images as training samples. The last, product-by-product method (column 5) is based on a model trained with products (multiple images per product) as training samples. The accuracy reported is the accuracy on the test set. By exploiting image similarity per product, the accuracy increased by 3% (method 4 vs method 5). By exploiting class hierarchy, we did not get accuracy gains, so we preferred the unmodified, less complex ResNet-101. The training process was computationally demanding because of the large dataset as well as the 5270 classes. Figure 9. Methods 1 to 4 correspond to image-by-image training and method 5 corresponds to product-by-product training. The lightest and fastest model for mobile is ResNet-50 (method 3). Although we can exploit information cues such as class hierarchy and image similarity per product during training, the integration of such a model into a mobile device is not trivial. So we used the third model, which is lighter and faster, for the mobile integration process. We trained ResNet-50 from scratch on 2 GPUs with batch size 128 and a learning rate schedule of 0.1 for epochs 1 to 4, 0.01 for epochs 5 to 6, 0.001 for epochs 7 to 8 and 0.0001 for epochs 9 to 10. In the next parts, we analyze the mobile integration process and further present coding details about problems that one might encounter when importing a CNN into an iOS device. Integration and coding details In the following sections, we explain in detail the process of integrating a trained CNN into an iOS device. 
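A minimal sketch of the feature-averaging step described above, using the [3, 6, 6, 2048] shape stated in the text for a three-image product; the variable names are illustrative:

```python
import tensorflow as tf

# Feature maps for one product with three images: shape [3, 6, 6, 2048].
product_features = tf.random.normal([3, 6, 6, 2048])

# Average across the image axis before the final fully connected layer,
# giving a single [6, 6, 2048] map per product, as described in the text.
averaged = tf.reduce_mean(product_features, axis=0)
print(averaged.shape)  # (6, 6, 2048)
```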
The coding steps and the encountered problems are analyzed, before importing the TensorFlow application for live video recognition of e-commerce products into the mobile device. We first present the procedure for getting the model graph and, after that, the processing steps applied to this graph to get the final integration model. Exporting the model architecture In this section, we present the problems confronted in producing the model graph. Firstly, what do we mean by model graph? TensorFlow provides a way of getting the node names of a TensorFlow graph and the respective operations, as shown in the snippet after this section. Exporting the model architecture correctly is important, because the next steps are based on this file. It is also a bit tricky and we need to pay attention during this step. First of all, we don't recommend exporting the model by using the same script as training! When we train a TensorFlow model, the scripts used are too complicated and we might have nodes in the TensorFlow graph that are not used for inference. We suggest creating a new, simple script with a placeholder as input, forward-passing the network and getting the prediction. Afterwards, tf.train.write_graph can be used for getting a simple model graph without redundant nodes. Another important fact is the preprocessing. We can view the node names of a network using the inception example provided by TensorFlow. The inception graph needs to be downloaded first. Then we export the node names and operations of the tensorflow_inception_graph.pb file into a text file to view them. We can observe that after the input placeholder, there are no preprocessing nodes and the placeholder enters the network directly. The reason is that the final scripts, which are written in Objective-C, take care of the preprocessing. So the aim is to export our model architecture without any preprocessing. Moreover, one needs to pay attention to the node names of the input and output because we will need them in the next steps. Most of the tutorials suggest viewing them on TensorBoard, but you can just use the code below and view them in a txt file. In our case, the input placeholder is called input_1 and the output softmax node softmax_1. Another important practical issue is batch normalization. TensorFlow suggests training ResNet-50 using fused batch normalization for better performance. But the integration process for a mobile device does not support this kind of batch normalization, so the model was trained with fused=True and then the architecture was exported with fused=False (and not None). The accuracy was not affected in our experiments. To sum up, a good pipeline for getting a correct model graph is the following: a separate script for simple inference, without preprocessing and redundant nodes in the TensorFlow graph; specification of input-output names for the inference; use of nodes that are supported for integration (in our case regular batch normalization instead of fused); viewing of node names and operations. After completing the export of the model graph, it needs to be processed through the TensorFlow pipeline for mobile, as described in the next parts. 
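The node-name export referenced above, reproduced from this record's codeBlock field and lightly cleaned up; the output file name nodes.txt stands in for the original's txt_file variable:

```python
import tensorflow as tf  # TF 1.x API, as used in the article

gf = tf.GraphDef()
gf.ParseFromString(open('tensorflow_inception_graph.pb', 'rb').read())

# Dump every node name and op into a text file instead of using TensorBoard.
with open('nodes.txt', 'w') as f:
    for n in gf.node:
        f.write(n.name + ' => ' + n.op + '\n')
```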
We first clone the TensorFlow repo. The next step is the freezing process for importing the weights, through the checkpoint file, into the model graph. Freezing It's fine to remain on the master branch after cloning. We move to the root of the cloned TensorFlow repo and we freeze the checkpoint file from training into the model graph (the invocation is reproduced after this section). This way, a new file is created with the trained weights frozen into our model. As one can observe, the size of the exported frozen graph is bigger than the previous graph, which contains the nodes without the weights. TensorFlow provides a quantization process, as described in the next section, for getting a graph of small size for efficient integration. Quantization (optional) Most of the tutorials suggest quantizing the frozen model. With the quantization process, the size of the model gets almost four times smaller, so it can be imported into a mobile device more easily than a model of bigger size. The inference part is also faster because of the 8-bit computations instead of the 32-bit ones used during training. Although we have the above benefits, this process did not work in our case for ResNet-50 (we did not use the native TensorFlow ResNet) and it is optimized only for specific models. So this step was skipped, since the provided inception graph is not quantized either. In the next part, the memory mapping procedure is described, for getting a graph which handles memory efficiently during inference. Memory mapping Attempting to integrate the frozen model into the mobile device resulted in memory errors due to the size of the graph. ResNet-50 is not a very deep CNN, but in our case the size was large because of the number of outputs (5270 classes). The solution to this was the memory-mapping transformation of our model with the convert_graphdef_memmapped_format script. The exported graph is optimized for lower memory usage on mobile. The following section presents the installations that need to be done in order to proceed with the final coding steps. Integration Now the memory-mapped model can be imported into the mobile device. We explain the integration process for a MacBook with High Sierra OS. If you do not have Xcode, install it first. After the Xcode installation, you need to copy this folder from the cloned TensorFlow repo: tensorflow/examples/ios/camera Afterwards, you should modify it according to your needs. All these steps have been done using a fabfile, but it's better to have a view of them. Now, inside your camera folder, you need to install pod. The last step before importing the model into the iOS device is the modifications that need to be made to the inference script written in Objective-C. 
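The freeze and memory-mapping invocations referenced above, reproduced from this record's codeBlock field; the path-to-* arguments are placeholders from the original:

```bash
# Freeze the checkpoint weights into the model graph.
python tensorflow/python/tools/freeze_graph.py \
  --input_graph=path-to-model-graph \
  --output_graph=path-to-frozen-model \
  --input_checkpoint=path-to-checkpoint-file \
  --output_node_names=softmax_1 \
  --input_binary=true

# Convert the frozen graph to the memory-mapped format for mobile.
bazel build tensorflow/contrib/util/convert_graphdef_memmapped_format
bazel-bin/tensorflow/contrib/util/convert_graphdef_memmapped_format \
  --in_graph=path-to-frozen-model \
  --out_graph=path-to-memmaped-model
```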
Code modifications Before opening the Xcode workspace, the preprocessing code was modified inside CameraExampleViewController.mm. In our case, the memory mapping flag is set to true, because we used the convert_graphdef_memmapped_format script previously. We also modified the preprocessing values, to get the same preprocessed image as in the training process. Additionally, the per-pixel normalization snippet needs to be modified according to the preprocessing used during training. In our case, we used the VGG preprocessing, so we subtract a different mean value from each channel without dividing by the std; both the original and the modified snippet appear in the code block above. Attention needs to be paid, since the image is received in BGR format. Of course, we have to replace the labels text file with our own labels file. The following section describes the importing procedure through Xcode. Importing the model We have to connect the iPhone to the MacBook and open the Xcode workspace. Final steps: import your Apple account into Xcode; choose your connected iPhone as a device; in General settings, update the name of the bundle identifier; choose yourself as a team. Now we are ready to build and run. The model will be imported automatically into the mobile device. If you run it for the first time, you will need to trust the app from the General settings of your iPhone. From the software engineering point of view, we have a folder called mobile in the working repo and the camera folder is used for the model. The TensorFlow repo can be cloned as a submodule, in case a new model is created. This way we can make easy integration attempts right after training. It's really beneficial to use a fabfile for downloading the packages you want and also to run the integration (except the Xcode process) with one command. In the next section, live video results for e-commerce image recognition are presented. Integration results As one can notice, the images of Fig.10 have been captured through live video and the integrated model is still able to recognize real-world e-commerce products. This is worth mentioning, because the model is trained on simple e-commerce images (as you can see in Fig.4), most of which have a white background, which means that the performance can be boosted with a better real-world training dataset. Figure 10. Video captures from the TensorFlow iOS app recognizing real-world e-commerce images with ResNet-50. Conclusions In this tutorial, we presented the pipeline for deep learning exploitation in an e-commerce application which assists users in shopping by image snapshots or video (Fig.11). After completing the deep learning part, the overall construction of a shopping application is a matter of iOS development. From the coding point of view, we gained a lot of insights through the problems we had with the integration of ResNet-50 into the iPhone. TensorFlow has also introduced TF Lite for integrating a CNN, but there are a lot of improvements to be done, so for now we use the provided graph, which is trained on the ImageNet challenge. The most important margins of improvement are: the construction of a complete iOS application that recognizes an object and prompts the user to the equivalent e-commerce site with products from the same category; more experiments for improving the existing single image-by-image model; experiments using real-world e-commerce datasets for performance boosts; and app modifications for integrating the multi-view product-by-product model with its accuracy gains. Figure 11. Prompting the user to similar products in a shopping site. In the future we plan to make use of the newest TF Lite pipeline that TensorFlow provides. The ResNet model that we used is not well supported yet, so we can integrate it through TF Lite in the future. Finally, the integration of a model into an iOS device has proved to be a really good base for the composition of a complete application which provides an outstanding shopping experience! The end-to-end pipeline is illustrated in Fig.12. Figure 12. Final pipeline of the integration processes and shopping by example. The interconnection of the training and inference pipelines is the process of freezing and memory mapping. The output enters the integration pipeline for the TF app. 
References
TF-mobile (link)
Kaggle challenge (link)
Pete Warden's blog (link)
E-commerce powerful applications 2017 (link)
ResNet CNN: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, 2015. (pdf)
Shopping by example: Ashish Kare, Hiranmay Ghosh, Jaideep Shankar Jagannathan, 2009. (pdf)
Matching Street Clothing photos: M. Hadi Kiapour, Xufeng Han, Svetlana Lazebnik, Alexander C. Berg, Tamara L. Berg, 2015. (pdf)
Shopping e-commerce products by example through deep learning
37
shopping-e-commerce-products-by-example-through-deep-learning-114e334d7aba
2018-05-12
2018-05-12 21:04:15
https://medium.com/s/story/shopping-e-commerce-products-by-example-through-deep-learning-114e334d7aba
false
2,878
Bridging the gap between research and industry
null
null
null
deeplab-ai
deeplab-ai
MACHINE LEARNING,DEEP LEARNING,MACHINE INTELLIGENCE,COMPUTER VISION,NATURALLANGUAGEPROCESSING
null
Machine Learning
machine-learning
Machine Learning
51,320
Christos Rountos
Machine Learning Engineer
d690c918cada
chrisrn
5
4
20,181,104
null
null
null
null
null
null
0
null
0
5a998828d2f
2018-08-14
2018-08-14 22:04:02
2018-08-14
2018-08-14 22:04:00
5
false
en
2018-08-15
2018-08-15 13:23:25
5
114e6c55b726
4.697484
1
0
0
null
5
The Rise of SaaS HR In the ecosphere of technology, there have always been massive swings away from the old school that bring a wave of revolution to life. They not only elevate the industry to the next level but also recondition past practices. Every arena and pace of life, every segment, has revamped itself enough to fit into the lives of a modern-day office worker, catering to real needs, compared with the prolonged and time-consuming old mechanisms, which might have been novel discoveries at the time but have lost their significance in today's world. As time passes, things that were good enough then might not be noteworthy today, including some very humongous software technologies that met the requirements of the corporate world at the time. The speedy conversion of these sectors brings awareness to our work worlds, urging us to move forward and implement the very latest, to stay in line and bring onboard the best practices possible. Similarly, in the HR world, SaaS HCM has been effectively replacing on-premise software, introducing a wide range of HR technologies that go beyond the scope of core functionalities. The new wave of SaaS HR brings forth fast-paced, socially knit HR tools that go beyond engagement and user experience. They incorporate business intelligence tools that are ready to foresee trends and lift C-suite decision-making a notch higher than expected. So single-point, single-function HR solutions do not serve organizations as much as they are expected to, paving the way for modern solutions that help companies evolve at a much earlier cycle and adapt accordingly. An HCM solution has a much more flexible platform and has the aptitude to develop into multi-purpose and all-inclusive software that builds innovative functionality according to the size of an organization. These software technologies may become an integral part of a company early on and grow with the organization as it grows. The new and rising trend of SaaS HR has captured a considerable volume of the audience in the market, replacing the huge one-point ERP solutions of the past and compelling them to implement the newer technology themselves. Oracle, SAP, Workday, etc., all organizations trapped in the past, have been hard-pressed by the fresher players in the market to move forward in an innovative direction, simply by following the younger companies and the vibes they are creating. For larger companies to stay competitive and ahead in the fast-paced race of technologies and to obtain the best possible talent, they need to rethink their mantra of streamlining workloads, instead putting the employees first. The state-of-the-art HCM encompasses a wide variety of tools that are specifically designed to formulate team strategies and highlight the need of bringing forth employee connectivity and recognition in all forms. The modern systems tend to widen a company's horizon and focus at the same time, enhancing user understanding and building a reliable mechanism that improves the learning and capabilities of all employees at the same time. There is considerable evidence that measures like engagement tools that track employee loyalty based on mobile usage and messaging platforms, along with surveys and targeted goals management, have proven their worth in the field of HR. 
The corporate structure could immensely benefit from advanced performance evaluation, mentoring and coaching plans and hiring tools, including the modern applicant tracking mechanisms, all leading to the same conclusion: SaaS HR is overwhelmingly replacing the old methodologies and taking the market by storm. These trends indicate that big names like Oracle and others must clearly evolve and compete with newer companies in that area, as those newer players are already cloud-based and provide more value compared with the legacy systems. The shift towards SaaS HR is one of the leading trends in the corporate world, along with CRM and other systems, but the question here is how long the old legacy systems are going to last in this battle of evolution and competing for attention, as users recognize that deployable one-point solutions are on the brink of extinction. According to Forrester projections, the total scope of HR administration and SaaS HR is going to grow to as much as $24 billion in 2018. This tells us how the subscription models of businesses like WebHR (https://web.hr) have penetrated deep into the markets and are reshaping the way US consumers behave and take ownership of the products available. Consumer trends have shifted hugely; from property to clothing and from food to banking, nothing is how it used to be. Everything is moving to subscription-based solutions: cassettes, CDs, DVDs, nothing is owned by the consumer these days; we very comfortably download music on Spotify and that is that. Mobile phones, especially the iPhone, were once something only the elite could afford, symbolic of how well a person was doing financially; all one must do today is get a subscription through a mobile service provider and pay a monthly fee. This has allowed users to keep upgrading without spending too much in one go. The banks own our houses and cars; the concept of ownership is long gone. We no longer own the text we read, as the Amazon Kindle lets us read text owned by Amazon itself; we can even get our printer ink every month on subscription. Shaving and grooming need no frequent visits to the nearby stores, as they too are subscription-based: a package arrives every month to provide you with your monthly quota of appropriate utilities, along with a little extra every now and then. The concepts of ownership are changing fast, and so are the technical world and the way the corporate world behaves. This exemplifies how SaaS HR companies like WebHR (https://web.hr) have forced consumers to rethink their need for on-premise, one-point, archaic legacy systems and replace them with a fast, innovative and dynamic method of managing their people. Originally published at WebHR.
The Rise of SaaS HR
50
the-rise-of-saas-hr-114e6c55b726
2018-08-15
2018-08-15 13:23:26
https://medium.com/s/story/the-rise-of-saas-hr-114e6c55b726
false
1,024
AI Powered Chatbots for E-commerce
null
GoBeyond-549887012026130
null
GoBeyond.ai
gobeyond-ai
ECOMMERCE,SHOPIFY,ECOMMERCE SOFTWARE,ECOMMERCE SOLUTION,CHATBOTS
null
All In One Hr
all-in-one-hr
All In One Hr
5
Anna Naveed
Co-founder WebHR
d2396ba1c70c
annanaveed
719
2,563
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-08
2018-04-08 22:17:04
2018-04-08
2018-04-08 22:37:11
25
false
en
2018-04-08
2018-04-08 22:49:56
0
114edddb297a
2.218868
0
0
0
Top Industries talking about AI on twitter
5
Analyzing how AI is growing across all industries using the past 4 years of tweets. Charts: 1. Top industries talking about AI on Twitter. 2. Tweet count per month on AI (historical tweet count). 3. Historical sentiment on AI (sentiment analysis over the last 4 years). 4. Overall sentiment on all AI tweets. 5. Monthly engagement on AI (number of tweets per month). 6. Influential people across different industries tweeting on AI; each tile contains the industry, number of tweets on AI, and Twitter ID.
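The charts themselves did not survive extraction. A minimal sketch of the kind of monthly-count and sentiment computation behind them, assuming a DataFrame of tweets with date and text columns (the column names and sample rows are assumptions) and using TextBlob for polarity:

```python
import pandas as pd
from textblob import TextBlob

# Hypothetical input: one row per AI-related tweet.
tweets = pd.DataFrame({
    "date": ["2015-03-01", "2018-04-02"],
    "text": ["AI is overhyped", "AI is transforming healthcare"],
})
tweets["date"] = pd.to_datetime(tweets["date"])

# Tweet count per month (charts 2 and 5 in the list above).
monthly = tweets.set_index("date").resample("M").size()

# Sentiment polarity in [-1, 1] per tweet, then aggregated (charts 3 and 4).
tweets["polarity"] = tweets["text"].map(lambda t: TextBlob(t).sentiment.polarity)
print(monthly)
print(tweets["polarity"].mean())
```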
Analyzing how AI is growing across all industries using past 4 years tweets.
0
analyzing-how-ai-is-growing-across-all-industries-using-past-4-years-tweets-114edddb297a
2018-04-08
2018-04-08 22:49:57
https://medium.com/s/story/analyzing-how-ai-is-growing-across-all-industries-using-past-4-years-tweets-114edddb297a
false
58
null
null
null
null
null
null
null
null
null
Social Media
social-media
Social Media
143,805
Anvesh Tummala
Inspire Your Soul. Transform your Body.
a2b10c44fba3
Anvesh525
3
43
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-09
2018-08-09 18:02:43
2018-08-09
2018-08-09 18:05:10
0
false
en
2018-08-11
2018-08-11 00:59:46
1
114f0e5981d7
0.128302
0
0
0
The tutorial is provided in notebook viewer’s mode.
3
Google Free GPU Tutorials: Part 1 The tutorial is provided in notebook viewer's mode: "download tensorflow object detection models from github # Environment installation on your local machine can be found…" (Jupyter Notebook Viewer, nbviewer.jupyter.org)
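Before following the notebook, one usually confirms that Colab's free GPU is attached. A minimal check, not from the original notebook:

```python
import tensorflow as tf

# In Colab: Runtime -> Change runtime type -> GPU must be selected first.
device = tf.test.gpu_device_name()
print(device if device else "No GPU attached")
```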
Google Free GPU Tutorials: Part 1
0
joyful-google-colab-tutorial-object-detection-step-by-step-114f0e5981d7
2018-08-11
2018-08-11 00:59:46
https://medium.com/s/story/joyful-google-colab-tutorial-object-detection-step-by-step-114f0e5981d7
false
34
null
null
null
null
null
null
null
null
null
Deep Learning
deep-learning
Deep Learning
12,189
Joy Yan
null
553a48cdf975
joyyan
3
6
20,181,104
null
null
null
null
null
null
0
```python
import numpy as np
from pyspark.mllib.regression import LabeledPoint

# sc is the Spark context available when you run Jupyter (see the text below).
path = "/regression-models/tmdb-movies-final-features-no-header.csv"
raw_data = sc.textFile(path)
num_data = raw_data.count()
records = raw_data.map(lambda x: x.split(","))
first = records.first()
print('First record: ', first)
print('Total number of records: ', num_data)

def get_mappings(rdd, idx):
    print('index:', idx)
    return rdd.map(lambda fields: fields[idx]).distinct().zipWithIndex().collectAsMap()

# Apply the mapping function to each categorical column (0, 1).
mappings = [get_mappings(records, i) for i in range(0, 2)]

# Not in the original snippet: the lengths are reconstructed here from the
# printed output below (2096 categorical + 6 numerical = 2102).
cat_len = sum(len(m) for m in mappings)
num_len = 6
total_len = cat_len + num_len
print("Feature vector length for categorical features: %d" % cat_len)
print("Feature vector length for numerical features: %d" % num_len)
print("Total feature vector length: %d" % total_len)
# OUTPUT:
# Feature vector length for categorical features: 2096
# Feature vector length for numerical features: 6
# Total feature vector length: 2102

def extract_features(record):
    cat_vec = np.zeros(cat_len)
    i = 0
    step = 0
    for field in record[0:1]:  # categorical feature
        print('extract_features', i)
        m = mappings[i]
        idx = m[field]
        cat_vec[idx + step] = 1
        i = i + 1
        step = step + len(m)
    num_vec = np.array([float(field) for field in record[1:7]])
    return np.concatenate((cat_vec, num_vec))

def extract_label(record):
    return float(record[-1])

def extract_features_dt(record):
    return np.array([float(f) for f in record[0:6]])

data_dt = records.map(lambda r: LabeledPoint(extract_label(r), extract_features(r)))
(trainingData_dt, testData_dt) = data_dt.randomSplit([0.7, 0.3])
```
23
null
2018-06-19
2018-06-19 04:31:44
2018-08-14
2018-08-14 04:36:01
0
false
en
2018-08-14
2018-08-14 04:36:01
2
114f2ad4ac4a
1.920755
0
0
0
Background
3
Data loading and Categorical feature binary mapping Background This dataset is inspired by the Kaggle TMDB_Movie dataset (www.kaggle.com), a simplified version of a movie dataset. A sample implementation can be found at https://github.com/Jayasagar/sparkml-regression-models-movie-revenue-predictions Loading the dataset and inspecting it: I removed certain variables that are not required, as they do not add value for this task: id, imdb_id, original_title, cast, homepage, director, keywords, overview, production_companies, release_date, release_year, budget_adj and revenue_adj. Output First record: [‘Action|Adventure|Science Fiction|Thriller’, ‘2015’, ‘32.985763’, ‘150000000’, ‘124’, ‘5562’, ‘6.5’, ‘1513528810’] Total number of records: 10866 In the above code, sc is the Spark context available when you run Jupyter. Since there are many input features and I am not sure all of them are required, I consider only some of the interesting ones to keep this implementation simple. The variables considered are: popularity, budget, runtime, genres, vote_count, vote_average, release_year and revenue. Of these, ‘genres’ and ‘release_year’ are categorical and the others are normalized real-valued variables. Extract categorical features into binary vector form We have two categorical features. Using the two helper functions given in the assignment (extract_label and extract_features), I extract the last column (revenue) as a float and build mappings to convert the categorical features to binary-encoded features: 1. genres is at index 0; 2. release_year is at index 1. Apply the mapping function to each categorical column (0, 1). We now have the mappings for each variable and can see how many values in total we need for our binary vector representation. The next step is to use the extracted mappings to convert the categorical features to binary-encoded features, extract the data so that we are ready for training and prediction with a Decision Tree model, and split the data into training and test sets (30% held out for testing). In this part, we learned how to map a dataset’s categorical columns to binary form for regression models; a toy illustration follows below. In the next post, we will look at some Spark regression models and their performance tuning.
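As a toy illustration of the binary (one-hot) mapping described above, decoupled from Spark (the category values here are invented for the example; the real mapping comes from get_mappings):

import numpy as np

# A value-to-index mapping of the kind get_mappings returns (values invented)
mapping = {"Action": 0, "Comedy": 1, "Drama": 2}

vec = np.zeros(len(mapping))
vec[mapping["Comedy"]] = 1.0  # switch on the slot for this record's category
print(vec)  # [0. 1. 0.] -- the binary-encoded form of "Comedy"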
Data loading and Categorical feature binary mapping
0
data-loading-and-categorical-feature-binary-mapping-114f2ad4ac4a
2018-08-14
2018-08-14 04:36:02
https://medium.com/s/story/data-loading-and-categorical-feature-binary-mapping-114f2ad4ac4a
false
509
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Jayasagar
null
53af6f89e32e
jayasagar
23
52
20,181,104
null
null
null
null
null
null
0
null
0
9aa231fac359
2018-02-10
2018-02-10 21:29:56
2018-02-10
2018-02-10 21:33:00
1
false
en
2018-02-10
2018-02-10 21:33:00
5
114fa1149aad
2.656604
1
0
0
Thought, a blockchain start-up based in Harrisburg, Pennsylvania, knows the importance of a great team. They have put together a skillful…
5
Harrisburg University Backed THOUGHT ICO Launches Whitelist Thought, a blockchain start-up based in Harrisburg, Pennsylvania, knows the importance of a great team. They have put together a skillful development team and an accomplished board of advisors. And they are certainly not trying to hide themselves. Thought is fundamentally changing the way data is being processed by embedding every piece of data with artificial intelligence. This means that otherwise ‘dumb data’ that needs an application to become useful becomes valuable and ‘smart’, able to act on its own — understanding where it came from, where it’s supposed to go, and what it has to do. This reduces the need for third-party applications, making data processing a lot cheaper and faster. “The amount of data being created by humans is increasing exponentially with things like social media, and innovations like IoT (Internet of Things) devices,” says the Founder and CEO of Thought, Professor Andrew Hacker. “Traditionally, data just sits around until it is sent to applications that are able to process that information, but Thought makes data agile and able to act on its own without any external help.” Integrating blockchain technology with this concept creates an extra layer of security at the smallest level. Every piece of data is secured with cryptography, allowing the owners of the data to dictate exactly who has access to it. Imagine a hospital room where all of the equipment communicates. For example, the thermostat can consult the patient’s health record and his/her current readings from the equipment to find the optimal room temperature and humidity for this patient. And that’s only the thermostat! You don’t need any third-party applications; the pieces of information communicate with each other to make this happen. Thought’s development is led by Professor Andrew Hacker, who, in addition to being the Founder and CEO of Thought, is a Cybersecurity Expert in Residence at Harrisburg University. He has conceived and developed the concept of hybrid and smart data over the last six years. With extensive cybersecurity experience, Andrew has appeared in many publications talking about the ever-increasing importance of security in the age of the Internet. “Harrisburg University provides world-class science, technology and analytics education. But we increasingly also support business start-ups and entrepreneurship. Several years ago, we had the opportunity to have Mr. Andrew Hacker, a known expert in cyber security, join HU,” says Dr. Eric Darr, President of Harrisburg University. “Andrew’s technical expertise added greatly to our educational offerings, while his start-up company provided terrific experiential opportunities for our students. We came to believe that Mr. Hacker’s technology had significant promise. Therefore, HU has been pleased to financially support the involvement of many computer science students in the further development of the latest Thought technology.” “The upcoming release of this new technology, Thought Blockchain, and its applications for businesses and consumers in artificial intelligence and analytics represents the bleeding edge of innovation. Harrisburg University looks forward to continuing its work with Mr. Hacker and Thought Technologies as they continue developing new and valuable technology,” explains Dr. Darr.
The whitelist is open from today, with the Pre-ICO starting on March 1, 2018 and the main ICO starting on March 14, 2018. To participate in the Thought ICO, or join their whitelist, please visit: https://thought.live You can also join the Thought community on Telegram (t.me), follow Thought on Twitter (@Thought_THT), like the Thought page on Facebook, or connect with Thought Network on LinkedIn.
Harrisburg University Backed THOUGHT ICO Launches Whitelist
11
harrisburg-university-backed-thought-ico-launches-whitelist-114fa1149aad
2018-02-27
2018-02-27 22:14:15
https://medium.com/s/story/harrisburg-university-backed-thought-ico-launches-whitelist-114fa1149aad
false
651
Thought is fundamentally changing how data is being processed by embedding AI into every small piece of data.
null
null
null
Thought Network
thought-tht
BLOCKCHAIN,ARTIFICIAL INTELLIGENCE,IOT,SECURITY,ICO
Thought_THT
Blockchain
blockchain
Blockchain
265,164
Thought
Thought is fundamentally changing the way information is processed by embedding AI into every small piece of data. Learn more: https://thought.live
c77b7092b85f
thought_tht
13
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-30
2017-10-30 16:22:16
2017-10-30
2017-10-30 16:22:16
1
false
en
2017-10-30
2017-10-30 16:22:16
1
114fb83df3f0
2.377358
0
0
0
null
5
Future Factories: How AI enables smart manufacturing Today’s consumers are pickier than ever. They want customized, personalized, and unique products over standardized ones and prefer local, smaller producers over large-scale global manufacturers. Factories, power plants, and manufacturing centers around the world must rely on automation, machine learning, computer vision, and other fields of AI to meet these rising demands and transform the way we make, move, and market things. Since the industrial revolution, factories have been optimized to mass produce a few products rapidly and cheaply to satisfy global demand. “The largest inefficiency that most manufacturers face is inflexibility,” says Jim Lawton, Chief Product & Marketing Officer of Rethink Robotics, maker of collaborative industrial robots. “Traditional industrial automation requires hundreds of hours to reprogram, making it very impractical to change how the task is performed.” Catering to finicky consumers is not the only challenge confronting modern factories. Costs of production in traditionally affordable countries like China and Mexico are rising. Oil and gas industries have been hit incredibly hard by historically low oil prices, driving the need for further efficiencies and cost reduction. In virtually all factories, poor demand forecasting and capacity planning, unexpected equipment failures and downtime, supply chain bottlenecks, and inefficient or unsafe workplace processes can lead to resource wastage, longer production periods, low yields on production inputs, and lost revenue. Manufacturers are also strapped for qualified labor, both skilled and unskilled, as older employees retire, younger generations lose interest in manufacturing jobs, and immigration policies tighten. Prabir Chetia, Head of Business Research and Advisory at global analytics firm Aranca, details the current dilemma faced by many manufacturers around the world. Innovative manufacturers already use artificial intelligence to tackle these many challenges. Here are the key ways in which “Industry 4.0”, the latest wave of smart-factory practice, leverages automation, data exchange, and emerging technologies: Rethink Robotics, founded by robotics pioneer Rodney Brooks, advocates the “cobot” model, where humans and robots work side by side for maximum effectiveness. While industrial robots have long performed heavy lifting and tedious work on assembly lines, they’re typically designed for a single task and require hours to reprogram. Baxter and Sawyer, Rethink’s smart collaborative robots, are able to learn a multitude of tasks from demonstrations, just like their human counterparts can. “Training a robot is nearly as simple as training a human,” claims Chief Product & Marketing Officer Jim Lawton. “Companies that don’t have programming expertise on staff and can’t afford to spend hundreds of thousands of dollars on a traditional industrial robot can instead leverage more affordable, flexible automation and adapt to market changes.” Industrial equipment is typically serviced on a fixed schedule, irrespective of actual operating condition, resulting in wasted labor and the risk of unexpected and undiagnosed equipment failures. Once instrumented with sensors and networked with each other, devices can be monitored, analyzed, and modeled for improved performance and service. An industry leader in the space, GE enables manufacturers to create “Digital Twins”, or physics-based virtual models of large-scale machinery, on their industrial cloud platform, Predix.
“Twinning” a piece of equipment allows human operators to constantly monitor performance data and generate predictive analytics. According to Marc-Thomas Schmidt, Chief Architect of Predix, nearly 650,000 twins are currently deployed and range widely in complexity. Complex twins like those of gas turbines interpret data from hundreds of sensors, understand failure conditions, track anomalies, and can be used to regulate production based on real-time demand. Posted on 7wData.be.
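The article stays at the concept level, but as a loose illustration of the kind of condition monitoring a digital twin enables, here is a generic rolling z-score check on a simulated sensor stream (this is not the Predix API, just a sketch of the anomaly-tracking idea):

import numpy as np

def flag_anomalies(readings, window=50, threshold=3.0):
    """Flag readings that deviate strongly from the recent rolling mean."""
    readings = np.asarray(readings, dtype=float)
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags

# Simulated turbine temperature stream with one injected fault
rng = np.random.default_rng(0)
stream = rng.normal(450, 2, 500)
stream[300] += 25  # sudden spike
print(flag_anomalies(stream))  # e.g. [300]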
Future Factories: How AI enables smart manufacturing
0
future-factories-how-ai-enables-smart-manufacturing-114fb83df3f0
2017-11-12
2017-11-12 17:21:14
https://medium.com/s/story/future-factories-how-ai-enables-smart-manufacturing-114fb83df3f0
false
577
null
null
null
null
null
null
null
null
null
Aranca River
aranca-river
Aranca River
0
Yves Mulkers
BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world
1335786e6357
YvesMulkers
17,594
8,294
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-13
2017-09-13 11:55:34
2017-09-15
2017-09-15 07:51:52
1
false
en
2017-09-16
2017-09-16 08:45:59
0
115023e648d
2.645283
0
0
0
Artificial Intelligence is the branch of Computer Science that seeks to develop intelligent computers.
1
THE DEVELOPMENT OF FULL ARTIFICIAL INTELLIGENCE : THE BEGINNING OR THE END FOR MANKIND? Artificial Intelligence is the branch of Computer Science that seeks to develop intelligent computers. John McCarthy first defined Artificial Intelligence in 1956 as the science and engineering of making intelligent machines. Intelligence simply means the ability to reason, think, learn and solve problems as well as the human mind does; in other words, the computational process of cognitive science. Therefore, the goal of Artificial Intelligence is to elevate computers from depending on human-programmed instructions to carry out a task, to a level where computers can reason, analyse, think, plan, learn and act on their own. Artificial Intelligence is not just a topic about what is coming in the future; rather, it is a concept that already exists to a certain degree in our world today. Computer systems such as Google’s search engine technology, Apple’s Siri online assistant, image recognition in photographs, targeted online advertisements, and Google’s new self-driving cars all use artificial intelligence to a degree to learn (based on a knowledge-based system), thereby producing more accurate and individually specific output. However, the AI factor of these technologies is limited, as they make use of pre-programmed knowledge-based functions. This form of AI is known as Artificial Narrow Intelligence, and this innovation would plunge the world into a whole new age. Some science and tech enthusiasts argue that once we are able to build machines that are smarter than we are, those machines will begin to improve themselves more rapidly than we can keep up with, and this will be a threat to our existence. They suggest that these super-intelligent machines, whether robots or communication systems with the ability to think, will set their own goals, which could be in contrast with human goals, morals and standards. Nick Bostrom, the author of the book Superintelligence, is one of the supporters of this theory, and other famous figures, like Elon Musk, are in line with this view. Many sci-fi movies, such as The Matrix and The Terminator, tell versions of this theory, putting together scenarios where artificially intelligent machines wipe out mankind in an attempt to accomplish their man-given objectives. Artificial intelligence expert Stuart Russell, at TED 2017 in Vancouver, suggests that this is not likely to be the case. He goes on to say that safety principles will be implemented to control the behavior of the super-smart machines: the supercomputers will be built as altruistic machines with uncertain objectives. Also, the machines would not be given specific commands but allowed to learn by observing humans. The goal of AI is to learn from humans; this simple goal ensures that super-intelligent computers would set goals and objectives based on what they learn from humans. That said, the idea of Artificial Intelligence working with strict safety principles and operational guidelines could take the world as we know it to a whole new dimension of unimaginable possibilities, just as predicted by Alan Turing in 1950.
AI will be used in many sectors, including health, transportation, research, and weather forecasting. Integrative AI, which combines competencies such as vision, speech, natural language, machine learning and planning (Artificial General Intelligence), will enable these super-intelligent computers to carry out wide-ranging research with far better accuracy than human researchers, leading to amazing breakthroughs. In conclusion, it is clear that the positive effects of AI easily outweigh the negative ones. As with every innovation that has changed the way we do things over the centuries, people are always going to be skeptical about new things. The truth is, we are still a long way from developing full artificial intelligence of any kind, so for now we can worry less and enjoy the benefits of these technologies as they grow. by Jude Ukana
THE DEVELOPMENT OF FULL ARTIFICIAL INTELLIGENCE : THE BEGINNING OR THE END FOR MANKIND?
0
the-development-of-full-artificial-intelligence-the-beginning-or-the-end-for-mankind-115023e648d
2017-09-16
2017-09-16 08:46:00
https://medium.com/s/story/the-development-of-full-artificial-intelligence-the-beginning-or-the-end-for-mankind-115023e648d
false
648
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jude Ukana
null
d057c175910c
judeukana
0
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-11
2018-06-11 14:36:02
2018-05-10
2018-05-10 18:03:42
0
false
en
2018-06-11
2018-06-11 14:40:04
13
115206784253
6.283019
1
0
0
What a time to be alive. I was inspired to write this based on seeing a friend’s work get some new attention (more on that below). And by…
4
Voice AI, Telecom, Scams, and Co-evolution What a time to be alive. I was inspired to write this based on seeing a friend’s work get some new attention (more on that below). And by the way, this is my first post on this new second-order thinking project so I’m diving in even before I had the chance to write a piece on why I’ve started the project itself. Let’s discuss new opportunities with interactive voice AI. If you haven’t seen it, here’s the Google Duplex demo from two days ago that has everyone fired up. Actually, the team’s writeup about Duplex, complete with recorded examples, is even more interesting. Worth a read. But the subtle question is, why is everyone so fired up about this? After all, the technology to make a phone reservation like that demoed is not new. My friend Jeff Smith co-founded a startup that did the same thing a year before the Duplex demo. You can watch his demo here: Jeff’s co-founder Wesley has a great writeup here. He also mentions Duplex’s use of “disfluencies,” noting that adding disfluencies (our uses of “uh” and “um” which make us cringe when we hear recordings of our own voice) is not a great technological improvement, but it sure does make the demo better. The first industrial revolution was about getting past the limitations of human strength. The computer revolution was about getting past the limitations of human thinking. The internet revolution was about getting past the limitations of human communications. — Jeff Smith (@jeffksmithjr) May 9, 2018 When it comes to transactions that must take place by phone, Duplex and John Done are good for humans. In the demos, the person trying to acquire the service (book the appointment or buy the flowers) saves time and can accomplish things that they never would in person. The business potentially gets more customers or saves time in the bookings since a benevolent voice AI is only going to ask things related to actual wants. Second-Order Effects (Remember, this new blog focuses on second-order thinking….) There are several second-order effects of voice AI that is this good. The most direct change is in not having to wait on hold, look up and call multiple numbers, call back if there is no answer and any other thing that can go wrong with a phone call. Related to this, the international call volume rate (both traditional PSTN and VOIP) has remained flat from 2013–2015 (the last year available). Even though more people are still getting access to phones, voice is a smaller part of what we do with our phones. Many of the appointments and quick check-ins that had to take place by voice calls in the past have since switched to other forms of communication, often by removing the need for a human on both sides of the communication. Humans are already used to asking for and receiving information from a machine. What John Done and Duplex have done is the reverse of that. So back to positive effects from a voice AI. One application is to take a data approach for the customers’ benefit but in a way that a business cannot easily defend against. Here imagine a shy caller who is not able to bully their way into a reservation at a hot restaurant (yes, this is actually a thing). Meanwhile, their AI can. Or, a human being who speaks in a less desirable accent may be told that there is no availability when their AI, which sounds acceptable, gets the reservation. Impact: more diverse restaurant goers, fewer awkward moments. 
A negative impact in the short term, depending on how fast things change, is that a little more than 1M people are employed as receptionists, as of 2016. While receptionists aren’t totally obviated by voice AI (answering phones is only part of their job), it means that their job will change. Some will be able to deal with the change and others won’t. Businesses will need voice AIs to talk to voice AIs… and the occasional human. At some point, it won’t be worthwhile for humans to answer the phone in a restaurant, salon, or florist. I’m curious to learn what percentage of calls in a restaurant deal with table bookings or service hours versus other types of calls that will probably stay human longer — for example, calls from suppliers. Now let’s take the (minuscule) assumption that if this tech can be used in beneficial ways it can also be used for the opposite. While access to the tech is limited today, it is only a matter of time before it is widely available. The key difference is in how the voice AI tech can allow misuse at scale. Remember back when Uber employees generated multiple requests for Lyft rides, only to cancel them? The purpose was to waste the time of the drivers and encourage them to drop Lyft for Uber. Imagine doing something like that at scale for a restaurant, cafe, salon, or any type of business that you compete with. It would be possible to tie up phone lines, waste time, or leave restaurants with empty tables. An older scam that this tech will scale is what’s known as the “Hey, Grandma” scam, where a grandparent gets a call from a “grandchild” in distress. There are different flavors of this. For US grandparents the story is often that the grandchild got into legal trouble and needs money wired. In China and Taiwan, it’s often that the grandchild has been kidnapped and is being beaten up. Again, wire the money. While the Lyft drivers couldn’t know that their requests were fake, why can’t the grandparents tell that they aren’t really receiving a call from a grandchild? Most of the time they can tell, but apparently the success rate may be as high as 2%! That’s more than enough to make it worthwhile. If you can scale this scam, by automating realistic calls using a realistic voice AI, then that’s a game changer. If you can also emulate the voice of the supposed grandchild, then that success rate will increase. If you can emulate the voice of the grandchild and also know specific facts about them, perhaps gleaned from public social media, then increase that rate even further. Related to the above, for a great writeup of why it’s so cheap to make phone calls at scale (as taken advantage of by robocallers), see this post on The Broken Economics Of Robocalls. The Antidote to This Scam (…or the Scamidote) There are long-standing low-tech solutions to this problem. One is the use of a secret passphrase that only someone in the family would know (I know of multiple instances where this technique is actually used). Another technique is to ask a question that only the caller would know (if they are who they say they are). While it’s difficult to remember to do this when you are caught up in an emotional moment, the best way to check who is really on the other line is to use a preset code or ask a specific question. The solution to this tech problem can’t come from technology. Even today, voice emulation technology is pretty good, as heard for example in these demos of a fake Trump voice. Other copycat voices can be created with only seconds of recorded audio of the actual person.
Out of Step Timing in Co-evolution Yuval Noah Harari notes in his book Sapiens that history has seen times of co-evolution (lions and gazelles each evolving gradually, in step, over long stretches of time) and periods when one species (that’s humans) emerged in a new environment and, by using intelligence, tools, and cooperation, was able to wipe out stronger species. There was no chance for the woolly mammoths to evolve their way along with humans. So even if the question antidote described above starts to be widely used, you still just need to fool a few people to make the scam worthwhile. In some cases, the scammers and their targets will co-evolve, each getting better as time goes on. But perhaps not in this case. In this case, the scammers’ tech advantage happens at scale before their targets are able to realize what’s happening. While the scammers’ tech builds on itself, each potential target must be educated and maintain their rationality when they hear an extreme phone call from a supposed family member in trouble. The humans eventually catch up. So the humans suffer in the short term from this out-of-step timing. If anything, a scammer running a malevolent voice AI would want to keep the humans happy so as to survive longer themselves. The Medium is the Mess In my old voice startup in 2009–11 I made lots of unintelligent interactive voice responses, but nothing like John Done and Duplex. We researched the differences in communication when people used voice alone, versus text or voice and video. People communicate in incredibly different ways depending on the medium. As above, imagine a shy person who can’t get the words out being able to type a conversation that their voice AI then delivers. Or, next generation, have them speak the words directly in order for their voice AI to say them to the listener, but with more conviction. It is ridiculous to imagine that voice AIs will be required to identify themselves as such. That is, various governments could “mandate” it and various companies could “demand” it, but it’s not going to happen. When there is next to no cost for misuse of voice AI, it is too profitable to fool people, and the calls can be made from international jurisdictions, there is no enforceability. But a source for good when it comes to making our lives easier in everyday ways? Yes, absolutely. Just with some second-order effects. Originally published at unintendedconsequenc.es on May 10, 2018.
Voice AI, Telecom, Scams, and Co-evolution
5
voice-ai-telecom-scams-and-co-evolution-115206784253
2018-06-11
2018-06-11 14:48:24
https://medium.com/s/story/voice-ai-telecom-scams-and-co-evolution-115206784253
false
1,665
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Paul Orlando
Incubator director and professor of entrepreneurship at USC. Former startup founder. More contrarian each day.
385052d031
porlando
989
703
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-15
2018-03-15 17:15:26
2018-06-01
2018-06-01 21:55:42
8
false
en
2018-06-01
2018-06-01 21:55:42
4
1152ef4c7386
3.891824
1
0
0
In our previous post, we explored the rhythm of speech in a group setting. Inspired by a TechCrunch transcript from Leena Rao and the…
1
Visualizing the Ebb and Flow of Feelings at Your Company What was the sentiment cultivated at this moment in time? Here’s how we can help make a guess. In our previous post, we explored the rhythm of speech in a group setting. Inspired by a TechCrunch transcript from Leena Rao and the founders of Honest Company, we analyzed the airtime distribution between founders and found the timing and rhythm of Leena’s moderation role to carry weight despite not necessarily “speaking” the most. We found that airtime is not necessarily associated with speech time but may just as well be associated with visual appearance, body language, and emphatic cues. While it was straightforward to look at the relative airtime amongst team members and ultimately deduce a rhythm for how to create a productive conversation, we had yet to look at the semantic content itself. Analyzing sentiment against time is a complex task with many dimensions and angles to consider. There are many tools available on the market for this type of analysis. For example, you can play with the open-source Python NLTK sentiment analysis here. It has long been known that the amount of airtime each participant contributes is a factor in predicting overall output. With this in mind, Cultivate’s dashboard is created from analysis that goes beyond counting airtime. The data science team has developed several in-depth natural language processing techniques to produce metrics from business e-mail, chat records and recorded forms of team conversation. For example, one of the approaches currently being implemented at Cultivate is sentence-by-sentence analysis capturing words that fall into pre-defined sentiment categories like: afraid, amused, angry, annoyed, apathetic, happy, inspired and sad. As such, an overall cumulative score can be visualized over the course of a conversation for each one of these sentiments. Here are the overall normalized scores associated with words that were used during the last Presidential debate. They show a very constant and even pacing in the professional rhetoric. It is only when we zoom in very closely that we can see some of the subtle shifts in the positioning and ranking amongst the different sentiments. Overall, as the conversation progressed throughout the debate, the statistics show AFRAID and ANGRY as the dominating sentiments in the rhetoric. Later on, we can see ANGRY dominating the conversation and words associated with HAPPY moving away from the other sentiments. The x-axis on these graphs shows only sentence count and is, unfortunately, not correlated with precise timing data. Without timing information from the audio itself, it is difficult to get a handle on how the ebb and flow of sentiment changed throughout the course of the debate. In order to get some of these sentiments correlated with video data, we need to first associate the sentences with the actual speaker. Then, using hand-transcribed text, we need to coordinate the actual timestamps with the sentence-based sentiment analysis. Finally, we put two and two together to visualize a .srt file: a .srt file records a manually cleaned-up version of what was said, along with best-guess timing data for when it was said. The .srt file has an ID number which increases in sequence. There is also a start and end time for each phrase that is mentioned. Cultivate analysis led by Chief Data Scientist Andy Horng adds speaker data along with a best guess for the next speaker.
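As a minimal sketch of the sentence-by-sentence category counting described above (the word lists and sentence splitter here are illustrative stand-ins, not Cultivate's actual lexicon or pipeline):

import re
from collections import Counter

# Illustrative (hypothetical) category lexicons; a production system
# would use much larger, curated word lists.
LEXICON = {
    "angry":  {"furious", "outraged", "disgrace", "attack"},
    "afraid": {"threat", "danger", "fear", "risk"},
    "happy":  {"great", "wonderful", "proud", "win"},
}

def score_sentences(transcript):
    """Yield cumulative per-category counts, one snapshot per sentence."""
    totals = Counter()
    # Naive sentence split; real transcripts need a proper tokenizer.
    for sentence in re.split(r"(?<=[.!?])\s+", transcript):
        words = set(re.findall(r"[a-z']+", sentence.lower()))
        for category, vocab in LEXICON.items():
            totals[category] += len(words & vocab)
        yield dict(totals)

for i, snapshot in enumerate(score_sentences(
        "The threat is real. We will win, and it will be wonderful!")):
    print(i, snapshot)

Plotting each snapshot against sentence index reproduces the kind of cumulative-score curves shown in the debate charts.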
Sequence numbers are included at the beginning of the file with the speaker annotated. Then, the best guess for the next speaker is included at the end of the speech segment. Combined with Cultivate’s method of sentiment analysis using semantic data, we are able to get a temporal estimate of the emotional energy being generated at any given time. Ultimately, we want to be able to drive positive energy within a team conversation using speech and semantic content. Using a little ASCII hack in a SRT file, I visualized some of the emotion scores generated by Cultivate metrics. Specifically, based on the zoomed-in scores, I looked at the points where the discussion became more angry or more happy. Moderator Wallace has a change in emotion early on, according to Cultivate metrics. Cultivate’s semantic analysis conveys quite a bit of information about sentiment. I took two screenshots of the video corresponding to when the analysis suggested more angry rhetoric vs. more happy rhetoric. See if you can tell the difference. Overlaying video data with the plethora of sentiment data available through Cultivate’s analysis potentially gives more information than facial recognition. But more on that later. To see the full video, see this YouTube link. About Cultivate Cultivate’s AI-powered platform enables engagement and inclusion by helping you understand and improve your workplace communications. For more information on what we are doing at Cultivate, check out our website.
Visualizing the Ebb and Flow of Feelings at Your Company
1
visualizing-the-ebb-and-flow-of-feelings-at-your-company-1152ef4c7386
2018-06-07
2018-06-07 23:13:14
https://medium.com/s/story/visualizing-the-ebb-and-flow-of-feelings-at-your-company-1152ef4c7386
false
731
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Grace Woo
Writes on Medium for @CultivateAI | http://cultivateai.com
4b41ded9a6f9
grace_52247
10
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-18
2018-05-18 10:37:26
2018-05-18
2018-05-18 20:51:16
3
false
en
2018-05-18
2018-05-18 20:51:16
4
115312500ff4
4.021698
0
0
0
After spending the past 2 articles (here and here) playing with the data in the Kaggle competition, today I will start to build a basic…
5
Approaching a competition on Kaggle: Avito Demand Prediction Challenge (Part 3 — linear regression) Infinite Regression — Alan Levine — Flickr After spending the past 2 articles (here and here) playing with the data in the Kaggle competition, today I will start to build a basic model. I think I will start with something simple like a linear regression model. Linear regression should be a natural fit here, as we have a number of independent variables all trying to describe our dependent variable deal_probability. What linear regression tries to do is form a linear equation that describes the relationship between our independent variables and the dependent variable. See the graph below: By Oleg Alexandrov (self-made with MATLAB) [Public domain], via Wikimedia Commons Take price as the independent variable here; each red dot represents our mapping of price to deal_probability. The blue line is what the linear regression formula tries to calculate so as to reduce the distances from our points, shown as the green lines. This is often done using the least squares method. Linear regression always produces a straight line (hence linear) of the form y = mx + c. As such, it may not always be the best model to use if relationships are more complicated than linear. We will give it a go though and see what we get. As always, take a look at my Github to follow along with the code; today I will be using the file “main_day3.py”. The thing about linear regression is that, in theory, it should only work for numerical data. As such, I shall include only numerical data to start with. I am using scikit-learn, another fantastic machine learning library (probably targeted more at beginners), and its LinearRegression class. I will exclude all non-numerical data for now, leaving just 3 columns: ‘price’, ‘item_seq_number’, ‘image_top_1’. Running this linear regression I achieve a root mean squared error score (the scoring metric used in the competition) of ~0.257. Not impressive by any means, but also not bad for 5 minutes’ work. This was run locally on my 50000-row subset, so now to produce a kernel on Kaggle and run this algorithm against the entire dataset. Running on the full dataset on Kaggle I achieve a score of 0.2631, putting me near, but not completely at, the bottom; an achievement, I would say. We at least have a working kernel that we can build upon. You can see my Kaggle script in my Github repository, file kaggle_main_day3.py. The script differs slightly, as you must output results to a file for Kaggle’s competition scoring to pick up and process. A Per Category Model Now to improve upon this. My first idea, which will allow me to keep operating with simple linear regression models, is to split the data by category and build a ‘per category’ linear regression. Let’s see how this fares. Let’s start with parent category, as there are only 10 or so of these, which means fewer models and less strain on my system. Again I am using Pandas’ groupby function to split my data. The code is not so different from before, but instead of one model we now have to check the parent_category of each row before we choose which of our 10 models to use. Doing this I get ~0.246 root mean square error: a significant improvement. Testing this on Kaggle I get a 0.2524 score and jumped 25 places up. It is clear to me there is still significant improvement to be made here.
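For readers who want a starting point without opening the repository, here is a minimal sketch of the baseline above; the column names come from the article, while the file path, NaN handling, and prediction clipping are my assumptions (see main_day3.py for the author's actual script):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical local path; the competition file is Avito's train.csv
df = pd.read_csv("train.csv", nrows=50000)

features = ["price", "item_seq_number", "image_top_1"]
X = df[features].fillna(0)          # linear regression cannot handle NaNs
y = df["deal_probability"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = np.clip(model.predict(X_te), 0, 1)  # deal_probability lies in [0, 1]
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))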
And I will probably begin by looking at all the different category columns: region = 0.256865922681442; city = 0.2630428696751874; category_name = 0.24182267480235786; user_type = 0.2555981529232477. Every category variable apart from city seems to give a better result than the standard linear regression. In theory, if I swap out my Kaggle kernel for one that looks at category_name rather than parent_category_name I will achieve a marginally better result. But I think I can do better than this. Ensemble Models An ensemble model (also called blending or stacking) essentially combines multiple models to give a superior result to just one. There are a number of ways to do this; in fact you can do it any way you like, but obviously you are aiming for good results. Take my current scenario: I have a collection of linear regression predictions for each test row. Now I can combine these results to maybe make a better model. One simple way I have seen before is to simply take the mean of your predictions. I will try this now. All I have to do is add up each prediction for each row and divide by the number of predictions. Doing this I achieved ~0.256: not an improvement, but I will run the code on Kaggle just in case. I received a score of 0.2624, a backwards step, but at least now we know this will not be a successful avenue to explore. Finally, one more thing I wanted to try: what if we built a linear regression of our linear regression predictions? Could work. So let’s go ahead and do that. Unfortunately, due to my shortsightedness, this won’t work as I don’t have target variables on the test set. I feel like there will be some way to linearly combine these predictions to make a better prediction; I will do some research for tomorrow. Additionally, I aim to include a lot of the data I have been neglecting so far: I haven’t considered the wording or images at all. There are plenty of easy improvements to be made to my model and these will likely be in ensemble form. Until then.
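As a footnote to the blending experiment above, the mean-of-predictions ensemble is only a few lines (the array-of-predictions layout is assumed for illustration; this mirrors the idea, not the author's exact code):

import numpy as np

def mean_blend(preds_by_model):
    """Row-wise average of several models' predictions for the same test rows."""
    return np.mean(np.vstack(preds_by_model), axis=0)

# Two toy models' predictions for two test rows
print(mean_blend([np.array([0.2, 0.5]), np.array([0.4, 0.3])]))  # [0.3 0.4]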
Approaching a competition on Kaggle: Avito Demand Prediction Challenge (Part 3 — linear regression)
0
approaching-a-competition-on-kaggle-avito-demand-prediction-challenge-part-3-linear-regression-115312500ff4
2018-05-18
2018-05-18 20:51:17
https://medium.com/s/story/approaching-a-competition-on-kaggle-avito-demand-prediction-challenge-part-3-linear-regression-115312500ff4
false
920
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
chris stevens
I am a seasoned web/distributed developer who wishes to work in ML and data science, I will document my journey here. First, an implementation a day for a month
64b8e1fc9a07
chrisstevens1
4
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-11
2018-03-11 23:26:00
2018-03-13
2018-03-13 01:22:19
1
true
en
2018-03-13
2018-03-13 01:22:19
0
115323c08611
2.830189
2
0
0
They Are Modeled On Various Theories of the Structure/Function of the Human Brain
2
Artificial Neural Networks Are Not Modeled On the Structure/Function of the Human Brain They Are Modeled On Various Theories of the Structure/Function of the Human Brain A model. She is quite beautiful, but not the kind of model I am talking about. Thanks Unsplash and random pretty woman. When someone has built something which was modeled on something else, I think most people assume the something else being used as a model is well understood and well characterized. ANNs are often pitched as being “modeled on the human brain” but only rarely as a “model of the human brain.” Neither is accurate, though in my judgement the first is the larger offense against the accepted definitions of the word model (at least as it is defined for the biological sciences). The two main uses/definitions of the word model are: 1. A representation of something, often idealised or modified to make it conceptually easier to understand, and 2. Something to be imitated. Clearly, ANNs are not described as a representation of the human brain intended to make it (the human brain) easier to understand. They are almost always sold as ‘imitations’ of the human brain; specifically, imitations of some of the functions of the human brain. Substitutes for the human brain, or some of its functions, to put it another way. That they are poor substitutes is a topic for another post, but my quibble in this discussion is with the very idea of ‘imitation’ as it relates to the structure/functions of the human brain. It brings me right back to where I began, which is that in order to imitate something (to model it), one must know what the thing is one is imitating (modeling). I cannot imitate an ape if I have never seen an ape and do not know how it looks and acts. I can attempt to imitate it, but I will never know if I have been successful in my attempts, as I have no fixed reference against which to judge the accuracy of my imitation. In the case of ANNs/brains, it is not that no one has ever seen a brain, nor that we do not (sort of) know what it does; rather, it is that the specific mechanisms and structures of the brain, and how they relate to function, are only theoretical. You can ‘model’ something on a theoretical system as much as you like, but the accuracy of your model as it relates to the actual thing can never be assessed. Therefore you are not justified in making any claims as to the performance of that model as it relates to the performance of the actual thing. Theories abound that attempt to describe the structure of various neural networks in the brain, how they are organized, how they function, and ultimately how/if they are important to human intelligence/consciousness or a million other more mundane processes. Ultimately, it is still not even “proven” that such networks exist, let alone that they are somehow the key structures that should be the focal point of any model attempting to mimic how the human brain works. Each of these theories has a viable claim to some level of “correctness” and corresponding data to support it. None are completely accurate and no doubt most are mostly wrong. Given this brute fact, it is simply impossible for anyone to claim that an artificial neural network works anything like a network of neurons in the human brain. This is obviously true because, to put it simply, it is still not known how these networks function and/or how they are structured in the human brain.
Depending on the whims of the particular programmers/engineers designing any particular artificial neural network, they might select any of fifty competing theories of neuronal structure and function to model. Mostly, because they are ignorant of the complexities of biology and neuroscience, they will select the most tried-and-true, easiest, or most previously used approaches. These will be incorrect and will produce the exact same non-intelligent, non-learning machines we have been producing since day 1 of this ridiculous quest for artificial intelligence, way back in the 1950s. Wasting time attempting to design/develop/create an artificial intelligence is absolute nonsense when we have yet to understand actual, natural intelligence.
Artificial Neural Networks Are Not Modeled On the Structure/Function of the Human Brain
20
artificial-neural-networks-are-not-modeled-on-the-structure-function-of-the-human-brain-115323c08611
2018-04-04
2018-04-04 17:47:57
https://medium.com/s/story/artificial-neural-networks-are-not-modeled-on-the-structure-function-of-the-human-brain-115323c08611
false
697
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Daniel DeMarco
Research scientist (Ph.D. micro/mol biology), Food safety/micro expert, Thought middle manager, Everyday junglist, Selecta, Boulderer, Cat lover, Fish hater
7db31d7ad975
dema300w
3,629
148
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-03
2018-04-03 20:18:35
2018-04-03
2018-04-03 20:21:33
2
false
en
2018-04-03
2018-04-03 23:43:04
6
115405fbaff4
4.13805
0
0
0
I’m about to reveal a big secret about myself. I love a good Facebook quiz. Whether I’m finding out what I will look like in 20 years or…
5
How Much Personal Data Did I Give Up to Take This Facebook Quiz? Source: FaceApp I’m about to reveal a big secret about myself. I love a good Facebook quiz. Whether I’m finding out what I will look like in 20 years or what my leprechaun name is, it’s fun to do these mindless games on Facebook and compare results with friends. If you’ve ever done one of these, you know it’s easy — you click one button to agree to share information about yourself, information in your Facebook profile, and information on your Facebook friends. What could be the harm? We figure, “Of course this information is needed if we’re looking to find the accurate answer to ‘What will my Hollywood movie poster look like?’” It seems harmless, so we trust it. The Facebook platform collects massive amounts of data on us, and it does so in a brilliant way. Imagine having a stranger come knocking on your door and asking you for a list of all your family and your friends, along with photos and everything you know about them. No one would ever fall for this. But now that Facebook is such a familiar and popular way to connect with people, it doesn’t feel like a stranger to us. We “trust” Facebook, and we use it to store massive amounts of information about ourselves and the people we know. In fact, we trust it so much that when it comes to their “privacy agreement,” we agree to it without even reading its terms. The reason the Facebook/Cambridge Analytica debacle has people angry is that people assumed there was no risk in how their data from Facebook would be used. But in this case, to the shock of the world, Facebook exposed data on 50 million Facebook users to a researcher who worked at Cambridge Analytica. And, as another piece of the puzzle, Cambridge Analytica worked for the Trump campaign. So as the public is wielding pitchforks at Facebook’s door, the first lesson for us all is this: #1: Any data that we’re publicly sharing will be used. And once our data is out there, absent restrictions, we have little control over how it is being used. Data is valuable to companies, both in utility and in dollars. So when it comes to any platform that collects and stores any data on you, you can assume this data will be used in some way or sold to a 3rd party. #2: So much more of our personal data exists than we realize. It’s scary, I know. Data on you and me is everywhere. And if you have watched my talk for TEDxProvidence, you know how the amount of data we’re able to capture has increased exponentially in just the last 15 years. According to Google’s former CEO Eric Schmidt, we now generate in two days as much data as was created from the beginning of time up to 2003. Our data is used by marketers, by election strategists, by grocery stores, and by prescription drug companies. It’s used by every social media platform, and our data is used by their affiliated companies as well. Simply put, most companies are using our personal data in some way. #3: Not only are most companies using our data, but the most successful companies are built on data. There are 13 companies in the S&P 500 that have managed to outperform the entire S&P 500 5 years in a row. The majority of these companies are “algorithmically driven,” meaning they gather data from their users and they update the consumer experience almost automatically. These are companies like Facebook, Amazon and Google. Global business investments in data and analytics will surpass $200 billion a year by the year 2020.
In the future, we will see more and more businesses moving data to the core of their competitive strategy. What does this mean for us? The time is right for the public to champion a universal code of ethics surrounding our data use. #4: Our data should be protected by a common code of ethics. Now that we have a glimpse of what can happen when data is available, unrestricted, in the hands of others, we need a common set of rules to govern data use. DJ Patil, the first Chief Data Scientist for the White House, reminded us that “with great power comes great responsibility” in his February 2018 call to action “A Code Of Ethics for Data Science.” This post, coincidentally, was published over a month before the Facebook/Cambridge Analytica scandal hit the press. The responsibility of using data appropriately weighs heavily on the minds of many within the data science community. When my partners and I formed our company BetaXAnalytics, our founding principle was that we wanted to use the power of data “for good” to improve the cost and quality of healthcare in the United States. Since we had deep experience in clinical and pharmacy data science, we knew there was a resounding need for ethical transparency for those who are paying for health services. We wanted to provide the actionable insight that our clients need to make decisions regarding healthcare services and care coordination. Since my company BetaXAnalytics works with healthcare data, the way we protect data is governed by HIPAA; this legislation ensures both the privacy and the safeguarding of people’s health-related information. A large amount of our time and resources is put towards maintaining data security and privacy. The data we use is governed by strict contracts with our clients and we never sell data to third parties. As a company whose business is built on interpreting health data, we live by the mantra “with great power comes great responsibility.” We hope to see this movement grow both within and outside the data science community, working towards using the power of data “for good.” - Shannon Shallcross is Co-Founder and CEO of BetaXAnalytics
How Much Personal Data Did I Give Up to Take This Facebook Quiz?
0
what-the-facebook-cambridge-analytica-scandal-teaches-us-about-the-future-of-our-personal-data-115405fbaff4
2018-04-03
2018-04-03 23:43:05
https://medium.com/s/story/what-the-facebook-cambridge-analytica-scandal-teaches-us-about-the-future-of-our-personal-data-115405fbaff4
false
995
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Shannon Shallcross
Happy people-person, CEO at @BetaXAnalytics, Olympic multi-tasker, TEDx Speaker, crusader to use the power of data "for good" to drive healthcare reform.
6cfe6d05c0a4
SRShallcross
52
947
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-03
2018-05-03 20:36:19
2018-05-03
2018-05-03 20:40:22
1
false
en
2018-05-03
2018-05-03 20:40:22
1
11552eefcecf
1.071698
0
0
0
As I am preparing for my upcoming MSc in Data Science at The University of Edinburgh, I am browsing the web for useful resources regarding…
5
Data Science Foundations: Introduction As I am preparing for my upcoming MSc in Data Science at The University of Edinburgh, I am browsing the web for useful resources on Data Science. Up to this point, I have successfully completed more than 50 MOOCs (online courses) and read hundreds of articles. The fact that so many people share their knowledge online is astonishing. Unfortunately, due to the enormous mass of information, many people who want to get into Data Science become disoriented and/or confused (let's just say I've been there before 😅). I decided to begin this series with the purpose of compiling a well-curated list of resources, examples, cheatsheets and code snippets that will allow anybody to start applying Data Science to solve everyday problems. The resources I am going to propose have either been tried by me or come heavily recommended by people around me. The series “Data Science Foundations” will consist of the topics below:
- Introduction
- Software Tools Installation
- Command line
- Version Control Systems (VCS)
- Mathematics and Statistics
- Python Basics
- Relational Databases (SQL)
- Non-Relational Databases (MongoDB)
- Scientific Computing (NumPy)
- Data Manipulation and Analysis (Pandas)
- Data Visualisation (Matplotlib and Bokeh)
- Machine Learning with Python (scikit-learn)
- Deep Learning with Python (keras)
- Natural Language Processing with Python
- Online Competitions
This list of topics will be updated continuously as new stories are added.
Data Science Foundations: Introduction
0
data-science-foundations-introduction-11552eefcecf
2018-05-03
2018-05-03 20:40:22
https://medium.com/s/story/data-science-foundations-introduction-11552eefcecf
false
231
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Georgios Goniotakis
Data Science Student at The University of Edinburgh
bbcc5c4059e7
GeorgiosGoniotakis
6
39
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-13
2018-09-13 02:36:02
2018-09-13
2018-09-13 02:41:30
2
false
en
2018-09-18
2018-09-18 13:16:25
2
11553e25f940
1.583333
2
0
0
We are in a world where technology is shaping our behaviour, our thinking… and our craft. Designers have finally been given a seat at the…
5
Don’t “get” what Designers say? We are here for you… We are in a world where technology is shaping our behaviour, our thinking… and our craft. Designers have finally been given a seat at the table; a voice to help shape a new era. As Design Thinkers, our purpose is to adapt, shape and test organisations so they push constantly, think differently and tackle real business challenges. Our mission is to deliver meaningful experiences and outcomes for the business… and customers. Our most powerful tool is our mindset and the way we translate complexity into simplicity. As the world re-invents itself, there is a need for Designers to re-invent their craft of Design Thinking to be more accessible, easier to understand and something anyone can communicate and connect with and say… “Hey, I get it!”. We are Human After All… for now… but let’s not “F**k it up”… Credit: https://dribbble.com/shots/4247646-Fail So… fellow designers, engineers, academics, data scientists, management consultants, doctors, accountants, ballerinas, farmers, nerds, babes and furry friends… whoever you may be; if you have ever walked out of a “design thinking session” or a “design experience” and thought to yourself… “What the F**k did that even mean?” “We are going to do what now?” “What are we really paying for?” “What are we actually going to get at the end of this?” “Why are we spending all this time doing this s**tty activity?” We are here for you. Hope there is… Credit: https://dribbble.com/shots/3367535-Yoda Our mission is to help you understand and connect with what Designers really mean when they say WDW (otherwise known as W***y Design Words)! Just drop us an email at: [email protected] And just remember… anything you send us is always shared as anonymous content. We don’t care about gossip. We care about re-inventing and shaping the role of Designers for the future… Please provide feedback below if you enjoyed this article… your feedback helps us become better…
Don’t “get” what Designers say? We are here for you…
66
we-are-in-a-world-where-technology-is-shaping-our-behaviour-our-thinking-and-our-craft-11553e25f940
2018-09-19
2018-09-19 13:45:06
https://medium.com/s/story/we-are-in-a-world-where-technology-is-shaping-our-behaviour-our-thinking-and-our-craft-11553e25f940
false
318
null
null
null
null
null
null
null
null
null
Design
design
Design
186,228
S**T Things Designers Say (STDS)
Demystifying Design Thinking — Our mission is to help you understand what Designers really mean when they use WDW (otherwise known as W***y Design Words)!
4794d9159e08
plzstopstds
14
14
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-02
2018-09-02 03:47:38
2018-09-02
2018-09-02 03:47:53
0
false
en
2018-09-02
2018-09-02 03:47:53
1
11554665818a
2.033962
0
0
0
([PDF]) The Essential Physics of Medical Imaging Download EBOOK EPUB KINDLE By Jerrold T. Bushberg
1
Read Online The Essential Physics of Medical Imaging By Jerrold T. Bushberg (Download Ebook) #ebook ([PDF]) The Essential Physics of Medical Imaging Download EBOOK EPUB KINDLE By Jerrold T. Bushberg Read Online : https://bestreadkindle.icu/?q=The+Essential+Physics+of+Medical+Imaging This renowned work is derived from the authors’ acclaimed national review course (“Physics of Medical Imaging”) at the University of California-Davis for radiology residents. The text is a guide to the fundamental principles of medical imaging physics, radiation protection and radiation biology, with complex topics presented in the clear and concise manner and style for which these authors are known. Coverage includes the production, characteristics and interactions of ionizing radiation used in medical imaging and the imaging modalities in which they are used, including radiography, mammography, fluoroscopy, computed tomography and nuclear medicine. Special attention is paid to optimizing patient dose in each of these modalities. Sections of the book address topics common to all forms of diagnostic imaging, including image quality and medical informatics, as well as the non-ionizing medical imaging modalities of MRI and ultrasound. The basic science important to nuclear imaging, …
Read Online The Essential Physics of Medical Imaging By Jerrold T. Bushberg (Download Ebook) #ebook
0
read-online-the-essential-physics-of-medical-imaging-by-jerrold-t-bushberg-download-ebook-ebook-11554665818a
2018-09-02
2018-09-02 03:47:53
https://medium.com/s/story/read-online-the-essential-physics-of-medical-imaging-by-jerrold-t-bushberg-download-ebook-ebook-11554665818a
false
539
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Leigh Herring
null
1cee9025e7ad
leighherring
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-16
2017-09-16 14:14:51
2017-09-16
2017-09-16 14:18:25
3
false
en
2017-09-16
2017-09-16 14:18:25
3
11568ea8ddb4
1.825472
1
0
0
Companies have several properties that can be examined and compared to their risk profiles. At Open Risk Exchange, we are constantly…
5
Relationship between Officer Age and Probability of Survival for UK Companies Companies have several properties that can be examined and compared to their risk profiles. At Open Risk Exchange, we are constantly looking to evaluate these properties and better understand their relationship with the probability of business closure. An example of such a relationship is how the probability of survival for a company varies with the age of its officers. See Figure 1 below. Figure 1: Probability of Survival (24 months) vs Minimum Officer Age It is evident from the chart that companies with younger officers (between 20 and 30 years of age) have a lower probability of survival within a 24-month period relative to companies with older officers. As the age of officers within a company increases (between 30 and 50 years of age), the probability of survival of these companies also increases, peaking between 50 and 55 years of age. The probability of survival for companies with officers older than 55 years of age appears to decline through to 65 years of age. A similar analysis can be conducted on the relationship between the ORX Business Closure Score and the age of a company’s officers. See Figure 2 below. Figure 2: ORX Business Closure Score vs Minimum Officer Age Note that the overall patterns highlighted in Figure 1 above are also evident in Figure 2. This relationship is further supported by an analysis of how the financial characteristics of a company vary with the age of its officers. See Figure 3 below. Figure 3: Financial Characteristics of Companies vs Officer Age Companies with young officers (where the oldest officer is under 30 years of age) appear to have a smaller total balance sheet size and less variation relative to companies with older officers. Details on how to interpret the box plot diagram. Testing relationships such as the one presented in this post allows Open Risk Exchange to frequently assess the performance of the ORX score. Check the risk of a company Originally published at www.openriskexchange.com.
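For readers who want to reproduce this kind of bucketed analysis, here is a minimal sketch in Python. The DataFrame and its columns (min_officer_age, survived_24m) are invented for illustration; they are not Open Risk Exchange’s actual schema or method.

import pandas as pd

# Toy data: one row per company, with the youngest officer's age and a
# 0/1 flag for whether the company survived 24 months.
df = pd.DataFrame({
    "min_officer_age": [24, 33, 41, 52, 58, 63, 29, 48],
    "survived_24m": [0, 1, 1, 1, 1, 0, 0, 1],
})

# Bucket companies into 5-year officer-age bands; the mean of the 0/1 flag
# per band is the empirical 24-month survival probability described above.
buckets = pd.cut(df["min_officer_age"], bins=range(20, 70, 5), right=False)
print(df.groupby(buckets, observed=True)["survived_24m"].mean())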
Relationship between Officer Age and Probability of Survival for UK Companies
1
relationship-between-officer-age-and-probability-of-survival-for-uk-companies-11568ea8ddb4
2017-09-16
2017-09-16 16:15:52
https://medium.com/s/story/relationship-between-officer-age-and-probability-of-survival-for-uk-companies-11568ea8ddb4
false
338
null
null
null
null
null
null
null
null
null
Finance
finance
Finance
59,137
Open Risk Exchange
Free credit risk scores for businesses in the United Kingdom.
35b62d9a69f3
ORX
4
176
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-01
2017-12-01 12:08:18
2017-12-01
2017-12-01 14:00:18
6
false
en
2017-12-07
2017-12-07 13:06:00
8
115a336dc1e4
2.587736
17
0
0
As part of my learning path on data science, I came across several articles from Business Science, in particular a demo-week series of time…
2
Starting into Time Series Analysis As part of my learning path on data science, I came across several articles from Business Science, in particular a demo-week series on time series forecasting with machine learning using R packages. The Business Science blog articles are pretty nifty, as they are anchored in business use cases (see customer analytics, employee churn, etc). These articles espouse the value of utilising data science to make data-driven decisions, and there is a strong emphasis on framing the problem/hypothesis, which I feel goes beyond the typical machine learning tutorials. Going through the R demo-week for time series brought about a certain curiosity regarding TSA. The principles behind the demo-week articles aren’t very different from other machine learning/regression modelling concepts: one has to process the obtained data to ensure that a suitable model can be fitted, and validate the model with train/test data. By learning the patterns in the time series, the model empowers the user to make forecasts about the future! (I’ve always felt that forecasting was empowering as it provides some form of clairvoyance…) Predicting Australian beer sales based on a time series model. Special thanks to the Business Science team! I decided to plunge myself into the world of time series analysis (TSA) as a learning goal. Besides, time series forecasting seems pretty valuable and widely applicable, be it on a macro scale for industries (finance, econometrics, etc) or a micro scale for business entities (sales, revenue, production demand). What I discovered was that TSA is a confluence of data-science disciplines such as regression modelling (residual diagnostics, squared error, data transformation), mathematics (trigonometric harmonics, complex numbers, Fourier transformations), statistics (correlation, estimators, significance, hypothesis testing), and a hint of time-based domain knowledge (lagging, filtering, spectral analysis). Spectral analysis with harmonics: combining trigonometry with statistics! Perhaps the hype of machine learning and artificial intelligence has blinded me, but I also realised that TSA is not as niche as I thought it was. Plus, the academic study of it goes way back (since at least the 1970s; see Box and Jenkins) and the resources on it are pretty staggering. After doing some research on KDnuggets, I decided on “Time Series Analysis and Its Applications” as a guiding resource. Click here for the freely-available “easy” version of TSA (although I won’t say it’s that easy, at least for me). Lag plots to uncover time-based data correlations, which I was pretty mind-blown by. It has been a challenging learning goal but, looking back at my progress, I’m really glad to have chosen this direction. There is a beauty in working through the mathematics and statistics behind the analysis to understand time series. I will use Medium as a way of documenting my learning points, and as a diary for my progression into TSA.
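As a taste of the lag plots mentioned above, here is a minimal sketch using pandas’ built-in lag_plot; the sine-plus-noise series is an invented stand-in for real data.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import lag_plot

# An invented series: a 24-step cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(200)
series = pd.Series(np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(200))

# Points clustering along the diagonal indicate strong lag-1 correlation.
lag_plot(series, lag=1)
plt.title("Lag plot (lag=1)")
plt.show()

# The numeric counterpart of what the plot shows.
print(series.autocorr(lag=1))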
Starting into Time Series Analysis
50
starting-into-time-series-analysis-115a336dc1e4
2018-06-14
2018-06-14 22:37:11
https://medium.com/s/story/starting-into-time-series-analysis-115a336dc1e4
false
434
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Kenneth Foo
null
7067adee1ae0
foo.food.fool
21
3
20,181,104
null
null
null
null
null
null
0
null
0
6c08165ab9c1
2018-03-05
2018-03-05 20:06:54
2018-03-05
2018-03-05 20:09:04
0
false
en
2018-03-05
2018-03-05 20:09:04
1
115af1f59c22
2.788679
0
0
0
The Forex Weekly Outlook is designed to help traders remain aware of intermarket correlations of global market relationships. You can…
5
VantagePoint Forex Weekly Outlook for March 5th, 2018 The Forex Weekly Outlook is designed to help traders remain aware of intermarket correlations of global market relationships. You can become more profitable if you know how to get ahead of the trends and understand that these relationships can potentially expand your portfolio. Utilizing the predictive indicators and intermarket relationships in VantagePoint Intermarket Software can help traders find the right trades and the right times to enter and exit those trades. Let’s look at the charts for the U.S. Dollar and the major pairs. Forex and the U.S. Dollar The U.S. Dollar Index is the backbone of forex trading. The bulk of the trades involves buying or selling the U.S. dollar. Understanding the movements of the individual market will greatly benefit forex traders as they will be able to better predict the movements of the pairs based on the IDX market movement. Key levels and market movements: The US Dollar is in a repetitive pattern. Last week, it closed exactly on that key VantagePoint level, and this week will be a big one with the non-farm payroll number and a few Fed speeches. If there is a weak payroll number, it will take the pressure off further rate hikes. Traders should monitor the sideways movement of the dollar until at least Wednesday, when that ADP report comes out. What do the indicators say? The VantagePoint key level is at 89.91 and the VantagePoint PRSI is at 50.1. Forex Weekly Outlook for Major Pairs The major pairs are where most Forex traders trade the market. In the Forex Weekly Outlook we take a look at the most popular pairs, analyzing price action, news events and/or risk-off scenarios that could play a role in market movement, and a series of VantagePoint charts that best present information that can assist traders in determining where the market may move in the week ahead. Euro/U.S. Dollar (EUR/USD) Key Levels and market movement: If gold continues to move higher, we’ll see a lot of activity with this EUR/USD pair. There is both a double top and double bottom in place, so this pair is trading within that range. The PRSI is trying to put out a bullish signal, but it is struggling a little bit. What do the indicators say? The key VantagePoint level is at 1.2297 and the PRSI is at 50.7. British Pound/U.S. Dollar (GBP/USD) Key Levels and market movement: This pair is following a similar pattern to EUR/USD with a potential bottom trying to form around the 1.3713 area. What do the indicators say? The key VantagePoint level is at 1.3903 and the PRSI is at 34.3. U.S. Dollar/Japanese Yen (USD/JPY) Key Levels and market movement: There really seem to be no buyers for this pair. There has been another retracement with multiple failures at the key VantagePoint level of 107.35. Traders can look for this pair to hold above the 105 area, but this pair is extremely volatile during the payroll week and traders need to use caution. What do the indicators say? The key VantagePoint level is at 107.35 and the PRSI is at 26.5. The Commodities Currencies U.S. Dollar/Canadian Dollar (USD/CAD) Key Levels and market movement: Recent political moves by the Prime Minister of Canada signal a bigger move to the downside for this pair. Traders should take note of the massive level of resistance coming in at 1.2920. It’s important to take note of a bull trap. What do the indicators say? The key VantagePoint level is at 1.2670 and the PRSI is at 82.1. Australian Dollar/U.S. 
Dollar (AUD/USD) Key Levels and market movement: There is significant support all the way down to the .7640 area. If gold moves higher, this pair is likely to go with it. What do the indicators say? The key VantagePoint level is at .7836 and the PRSI is at 25.5. New Zealand Dollar/U.S. Dollar (NZD/USD) Key Levels and market movement: This pair is virtually the same trade as the AUD/USD. The neural index has turned positive, but traders should continue to watch the intermarket correlation of gold as well. What do the indicators say? The key VantagePoint level is at .7316 and the PRSI is at 49.2. Ready to use the power of AI to predict the strength and direction of the currency markets? It’s time to request your complimentary demonstration of VantagePoint Software. Click here to get started>>
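A note on the PRSI readings quoted throughout: PRSI is VantagePoint’s proprietary predictive indicator, so its exact computation is not public. As a rough reference point only, the sketch below computes a standard 14-period RSI with Wilder smoothing on made-up closing prices; it is an assumption-laden stand-in, not VantagePoint’s method.

import pandas as pd

def rsi(close, period=14):
    # Wilder's smoothing via an exponentially weighted mean, alpha = 1/period.
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)

# Made-up EUR/USD closes, purely to exercise the function.
closes = pd.Series([1.2290, 1.2305, 1.2297, 1.2281, 1.2310, 1.2302, 1.2318,
                    1.2295, 1.2287, 1.2299, 1.2304, 1.2291, 1.2285, 1.2307, 1.2312])
print(rsi(closes).round(1).tail())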
VantagePoint Forex Weekly Outlook for March 5th, 2018
0
vantagepoint-forex-weekly-outlook-for-march-5th-2018-115af1f59c22
2018-03-05
2018-03-05 20:09:27
https://medium.com/s/story/vantagepoint-forex-weekly-outlook-for-march-5th-2018-115af1f59c22
false
739
the cashflow stories that matter. covering finance, wealth accumulation, venture capital, bitcoin, and money, money, money.
keepingstock.net
keepingstock
null
Keeping Stock
keeping-stock
STOCK MARKET,FINANCE,CASH FLOW,STOCKS,FINANCIAL REGULATION
keepingstock
Trading
trading
Trading
19,801
Vantagepoint ai
Patented software using Artificial Intelligence to help traders predict the market with up to 86% accuracy. Get a free demo: www.vantagepointsoftware.com/
d47649555a99
Vantagepoint
238
140
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-02
2018-03-02 01:13:17
2018-03-02
2018-03-02 03:58:38
2
false
en
2018-03-03
2018-03-03 02:07:51
1
115b1e7d9a91
2.175786
7
0
0
My struggle
3
How To Become A Data Scientist When You Have No Technical Background My struggle Ever since my first The Analytics Edge class at MIT, I have gone down the rabbit hole of trying to figure out what exactly I need to learn to be a data scientist. Over the course of the last 1.5 years, I interviewed many data scientists, worked with some of them at Facebook, watched numerous YouTube videos, read voraciously, forced myself to complete online classes and dabbled with projects on GitHub. It took me 1.5 years to feel I can see a path now. I am nowhere near being a great data scientist yet. It bugs me so much that while so many of us are passionate about the subject, there is no place to even understand where our gaps are. And I am lucky, because I was at MIT and later Silicon Valley, surrounded by real data scientists. I can’t imagine the struggle many others face. What is the chance of a college student in Vietnam successfully figuring out how she can become a data scientist, and becoming one eventually? I would not bet my money on it. Imagine that you are set on a hard hike up a mountain. You are ready to endure the hot weather and the strenuous climb. But how are you ever gonna reach the peak if there is absolutely no path? You don’t know where the peak is, or how long it is going to take. What is the chance of you quitting? Many “quit” and went back to school. That is why universities can charge such a high price tag for a one-year master’s degree in Data Science. It doesn’t quite make sense to me that only those who can afford a $100,000 tuition fee can be data scientists. Data Science Knowledge Tree I didn’t quit and continued to self-learn. Hence, the knowledge tree. I put together key modules of data science based on my own learning experience and research. This is only meant to be a starting point. Data science is such a nebulous term. It can mean completely different things in different companies and different contexts. It’s only with a community of data scientists with decades of experience that a real knowledge tree that can benefit many would be possible. Click on this link to view the full image and please wait for a few seconds for it to load You can contribute I believe in the wisdom of the crowd and therefore I am sharing this tree with everyone in the community, hoping that together, we can make this better. This open source project will be part of a social enterprise I am starting to help people learn smarter and more effectively, such that fewer of us will be discouraged. Please leave your comments if you are interested in contributing your insights and wisdom. I want to give back and trust that many of you want to, as well.
How To Become A Data Scientist When You Have No Technical Background
26
how-to-become-a-data-scientist-115b1e7d9a91
2018-05-25
2018-05-25 22:56:50
https://medium.com/s/story/how-to-become-a-data-scientist-115b1e7d9a91
false
475
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Hui Zhu
Growth | Data | HasBrain team
259585719e01
huizhu2011
370
198
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-11
2018-04-11 15:32:35
2018-04-11
2018-04-11 15:35:46
0
false
en
2018-04-11
2018-04-11 15:35:46
2
115bec3e68b4
2.486792
0
0
0
Throughout the Cold War, a silent battle was heating up. This battle was not being fought with bullets and bombs — it was being fought with…
1
The ever-growing battle for AI supremacy Throughout the Cold War, a silent battle was heating up. This battle was not being fought with bullets and bombs — it was being fought with ones and zeroes. To many people, the rapid improvement of artificial intelligence was simply a matter of economics; to others, it was a matter of national security. While movies like WarGames (1983) capitalized on the hysteria around Soviet–US relations, the war of computing power got its major kick-start courtesy of the electromechanical rotor cipher machines known as Enigma machines, which gave inspiration for the biopic The Imitation Game (2014). The Enigma machines were used by the Germans during World War II to encrypt their communiqués, and the work of cryptanalysts — thanks in no small part to Alan Turing — allowed the Allies to change the tide of the war. Since then, computer-based intelligence (and counterintelligence) has led to many outstanding breakthroughs in the world of cryptography and computer science. While military-grade encryption is, of course, still extensively used by the world’s militaries to protect information from getting into the wrong hands, companies of all shapes and sizes are benefiting from the capabilities of encryption. The reason I bring this up (aside from demonstrating the importance of computational power in war and industry) is that the uses for computers and AI are so much vaster than we could’ve possibly imagined when computers were still in their infancy. During World War II, computers were mostly used to help crack encryptions or calculate the angles of artillery weapons. Today, the computing power in a single modern missile is likely more advanced than all of the computing power of World War II combined. You’ve probably also heard similar comparisons made about the Moon landing, which usually go something like the following: “All of NASA’s computational power would have been more than a million times less powerful than a modern smartphone.” Now let’s imagine a paradigm shift that follows the notion that access to and control of advanced AI would be more important than merely stockpiling thousands of nuclear weapons. Currently, nuclear weapons are primarily stored to deter an enemy attack, with mutually assured destruction often being cited as a reason why the idea of initiating World War III is such an insane prospect for world leaders. However, AI could change this. A sufficiently advanced AI framework in the wrong hands could lead to a situation in which a misused superintelligence devastates humanity. Elon Musk agrees: “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.” But it’s not just countries beefing up their AI capabilities; large companies are also joining the bandwagon — although they are seeing dollar signs, not national security. Obvious examples in commercial AI research include intelligent personal assistants such as Siri and Cortana, but this is just the tip of the iceberg. In 2016, AWS (one of Amazon’s subsidiaries) grew its sales by 60% (to $11 billion), making it the fastest-growing and most profitable subsidiary in Amazon’s corporate portfolio. Despite its financial success, Amazon’s venture into cloud computing was not without its share of critics. The head computer scientist at the Allen Institute for Artificial Intelligence, Oren Etzioni, has gone on the record as saying that “[Amazon is] playing catch-up” when it comes to AI research. 
What every business owner must keep in mind — regardless of size — is that no company is too small to reap the rewards of harnessing AI to help maximize profits. There are plenty of opportunities for small-scale companies to take a very juicy slice of the pie. WorkFusion, for example, offers an AI product that makes use of machine learning to help take your company to the next level with ease. There is also a plethora of cloud-computing solutions that make great use of AI to handle company operations, including server load.
The ever-growing battle for AI supremacy
0
the-ever-growing-battle-for-ai-supremacy-115bec3e68b4
2018-05-10
2018-05-10 06:40:42
https://medium.com/s/story/the-ever-growing-battle-for-ai-supremacy-115bec3e68b4
false
659
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ben Schultz
null
69c7b2c80f40
benschultz_57614
17
17
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-03
2018-08-03 08:45:32
2018-08-03
2018-08-03 08:46:18
2
true
en
2018-08-03
2018-08-03 08:46:18
0
115de92de041
3.213522
11
0
0
AI’s most simple concept is terrifying
5
Why AI Terrifies the World’s Greatest Minds and How it’s Inevitable Machines Take Over AI’s most simple concept is terrifying If it *only* reaches the same level of intelligence as us, its ability to operate at a far higher speed means that in 6 months it would have effectively operated for our equivalent of 500,000 years. For comparison — it has taken us around 200,000 years to reach where we are now as a species. Let that sink in for a moment. In a period of 6 months for us, AI will have accumulated half a million years’ worth of knowledge on top of everything we already know, meaning that we will be unable to compete with them within days of them achieving the same level of intelligence as us. We won’t even have time to react because machines will have surpassed us literally within the blink of an eye. In the same way we couldn’t relate to the earliest humans, AI won’t be able to relate to us. This, at best, relegates us to the role of how we treat pets now. Or worse — ants. When ants stay out of our way we leave them alone. When they invade our homes or obstruct our intentions we obliterate them without a second thought. They are an insignificance that can be eradicated. This is the likely outcome for AI and us. It is why AI is a zero-sum game. How would Russia or China react if they thought the States were on the brink of engineering such dominance? How would they react if there were even murmurs of rumour that it was close? This isn’t a technology that can be competed with — if you are 6 months ahead you have 500,000 years’ worth of knowledge more than the competition. The winner literally takes it all. Then likely loses it all to the AI itself. This exceeds the Manhattan Project by several orders of magnitude, both in terms of danger and gravity for the future of humanity. This is why the world’s greatest minds are terrified. This isn’t just the most likely outcome — it is inevitable. If any rate of progress is assumed, it is unavoidable that we reach this point at some undetermined time in the future. But it will be far sooner than we think. In singular tasks, machines already best us with ease. Chess, Jeopardy, and Go are all examples of this. Machines can lift far more than we ever could and calculate things that would take us years. A computer’s ability to operate computationally millions of times quicker than we are able to comprehend is an insurmountable barrier to our operation alongside them. It is why enabling human-machine connectivity is critical to the survival of our species. Without it, we are worse than cavemen. It is critical that we expand our bandwidth as soon as possible. Yet people are scared of what Crispr means for the development of a different class of human. Sure, genes may be edited to create a superior version of man — more intelligent, more beautiful, less susceptible to serious illness — but they will still be human. AI won’t be. I know where my fears are placed. If AI accumulates the equivalent of 2 years of our knowledge each minute we are finished. Potentially we already are — we have never taken a step back from an impending technological innovation because of its danger to the survival of humanity — just look at nuclear bombs. It doesn’t even matter if they are benevolent. Eventually, they will be operating so far out of our realms of comprehension that their actions will affect us as a byproduct of their intentions. It’s as simple as that. And that is if you assume they only reach intelligence equivalent to ours and never progress past that. 
But they will. All that it now requires is a way to combine all these individual elements of intelligence into a singular AI that is able to make use of them all. Vertically, they already exceed us in every area. Horizontally, they don’t come close. That is to say, their general intelligence lets them down in the broadness of its capability. Once they achieve this it is game over. In the blink of an eye a single machine has accumulated years of human-level knowledge, literally. The second our goals diverge, the AI is in control of our destiny. It’s evolve or face extinction. Killed by our own creation.
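The article’s headline figure can be sanity-checked with a few lines of arithmetic, taking its stated rate of two years of human-equivalent knowledge per minute at face value:

# Take the article's rate of "2 years of knowledge per minute" at face value.
minutes_in_six_months = 0.5 * 365.25 * 24 * 60  # about 262,980 minutes
years_accumulated = 2 * minutes_in_six_months   # two years gained per minute
print(f"{years_accumulated:,.0f} human-equivalent years")  # ~525,960, roughly the 500,000 cited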
Why AI Terrifies the World’s Greatest Minds and How it’s Inevitable Machines Take Over
101
why-ai-terrifies-the-worlds-greatest-minds-and-how-it-s-inevitable-machines-take-over-115de92de041
2018-08-03
2018-08-03 10:20:30
https://medium.com/s/story/why-ai-terrifies-the-worlds-greatest-minds-and-how-it-s-inevitable-machines-take-over-115de92de041
false
750
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Chris Herd
Founder @Nexves, Entrepreneur, Angel Investor, ICO/Blockchain Advisor
da7b665f3cc7
ChrisHerd
31,328
3,629
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-01
2018-02-01 08:28:15
2018-02-01
2018-02-01 08:29:20
1
false
en
2018-02-01
2018-02-01 08:29:20
5
115e68d62bcf
3.233962
0
0
0
What is a Chatbot?
4
Chatbots: Changing the Way Enterprises Communicate What is a Chatbot? A chatbot, or sometimes just a bot, is an intelligent chat system that can hold meaningful conversations with users. With the evolution of Natural Language Processing (NLP), chatbots became the next logical step for businesses to consider. Why a Chatbot for the Enterprise? A chatbot is available 24x7. You never miss that query from a customer and the opportunity to acquire or retain them. Your customers won’t have to read the message, “Our next available agent will get back to you within 24 business hours”. Rather, they get instant service. Bots can multitask and take the load off your human task force. Customers would prefer talking to someone rather than going through your endless FAQ section every time they have an issue or a query. While your agents are able to handle a few customers at a time, chatbots can handle many. These robo-advisors can suggest products or services as per your requirements and feasibility. As part of a shift in buying behavior, we see customers more armed with information before they buy. They are looking for easy access to the information that solves their problems, and chatbots will hold the key in the coming days. Where do You Use Chatbots? Chatbots converse with humans to provide service, and that’s exactly when and where you use them. Look for services in your organization whose processes and procedures change least while requiring the most human intervention. Start with the most repetitive tasks of the least complexity that can be automated; a trained bot can handle them going forward. By doing this, you effectively free your human task force to focus on more complex areas. Are you contemplating using them interactively, capturing customer information? You are thinking on the right track. Organizations have even found chatbots to be valuable in engaging and retaining talent. The most common mistake organizations make is to apply a chatbot across the entire service chosen for implementation. A chatbot is built on intelligence and natural language processing. This means that bots, like humans, need to be trained to understand user intentions, rules and procedures. Ideally, you should select a few use cases within the service which could benefit from using a chatbot and develop those. You need a fully conversational bot on a particular topic rather than a bot that talks but provides questionable responses. Add more use cases as you continuously train your bot. Chatbot: Behind the Scenes You will need to be aware of the different elements required to make a chatbot work. A chatbot has a mix of components that need to be put in place for it to be effective. We will need a communication channel where the dialogue/chat will take place, such as Skype, Facebook Messenger, Slack, etc. A component is needed to understand the user’s intention from their chats, using one of the plethora of frameworks available for natural language understanding. You can either develop a bot from scratch using bot development frameworks provided by Microsoft, Amazon, etc., or choose to go with an existing platform on which you will develop your bot, known as a bot platform. The bot will need access to relevant information from which it can provide the answers. Training the bot enables contextual understanding and responses. How to Get Started? Organizations have seen rapid adoption of chatbots in mainstream communication with customers. 
It is necessary to start quickly, and yet there is a need to see whether it is a viable solution before you take the plunge. The best way to start is to do a POC (Proof of Concept) that will help ascertain your understanding and put your ideas into implementation on a small scale, then scale up eventually. For a POC, the following steps can be considered: Discuss with your stakeholders: Initial understanding of the business and a specific service where the bot can be applied. Identifying the business problem for which the bot can be a suitable solution. Evaluating specific use cases for human interaction from the business problem. Defining success criteria for the POC. Based on the above discussion points, the service provider should: Define the goal of the chatbot. Design the conversation tree for the chatbot. Identify the chatbot architecture and the required bot development framework or bot platform. Train the chatbot in a particular area so that it can answer queries by itself. Deploy and integrate the chatbot with existing systems for rollout. Conclusion To summarize, a simple rule for identifying areas where chatbots can be applied is to look for aspects or factors which help you to: reduce cost for your organization, minimize human effort, and reduce the time taken to complete a service. If you have any questions on chatbots and how they can digitally transform your organization, you can reach out to me. Harsha Bindu Business Development Manager, RapidValue
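To make the intent/entity idea above concrete, here is a toy Python sketch of keyword-based intent matching and entity extraction. It is deliberately simplistic and not tied to any particular bot framework or platform; the intents, keywords, and order-id format are all invented, and a production bot would use a trained natural-language-understanding model instead.

import re

# Hypothetical intents and trigger keywords.
INTENTS = {
    "order_status": ["where is my order", "track", "shipping"],
    "refund": ["refund", "money back", "return"],
    "greeting": ["hello", "hi ", "hey"],
}

def classify(utterance):
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "fallback"  # hand off to a human agent

def extract_order_id(utterance):
    # Toy entity extraction: treat any run of 6+ digits as an order id.
    match = re.search(r"\b\d{6,}\b", utterance)
    return match.group() if match else None

print(classify("Hi, where is my order 123456?"))     # -> order_status
print(extract_order_id("where is my order 123456"))  # -> 123456

Even this toy version shows why the training step matters: every new phrasing a user invents needs either a new keyword or, in a real system, more labelled examples for the model.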
Chatbots: Changing the Way Enterprises Communicate
0
chatbots-changing-the-way-enterprises-communicate-115e68d62bcf
2018-05-03
2018-05-03 06:29:57
https://medium.com/s/story/chatbots-changing-the-way-enterprises-communicate-115e68d62bcf
false
804
null
null
null
null
null
null
null
null
null
Bots
bots
Bots
14,158
RapidValue Solutions
RapidValue is a leading provider of end-to-end mobility, Omni-channel, IoT, AI, RPA and cloud solutions to enterprises worldwide.
24233cee8f5f
rapidvalue
131
424
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-06
2018-07-06 05:07:47
2018-07-06
2018-07-06 05:11:19
1
false
en
2018-07-06
2018-07-06 05:11:19
1
115f941bcb57
1.449057
0
0
0
Since the invention of computers or machines, their behave to outfit various tasks went regarding growing exponentially. Humans have…
2
What are artificial intelligence techniques? Artificial Intelligence Since the invention of computers or machines, their capability to perform various tasks has grown exponentially. Humans have developed the power of computer systems in terms of their diverse working domains, their increasing speed, and their reducing size with respect to time. A branch of Computer Science named Artificial Intelligence pursues creating computers or machines that are as capable as human beings. What is Artificial Intelligence? According to the father of Artificial Intelligence, John McCarthy, it is "the science and engineering of making intelligent machines, especially intelligent computer programs." Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in the same manner that intelligent humans think. AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and act while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems. Philosophy of AI While exploiting the power of computer systems, human curiosity led us to wonder: can a machine think and behave as humans do? Thus, the development of AI started with the intention of creating similar intelligence in machines to that which we find in, and regard highly of, humans. Goals of AI To Create Expert Systems: systems which exhibit intelligent behaviour, learn, demonstrate, explain, and advise their users. To Implement Human Intelligence in Machines: creating systems that understand, think, learn, and behave like humans. What Contributes to AI? Artificial intelligence is a science and technology based upon disciplines such as Computer Science, Biology, Psychology, Linguistics, Mathematics, and Engineering. A major thrust of AI is in the development of computer functions associated with human intelligence, such as reasoning, learning, and problem solving. Out of these areas, one or multiple areas can contribute to building an intelligent system. Read More Decisive Artificial Intelligence
What are artificial intelligence techniques?
0
what-are-artificial-intelligence-techniques-115f941bcb57
2018-07-06
2018-07-06 05:11:19
https://medium.com/s/story/what-are-artificial-intelligence-techniques-115f941bcb57
false
331
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Decisive Artificial Intelligence
Who are we? A group of ambitious go-getters looking to change the world through technology. Read More Information About Decisive AI: https://www.decisive.ai/
fdeeacf377ca
DecisiveAI
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-20
2018-07-20 08:09:32
2018-07-20
2018-07-20 08:12:06
1
false
en
2018-07-20
2018-07-20 08:12:06
2
11604a86b32a
8.181132
4
0
0
This is a guest article by Miguel Davis from Macro Connect.
5
How to Use Data and Analytics to Help Schools Make Smart Decisions This is a guest article by Miguel Davis from Macro Connect. Schools now have more data than ever about their students. And, that should translate into making better decisions about curriculum and instruction. The trouble with realizing this optimal learning experience for students, it seems, is that educators need the time and direction to analyze this overabundance of data points. Advances in technology, standardized testing, student information systems, and instructional software have provided districts with an overload of information about learning trends, patterns, and student progress — many of which probably are never identified. Large amounts of information can easily get lost when those who need to sort through it don’t have a goal or outcome in mind. Mike Parker, assistant director of the Center for Accountability Solutions at the American Association of School Administrators, says, “We recommend not fishing blindly as you review data, if you don’t have a purpose in mind, it’s easy to get off track.” Set relevant goals and avoid organizational complexity In order to avoid data chaos, districts have begun creating and/or staffing positions like chief information officer, data specialist, and chief data officer to take the lead in distinguishing what their data means. The data trend in districts is only set to grow with increased connectivity and use of more and more devices inside the classroom. Unfortunately, too much of the data collection is done in the spirit of compliance rather than with the purpose of putting it to good use. Better use of school-collected data includes but isn’t limited to measuring and predicting student progress, program effectiveness, instructional effectiveness, allocating resources more wisely, and promoting accountability. Rather than school districts being data-driven, perhaps it’s best for them to be SMART, an homage to the common goal-setting acronym meaning specific, measurable, achievable, relevant, and time-based or words to that effect. As with goal-setting, the SMART use of data is most effective in schools when users follow a structured process with the following steps: Set a vision. Communicate the vision. Train all stakeholders. Have action/reaction in execution. The order in which these steps are completed is key to its success. Collecting and sifting through data just for the sole purpose of completing data entry tasks makes it tough to gain traction as well as see any considerable positive results. Define performance metrics to achieve expected results A CIO, superintendent, or administrator along with their committee should ask non-data related questions about areas where they feel they’re falling short or might improve upon considerably. In Data Analysis for Continuous School Improvement, author Victoria Bernhardt identified 7 main questions to ask in the initial stages of data analysis: What is the purpose of the school or district? What do you expect students to know and be able to do by the time they leave school? (standards) What do you expect students to know and be able to do by the end of each year? (benchmarks) How well will students be able to do what they want to do with the knowledge and skills they acquire by the time they leave school? (performance) Do you know why you are getting the results you get? What would your school and educational processes look like if your school was achieving its purpose, goals, and expectations for student learning? 
How do you want to use the data you will gather? It’s also important to identify how (if you don’t already have key performance indicators in place) you’re going to gather the best data to answer the questions you’re asking. Narrow the number of data points to the smallest amount possible to make data collection simple. Often this requires drilling down to leading performance metrics that drive end results in an area. A common trap is relying on test scores. Standardized tests are supposed to “act like a report card for the community, demonstrating how well local schools are performing…testing is the first step to improving schools, teaching practice and educational methods through data collection” (Using Data to Improve Schools: What’s Working). However, standardized tests aren’t frequent enough to measure the growth of individual students and are lagging indicators. By the time you get your results back, it’s too late to intervene. As Suzanne Bailey, a restructuring issues consultant at Tools for Schools, said, “If your data set is only test scores, you are doomed.” Data points that are collected far more than a few times a year will be the data points that can influence results. Some schools are finding success with assessment alternatives like project-based learning, and the use of platforms by students with adaptive interfaces and interactive dashboards. Systems that are updated regularly and are easy to access help build good practice around those specific data systems. Boost engagement through clear communication Notoriously, at the macro level, education is slow to adopt change. On a small scale, teachers are often fearful of change because they’re risk averse. “American education remains basically modeled on an approach hundreds of years old. Students with varying levels of ability sit in classes organized by grade level before a ‘sage on the stage’ who teaches reading, writing, arithmetic, and a bit of science. That system, at least in the US, doesn’t seem to work well enough. Among developed countries ranked by the Organization for Economic Cooperation and Development, the US is 31st in math achievement, 24th in science, and 21st in reading” (Sims). To combat organizational inertia, leadership must create the conditions for change to take root and thrive. Encouraging discussion, feedback, and criticism is a key factor in the communication step in the process. The most fruitful conversation surrounds the why: without a well-defined why, your initiative becomes another item on a checklist for your staff. Everyone involved in a district’s data management and metrics should be on the same page about the intended results as well as equally confident in the how aspect. It’s critical for the entire staff to buy into the plan presented before moving on to training. The how should include: the time frame, how progress will be evaluated, how it affects routine, commitments to one another, and what supports will be in place. For example, Mark Hess, Executive Director of Instruction, Technology, and Assessment from Walled Lake Consolidated School District, invited team members from all across the district to evaluate and make a case for piloting a usage analytics and SSO tool called ClassLink. By his own evaluation, he felt the data it would collect would be powerful for teachers and admins, but without staff buy-in he reasoned it would be underutilized. Provide training for end users Training is essential for school staff and administration just as it is important in a large corporation. 
Adopting a new model or system for data analysis will require new habits from employees and a deep understanding of the vision at hand. Asking employees to follow a new process will only be successful if they feel empowered, supported, and motivated to carry on when you’re not watching. Training should also be an ongoing effort rather than a one-time session or introduction. Brian Benzel, a school superintendent from Spokane, Washington, was able to summarize his findings on using data in schools with the following points: Start small, don’t overwhelm staff with a data dump. Begin with the core issues, such as student achievement markers in a single subject area. Listen to what the data tells you about the big picture; don’t get lost in too many details. Work to create trust and build support by laying data on the table without fear of recrimination by staff. Provide training opportunities for staff on how to use data. Be patient, working with what is possible in the district. Monitor the new process and highlight positive results Regular check-ins should serve to encourage accountability as well as prevent the process from shifting off track. If something isn’t going as planned, a quick intervention is necessary to avert poor or even inaccurate results. Remember that you chose data points that are collected regularly — so bad behaviors shouldn’t hang around long enough to become habits. Though the new system might not produce positive results immediately, honor the progress through benchmarks and highlight district wins. People respond best to positive reinforcement and proof that what they’re doing is working. The monitoring process should not only look for the positives, however, but also weed out the selected indicators that might not actually be improving results. If some are identified, there’s nothing wrong with tweaking the process for continuous expansion and development. In practice: making data-driven decisions To illustrate the entire process, let’s dig into a practical example. Many a district has attempted to move the needle on student achievement by improving parental engagement, but fallen flat in execution. Often parent-facing initiatives are doomed to fail from the vision-setting stage because the engagement metrics are misaligned with the desired end result. If the goal is student achievement, then we need to select performance indicators that capture behaviors that most directly influence student achievement. Compare: The number of parent volunteer hours per child given to the school. The number of parent logins per child to the online gradebook. The first metric is a strong sign of parent engagement but may be a step removed from a guardian’s involvement in monitoring and supporting a student’s academic activities. Additionally, in practice, volunteer programs create some limiting factors for data collection, i.e. there are only so many opportunities to volunteer, volunteer opportunities may be seasonal in nature, etc. On the other hand, logins to a grading system create an opportunity to set an ongoing expectation for parental interaction with a student’s achievement. For example, Valley Christian Junior High (VCJH) asks parents to check their students’ PowerSchool accounts once a week. Each login is reasonable verification that a parent is staying up to date on how their child is progressing, what he/she has been working on, and what might be coming up. In communicating the vision, it’s important not to shy away from how routines and norms will change. 
For the parent-logins metric to be an effective one, the data should be up to date. Imagine being a parent who logs in and sees that no grades have been entered for 3 weeks! It’s pretty unlikely that logging in will become a weekly ritual if a parent isn’t confident that new data will be waiting for them. That’s why buy-in is so critical at this stage and an effective leader will communicate why and how with input from their teams. After some discussion, the how agreed upon for VCJH was simple: Have the previous week’s assignments entered by 5 pm every Monday. A clear-cut, hard and fast deadline like this makes compliance easy to monitor, plus it is much easier to train end users and create support structures around a recurring event. VCJH can find and assist any teachers who are struggling to input data correctly or on time. When firing on all cylinders, this means all teachers, administrators, and parents are using up-to-date data to drive instruction and intervention at all levels. You can read the full case study on VCJH’s results using PowerSchool for parental engagement by clicking here. Data alone can be more costly than valuable to an organization without the right plan in place. Without data fluency and data discipline, the KPIs as well as the positions that manage them are simply counterproductive. Managing and using that data to drive measurable results will be differentiators within school systems sooner rather than later. If you’re a school official with any access to data projects, consider where past or existing initiatives missed the mark. How can the current data system reach its untapped potential? How will a fresh vision lead to better results? Establishing a vision, communicating effectively, training staff, and then implementing and monitoring the system is a process to be done in sequential order for an optimal outcome. Miguel Davis is the Digital Learning Manager and Client Solutions Director at Macro Connect, a Detroit-based education technology solutions agency. His role is to support schools and businesses in achieving breakthrough performance through technology. Miguel is a former classroom teacher, district technology coach, and currently a board member of Playworks, a non-profit organization focused on improved student performance through recess and healthy play. Want to write an article for our blog? Read our requirements and guidelines to become a contributor. Originally published at AltexSoft’s blog “How to Use Data and Analytics to Help Schools Make Smart Decisions”
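As a small illustration of the logins metric discussed in the example, a pandas sketch might look like the following; the table and column names are invented for illustration and are not PowerSchool’s actual export format.

import pandas as pd

# Invented login events: one row per parent login, keyed by student.
logins = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3],
    "login_date": pd.to_datetime(["2018-09-03", "2018-09-10", "2018-09-03",
                                  "2018-09-17", "2018-09-10"]),
})

# Count logins per student per week; one or more per week meets the
# once-a-week expectation set for parents.
week = logins["login_date"].dt.to_period("W")
print(logins.groupby(["student_id", week]).size())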
How to Use Data and Analytics to Help Schools Make Smart Decisions
4
how-to-use-data-and-analytics-to-help-schools-make-smart-decisions-11604a86b32a
2018-07-20
2018-07-20 08:12:07
https://medium.com/s/story/how-to-use-data-and-analytics-to-help-schools-make-smart-decisions-11604a86b32a
false
2,115
null
null
null
null
null
null
null
null
null
Education
education
Education
211,342
AltexSoft Inc
Being a Technology & Solution Consulting company, AltexSoft co-builds technology products to help companies accelerate growth.
fb641da4b895
AltexSoft
572
13
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-04
2017-11-04 10:43:44
2017-11-04
2017-11-04 10:43:44
1
false
en
2017-11-04
2017-11-04 10:43:44
1
1160a00da37d
2.067925
0
0
0
null
5
4 reasons why you should use big data for sales prospecting With the continuous progress in data science, there is a huge opportunity for sales managers to gain new insights into how to increase sales and keep customers satisfied so that they return again and again. But one obstacle many organizations face is learning how to convert ‘big data’ into usable data. The availability of data is not enough; knowing how to use it is crucial. Businesses need to learn to extract actionable data and use it in the right way to increase sales prospects. According to recent research by IBM, only 5% of businesses in the UK are using big data to their advantage, and almost 70% of businesses are only operating within the first two stages, i.e. educating and exploring, while missing out on engaging and executing. Another eye-opening statistic states that 41% of businesses lack an understanding of how to use big data effectively, which is why they choose not to engage with it. You therefore need to learn how to use big data efficiently in order to increase the effectiveness of your marketing and sales teams. The process starts with collecting and consolidating data. Sales leaders also need to learn tools that will help them transform the data available into intelligent sales strategies. With more than 41% of businesses lacking analysis and understanding, it is crucial to learn how to analyze and translate big data, so that the bottom line reflects that. Here are 4 reasons that show how big data can take your sales to the next level: One of the greatest benefits of analyzing big data is that it can help you gain invaluable insights about how users feel about your product or service. By identifying the frequently bought services or products, you can link big data with social data to help you understand any barriers to increasing overall sales. This can help you tap new markets, gain access to a bigger target audience and thus provide your business with new leads. By using the results of social data intelligently, businesses can increase their prospects of ‘social selling‘. Marketing teams can offer their ideal customer profile brand new content of high quality, thus taking audience engagement to a whole new level. Data that is obtained from social networks can be used in ‘recommender systems‘. One of the best examples of a recommender system is Amazon, which provides a customized homepage to each user according to their profile and interaction history. This can help generate repeat sales, hence increasing overall sales by leaps and bounds. This is only possible if a record of repeat sales is kept for each user in the database. With so many effective sales tools in the market and available online, it has become really easy to review sales reports. Marketers don’t need to overburden themselves with updating and maintaining complex spreadsheets anymore. Posted on 7wData.be.
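To make the recommender-system idea concrete, here is a toy item co-occurrence sketch in Python. The purchase data is invented, and real systems such as Amazon’s are vastly more sophisticated; this only illustrates the core intuition that overlapping purchase histories suggest what to recommend next.

from collections import Counter

# Invented purchase histories.
purchases = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob": {"laptop", "mouse"},
    "carol": {"mouse", "monitor"},
}

def recommend(user, k=2):
    owned = purchases[user]
    counts = Counter()
    for other, items in purchases.items():
        if other != user and owned & items:  # users with overlapping history
            counts.update(items - owned)     # their items this user lacks
    return [item for item, _ in counts.most_common(k)]

print(recommend("bob"))  # e.g. ['keyboard', 'monitor']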
4 reasons why you should use big data for sales prospecting
0
4-reasons-why-you-should-use-big-data-for-sales-prospecting-1160a00da37d
2017-11-04
2017-11-04 10:43:46
https://medium.com/s/story/4-reasons-why-you-should-use-big-data-for-sales-prospecting-1160a00da37d
false
495
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Yves Mulkers
BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world
1335786e6357
YvesMulkers
17,594
8,294
20,181,104
null
null
null
null
null
null
0
null
0
a9221aa347a8
2017-12-15
2017-12-15 23:55:17
2017-12-15
2017-12-15 23:59:51
1
false
en
2017-12-16
2017-12-16 00:30:22
5
1161920db240
1.958491
4
0
0
It has been 10 years since the iPhone launch. iPhone introduced the seminal user interface — touch, the next evolution from Stylus, Mouse…
5
The World’s First AI Powered Voice Control It has been 10 years since the iPhone launch. The iPhone introduced the seminal user interface — touch, the next evolution from stylus, mouse, and keyboard. We are introducing voice control to Synqq, the next evolution from touch. Use your voice to do everything you do with the Synqq mobile application. Check out the Synqq TechCrunch announcement. In the past few years, speech recognition has evolved from obscurity to the mainstream. Amazon made the first breakthrough with its stay-at-home Echo, enabling users to buy anything from Amazon and extend their home control with natural language commands. Google followed suit with Google Home. Apple has introduced an exciting new mode of hands-free operation with its AirPods and recently became the largest headset vendor. Apple, Google, and Amazon are accelerating the voice era for consumers. Now, with voice, Synqq enables users to issue a set of commands to navigate the mobile application and create, share, or consume the content they need at work. The Synqq platform understands natural language requests such as “open people”, “find Bob”, “add a note,” “share a note,” “create a meeting,” and “when was my last meeting with Bob?” Synqq is also self-learning: its understanding of what the user is looking for grows each time it is used. Synqq’s handling of voice and Natural Language Processing (NLP) automatically recognizes names, entities, locations, and content based on prior interactions, making it an eminently effective tool for responding to commands, filling out forms, and locating salient information. We handle voice in all situations: from your phone, your computer, or online conferences. We take the speech from the user, run it through Google’s Automatic Speech Recognition (ASR) service, and apply our NLP to improve the accuracy of the transcribed text and determine the intent and entities. For example, if you need to have a follow-up meeting, you merely need to issue the command “Create event TechCrunch follow up with Ramu, Nelly, and Michael tomorrow at 10:00 AM for an hour”. Synqq’s voice NLP understands the names of the people involved (Ramu, Nelly, and Michael) and picks out the pertinent facts required (title, time, and location) to create a new meeting and send out the calendar invites. The Synqq voice NLP learns from each such encounter and delivers to the user what he or she needs to succeed. These learnings are also cumulative within a domain. The Synqq Voice Assistant uses AI to give business users voice access to daily activities and to the sharing and retrieval of short-form content; Synqq is the first company to demonstrate voice as a primary interface for its principal functions. Download Synqq and start using your voice for your daily business interactions. We would be delighted to hear from you.
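To illustrate the kind of intent and entity extraction described above, here is a toy regex-based sketch built around the article’s example utterance. It is a stand-in for illustration only, not Synqq’s actual NLP, which generalises far beyond a single hand-written pattern.

import re

# Covers only the article's example phrasing; the names and time format come
# from the example above, everything else is an assumption.
CMD = re.compile(r"create event (?P<title>.+?) with (?P<people>.+?) "
                 r"(?P<when>tomorrow at [\d:]+\s*[AP]M)", re.IGNORECASE)

def parse(utterance):
    m = CMD.search(utterance)
    if not m:
        return {"intent": "unknown"}
    attendees = re.split(r",\s*(?:and\s+)?|\s+and\s+", m.group("people"))
    return {"intent": "create_event", "title": m.group("title"),
            "attendees": attendees, "when": m.group("when")}

print(parse("Create event TechCrunch follow up with Ramu, Nelly, and Michael "
            "tomorrow at 10:00 AM for an hour"))
# -> {'intent': 'create_event', 'title': 'TechCrunch follow up',
#     'attendees': ['Ramu', 'Nelly', 'Michael'], 'when': 'tomorrow at 10:00 AM'}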
The World’s First AI Powered Voice Control
105
the-worlds-first-ai-powered-voice-control-1161920db240
2017-12-20
2017-12-20 17:05:38
https://medium.com/s/story/the-worlds-first-ai-powered-voice-control-1161920db240
false
466
Voice AI for your Application
null
alanvoiceai
null
Alan
alanvoiceai
ARTIFICIAL INTELLIGENCE,VOICE ASSISTANT,VOICE RECOGNITION,MACHINE LEARNING,FUTURE OF WORK
alanvoiceai
Cloud Computing
cloud-computing
Cloud Computing
22,811
Andrey Ryabov
Co-Founder, CTO of Alan
b711a32e9ad4
andreyryabov
29
31
20,181,104
null
null
null
null
null
null
0
null
0
d04f50fbda4e
2018-08-27
2018-08-27 12:23:11
2018-08-13
2018-08-13 00:00:00
1
false
en
2018-08-27
2018-08-27 12:24:18
1
1163ecf2e0c6
0.524528
0
0
0
The software industry is becoming obsessed with artificial intelligence. It is spreading into every new product and service that is…
5
Powerful Statistics About Where Artificial Intelligence is Making a Huge Impact [Infographic] The software industry is becoming obsessed with artificial intelligence. It is spreading into every new product and service that is released. AI products are especially popular among consumers, who love the thought of a product anticipating and reacting to their needs. Check out this helpful infographic for interesting facts about where artificial intelligence is headed. Originally published at https://www.callstats.io/2018/08/13/powerful-statistics-about-where-artificial-intelligence-is-making-a-huge-impact-infographic/ on August 13, 2018.
Powerful Statistics About Where Artificial Intelligence is Making a Huge Impact [Infographic]
0
powerful-statistics-about-where-artificial-intelligence-is-making-a-huge-impact-infographic-1163ecf2e0c6
2018-08-27
2018-08-27 12:24:18
https://medium.com/s/story/powerful-statistics-about-where-artificial-intelligence-is-making-a-huge-impact-infographic-1163ecf2e0c6
false
86
Articles on interesting real-time communication topics and challenges. Handmade in Helsinki.
null
callstats.io
null
Real-time Communication - The callstats.io Blog
callstatsio
WEBRTC,REAL TIME ANALYTICS,VIDEO CONFERENCING,CHAT,COMMUNICATION
callstatsio
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
callstats.io
Articles on interesting real-time communication topics and challenges. Handmade in Helsinki and the rest of the world.
910e2421441
callstatsio
191
244
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-19
2017-09-19 22:28:01
2017-09-19
2017-09-19 22:33:42
3
false
en
2017-09-21
2017-09-21 00:17:56
5
1164a2bf58b2
4.191509
0
0
0
By Eric Taylor, Senior Data Scientist, CircleUp
5
What Makes a New Beverage Brand Win By Eric Taylor, Senior Data Scientist, CircleUp The beverage sector is currently more dynamic than ever. Thanks to global trends including growing urbanization and increased disposable income, the beverage market worldwide is estimated to reach a value of $1.9 trillion by 2021. Much of the sector’s growth is being propelled by independent brands creating the small-batch and better-for-you offerings that today’s consumers crave: the market for organic fresh juices and drinks, for example, grew by a whopping 33.5% in 2015, and small and independent craft brewers more than doubled their market share from 2011 to 2016. In this increasingly competitive market, what makes a beverage brand win? To find out, we at CircleUp pulled some data from Helio, our proprietary machine learning platform for identifying and analyzing promising consumer product brands. We ran several analyses in Helio to identify which qualities in U.S.-based beverage brands correlate most strongly with revenue growth and positive consumer sentiment (which is interpreted from product reviews and other sources). Here are a few insights that turned up. Unique Ingredients Matter As much as we humans like our routines, there’s also something to be said for trying new things. And an appetite for novelty certainly seems to influence U.S. consumers’ response to beverages. Our analysis indicated that the majority of products with very low ratings had highly typical ingredient profiles compared with other products in the beverage sector. Conversely, the majority of products with very high ratings had highly distinctive ingredients. In the Early Days, Team is Everything Next, we examined what had a stronger correlation to revenue growth for a beverage business: the strength of its brand, or the quality of its executive team. Helio creates a “brand score” and a “team score” for each company it evaluates by examining billions of data points covering a wide range of information, taking into account things such as a brand’s social media presence and the prior experience of its leadership team. Here, we noticed an intriguing effect: overall, brand strength was a stronger predictor of revenue growth than team strength, but the relative impact of team strength was larger for companies founded in the past 4 years than for older companies. Specifically, the impact of team strength was twice as large for the newer companies compared to older companies. There are a couple of factors that might be at play here. One is that in the beginning, the success or failure of a company is closely linked to its people: the founding team is responsible for everything in the early days. When a beverage company has become more established and is further into its lifetime, the executive team may shift into more of a maintenance mode. Success going forward from there is more dependent on the brand’s ability to stay interesting and relevant. Another possibility is that in decades past, having a slick brand and the wherewithal to invest in advertising made a big difference. Now that technology has lowered the barrier to entry for curating a polished image and promoting it through social media, it may be a given that a new beverage company must have a strong brand to be a contender. So today, what really sets a new company apart from the pack is the wit and tenacity of its founding team.
The SKU Danger Zone Another issue we examined is how the breadth of a beverage brand’s product offerings seems to impact its appeal to consumers. It turns out that there are a couple of sweet spots — and one dangerous area through which brands must navigate very carefully as they grow. Beverage companies seem to have higher consumer ratings on average when they have either fewer than 10 SKUs (a stock keeping unit is a unique identifier for a single product) or more than 20 SKUs. Companies with between 11 and 20 SKUs tend to lag behind in average consumer sentiment. Intuitively, this makes sense. When a company is just launching, it’s naturally going to put its best foot forward. After all, you wouldn’t go to market with a mediocre product. As a company grows and tries new things, there are bound to be more mixed reactions — quality may slip, or reviews may simply be more varied as new flavors come into play. But if a company can push past these growing pains, its consumer appeal perks back up. This trend may indicate some survivorship bias: the companies that falter as they scale may unfortunately go out of business, while those that survive are truly making products that people love — which is something that all entrepreneurs should strive for. The bottom line As always with Helio data, it’s important to stress that these are only a few trends we’re seeing, not fool-proof prescriptions for product success or failure. But there are some interesting takeaways here for upstart brands in the beverage industry and beyond: Consumers respond to products that are unique, so entrepreneurs shouldn’t shy away from mixing it up by offering new things. In the early days, don’t underestimate the importance of the founding team’s quality and chemistry — if something is not right, fix it as soon as possible. Having a small dip in consumer sentiment is natural as you expand your product lineup beyond your flagship offerings. But if you can successfully navigate through the growing pains and maintain your brand’s overall quality, things may start to look up as your reach gets bigger. The views and opinions expressed in this post are not necessarily those of CircleUp or its affiliates. Data from Helio is for illustrative purposes only. It is not meant as an indicator of future company performance or as a measure of suitability for investment.
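To make the age-cohort comparison above concrete, here is a minimal pandas sketch of how one might split companies by founding age and compare each score's correlation with growth. The DataFrame and every column name are hypothetical stand-ins, not CircleUp's Helio data or code.

import pandas as pd

# Toy, made-up scores standing in for Helio-style brand/team scores
df = pd.DataFrame({
    "brand_score":    [72, 55, 81, 64, 90, 47],
    "team_score":     [60, 70, 85, 52, 78, 66],
    "years_old":      [2, 9, 3, 12, 1, 8],
    "revenue_growth": [0.42, 0.11, 0.55, 0.08, 0.61, 0.15],
})

# "Founded in the past 4 years" vs. established companies
df["is_new"] = df["years_old"] <= 4

# Correlation of each score with growth, split by age cohort
for cohort, group in df.groupby("is_new"):
    label = "new (<=4 yrs)" if cohort else "established"
    print(label,
          "brand:", round(group["brand_score"].corr(group["revenue_growth"]), 2),
          "team:", round(group["team_score"].corr(group["revenue_growth"]), 2))

On real data, a larger team-score correlation in the young cohort than in the established one would be the pattern the article describes.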
What Makes a New Beverage Brand Win
0
what-makes-a-new-beverage-brand-win-1164a2bf58b2
2018-04-11
2018-04-11 05:55:23
https://medium.com/s/story/what-makes-a-new-beverage-brand-win-1164a2bf58b2
false
965
null
null
null
null
null
null
null
null
null
Food
food
Food
110,550
CircleUp
Providing capital and resources to early-stage consumer and retail brands
4db8b4cf32a2
circleup
2,010
343
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-25
2018-04-25 17:12:27
2018-04-25
2018-04-25 17:16:51
1
false
en
2018-04-25
2018-04-25 17:16:51
0
1164a4861005
2.739623
0
0
0
In many respects, Artificial Intelligence (AI), a capability that allows a computer or machine to reason and perform tasks that typically…
5
The Next Industrial Revolution is Being Driven by Artificial Intelligence In many respects, Artificial Intelligence (AI), a capability that allows a computer or machine to reason and perform tasks that typically require human intelligence, is only in its infancy, with its impact just beginning to be realized. As was the case with previous general-purpose technology revolutions, such as the development of steam and combustion engines, electricity, and the silicon chip, AI is a disruptive technological event that is sending ripples across every aspect of modern life. The greatest challenge for every business is the ability to keep up with advancing technologies. The driving factor behind the long-prophesied arrival of AI is the increased ability of cloud computing to tap into vast quantities of both structured and unstructured data in order to solve complex problems that are generally associated with human expertise. Consider this: in 2013, the digital universe encompassed 4.4 zettabytes; by 2020 the digital universe — the data we create and copy annually — will reach 44 zettabytes, or 44 trillion gigabytes. We need AI just to keep track of the data we are amassing. However, AI is more than just a storage platform; the real innovation lies in harnessing and extrapolating that data to create smarter algorithms that solve specific problems. The data analysis being generated, even at this early stage of AI, is providing accurate and actionable insights gleaned from thousands upon thousands of public and private sources that no human can tackle alone. Today, AI is dramatically impacting industries across the board: ● In health care, virtual AI assistants can quickly and efficiently scan patient records to diagnose illness, create custom treatments and monitor outcomes in order to greatly improve care management. ● Businesses are using AI chatbots to interact with consumers to provide personalized customer service solutions and, more impressively, develop proactive customer engagement in real time to maximize service and product outreach. ● Fund managers and stock investors around the world are using AI to sift through massive amounts of data in a way that extends far beyond human capabilities, creating investment strategies that deliver returns that outperform basic indexes and human analytics. Startups and technology providers are rushing to create platforms and targeted niche solutions for specific enterprise challenges, as reflected in the fact that the number of active U.S. startups developing AI systems has increased 14x since 2000. The share of jobs requiring AI skills in the US has grown 4.5x since 2013. In addition, in its 2016 Artificial Intelligence Market Forecasts, the online research and consulting company Tractica projects that AI will be a $37 billion industry by 2025. Now is an ideal time to begin identifying the key AI players in your industry. These are exciting times. Kevin Kimberlin, Chairman of the advanced technology and business development firm Spencer Trask & Co. and co-founder of Trensant, the AI-powered data analytics platform, refers to AI as the “Internet of Ecosystems.” Kimberlin believes that AI will become bigger than the Internet, the Internet of Things, or Big Data alone. “AI has the ability to connect any ecosystem in any industry, from the early phases of development all way through to consumer delivery and enable businesses to work better,” said Kimberlin.
Don’t be complacent. Just because you can’t see AI doesn’t mean it isn’t there, working out of sight. There is actually a phenomenon called “the AI Effect,” wherein a person incorporates AI into various aspects of daily life without noticing it. Today we must pose the question to Siri or Echo; tomorrow, these “personal assistants” will anticipate the question. Tasks with well-defined rules, such as driving, will be performed by AI-driven technology. A future with AI means businesses will engage with every individual customer in a personalized and meaningful way. Spencer Trask & Co. is an advanced technology development company. The firm works with entrepreneurs, CEOs and corporate partners to start and grow high impact ventures.
The Next Industrial Revolution is Being Driven by Artificial Intelligence
0
the-next-industrial-revolution-is-being-driven-by-artificial-intelligence-1164a4861005
2018-04-25
2018-04-25 17:16:52
https://medium.com/s/story/the-next-industrial-revolution-is-being-driven-by-artificial-intelligence-1164a4861005
false
673
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Spencer Trask
null
ce4bc1cce0a7
SpencerTrask
1
12
20,181,104
null
null
null
null
null
null
0
# In my case, I was using Keras to build the models with TensorFlow backend with GPU support
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.decomposition import PCA
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Used TA-Lib for creating additional features. More on this later.
from talib.abstract import *
from talib import MA_Type

TARGET = 'USD_CAD'
granularity = 'H1'
LOOK_BACK = 20
SPLIT = 0.99  # data split ratio for training and testing

DATA.head()  # DATA is the OHLC DataFrame fetched from the broker (loading cells not shown)

# Compute various features that were not available in the raw data
DATA['hour'] = DATA.index.hour
DATA['day'] = DATA.index.weekday
DATA['week'] = DATA.index.week
DATA['volume'] = pd.to_numeric(DATA['volume'])
DATA['close'] = pd.to_numeric(DATA['close'])
DATA['open'] = pd.to_numeric(DATA['open'])
DATA['momentum'] = DATA['volume'] * (DATA['open'] - DATA['close'])
DATA['avg_price'] = (DATA['low'] + DATA['high']) / 2
DATA['range'] = DATA['high'] - DATA['low']
DATA['ohlc_price'] = (DATA['low'] + DATA['high'] + DATA['open'] + DATA['close']) / 4
DATA['oc_diff'] = DATA['open'] - DATA['close']
DATA['spread_open'] = DATA['ask_open'] - DATA['bid_open']
DATA['spread_close'] = DATA['ask_close'] - DATA['bid_close']

inputs = {
    'open': DATA['open'].values,
    'high': DATA['high'].values,
    'low': DATA['low'].values,
    'close': DATA['close'].values,
    'volume': DATA['volume'].values
}
DATA['ema'] = MA(inputs, timeperiod=15, matype=MA_Type.T3)
DATA['bear_power'] = DATA['low'] - DATA['ema']
DATA['bull_power'] = DATA['high'] - DATA['ema']

# Since computing the EMA leaves some rows empty, we remove them (EMA is a lagging indicator)
DATA.dropna(inplace=True)

# Add a 1D PCA vector as a feature as well.
# This helped increase the accuracy by adding more variance to the feature set
pca_input = DATA.drop('close', axis=1).copy()  # axis=1 is needed to drop a column
pca_features = pca_input.columns.tolist()
pca = PCA(n_components=1)
DATA['pca'] = pca.fit_transform(pca_input.values.astype('float32'))

t = DATA[['close', 'bull_power', 'bear_power']].copy()
t['pct_change'] = t['close'].pct_change()
t.dropna(inplace=True)
corr = t.corr()
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
f, ax = plt.subplots(figsize=(5, 5))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap, ax=ax)

import matplotlib.colors as colors
import matplotlib.cm as cm
import pylab

norm = colors.Normalize(DATA['high'].values.min(), DATA['high'].values.max())
color = cm.viridis(norm(DATA['high'].values))
for col in DATA.columns.tolist():
    if col != 'pca':
        plt.figure(figsize=(10, 5))
        plt.scatter(DATA[col].values, DATA['pca'].values, lw=0, c=color,
                    cmap=pylab.cm.cool, alpha=0.3, s=1)
        plt.title(col + ' vs pca')
        plt.show()

def create_dataset(dataset, look_back=10):
    # Slide a window of `look_back` rows over the data: X is the window, y the next row
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back)]
        dataX.append(a)
        dataY.append(dataset[i + look_back])
    return np.array(dataX), np.array(dataY)

# Create scalers
scaler = MinMaxScaler()
scaled = pd.DataFrame(scaler.fit_transform(DATA), columns=DATA.columns)
x_scaler = MinMaxScaler(feature_range=(0, 1))
x_scaler = x_scaler.fit(DATA.drop('high', axis=1).values.astype('float32'))
y_scaler = MinMaxScaler(feature_range=(0, 1))
y_scaler = y_scaler.fit(DATA['high'].values.astype('float32').reshape(-1, 1))  # scalers expect 2D input

# Create dataset
target_index = scaled.columns.tolist().index('high')
dataset = scaled.values.astype('float32')
X, y = create_dataset(dataset, look_back=LOOK_BACK)
y = y[:, target_index]
train_size = int(len(X) * SPLIT)
trainX, trainY = X[:train_size], y[:train_size]
testX, testY = X[train_size:], y[train_size:]

model = Sequential()
model.add(LSTM(20, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(LSTM(20, return_sequences=True))
model.add(LSTM(10, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(4, return_sequences=False))
model.add(Dense(4, kernel_initializer='uniform', activation='relu'))  # `init=` is the old Keras 1 spelling
model.add(Dense(1, kernel_initializer='uniform', activation='relu'))

from keras.callbacks import ModelCheckpoint
from keras.callbacks import TensorBoard

model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])
filepath = "weights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_mean_squared_error', verbose=1,
                             save_best_only=True, mode='min')
# Enable these lines if you want to monitor the training progress via TensorBoard
# from time import time
# tensorboard = TensorBoard(log_dir="logs/{}".format(time()))
callbacks_list = [checkpoint]
history = model.fit(trainX, trainY, epochs=100, batch_size=500,
                    callbacks=callbacks_list, validation_split=0.1)

from keras.callbacks import LearningRateScheduler
import keras.backend as K

def scheduler(epoch):
    # Decay the learning rate by 10% every 10 epochs
    if epoch % 10 == 0 and epoch != 0:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr * .9)
        print("lr changed to {}".format(lr * .9))
    return K.get_value(model.optimizer.lr)

lr_decay = LearningRateScheduler(scheduler)
callbacks_list = [checkpoint, lr_decay]  # add `tensorboard` here if enabled above
history = model.fit(trainX, trainY, epochs=100, batch_size=500,
                    callbacks=callbacks_list, validation_split=0.1)

# `unscaled_pred`, `pred_count` and `label` come from prediction cells not shown here
predictions = pd.DataFrame()
predictions['predicted'] = pd.Series(np.reshape(unscaled_pred, (unscaled_pred.shape[0])))
predictions['actual'] = testY[-pred_count:]
predictions['actual'] = predictions['actual'].shift(2)
predictions['actual_pure'] = DATA[label].values[-pred_count:]
predictions['low'] = DATA['low'].values[-pred_count:]
predictions['high'] = DATA['high'].values[-pred_count:]
predictions['timestamp'] = DATA.index[-pred_count:]
predictions['actual_pure'] = predictions['actual_pure'].shift(2)
predictions['low'] = predictions['low'].shift(2)
predictions['high'] = predictions['high'].shift(2)
predictions.dropna(inplace=True)

p = predictions.reset_index().copy()
ax = p.plot(x='index', y='predicted', c='red', figsize=(40, 10))
ax = p.plot(x='index', y='actual', c='white', figsize=(40, 10), ax=ax)
ax = p.plot(x='index', y='actual_pure', c='green', figsize=(40, 10), ax=ax)
plt.fill_between(x='index', y1='low', y2='high', data=p, alpha=0.4)

# Zoom into the first 200 of the test set
p = predictions[:200].reset_index().copy()
ax = p.plot(x='index', y='predicted', c='red', figsize=(40, 10))
ax = p.plot(x='index', y='actual', c='white', figsize=(40, 10), ax=ax)
ax = p.plot(x='index', y='actual_pure', c='green', figsize=(40, 10), ax=ax)
plt.fill_between(x='index', y1='low', y2='high', data=p, alpha=0.4)
plt.title('zoomed, first 200')
plt.show()

# Zoom into the last 200 of the test set
p = predictions[-200:].reset_index().copy()
ax = p.plot(x='index', y='predicted', c='red', figsize=(40, 10))
ax = p.plot(x='index', y='actual', c='white', figsize=(40, 10), ax=ax)
ax = p.plot(x='index', y='actual_pure', c='green', figsize=(40, 10), ax=ax)
plt.fill_between(x='index', y1='low', y2='high', data=p, alpha=0.4)
plt.title('zoomed, last 200')

# Distribution of difference between actual and prediction
predictions['diff'] = predictions['predicted'] - predictions['actual']
sns.distplot(predictions['diff'])
plt.title(TARGET + ' Distribution of difference between actual and prediction')
g = sns.jointplot("diff", "predicted", data=predictions, kind="kde", space=0)
plt.title(TARGET + ' Distribution of error and price')

# Correctly predicted prices: did the prediction fall inside the candle's high/low range?
predictions['correct'] = (predictions['predicted'] <= predictions['high']) & \
                         (predictions['predicted'] >= predictions['low'])
sns.factorplot(data=predictions, x='correct', kind='count')

predictions['diff'].describe()
# Output
# count    1333.000000
# mean        0.000048
# std         0.000205
# min        -0.001998
# 25%        -0.000064
# 50%         0.000046
# 75%         0.000154
# max         0.001255
# Name: diff, dtype: float64

print("MSE : ", mean_squared_error(predictions['predicted'].values, predictions['actual'].values))
# Output: MSE : 4.41091e-08
36
null
2018-04-23
2018-04-23 01:52:47
2018-04-24
2018-04-24 19:15:37
13
false
en
2018-05-26
2018-05-26 20:58:20
11
116554a424b5
7.192453
9
1
0
Since I spend most of my working hours on motion sensor and image data, I thought it’d be fun to dig around some other interesting areas to…
5
Forex USDCAD Price Predict Every 15 minutes Since I spend most of my working hours on motion sensor and image data, I thought it’d be fun to dig around some other interesting areas to apply my skills. Then I thought: “Hey, there are tons of easily accessible data around stocks and foreign exchange rates. Let’s see if I can make something useful out of them.” I’ve never had any training in the financial sector, other than learning how to file my taxes (but I gave up learning that after a few hours and decided to delegate the task to my accountant). So, the following is the result I’ve achieved through some trial and error over a few weekends: Photo by Christine Roy on Unsplash I am a firm believer in the KISS (Keep it simple, stupid) principle. I’ve tried my best to keep the code short and sweet. If anything is unclear, let me know and I will try my best to explain it. Outlines Obtaining Data Feature Exploration Shaping Data Training Testing Postmortem Obtaining Data The first thing I did was obtain the data from Oanda. There is a Python library that encapsulates Oanda’s REST API v2 and was easy to deal with. For my model, I used historical USDCAD rates since Jan 1, 2012 (about six years’ worth), which I think was a decent size. Let the coding begin First, import libraries. Configuration variables: Below is what the data looks like: Feature Exploration Adding more features helped improve the accuracy. Also, adding hour, day, and week helped mitigate seasonality. Note: I used TA-Lib to compute the EMA, then computed Bears Power and Bulls Power as features. What my data looked like after adding all the features: To verify that bull_power and bear_power add value by creating more variance in the feature set, we check the correlation heat map between these features: Correlation plot. pct_change (percentage change in CLOSE price) is the row we want to check. Okay, there was some degree of correlation, which can be useful. Let’s check the PCA feature vector: There seemed to be some degree of separation in the PCA plot. It wouldn’t hurt to utilize it. Overall correlation pair plots (this was pretty useless and took a very long time to generate, but I thought it looked cool) Shaping Data Note: I based the code below on machinelearningmastery.com. Seriously good stuff there. Function that converts a Pandas DataFrame into an LSTM-friendly format: Scale, reshape, and group the data into training and testing sets. Training Now, let’s create a relatively small LSTM network to do our prediction. Compile the model, make sure to save the best weights found during the training process, and let the training begin! Once the initial training is done, you end up with fairly sub-optimal weights. In my case, the validation mean squared error was around 0.49. Not ideal, but it did get our weights closer to the optimal ones. To push the weights towards the global optimum, I retrained the model with a LearningRateScheduler added. By taking a smaller learning rate every 10 epochs, the validation mean squared error went from 0.49 to 0.000000203. Testing Once the training was complete, I loaded the best weights discovered into my model and checked whether the prediction worked as intended. The blue region represents the High and Low prices of the currency pair at that time. Distribution of difference between actual and prediction Not bad.
Of the 1,333 tests using the most recent data, most of the predictions were well within the price range, meaning that at some point in the next 15 minutes the price intersected with the predicted price. However, note that the prediction was off by as much as $0.001255, which could be pretty big; it would be dangerous if that happened often. Postmortem I tried running the above model against the actual market for a few weeks. It performed well for a few days, but three things were observed that made the system not very useful. It was not capable of reacting to sudden news or political movements, and those unpredictable events ultimately wiped out the entire profit. One random tweet can ruin the whole thing. Each prediction was rather conservative, so the profit gained from a trade was low. The cost (spread) eats up everything. The combination of these three problems led me to come up with a new method. I will write about it in the next post. Update: I was asked to publish the notebook, but I have been too busy to clean it up and release it. For those who are interested in trying this out, I had a similar one written on Kaggle that you can fork. This post references an opinion and is for information purposes only. It is not intended to be investment advice. Seek a duly licensed professional for investment advice.
Forex USDCAD Price Predict Every 15 minutes
18
forex-usdcad-predict-price-every-15-minutes-116554a424b5
2018-05-26
2018-05-26 20:58:22
https://medium.com/s/story/forex-usdcad-predict-price-every-15-minutes-116554a424b5
false
1,535
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Dave Yungoo Kim
Software Developer Focused on Business Applications of A.I. Toronto, Canada Based.
ade1f0e322e9
daveyungookim
37
41
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-24
2018-05-24 01:18:00
2018-05-24
2018-05-24 01:25:56
5
false
en
2018-05-24
2018-05-24 01:37:42
5
11662155d05f
3.927673
0
0
0
By Cara Davies
5
What is Good Data? By Cara Davies Data science — it’s so hot right now. But before you go racing to build predictive models, it’s important to assess the quality of your data. At EdjAnalytics, we recognize that most of the data companies collect serves multiple purposes or can be siloed by departments and, therefore, isn’t always in the perfect format for statistical analysis. While your record-keeping may be perfect for your needs in accounting or human resources, how can you tell if the data you have collected will be useful for modeling? The first thing we must understand about the data is what it all means. While internal users might understand the shorthand and abbreviations sprinkled throughout your dataset, external data analysts will be stumped by obscure abbreviations and data coding, such as ICD and CPT in the medical field. Even something that seems straightforward, such as a field of dates labeled “admission date”, might need further explanation once we dive into the data and realize that some individuals have multiple admission dates throughout the dataset. Does this date refer to the date the patient first entered the hospital, or the date of admission into each new ward or department? To avoid this confusion, it is useful to supply a data dictionary, or a list of specific variable definitions and keys, for our analysts to use. This data dictionary should explain what each variable means and how it was calculated or derived from various inputs. Data dictionaries also describe the correct format for the data, which should allow us to easily differentiate a typo from a new category while describing the data’s origin and proper usage. When our analysts possess all of this meta-data, it is much easier to recognize any inconsistencies in the data and begin to form testable hypotheses about relationships between various factors. Example of a data dictionary, showing variable name, data type, description of the data, and examples of what the data should look like. Another important aspect of data for predictive modeling is a deep history across all variables. While it might be possible to find relationships between variables without this deep history, we may miss key factors for prediction that would be instantly obvious with a longer history. For example, if we want to predict ice cream sales for a local shop, having sales data from the last few months won’t be nearly as effective as having sales data from across the course of the year; we would miss all the seasonal variability in ice cream sales, which peak throughout the summer months. Additionally, having ice cream sales data from the past three years could be even more useful to determine if that seasonal peak was a fluke — perhaps coinciding with a major promotion that drove sales — or if that seasonal peak can be seen throughout the data to varying degrees every year. While it might not take a data scientist to tell you that you are likely to sell more ice cream in the summer, numerous cyclical trends exist in the real world and ensuring that you have a long, detailed history represented in your data can help uncover these trends. Looking only at data in the shaded region we might anticipate slow, steady growth in the next few months, and would miss the rapid peak as the weather warms up! Finally, the structure of the data itself can help facilitate model-building. At EdjAnalytics, we work primarily with structured data in clearly-defined tables and databases. 
Relational data — that is, data stored in multiple tables with one or more overlapping ID variables that can be accessed using the programming language SQL — helps data analysts and scientists quickly find relevant information and pull it into one comprehensive analysis. A client might have human resources data on individual employees (such as age, job title, and years of employment) which they might want to link to accounts data (number of orders filled, dollar amount of revenue generated, and so on) to look for patterns in employee efficiency. The fastest way to perform the necessary analysis is to connect the relevant information from the employee table to the relevant outcomes in the accounts table using a relational database. Without a relational database, data analysts would need to search through multiple lines of repeated information in a flat file that contains information about both employees and accounts. Storing data relationally therefore can speed up the preliminary steps of data cleaning and processing, as relationships between variables are clear and easy to understand, and unnecessary data is easily filtered out. Linking these tables quickly shows that the 35-year-old salesman is responsible for over $15,000 in orders, while the 49-year-old district manager had over $57,000 in sales. By contrast, this table includes repeated information for the salesman and unnecessarily includes information about the receptionist, who did not sell anything. In summary, good data should have a data dictionary to explain the origin and meaning of all the included variables, and it should be stored relationally for ease of use. Additionally, when building complex predictive models, a deep history is necessary to uncover any cyclical trends that might be lurking in the data. Of course, there are many more factors to consider when preparing data for predictive analytics, which we will address in future blog posts as well. Keeping these three things in mind, however, can help facilitate your data science projects, and improve the collaboration between your company and our specialists at EdjAnalytics.
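To make the employee/accounts join described above concrete, here is a minimal pandas sketch; the tables and column names are hypothetical stand-ins for the HR and accounts data. In SQL this would be a GROUP BY plus a JOIN on the shared employee_id key; pandas expresses the same idea with groupby and merge.

import pandas as pd

# Hypothetical HR table
employees = pd.DataFrame({
    "employee_id": [1, 2, 3],
    "age": [35, 49, 28],
    "job_title": ["Salesman", "District Manager", "Receptionist"],
})

# Hypothetical accounts table, one row per order
accounts = pd.DataFrame({
    "employee_id": [1, 1, 2],
    "order_total": [7_500, 8_100, 57_000],
})

# One relational join replaces scanning a flat file of repeated rows
sales_by_employee = (
    accounts.groupby("employee_id", as_index=False)["order_total"].sum()
            .merge(employees, on="employee_id", how="right")
            .fillna({"order_total": 0})
)
print(sales_by_employee)

The result shows the 35-year-old salesman with over $15,000 in orders and the district manager with $57,000, while the receptionist, who sold nothing, is kept with a zero total rather than repeated or dropped.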
What is Good Data?
0
what-is-good-data-11662155d05f
2018-05-24
2018-05-24 01:37:43
https://medium.com/s/story/what-is-good-data-11662155d05f
false
820
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
EdjAnalytics
Founded on the idea that a team of expert data scientists working on diverse, complex problems creates a breadth of experience valuable across many industries.
1cd54396beb7
edjanalytics
1
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-11
2018-01-11 04:15:49
2018-01-11
2018-01-11 04:13:19
1
false
en
2018-01-11
2018-01-11 04:15:50
0
11664746144c
10.932075
0
0
0
null
5
Required Product for AI Product Manager: Systematic Thinking First: experience and methods 1. The difference between experience and methods When giving directions to someone who cannot read a map at all, we can only use a landmark-based approach, describing in detail every landmark along the journey. Such directions are simple and easy to follow, but the moment those landmarks disappear, or the person is dropped into a new environment, they become useless. If, instead, the person learns to read a map, no road is hard for them anymore. "Following the landmarks" is experience. Experience sounds complex but is simple to act on: the learning cost is low, it is easy to apply, and it works; yet it is rigid and quickly becomes obsolete as the environment changes. "The ability to read the map" is what we call a method, a way of thinking. It sounds simple but is hard to do: it is abstract, not easy to understand, and harder to learn, yet it is flexible, versatile, and slow to become obsolete. People tend to learn experience because it is fixed and simple. But circumstances change constantly, forcing us to keep learning new experiences. If we only accumulate experience and never learn methods, we are left helpless the moment change arrives; however much we have memorized, we still lack the ability to handle new problems. 2. The relationship between experience and standards In e-commerce, Alibaba survived, so Alibaba shaped the standards of e-commerce. In AI, however, there is not yet any experience and there are no standards; whoever survives will define the experience and set the standards. To operate in a completely new field, we must learn methods, just as we should orient ourselves by north, south, east, and west rather than by familiar landmarks. Although methods are abstract and hard to learn, they have a longer lifespan and broader applicability; once we master a method, we can use it to create experience. A quick way to distinguish whether what we have learned is experience or method: ask whether it can be used across scenarios. In practice, most of the lessons we draw are actually wrong without the help of sound thinking methods, and these wrong lessons make us worse the more we learn. Standards are the product of successful experience. Do not use complexity to chase complexity; use the simple to explain the complex. 3. Replace guessing with methodical thinking To illustrate, the teacher gave the example of Kodak film. What defeated Kodak film? Many people answered: the digital camera. To everyone's surprise, the world's first digital photo was actually invented at Kodak; it was not the digital camera that defeated Kodak. Faced with the teacher's question, the students offered a variety of answers, none of them accurate or even pointing in the right direction. From this we can see that in real life, most people first guess by intuition and only afterwards try to verify the guess by deeper thinking. The problem is that many things look right but are in fact wrong. When the teacher rejected the "digital camera" answer, the room went quiet, followed by round after round of guessing; none of those guesses got closer to the real answer. Faced with a problem, the correct approach reverses that logic. The first step is to use prior experience to delineate a range of thinking.
The second step is to guess within this range, narrowing it with each guess. For the question "what defeated Kodak," the first thing to think about is "what can defeat a company at all." That mode of thinking relies on method rather than experience. Kodak's core business was photo processing; once that business was undermined, Kodak was defeated, so it is natural to look for a product that revolutionized photo printing. The arrival of consumer photo printers gave consumers a new model, shoot-and-print, and at that efficiency it took less than a week for film processing to be washed out of the market. Second: knowing and learning 1. Half-knowledge is more dangerous than ignorance Most people's conclusions look right but cannot withstand scrutiny. I am firmly against fragmented learning: if the scattered points never connect into a line, they distort cognition rather than build it. Using information has two costs: the cost of access and the cost of verification. Before the Internet, the dominant cost was access. In the era of information explosion, the cost of verification has risen enormously. The older generation in particular grew up in an age when information was high-quality but slow, so their ability to screen information is weak; today the situation is exactly reversed (media no longer compete on quality), and they are easily misled by wrong information. 2. A product manager must be able to define things precisely enough to reproduce them There is a difference between knowing and learning. Knowing is being able to recount the history of Apple; learning means you can take everything you know about Apple and replicate it yourself. The teacher gave the example of Xiaomi's hunger marketing. In hunger marketing, which word matters most? "Hunger" is the result, not the cause. When the Xiaomi Mi 1 launched in 2011, HTC sold phones with comparable specifications for more than 5,000 yuan, but Xiaomi priced it at 1,999. Hunger began there. The key to hunger marketing is that the product must create a highlight users thirst for, far beyond users' expectations of the product; then shrink supply at that point of hunger to the minimum, and let the marketing follow. This logic can be applied to anything; only when we can apply it elsewhere have we truly learned it. 3. Applying the lesson of Xiaomi's hunger marketing China Mobile had such a case. In one region, a segment of users spent 80–90 yuan a month, mostly on 3G and 4G data, and accounted for 40% of the local user base. How could marketing raise their average spending to 120 yuan? Create a hunger point with a lottery: the prize is a year's phone bill free of charge, which far exceeds users' expectations. Then reduce supply by drawing only ten winners a month, amplifying the hunger point. Finally, restrict entry to users whose monthly bill exceeds 100 yuan, add share-to-a-friend viral mechanics, and launch the campaign. Product managers should develop the habit of precisely defining every problem around them: what, exactly, is this phenomenon? Even friendship and communication can be defined. The test of a definition is that it can be reproduced in any scenario. Third: systematic thinking 1. The difference between systematic and fragmented thinking Imagine a fish tank with a fish inside: one camera films it from angle A and another from angle B, perpendicular to A.
People in the next room can see the fish only through the A and B feeds. Systematic thinking sees that "A and B are different angles on the same fish"; fragmented thinking sees "a mysterious connection between A and B." When these two kinds of people talk, they can hardly understand each other. Systematic thinking means looking past the surface of a problem to the connections behind it. 2. Physics thinking: building up a system from simple to complex When we learn physics, the first model of motion is "an object moving at constant velocity on a smooth surface." When friction is added, force and acceleration enter the system. With circular motion, we go on to study velocity, acceleration, and force in a system where all three change continuously. If instead we started by learning circular motion, it would be very difficult. This is the systematic way of thinking: from simple to complex, layer upon layer. 3. Finding the main contradiction in a physical system A system is an organic whole composed of interacting parts with a specific structure and function, and systems are progressive: each new system is an extension of the previous one. Systems are related to one another, and the strongest relation is the parent–child relation. When multiple systems are shaped by a single structure, there must be an association among them. The relations and structures of systems are relative, and identifying which one dominates is the way to solve the problem. Galileo dropped two balls of different mass from the Leaning Tower of Pisa, and they landed at the same time. But drop a feather and an iron ball, and they will not land together: in the first case the dominant factor is gravitational acceleration, in the second it is air resistance. Without a systematic mindset, it is easy to see only a single factor in this experiment and fall into paradox. When building a product, use logic that is as simple and pure as possible: simulate the complex with the simple, rather than chasing the complex with the complex. Be wary of negating a general rule on the strength of a few counterexamples; keep sight of the product's scope and the main problems it solves, instead of refuting yourself with factors that look relevant but are not. 4. Explaining complex phenomena by building a core system framework When analyzing a target system, first determine its framework: what is its core description? For example, the core of Newtonian mechanics is F = ma, and linear motion with friction is the minimal version of Newtonian mechanics. Keep removing the weakly connected subsystems of the target system until only strongly connected ones remain (drop the iron ball, not the feather); these subsystems constitute the core system, or minimal system, of the target. Observe the core system and define its structure; that structure is the core framework of the target system. A system framework is not a fact; it is a theory we use to explain facts, and there is always a deeper framework beneath it.
(Again: do not chase complexity with complexity; explain the complex with the simple. The framework is simple precisely so that complex things can be abstracted.) Compared with AI, human ability goes from simple to complex by abstraction, while today's AI goes from complex to complex. On the Internet, as technology keeps removing bottlenecks, new business models keep being developed. The essence of business is buying and selling, equivalent exchange; currency was born as the general equivalent. Now there are free business models that trade a free product for customers, making money by selling the aggregated traffic. Learning physics is like playing mahjong or chess: keep observing what happens on the board, then extract the more general rules of winning. When you discover a new rule that does not prove your past rules wrong, you keep observing and testing to validate it. When we learn to think systematically, we can still borrow system frameworks from older fields, since abstractions are reusable, and then explore the rules of the new world with them. Scientific exploration is an approximation that links far-flung systems back to a few basic theories; the underlying frameworks are reusable. So the process of learning high-school physics can be translated as: to learn a complex system, first learn its smallest subsystem; once we have the smallest subsystem, add one condition to produce a new system, and learn to understand that new system. Repeat this process, making sure each new system remains understandable, and we can gradually understand more and more complex systems. Fourth: how to apply physics thinking Our understanding of the world is systematic and should not be constrained by any single dimension. 1. Elon Musk: distilling the system framework, or the smallest subsystem When you think, you should, as physics does in research, boil the problems you encounter down until the most fundamental principles of the problem precipitate out, rather than reasoning by analogy. Physics really can derive counterintuitive new things this way, as quantum mechanics shows. Consider SpaceX. Although materials science and control technology had advanced rapidly, spaceflight had not advanced significantly. The multi-stage, non-recoverable rocket was always a temporary solution awaiting change. Musk confirmed the feasibility of SpaceX through a chain of inference and calculation. Why had we not done it before? Because we were stuck in past experience. Without careful reasoning about the conditions under which past rockets were designed, and how those conditions have since changed, everyone's sight stays shackled by the old rocket thinking; user research will never find what is new in a new scenario, and past experience is useless there. Why does Tesla win even if charging loses 50% of the energy? An engine uses only about 30% of its fuel's energy, while a power plant uses up to 90%, so the electric car holds a very large advantage in energy utilization. 2. What is the underlying reason that new phones, represented by the iPhone, displaced old phones, represented by Nokia? The teacher first prompted us to think about how to think about this question.
Do not start from a preset position: compare the two systems, remove everything that is the same, and what remains is the answer; then learn from that answer. After evaporating many shared subsystems, we find that the difference between the iPhone and the traditional phone lies in the physical keyboard versus the virtual keyboard. The multi-touch input the iPhone provided brought revolutionary changes to UI and interaction design, and it is the true source of the iPhone's user experience. A product manager's power over user experience is like a shopping guide selling a product at a premium to the elderly: a product manager's design skill can shape the experience, but the experience is fundamentally determined by the input and output devices. Old-style touch screens were single-touch, and a mouse is also a single point, so the experience resembled a PC; a single point is not good enough to replace the keyboard, because the mouse works best together with a keyboard. Hence the iPhone ended up with a multi-touch screen. From the iPhone example we can learn: user experience and interaction logic must match the input and output devices. The difference between Mac and Windows is that the Mac supports multiple screens, one screen per application, a different underlying design. VR input is not yet mature: input and output do not match. Fifth: looking at AI with physics thinking 1. When we talk about product design, what are we talking about? The standardization of product value: once standardized, it can be copied. Take a valuable workflow, then give that workflow an interface, then reproduce it. Standards exist at two levels: underlying standards and business standards. When the industry had only just produced chat technology, the technology was almost the whole chat business; by web 2.0, the business behind chat was what mattered, nobody cared about chat technology itself, and Internet business people focused on the business. 2. What can artificial intelligence actually do? The dull work people find meaningless and the dirty work people do not want to do: reading X-rays, driverless cars. The biggest change deep learning brings is that machines can understand and handle abstract concepts. 3. What is natural language processing, really? Natural language is a language of interaction; NLP is a new mode of interaction, and we can deepen our understanding of NLP by analyzing the evolution of the GUI. Chit-chat bots are like the Windows desktop: it let you do more than DOS did, without needing to know the command line, and you could see many folders. What we need now is to add program icons to that desktop; applications are currently what the new NLP-style interaction lacks most. 4. Chatbot technology itself is today basically ready (good enough not to block demo validation); the key is that more program icons (apps) are needed. That is, what is missing today is not technology but the right scenarios to apply it in; the apps are missing, and product managers must learn to evaluate scenarios (not landing
Required Product for AI Product Manager: Systematic Thinking
0
required-product-for-ai-product-manager-systematic-thinking-11664746144c
2018-01-11
2018-01-11 04:15:51
https://medium.com/s/story/required-product-for-ai-product-manager-systematic-thinking-11664746144c
false
2,844
null
null
null
null
null
null
null
null
null
Experience
experience
Experience
8,646
About Product Manager
I am a product manager beginner. Currently in China, I try to translate some of their thoughts and share them with everyone.
bf816565f977
allthepm
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-05
2018-09-05 10:35:52
2018-09-05
2018-09-05 11:02:32
1
false
en
2018-09-05
2018-09-05 11:02:32
16
1166cce89215
2.701887
2
0
0
[SHOCKING]
5
tecHindustan Myths of Artificial Intelligence and its Facts At the Rage Artificial intelligence is all the rage: from Google Assistant to self-driving cars to chatbots and drones, it is taking routine tasks away from human hands. Today every tech organization, whether a startup or a corporation, has AI in its offerings in some way or the other. Here it becomes easy to be misled by myths and misconceptions about artificial intelligence. So, let's cut through the clutter; here are some points separating myth from fact. Myth: Computers are likely to become more intelligent than humans in the next 50 years Fact: Computer scientists are still divided over whether artificial intelligence will ever overtake human intelligence. Although computers can analyze and store amounts of data far beyond human capability, they lack the intuition (the sixth sense) that makes us human. Our ability to make decisions based not only on probabilities but also on emotions and risks sets us apart. Whether AI will ever inherit these human traits remains unclear. Without the ability to read and display emotions, AI could never have the complete skill set that defines human intelligence. Myth: AI will put an end to humanity Fact: Many people ask questions like "What happens if machines become self-aware?" and "If AI starts thinking for itself, will it act for itself as well?" Such questions laid the groundwork for movies like "The Terminator," which has been scaring ordinary people ever since, making them wonder whether AI will bring about humanity's end. The answer is probably not. Artificial intelligence systems run according to pre-defined parameters and solve the problems they were built for; unless someone makes a mistake while defining those parameters, a system is very unlikely to develop "evil" tendencies. Related Reads: Machine Learning Vs. Deep Learning Fact: AI can influence people We usually think that only other human beings can influence us, since we interact with them in the real world, but the truth is that machines shape our emotions, feelings, and thoughts all the time. Most of the ads we see online are already designed using artificial intelligence. Take Amazon as an example: it keeps a record of every search you make and later uses that information to place adverts for those products on other websites you visit. The term for this approach is "retargeting". As we repeatedly come across the same product, AI begins to steer our interests, and many people end up making the purchase. So AI may not take complete control over us, but it can motivate us to act in specific ways. Interesting Reads: Quantum Computing Explained Simply Myth: AI is hack-proof Fact: Although AI demonstrates intelligence, it is still little more than a complex computer program. That means it is possible to hack. Sophisticated cybercriminals may change the parameters used to control an AI, allowing it to develop in unexpected ways. Still, the future with AI isn't really as scary as it seems. Self-awareness in an AI might be dangerous to some extent, but it could also make the system capable of defending itself from cyber attacks, growing more secure with every attempt by a cybercriminal.
With each advancement we make in AI systems, we take a step closer to a world where AI will play a major role in our day-to-day lives. More reads by us on Medium: Angular vs React 2018 Programming Jokes Best technology to learn for future Flutter vs React Native Tips to write clean code Follow us on: Instagram | Twitter | Facebook | LinkedIn Originally posted: The Myths of Artificial Intelligence and its Facts
Myths of Artificial Intelligence and its Facts
21
myths-of-artificial-intelligence-1166cce89215
2018-09-05
2018-09-05 11:02:33
https://medium.com/s/story/myths-of-artificial-intelligence-1166cce89215
false
663
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
tecHindustan
tecHindustan is one of the best IT companies in India. We craft brilliant Web and Mobile Solutions for Startups, Enterprises and other Businesses.
b61bb5300d33
techindustan
73
130
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-16
2018-04-16 15:55:30
2018-04-05
2018-04-05 16:06:39
2
false
en
2018-04-16
2018-04-16 15:59:24
20
116773ff532c
1.855031
0
0
0
TWiML Talk 125
5
Human-in-the-Loop AI for Emergency Response & More with Rob Munro TWiML Talk 125 In this episode, I chat with Rob Munro, CTO of the newly branded Figure Eight, formerly known as CrowdFlower. Figure Eight’s Human-in-the-Loop AI platform supports data science & machine learning teams working on autonomous vehicles, consumer product identification, natural language processing, search relevance, intelligent chatbots, and more. Subscribe: iTunes / SoundCloud / Google Play / Stitcher / RSS Rob and I had a really interesting discussion covering some of the work he’s previously done applying machine learning to disaster response and epidemiology, including a use case involving text translation in the wake of the catastrophic 2010 Haiti earthquake. We also dig into some of the technical challenges that he’s encountered in trying to scale the human-in-the-loop side of machine learning since joining Figure Eight, including identifying more efficient approaches to image annotation as well as the use of zero shot machine learning to minimize training data requirements. Finally, we briefly discuss Figure Eight’s upcoming TrainAI conference, which takes place on May 9th & 10th in San Francisco. Thanks to our Sponsor A huge thanks to Figure Eight for sponsoring this episode of the podcast! At Figure Eight’s Train AI you can join me and Rob, along with a host of amazing speakers like Garry Kasparov, Andrej Karpathy, Marti Hearst and many more and receive hands-on AI, machine learning and deep learning training through real-world case studies on practical machine learning applications. For more information on TrainAI, head over to www.figure-eight.com/train-ai, and be sure to use code TWIMLAI for 30% off your registration! For those of you listening to this on or before April 6th, Figure Eight is offering an even better deal on event registration. Use the code figure-eight to register for only 88 dollars. About Robert Robert at Figure Eight Robert on LinkedIn Mentioned in the Interview Figure Eight The Design of Everyday Things Scaling Deep Learning: Systems Challenges & More with Shubho Sengupta Train AI Conference Zero-Shot Learning Register for the AI Summit Check out @ShirinGlander’s Great TWiML Sketches! TWiML Presents: Series page TWiML Events Page TWiML Meetup TWiML Newsletter “More On That Later” by Lee Rosevere licensed under CC By 4.0 Originally published at twimlai.com on April 5, 2018.
Human-in-the-Loop AI for Emergency Response & More with Rob Munro
0
human-in-the-loop-ai-for-emergency-response-more-with-rob-munro-116773ff532c
2018-04-16
2018-04-16 15:59:25
https://medium.com/s/story/human-in-the-loop-ai-for-emergency-response-more-with-rob-munro-116773ff532c
false
390
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
TWiML & AI
This Week in #MachineLearning & #AI (podcast) brings you the week’s most interesting and important stories from the world of #ML and artificial intelligence.
ca095fd8e66c
twimlai
292
33
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-08
2018-01-08 17:24:37
2018-01-08
2018-01-08 17:27:26
4
false
en
2018-01-08
2018-01-08 17:27:26
2
1167afe64e02
3.175472
1
0
0
by Frank Evans, Data Scientist
5
How a Data Scientist Built a Web-Based Data Application by Frank Evans, Data Scientist I'm an algorithms guy. I love exploring data sets, building cool models, and finding interesting patterns that are hidden in that data. Once I have a model, then of course I want a great interactive, visual way to communicate it to anyone who will listen. When it comes to interactive visuals, there is nothing better than JavaScript's D3. It's smooth and beautiful. But like I said, I'm an algorithms guy. Those machine learning models I've tuned are in Python and R. And I don't want to spend all my time trying to glue them together with web code that I don't understand very well and am not terribly interested in. I managed to create an interactive data application, though, that was the keystone of a post about building topic models on the presidential State of the Union addresses. This "xap" (a data application built on the Exaptive platform) relies primarily on three open source technologies: HTML web inputs, Python to build the machine learning model, and JavaScript D3 for the visuals. Plus, the visuals communicate with one another, such that hovering over one of the lines in the D3 line plot will dynamically create another D3 word cloud of the most important terms that describe that topic. I didn't have to write any glue code. Here is a behind-the-curtain look at the Exaptive elements that make up this xap: The xap is grouped into those same 3 functional areas: user input (HTML), algorithm (Python), and visualization (D3.js), plus a fourth to show a progress bar when the app is processing a model and to control a universal color scheme across plots. Each section is made up of several components. Each component takes inputs on the left and sends outputs to the right. Which outputs are connected to which inputs is governed by a drag-and-drop wire. Inside each component, the best technology to accomplish that task is encapsulated. The platform takes care of communication between components. Clicking on the component in the algorithmic layer shows the Python code used to build the Latent Dirichlet Allocation topic model. This is the part I wrote. The output is fed to some JavaScript in the visualization layer that uses D3: this is the part I did not write. In fact, this is far better and more scalable D3 than I can write. All I have to do now is connect the output data of my topic model component to the inputs of the visualization components. Not only that, but the output of one visualization can be used as an input into another one to make the plots dynamic and interactive. The same applies to the user input components. They use JavaScript to dynamically create the HTML inputs that let a user run their own topic model. I didn't write those either. I just dragged them in from the standard toolbox. Even though they are different technologies, with no glue code I wired them together in just a few minutes, and they can supply the inputs for my Python topic model component. I'm an algorithms guy (please stop me if I start repeating myself), and what I want more than anything else is to be able to effectively communicate the models I've built and get them into the hands of other people to see what they can find. But somewhere out there is someone who is a great visualization person who would love to have good algorithmic components to wire into their visuals. They built these visualization components I am using. They make my work better, and I hope my algorithms can do the same for them.
If I can effectively connect and collaborate with those interested in visualization, design, user experience, subject matter expertise, and even other types of algorithms, then we can amplify the quality of each other’s work.
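The article shows the platform's visual wiring rather than the model code itself. Purely as a hedged illustration of what a Python LDA component like the one described might contain, here is a minimal sketch using scikit-learn (this is not the author's Exaptive component; the tiny document list and all parameter values are invented, and get_feature_names_out assumes scikit-learn >= 1.0):

# Minimal LDA topic-model sketch with scikit-learn (illustrative only,
# not the author's Exaptive component code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the economy and jobs and taxes",
    "war peace and national defense",
    "schools education and our children",
]

# Turn raw text into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)

# Fit a 2-topic LDA model.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)  # rows: documents, cols: topic weights

# Show the top terms per topic -- the kind of data a word-cloud
# component could consume downstream.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")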
How a Data Scientist Built a Web-Based Data Application
1
how-a-data-scientist-built-a-web-based-data-application-1167afe64e02
2018-06-16
2018-06-16 13:57:19
https://medium.com/s/story/how-a-data-scientist-built-a-web-based-data-application-1167afe64e02
false
656
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Exaptive
Our mission is more data-driven innovation, and we believe interoperability, modularity, and community make them happen.
b64a2f7d224a
exaptive
9
3
20,181,104
null
null
null
null
null
null
0
library(tm)
library(topicmodels)
library(SnowballC)
library(tidytext)
library(ggplot2)
library(dplyr)

setwd('C:/Users/Susan/Documents/textmining')
filenames <- list.files(getwd(), pattern = '*.txt')
files <- lapply(filenames, readLines)
docs <- Corpus(VectorSource(files))
writeLines(as.character(docs[[5]]))
## c("The Food and Drug Administration on Wednesday approved the first-ever
## treatment that genetically alters a patient’s own cells to fight cancer, a
## milestone that is expected to transform treatment in the coming years. The
## new therapy turns a patient’s cells into a “living drug,” and trains them to
## recognize and attack the disease. It is part of the rapidly growing field
## of", "immunotherapy that bolster...

# Remove potentially problematic symbols
toSpace <- content_transformer(function(x, pattern) {return (gsub(pattern, '', x))})
docs <- tm_map(docs, toSpace, '-')
docs <- tm_map(docs, toSpace, ':')
docs <- tm_map(docs, toSpace, '“')
docs <- tm_map(docs, toSpace, '”')
docs <- tm_map(docs, toSpace, "'")

docs <- tm_map(docs, removePunctuation)
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, removeWords, stopwords('english'))
docs <- tm_map(docs, stripWhitespace)

myStopwords <- c('can', 'say', 'said', 'will', 'like', 'even', 'well', 'one',
                 'hour', 'also', 'take', 'well', 'now', 'new', 'use', 'the')
docs <- tm_map(docs, removeWords, myStopwords)

removeSpecialChars <- function(x) gsub("[^a-zA-Z0-9 ]", "", x)
docs <- tm_map(docs, removeSpecialChars)
docs <- tm_map(docs, content_transformer(tolower))
docs <- tm_map(docs, stemDocument)
writeLines(as.character(docs[[5]]))
## cthe food drug administr wednesday approv firstev treatment genet alter
## patient cell fight cancer mileston expect transform treatment come yearsth
## therapi turn patient cell live drug train recogn attack diseas it part rapid
## grow field immunotherapi...

dtm <- DocumentTermMatrix(docs)
dtm
## <<DocumentTermMatrix (documents: 22, terms: 3290)>>
## Non-/sparse entries: 7323/65057
## Sparsity           : 90%
## Maximal term length: 18
## Weighting          : term frequency (tf)

nytimes_lda <- LDA(dtm, k = 4, control = list(seed = 1234))
nytimes_lda
## A LDA_VEM topic model with 4 topics.

nytimes_topics <- tidy(nytimes_lda, matrix = "beta")
nytimes_topics
## # A tibble: 13,160 x 3
##    topic term           beta
##    <int> <chr>         <dbl>
##  1     1 adapt  3.204101e-04
##  2     2 adapt 8.591570e-103
##  3     3 adapt 1.585924e-100
##  4     4 adapt 5.643274e-101
##  5     1 add    6.408202e-04
##  6     2 add   1.677995e-100
##  7     3 add    3.793627e-04
##  8     4 add    7.110579e-04
##  9     1 addit  3.204101e-04
## 10     2 addit  5.634930e-04
## # ... with 13,150 more rows

nytimes_top_terms <- nytimes_topics %>%
  group_by(topic) %>%
  top_n(10, beta) %>%
  ungroup() %>%
  arrange(topic, -beta)

nytimes_top_terms %>%
  mutate(term = reorder(term, beta)) %>%
  ggplot(aes(term, beta, fill = factor(topic))) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~ topic, scales = "free") +
  coord_flip() +
  ggtitle('Top terms in each LDA topic')

nytimes_lda <- LDA(dtm, k = 9, control = list(seed = 4321))
nytimes_topics <- tidy(nytimes_lda, matrix = "beta")
nytimes_top_terms <- nytimes_topics %>%
  group_by(topic) %>%
  top_n(10, beta) %>%
  ungroup() %>%
  arrange(topic, -beta)

nytimes_top_terms %>%
  mutate(term = reorder(term, beta)) %>%
  ggplot(aes(term, beta, fill = factor(topic))) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~ topic, scales = "free") +
  coord_flip() +
  ggtitle('Top terms in each LDA topic')

nytimes_lda_gamma <- tidy(nytimes_lda, matrix = "gamma")
nytimes_lda_gamma
## # A tibble: 198 x 3
##    document topic        gamma
##    <chr>    <int>        <dbl>
##  1 1            1 8.191867e-05
##  2 2            1 4.471408e-05
##  3 3            1 4.180694e-05
##  4 4            1 3.184344e-05
##  5 5            1 4.540474e-05
##  6 6            1 9.998000e-01
##  7 7            1 2.972724e-05
##  8 8            1 3.492409e-05
##  9 9            1 3.275588e-05
## 10 10           1 3.918000e-05
## # ... with 188 more rows

tidy(dtm) %>%
  filter(document == 1) %>%
  arrange(desc(count))
## # A tibble: 177 x 3
##    document term    count
##    <chr>    <chr>   <dbl>
##  1 1        driver     11
##  2 1        app         7
##  3 1        car         6
##  4 1        teenag      5
##  5 1        drive       4
##  6 1        parent      4
##  7 1        phone       4
##  8 1        compani     3
##  9 1        famili      3
## 10 1        inform      3
## # ... with 167 more rows
25
290c1c193147
2017-09-05
2017-09-05 15:22:23
2017-09-05
2017-09-05 16:19:48
3
false
en
2017-09-14
2017-09-14 17:13:25
3
11688837d32f
5.557547
14
0
0
(This article first appeared on my website)
5
Topic Modeling of New York Times Articles Courtesy of Pixabay (This article first appeared on my website) In machine learning and natural language processing, a "topic" consists of a cluster of words that frequently occur together. A topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. Topic modeling is frequently used as a text-mining tool for the discovery of hidden semantic structures in a text body. Topic models can connect words with similar meanings and distinguish between the uses of words with multiple meanings. For this analysis, I downloaded 22 recent articles from the business and technology sections of the New York Times. I am using the collection of these 22 articles as my corpus for the topic modeling exercise. Each article is therefore a document, with an unknown topic structure. Load the libraries. Set the working directory. Load the files into a corpus. Read the files into a character vector. Create a corpus from the vector and inspect the 5th document. Data Preprocessing Remove potentially problematic symbols. Remove punctuation, digits, stop words and white space. Define and remove custom stop words. I decided to go a step further and remove everything that is not an alphanumeric symbol or a space. Transform to lowercase. Stem the document. It looks right after all the preprocessing. Right now this data frame is in a tidy form, with one term per document per row. However, the topicmodels package requires a DocumentTermMatrix. We can create a DocumentTermMatrix like so: Topic Modeling Latent Dirichlet allocation (LDA) is one of the most common algorithms for topic modeling. LDA assumes that each document in a corpus contains a mix of topics that are found throughout the entire corpus. The topic structure is unknown: we can only observe the documents and words, not the topics themselves. Because the structure is unknown (also known as latent), this method seeks to infer the topic structure given the known words and documents. Now we are ready to use the LDA() function from the topicmodels package. Let's estimate an LDA model for these New York Times articles, setting k = 4 to create a 4-topic LDA model. Word-topic probabilities. This has turned the model into a one-topic-per-term-per-row format. For each combination, the model computes beta, the probability of that term being generated from that topic. For example, the term "adapt" has a 3.204101e-04 probability of being generated from topic 1, but an 8.591570e-103 probability of being generated from topic 2. Let's visualize the results to understand the 4 topics that were extracted from these 22 documents. Source: deepPiXEL The 4 topics generally describe: iPhone and car businesses; tax and insurance corporations in Houston; restaurant reservations, Google, and Uber's new CEO; new technology and banking. Let's set k = 9; how do our results change? Source: deepPiXEL From a quick view of the visualization it appears that the algorithm has done a decent job. The most common words in topic 9 include "uber" and "khosrowshahi", which suggests it is about the new Uber CEO Dara Khosrowshahi. The most common words in topic 5 include "insurance", "houston", and "corporate", suggesting that this topic represents insurance-related matters after Houston's Hurricane Harvey. One interesting observation is that the word "company" is common in 6 of the 9 topics. In the interest of space, I fit a model with 9 topics to this data set.
I encourage you to try a range of different values of k (the number of topics) to find the optimal number and to see whether the model's performance can be improved (a sketch of such a scan appears below). Document-topic probabilities. Besides estimating each topic as a mixture of words, topic modeling also models each document as a mixture of topics, like so: Each of these values (gamma) is an estimated proportion of the words from that document that are generated from that topic. For example, the model estimates that about 0.008% of the words in document 1 were generated from topic 1. To confirm this result, we checked what the most common words in document 1 were: This appears to be an article about teenage driving. Topic 1 does not contain driving-related terms, which means the algorithm was right not to place this document in topic 1. Summary Topic modeling can provide ways to get from raw text to a deeper understanding of unstructured data. However, we always need to examine the results carefully to ensure that they make sense. So, try it yourself, have fun, and start practicing those topic modeling skills!
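The analysis above is in R with topicmodels. Purely as a hedged illustration of scanning values of k, here is a sketch in Python with scikit-learn that compares held-out perplexity (the toy documents and all values are invented; this is not the author's code, and reading the resulting topics remains the real sanity check):

# Hedged sketch: scan candidate topic counts k and compare held-out
# perplexity (lower is generally better, but always inspect the topics).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split

docs = ["tax policy and banks", "uber names a new ceo",
        "insurance claims after the storm", "new phones from apple"] * 10
dtm = CountVectorizer(stop_words="english").fit_transform(docs)
train_dtm, test_dtm = train_test_split(dtm, test_size=0.25, random_state=1234)

for k in (2, 4, 9):
    lda = LatentDirichletAllocation(n_components=k, random_state=1234)
    lda.fit(train_dtm)
    print(k, round(lda.perplexity(test_dtm), 1))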
Topic Modeling of New York Times Articles
84
topic-modeling-of-new-york-times-articles-11688837d32f
2018-06-08
2018-06-08 17:36:32
https://medium.com/s/story/topic-modeling-of-new-york-times-articles-11688837d32f
false
1,327
Everything you need to know about living in Swift world. Tutorials, codes, articles and more.
null
null
null
SwiftWorld
swiftworld
SWIFT,IOS,APPLE,IOS APP DEVELOPMENT,APP DEVELOPMENT
null
Data Science
data-science
Data Science
33,617
Susan Li
Changing the world, one article at a time. Sr. Data Scientist, Toronto, Canada. Opinion=my own. https://www.linkedin.com/in/susanli/
731d8566944a
actsusanli
5,933
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-27
2018-04-27 03:16:01
2018-04-27
2018-04-27 03:38:21
18
false
en
2018-04-27
2018-04-27 03:38:21
33
11697a794f6a
4.480189
1
0
0
null
5
Artificial Intelligence APAC — Issue #11 If you like this newsletter, please send it to a friend so they can subscribe too. Come 26–27 June 2018, the upper echelons of the tech world will gather in Hong Kong for the 3rd annual edition of this now-staple of MIT Technology Review in Asia: EmTech Hong Kong 2018. This year, the conference program will focus on 4 key topics: Future Cities, Rewriting Life, FinTech and Cybersecurity, and Intelligent Machines. If these topics are of interest to you, this is an event you shouldn't miss! Enter Artificial Intelligence HK Discount Code: ETHK8001 at Check-out to Enjoy 10% off Industry News 24 EU countries sign AI pact in bid to compete with US and China — www.euractiv.com Twenty-four EU countries pledged to band together to form a "European approach" to artificial intelligence in a bid to compete with American and Asian tech giants. Facebook Is Forming a Team to Design Its Own Chips — www.bloomberg.com Facebook Inc. is building a team to design its own semiconductors, adding to a trend among technology companies to supply themselves and lower their dependence on chipmakers such as Intel Corp. and Qualcomm Inc., according to job listings and people familiar with the matter. Alibaba is developing its own AI chips, too — www.technologyreview.com Alibaba announced that it's building a chip called Ali-NPU (for "neural processing unit") designed to handle AI tasks like image and video analysis. The firm says its performance will be 10 times that of a CPU or GPU performing the same task. The project is led by Alibaba's R&D arm, DAMO Academy. The firm also announced that it has acquired a Hangzhou-based CPU designer called C-SKY. Microsoft to improve shipping operations with AI — techwireasia.com Microsoft Research Asia (MSRA) is partnering with Orient Overseas Container Line Limited (OOCL) to use artificial intelligence (AI) within the shipping industry. OOCL aims to optimize its shipping network operations with the help of deep learning research. The partnership will see both companies jointly conducting… Financial Services AI will wipe out half the banking jobs in a decade, experts say — www.mercurynews.com Artificial intelligence will wipe out half the banking jobs in a decade, experts say For AI Engineers Training Drone Image Models with Grand Theft Auto — www.slideshare.net CCRi Data Scientist Monica Rajendiran's presentation from Charlottesville's 2018 #tomtomfest Machine Learning conference. Evolved Policy Gradients blog.openai.com Around the Asia-Pacific Region China's AI dream is well on its way to becoming a reality — www.scmp.com Andy Chun says China seems to have all the pieces in place to achieve the goals of its artificial intelligence strategic road map, from a vibrant start-up culture to government support and a population enthusiastic about technology Huawei working on an emotional AI — www.cnbc.com Huawei is working on an emotion AI that would make conversations between users and virtual assistants more meaningful, the company's executives told CNBC. Japan: Robots are going to redefine Japan's skylines — www.technologyreview.com Japanese companies are turning to robots to help build their skyscrapers.
India: Twitter Co-Founder Invests In Indian AI-Based Health Startup Visit — analyticsindiamag.com "In India, for every doctor there are 2,000 patients lined up in-clinics… This is where the technology approach by Visit comes in" Thailand commits to 'long-term partnership' with Alibaba — www.nationmultimedia.com Thailand has entered into a strategic partnership with Alibaba to drive the development of Thailand's digital economy and the Eastern Economic Corridor under the Thailand 4.0 policy. UK Plans $1.3 Billion Artificial Intelligence Push fortune.com The United Kingdom is planning a $1.3 billion investment into artificial intelligence technologies, with companies like Microsoft helping. Vietnam: Robotics In Vietnam — jumpstartmag.com By Keina Chiu | Fluctuating variances in production volume and tight deadlines are issues that SMEs often face. While robots could be the ultimate solution… These 3 countries are more prepared for automation than anyone else, here's why — www.techrepublic.com A study found that every country is grappling with how to respond to AI and automation, writing that "societies are…in for a long period of trial and error." Join AI Meetup Events in Hong Kong! Join Hong Kong AI Meetup: https://www.meetup.com/Artificial-Intelligence-HK Brought to you by: Artificial Intelligence & Deep Learning Ltd https://www.linkedin.com/in/EugeniaWan/ Follow us on Facebook! https://www.facebook.com/ArtificialIntelligenceAPAC [ YOUR LOGO HERE ] — email [email protected] The above articles represent the authors' own opinions or those of the respective news organizations, and do not reflect the views of Eugenia Wan, Artificial Intelligence & Deep Learning Ltd or eFusion Capital Ltd.
Artificial Intelligence APAC — Issue #11
1
artificial-intelligence-apac-issue-11-11697a794f6a
2018-04-27
2018-04-27 13:44:11
https://medium.com/s/story/artificial-intelligence-apac-issue-11-11697a794f6a
false
750
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Eugenia Wan
AI Engineer. Data Scientist. Duke Alumni. CTO of MachineLearning.ai
55b1647b25b
eugeniawan
90
510
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-27
2018-05-27 21:15:41
2018-05-27
2018-05-27 22:35:14
5
false
id
2018-05-27
2018-05-27 22:35:14
4
116992ca79e3
2.893711
1
0
0
The earliest stage in data analysis is measuring central tendency, asymmetry, and variability. Exploratory data analysis is…
2
Asymmetry, Variability and Data Relationships The earliest stage in data analysis is measuring central tendency, asymmetry and variability. Exploratory data analysis is a way to describe data: we explore a dataset to learn its characteristics and properties. The better we know the data we are about to analyze, the better we can decide which steps to take next. "An over-the-shoulder shot of a person writing code on a laptop" by Tirza van Dijk on Unsplash Central Tendency and Asymmetry We usually know central tendency through the mean, median and mode; I assume everyone reading this already understands these three measures, so I will discuss what we can derive from them. One such derivation is skewness. Knowing the skewness tells us where the data are concentrated and where the outliers lie. For more detail, see the figure below. picture by http://kineticmaths.com/index.php?title=Skew If the mean is smaller than the median, the data are negatively skewed, which means the data are concentrated above the average and the outliers lie below the average. If the mean, median and mode have the same value, the skew is zero and the distribution is symmetrical. Positive skew is the opposite of negative skew. Variability The most common ways to measure the variability of data are the variance, the standard deviation and the coefficient of variation. The formulas are generally divided into sample and population versions; here we focus on sample data. The variance measures the spread of data points around the mean value. To compute it, you can read this article: https://id.wikihow.com/Menghitung-Variasi. picture by 365 career If the variance is low, the data are spread close to the mean; if the value is high, the opposite holds. Because the unit of variance is squared, to make it truly useful we take the standard deviation: its formula is simply the square root of the variance, so it is on the same scale as the dataset. What about the coefficient of variation? Its formula is the standard deviation divided by the mean. Why the coefficient of variation? It is used to compare two or more datasets, whereas the standard deviation describes only one dataset. For more detail, consider the datasets below. picture by 365 career The two datasets have different standard deviations but the same coefficient of variation. Notice that the spread pattern of the two datasets is the same even though their values differ. The coefficient of variation has no unit of measurement, which makes it suitable for comparison. We can also calculate the covariance of two datasets with a formula. Relationship Imagine we want to learn from home-sales data whether the size of a house influences its price, or vice versa. Covariance can answer that question. the co-variance formula The value of the covariance helps us get a sense of the direction of the data: if it is > 0, the two variables move together; if it is < 0, they move in opposite directions; if it is 0, the two variables are independent. To build intuition more easily, we can compute the correlation coefficient, which is obtained from covariance(x, y) / (std(x) * std(y)). Mathematically, the correlation coefficient ranges from -1 to 1. The closer it is to 1, the more variable x moves in the same direction as variable y: if x rises, y rises; if x falls, y falls. The closer the correlation coefficient is to 0, the less related the two variables are. If the value is between -1 and 0, the two variables move in opposite directions: if x rises, y falls, and vice versa.
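As a hedged worked example of the measures discussed above, here is a short sketch in Python with pandas (the house-size and price numbers are invented for illustration; they are not from the article):

# Worked sketch of the measures above on made-up house-sales data.
import pandas as pd

house = pd.DataFrame({
    "size_m2": [50, 60, 65, 70, 80, 120],      # living area
    "price":   [100, 118, 130, 140, 165, 260], # in thousands
})

size = house["size_m2"]
print(size.mean(), size.median(), size.skew())  # mean > median: positive skew
print(size.var(), size.std())                   # sample variance and std (ddof=1)

# Coefficient of variation: unit-free, so it can compare different datasets.
cv = size.std() / size.mean()
print(cv)

# Covariance > 0: size and price move together; correlation near 1 confirms it.
print(house.cov().loc["size_m2", "price"])
print(house.corr().loc["size_m2", "price"])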
Asymmetry, Variability and Data Relationships
1
asymmetry-variability-dan-relationship-data-116992ca79e3
2018-05-28
2018-05-28 05:48:34
https://medium.com/s/story/asymmetry-variability-dan-relationship-data-116992ca79e3
false
546
null
null
null
null
null
null
null
null
null
Statistics
statistics
Statistics
5,433
Rangga Rizky A
Data-Driven Human, Bachelor of Science Fiction
24ad0fec8834
ranggaantok
13
24
20,181,104
null
null
null
null
null
null
0
library(fpc)
library(factoextra)

data("multishapes", package = "factoextra")
# retrieve all data points from columns x and y
df <- multishapes[, 1:2]
# radius 0.15 units, minimum neighbor points = 5
db <- fpc::dbscan(df, eps = 0.15, MinPts = 5)
plot(db, df, frame = FALSE)
4
null
2018-02-27
2018-02-27 13:32:38
2018-03-01
2018-03-01 15:29:08
8
false
th
2018-03-01
2018-03-01 15:29:08
1
116b5d5c9873
2.586164
5
0
0
Understanding DBSCAN: how it works, how to use it, and example code in R
3
Clustering — What is DBSCAN? Clustering, the grouping of data, is one method of unsupervised learning. Commonly used algorithms include K-Means and Hierarchical Clustering, both of which are suited to clusters that are compact and clearly separated from one another. Example clustering on the iris dataset: Hierarchical Clustering (clusters of the 3 flower species in the genus Iris using hierarchical clustering) K-Means (clusters of the 3 flower species in the genus Iris using K-means with k = 3) But for clusters with a different kind of pattern, such as in the figure, we can divide the data into 5 groups: at the top, 2 elliptical clusters; at the bottom left, 2 clusters that form straight lines; at the bottom right, 1 compact cluster. If we use the K-Means algorithm to separate the clusters in this dataset with k = 5, we get the following result. Notice from the figure that K-Means does not divide the groups the way we want, and we have to set the value of k (the number of clusters) ourselves. For datasets whose points do not form compact groups, follow patterns of various shapes, or contain outliers, e.g. (Ester et al. 1996), the DBSCAN algorithm can solve this problem. What is DBSCAN and how does it work? DBSCAN (Density-based spatial clustering of applications with noise) looks for regions where the data cluster densely, which can be computed from the data points surrounding each point within a given radius. To use DBSCAN, two parameters are required: 1. eps, the radius from a center point 2. MinPts, the minimum number of data points required to define a core point Figure 1: position x is called a core point because there are at least 6 neighbor points within the radius of x. Figure 2: position x is called a core point because it has at least 6 neighbor points; position y is called a border point because y has fewer than 6 neighbor points but lies within the radius of core point x; position z is called a border point because z has fewer than 6 neighbor points but lies within the radius of y, and y lies within the radius of core point x, so z is considered to be in the same cluster as x and y; position n is called noise, or an outlier, because that point does not lie within the radius of any core point. Noise is data we want to cut out, and it is not included in any cluster. The DBSCAN algorithm: 1. For each data point, compute all its neighbor points within radius eps; if a data point has at least MinPts neighbor points, make that point a core point and create a new cluster. 2. For each core point, if one of its neighbors connects to another core point, merge them into the same cluster. 3. If a data point does not connect to any core point, mark it as noise, which belongs to no cluster. Example code in R: the dataset used is multishapes, taken from the factoextra library; multishapes is a table with two columns, x and y. The package providing the dbscan function is fpc. When plotted, the result is that all the data points are divided into 5 clusters, with the black points treated as noise and discarded. Summary: DBSCAN is a clustering algorithm that does not require specifying the number of clusters in advance, unlike K-means. DBSCAN is suitable for data that do not spread in compact groups and have patterns of various shapes that k-means cannot cluster, and it is well suited to cutting out noise/outliers. Reference: DBSCAN: density-based clustering for discovering clusters in large datasets with noise — Unsupervised Machine Learning Easy
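The article's example above uses R's fpc::dbscan. As a hedged companion sketch, here is the same idea in Python with scikit-learn on a non-convex toy dataset (the eps and min_samples values mirror the article's eps = 0.15, MinPts = 5, but the dataset is different, so the cluster count will differ too):

# DBSCAN sketch with scikit-learn: eps is the radius, min_samples is MinPts.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN

# "Two moons": a shape k-means struggles with but DBSCAN handles.
X, _ = make_moons(n_samples=400, noise=0.06, random_state=0)

db = DBSCAN(eps=0.15, min_samples=5).fit(X)

# Label -1 marks noise/outlier points that belong to no cluster.
labels = db.labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters:", n_clusters, "noise points:", np.sum(labels == -1))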
Clustering — What is DBSCAN?
18
clustering-dbscan-คืออะไร-116b5d5c9873
2018-06-15
2018-06-15 03:50:13
https://medium.com/s/story/clustering-dbscan-คืออะไร-116b5d5c9873
false
385
null
null
null
null
null
null
null
null
null
Dbscan
dbscan
Dbscan
13
Chakri Lowphansirikul
null
2d5ea4458e8d
artificialcc
69
72
20,181,104
null
null
null
null
null
null
0
null
0
8ad06567192c
2018-06-07
2018-06-07 11:08:37
2018-06-07
2018-06-07 11:10:17
4
false
en
2018-06-07
2018-06-07 11:10:17
0
116c3eeab612
4.80566
0
0
0
Technology is one of mankind's best friends in today's era, assisting us in numerous ways. Tech has become a necessity good…
5
Earprints, tail therapy and calorie scanners: Japan's latest inventions Technology is one of mankind's best friends in today's era, assisting us in numerous ways. Tech has become a necessity, a normal good in the economic sense, as it has completely altered the human path. When it comes to technological advancement, the first country at the top of the list is Japan. It is one of the leading and most advanced countries of the tech era; its hardworking and disciplined residents have reshaped their thinking and abilities by grasping new ways to simplify their lives. In 2017, Sharp ushered in the 8K era with its stunning 27-inch monitor of that resolution, Lenovo unveiled a Star Wars Jedi Challenge game that let users engage in virtual-reality lightsaber battles, and there were robots aplenty. This is, after all, Japan. Here are eight of the strangest inventions showcased at the event. Chew your way to the perfect bite It looks like a Bluetooth hands-free device, but it is actually measuring your chewing strokes as you eat. Worn hanging from the ear, it is smaller and lighter than existing bite sensors (they're usually attached to the chin) and can determine a user's bite speed, number of bites per minute and type of bite using a waveform detected behind the ear. Sharp has somewhat gamified the idea: the device syncs with a smartphone app which classifies results by animal type, so deliberate chewers are tortoises. The product will go on sale next year. Sharp says it will then share the data it gathers with Japanese universities. The goal, it says, is to let users achieve the perfect chewing technique. Quite why we need to do this will presumably become clearer in time. Ear print to replace the thumbprint? Thought your thumbprint was secure? Think again. Researchers at Michigan State University last year demonstrated it can be hacked using little more than an inkjet printer. Enter the ear print. Developed by US-based technology firm Descartes Biometrics, it works like this: first, the user downloads the ERGO software onto their smart device. The user holds the device to the side of their head, centering their ear over the touchscreen. A sound is then sent into the ear, and owing to the "unique geometry of the ear", the sound that is echoed back is specific to each individual. ERGO uses sensors embedded in modern Android smart devices, so no extra hardware is required. Verification takes around one second and, the company says, improves with use, storing up to ten scans of the user's ear. A multitude of factory robots Omron's artificial intelligence (AI) autonomous transport mobile robots are designed to work together in factories. The white robots, unveiled in January, gather data as they roam around, building maps in their "brain" which they then use to navigate their environment on their own. All the operator has to do is set the robot's destination and it will plot its own route. Laser sensors on all sides of the "body" let the robot detect unexpected objects (a person, for instance). Using those maps, it can re-route to the target. The robots can travel at 1.8 meters per second and carry a maximum load of 130 kg; the greater the load, the slower they travel. And their application isn't limited to factories. At Incheon Airport in South Korea, a mobile robot has been serving customers drinks. The calorie-scanning machine Lunch is ready. But don't tuck in. First, you need to put it through your calorie-scanning machine. That's the idea behind CaloRieco. Its scanner measures nutrients within an accuracy range of 20%, according to manufacturer Panasonic. Existing calorie scanners take between two and three minutes; the tech here worked in just 10 seconds. As well as people eager to lose weight, the product is also aimed at those with diabetes and other diet-affected health conditions. The scanner supplies nutritional data, and Panasonic hopes that eventually it will be able to suggest recipes according to users' needs. The price and launch date are yet to be confirmed. Superstores of tomorrow Online stores are nothing new, but what if physical shops went online, too? That's the idea behind Usockets, a series of electronic supermarket price tags which feed into a central system that uses real-time data to run a store. Not enough footfall in the dairy aisle? The system's heat map can recognize that and automatically lower prices on items that need to sell. It gets even smarter. A technology named LinkRay lets shoppers scan price tags with their smartphones to trigger videos offering extra product data. The technology is still under development but gives an insight into what the shops of the future could look like. A digital transformation Ever wondered what you would look like with thicker eyebrows? Or wearing false eyelashes? Panasonic's Cosmetics Design Tool lets users experiment with these possibilities and more. The graphics-editing software, in video-simulation mode, uses a live video as its canvas. Users can apply makeup to their image, getting a realistic preview of their virtual transformation. Brushes of different sizes can be selected to draw hairs on the face or apply blusher. The technology is targeted at makeup stores, wedding-photography studios and cosmetics brands. The application is finished, and Panasonic says it is looking for business partners, such as cosmetics makers, to take it to market. The perfect babysitter? "Good morning folks, did you have a good sleep?" asks the Cocotto. Billed as the ideal childcare companion by Panasonic, the rolling ball-shaped robot can tell drowsy children to go to bed, download songs from the cloud to sing to little ones, and aid a child's educational development. Parents train the round social robot, making it as much their partner as the child's friend. Oh, and it has some extremely cute facial expressions. Qoobo, the therapy robot A headless robotic cat might not sound that relaxing, but believe us, it is. The Qoobo has been developed by Japanese company Yukai Engineering and funded by a Kickstarter campaign. When a user strokes the cushion-shaped and cushion-sized toy, its tail wags. The more energetic the petting, the harder the wagging. The Qoobo will launch next June and sell for $100, with an eight-hour battery life.
Earprints, tail therapy and calorie scanners: Japan’s latest inventions
0
earprints-tail-therapy-and-calorie-scanners-japans-latest-inventions-116c3eeab612
2018-06-07
2018-06-07 11:10:19
https://medium.com/s/story/earprints-tail-therapy-and-calorie-scanners-japans-latest-inventions-116c3eeab612
false
1,088
Founded in 2015, we are a start-up working towards building a creative ecosystem that offers everyone a platform to launch themselves in areas of literature, arts, music and just about everything creative. Now write on www.storymirror.com and reach an audience of more than 3 M.
null
storymirror
null
StoryMirror
storymirror
WRITE,READ,WRITING CONTEST,PUBLISHING,BOOKS
story_mirror
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ojasvi Balotia
Never argue with an idiot; they'll drag you down to their level and beat you through experience.
dd3d6f4827be
ojasvibalotia1998
4
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-26
2018-03-26 00:00:20
2018-03-26
2018-03-26 00:07:03
4
false
en
2018-03-29
2018-03-29 02:53:06
32
116d13195a5d
6.149057
0
0
0
I recently listened to a radio talk-show host interview the technology editor for the New York Times regarding Artificial Intelligence…
1
Our human Physiology makes us Intelligent I recently listened to a radio talk-show host interview the technology editor for the New York Times regarding Artificial Intelligence (AI). Of course the technology editor used the metaphor of a neural network to describe AI. The interviewer immediately picked up on the idea that AI systems were facsimiles of human brains. Even though the editor kept reminding her that a neural network was a metaphor, it appeared the interviewer quickly became comfortable with the idea that nodes in AI networks were the facsimile of human synapses. She did not appear comfortable when the interviewee began to explain that nodes were not synapses but rather simply a way of describing the decomposition of a much larger mathematical algorithm. This is a chart created by Fjodor van Veen of the Asimov Institute illustrating what he calls a "mostly complete" chart of neural networks. Regardless of how the nodes are configured or the label (e.g. Feed Forward, Markov Chain, Deep Convolutional Network, etc.) used to distinguish one configuration from another, all configurations include some grouping or "layering" of nodes that includes an input layer, hidden layers and an output layer. An excellent summary of the functions of the different layers of nodes is provided by David J. Harris on Stack Exchange. "Each layer can apply any function you want to the previous layer (usually a linear transformation followed by a squashing nonlinearity). The hidden layers' job is to transform the input layer into something that the output layer can use. The output layer transforms the hidden layer activations into whatever scale you wanted your output to be on." Harris continues: "A feed-forward neural network applies a series of functions to the data. The exact functions will depend on the neural network you're using: most frequently, these functions each compute a linear transformation of the previous layer, followed by a squashing nonlinearity….The roles of the different layers will depend on the functions being computed." Going forward, it's important to keep in mind that AI applications, like a "neural network", are indeed metaphors for algorithms to which we have assigned the label "artificial intelligence" (AI). Neural networks are not "intelligent" like a human being because, while they have memory and logic, they lack emotion and feelings, both of which are critical physiological elements of human intelligence. According to eminent neuroscientist Antonio Damasio, "we are only just beginning to understand the foundations of human intelligence and consciousness that cannot be captured in an algorithmic formula divorced from the functions of the body and the long evolution of our species and its microbiome." According to science fiction writer Joseph Reinemann, even Vulcans, like Mr. Spock, "feel"; in fact they feel much, much more passionately than most. But they don't express it. Vulcans like Spock keep their feelings carefully controlled by deconstructing them and thus robbing them of their potency. In doing so, they're able to prevent them from affecting any of their outward actions. Controlling their emotions is a constant struggle for all but the most devoted of Vulcans, those who have undergone the Kolinahr ritual. Kolinahr masters are said to not feel anything at all. Spock almost achieved this… but V'Ger, the sentient, massive entity en route to find its "Creator", destroyed anything it encountered by digitizing it for its memory chamber. Calling out for its creator stirred something within Spock and ruined his chances of achieving Kolinahr. The objective of a machine-learning model, the prevailing type of AI, is the identification of statistically reliable relationships between certain features of the input data and a "target variable" or outcome. Newton's third rule of induction is the first unwritten rule of machine learning: "Whatever is true of everything we've seen is true of everything in the universe." The authors of "The Simple Economics of Machine Intelligence" have been even more precise in their explanation of what machine learning is and is not. According to Ajay Agrawal, Joshua Gans, and Avi Goldfarb, all professors at the University of Toronto's Rotman School, machine intelligence technology is, in essence, a "prediction technology", relying on statistics and probability. However, the human mind is far from "prediction technology". According to Dr. Damasio, "Feelings tell the mind without any word being spoken of the good or bad direction of the life processes at any moment within its respective body." — A. Damasio Dr. Robert Burton's work complements Dr. Damasio's when he writes: "… the (human) brain creates the involuntary sensation of 'knowing' and how this sensation is affected by everything from genetic predispositions to perceptual illusions common to all body sensations." — Robert A. Burton M.D. All sensory receptors for vision, hearing, touch, taste, smell, and balance receive distinct physical stimuli and transduce the signal into an electrical action potential. This action potential then travels along afferent neurons to specific brain regions where it is processed and interpreted. Synesthetes, or people who have synesthesia (when more than one of a person's senses are "crossed"), may see sounds, taste words or feel a sensation on their skin when they smell certain scents. They may also see abstract concepts like time projected in the space around them. Scientists used to think synesthesia was quite rare, but they now think up to 4 percent of the population has some form of the condition. This is how a young lady with synesthesia "sees music" which most of us only hear. Synesthesia exemplifies how human intelligence is much more complex than what we currently characterize as "Artificial Intelligence (AI)". Human intelligence manifests itself as sensory information which enters the neocortex by way of the thalamus. The transfer of sensory signals from the periphery to the cortex is not simply a one-to-one relay but a dynamic process involving reciprocal communication between the cortex and thalamus. Neural networks, a principal type of artificial intelligence, are typically organized in layers. Layers are made up of a number of interconnected 'nodes' which contain an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is output, as shown below: An AI neural network, like the one illustrated above, is not physiological. It is mathematical. In the illustration, data in the input layer are labeled as x with subscripts 1, 2, 3, …, m. Nodes in the hidden layer are labeled as h with subscripts 1, 2, 3, …, n. Note that for the hidden layer it's n and not m, since the number of hidden-layer neurons might differ from the number of inputs. The hidden-layer nodes are also labeled with superscript 1, so that when you have several hidden layers, you can identify which hidden layer it is: the first hidden layer has superscript 1, the second hidden layer has superscript 2, and so on. The output is labeled as y with a hat. If we have m input data (x1, x2, …, xm), we call these m features. Secondly, when we multiply each of the m features by a weight (w1, w2, …, wm) and sum them all together, this is a dot product: w1·x1 + w2·x2 + … + wm·xm. This is not how a human brain works. The brain's logical and reasoning mechanisms, contained in the cerebral cortex, work along with sensory and emotional mechanisms, contained in the hippocampus, to retrieve feelings and thoughts that combine to drive behavior. Humans make use of fundamental processes of life regulation that include things like emotion and feeling, but we connect them with intellectual processes in such a way that we create a whole new world around us. Once again, Dr. Damasio sums it up best: "At the point of decision, emotions are very important for choosing. In fact even with what we believe are logical decisions, the very point of choice is arguably always based on emotion." __________________________________________________________________ Notes: https://www.asimovinstitute.org/author/fjodorvanveen/ https://stats.stackexchange.com/questions/63152/what-does-the-hidden-layer-in-a-neural-network-compute http://ngp.usc.edu/usc-neuroscientist-antonio-damasio-argues-that-fee...acity-for-cultural-creation-a-map-of-the-computational-mind-he-says/ Domingos, Pedro. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (p. 66). Basic Books. Kindle Edition. Robert A. Burton, On Being Certain: Believing You Are Right, Even When You're Not, St. Martin's Griffin, New York, 2008 https://www.mnn.com/health/fitness-well-being/stories/what-is-synesthesia-and-whats-it-like-to-have-it https://towardsdatascience.com/multi-layer-neural-networks-with-sigmoid-function-deep-learning-for-rookies-2-bf464f09eb7f https://www.technologyreview.com/s/528151/the-importance-of-feelings/ http://bigthink.com/experts-corner/decisions-are-emotional-not-logical-the-neuroscience-behind-decision-making
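As a hedged sketch of the forward pass the article describes (m input features, a per-node dot product with weights, a squashing nonlinearity, then an output y-hat), here is a short Python example; the shapes and random weights are illustrative only, not taken from the article's diagram:

# Minimal forward pass: m = 3 features, n = 4 hidden nodes, 1 output.
import numpy as np

def sigmoid(z):
    # the "squashing nonlinearity" applied after each linear transformation
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])     # input features x1..xm
W1 = rng.normal(size=(4, 3))       # weights of the (first) hidden layer
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))       # weights of the output layer
b2 = np.zeros(1)

h = sigmoid(W1 @ x + b1)           # each hidden node: dot(w, x) + bias, squashed
y_hat = sigmoid(W2 @ h + b2)       # the output "y with a hat"
print(y_hat)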
Our human Physiology makes us Intelligent
0
our-human-physiology-makes-us-intelligent-116d13195a5d
2018-03-29
2018-03-29 02:53:07
https://medium.com/s/story/our-human-physiology-makes-us-intelligent-116d13195a5d
false
1,444
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
william smith
Husband for 46 years. Dad forever! Very lucky man.
c1e5f41d4eec
WillmsmithSmith
66
17
20,181,104
null
null
null
null
null
null
0
# Since the BTS webpage only allows fetching the dataset by month,
# I downloaded the monthly files first and merged them.
import glob
import os, os.path

import pandas as pd

# Set the file path
path = './assets/rawdata/'

# Make reproducible, whether the combined dataset existed or not
if os.path.isfile('./assets/newdf_2013.csv'):
    newdf_2013 = pd.read_csv('./assets/newdf_2013.csv', low_memory=False)
    print("File existed!", newdf_2013.shape)
else:
    # pull 2013 datasets, segmented by month, and merge
    files = glob.glob(path + str(2013) + "/*.csv")
    df_2013 = pd.concat((pd.read_csv(f, low_memory=False) for f in files))
    print("Shape of 2013 raw datasets: ", df_2013.shape)

# Select only relevant columns
if os.path.isfile('./assets/newdf_2013.csv'):
    print("File existed and already preprocessed")
else:
    cols = ['Year', 'Month', 'DayofMonth', 'DayOfWeek', 'Carrier', 'FlightNum',
            'TailNum', 'Origin', 'Dest', 'CRSDepTime', 'DepTime', 'DepDelay',
            'TaxiOut', 'WheelsOff', 'CRSElapsedTime', 'ActualElapsedTime',
            'AirTime', 'Distance', 'WheelsOn', 'TaxiIn', 'CRSArrTime',
            'ArrTime', 'ArrDelay', 'Diverted', 'Cancelled', 'CancellationCode',
            'NASDelay', 'SecurityDelay', 'CarrierDelay', 'LateAircraftDelay',
            'WeatherDelay']
    newdf_2013 = df_2013[[col for col in df_2013.columns if col in cols]]
    print("Shape of 2013 modified datasets: ", newdf_2013.shape)

# Only select Washington metro airports
# (newdf_2014 .. newdf_2017 are built the same way as newdf_2013 above)
if os.path.isfile('./assets/washington.csv'):
    print("File existed and already preprocessed")
    was_air = pd.read_csv('./assets/washington.csv', low_memory=False)
    print("Washington airports dataset dimension: ", was_air.shape)
else:
    was_airports = ['DCA', 'IAD', 'BWI']
    a = newdf_2013[newdf_2013.Origin.isin(was_airports)]
    b = newdf_2014[newdf_2014.Origin.isin(was_airports)]
    c = newdf_2015[newdf_2015.Origin.isin(was_airports)]
    d = newdf_2016[newdf_2016.Origin.isin(was_airports)]
    e = newdf_2017[newdf_2017.Origin.isin(was_airports)]
    a_ = newdf_2013[newdf_2013.Dest.isin(was_airports)]
    b_ = newdf_2014[newdf_2014.Dest.isin(was_airports)]
    c_ = newdf_2015[newdf_2015.Dest.isin(was_airports)]
    d_ = newdf_2016[newdf_2016.Dest.isin(was_airports)]
    e_ = newdf_2017[newdf_2017.Dest.isin(was_airports)]
    was_air = pd.concat([a, b, c, d, e, a_, b_, c_, d_, e_], ignore_index=True)
    print("Washington airports dataset dimension: ", was_air.shape)
    # save dataset for future usage
    was_air.to_csv('./assets/washington.csv', index=False)
6
null
2018-07-24
2018-07-24 20:58:31
2018-07-27
2018-07-27 20:56:11
9
false
en
2018-07-27
2018-07-27 20:56:11
5
116fb7a8d94e
6.090566
3
0
0
As an aviation geek, I usually look up exactly which aircraft I will get for my long-haul flights. I start asking questions and getting…
5
Can aircraft tail number be a good predictor of flight delays? As an aviation geek, I usually look up exactly which aircraft I will get for my long-haul flights. I start asking questions and hunting for answers: Is it a brand-new aircraft? Did I make a good seat selection? Does it have a prior scheduled flight to fly into my airport? If so, does it operate on time? If not, how long do I need to wait for my airplane to arrive? And so on. When you make these searches, you may find yourself looking at a 4–6 digit code, which we call the tail number or the registration number, one of the unique identifiers of an aircraft. If you have ever taken an airplane photo in the excitement of boarding, zoom in on its tail. You will find this number printed on both the port and starboard sides of the aircraft. Like this. after a 14-hour flight, I captured its tail number This unique number inspired me to gear up my data science project. I believed that the tail number could be a good predictor of flight delays. And I stretched this inference further: can the aircraft's maker or model be found from this number, and can they serve as variables in my prediction models? That's how I started the project, with this ambitious assumption: if there is a relationship between aircraft and delays, it can be linked to the fleet operation strategies of aviation companies, namely whether to diversify or simplify their fleet inventories. On-Time Performance open-source data in 29 million data points The dataset, the on-time performance data, can be obtained from the Bureau of Transportation Statistics website. In my case, I dove into the 5 most recent years of data, from 2013 to 2017, which ended up at 29 million data points covering all domestic passenger flights. The CSV file contains everything from basic information on each flight (e.g. date, scheduled time, actual time) to specific detailed logs, such as minutes of delay, reason for the delay, flight status (cancelled, delayed or diverted), etc. Once you fetch the dataset and complete the merge, make sure to do a basic analysis of which columns and features you mainly want to look at. Here I selected around 30 features out of 110 columns. Dropped columns were flight-diversion information and somewhat duplicated geographical airport/market data. For computational reasons, I subset it to the local market's data for both outbound and inbound flights. As a result, it came down to 2.2 million data points. As I previously mentioned, I planned to capture each aircraft's identity (model, maker and age) from the tail number provided in the dataset. In the 2.2 million rows, around 6,000 unique tail numbers were found. I therefore decided to use Selenium to web-scrape flightradar24.com, which is renowned for its live flight-tracker service. The site offers detailed aircraft codes, serial numbers, ages and delivery dates to paying subscribers. After some tests with Selenium, I successfully scraped information on thousands of aircraft. You can find more on my GitHub. Once the whole dataset is prepared, let's do exploratory data analysis and feature engineering. Here are several golden nuggets I found in the data.
The national capital area has three international/national airports: Baltimore/Washington International Thurgood Marshall Airport (IATA code BWI: Baltimore, MD), Ronald Reagan Washington National Airport (DCA: Arlington, VA), and Washington Dulles International Airport (IAD: Dulles, VA). Of these three airports, BWI has the largest flight count, followed by DCA and IAD. Interestingly, all the airports showed a dip in February and a spike in March. My guess is the same as yours: spring break! When you count flights in both directions, Southwest ranks first, followed by United, American, and Delta. (As you can see, subsidiary airlines were not re-categorized under the carriers they operate for, and US Airways still appears separately in the dataset.) Before we say which airlines performed best and worst (I would guess the main reason you have read this far, to quench your thirst and curiosity), let's cover one final technical point. I wanted to use the departure and arrival times logged in the dataset as predictors in my model. With the help of Ben Shaver, I found Mr. Ian London's blog post about cyclical time conversion. The code and thought process for this part of my project were inspired by the blog, and credit goes to Mr. London and Ben. Basically, we convert time with a sine transform and a cosine transform. Once sin_time and cos_time are calculated, you can scatter-plot them into a 24-hour clock shape. As you can see from the differently colored clock above, flight delays create extra unscheduled work at the airport. Here is the analysis of the departure and arrival flights that were delayed more than 15 minutes from their scheduled times. On the first page, the red line shows the mean delay rate. The second page (code and plot credited to Mr. Fabien Daniel and his Kaggle tutorial) represents each airline's mean delay in minutes for both departures and arrivals. Challenges: too many unknown values in the aircraft info Okay, let's get back to the original inference and assumption. Can we predict flight delays with aircraft models and makers? My short answer is yes, it may be possible, but my longer answer is that I cannot get a meaningful result at this moment. The reason is that I got too much unknown aircraft information from flightradar24.com. Out of the 6,000 unique tail numbers, after my Selenium bot finished its work, around 20% of the data were unknown or null. So my challenge and new journey here is to find a better source for collecting aircraft information based on tail numbers. If anyone has thoughts or suggestions on my ambitious project, I will be more than happy to chat with and learn from you. As a new data scientist, I am always up for feedback and ready to tune my work. All of my current work and process can be found on my GitHub. Thank you very much for reading.
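As a hedged reconstruction of the cyclical time encoding the article credits to Ian London's post (this is not the author's exact code; the minute values are invented), here is a short Python sketch:

# Cyclical encoding: map minutes-past-midnight onto a 24-hour circle,
# so 23:59 and 00:01 end up numerically close together.
import numpy as np
import pandas as pd

times = pd.DataFrame({"dep_minutes": [5, 360, 720, 1080, 1435]})

minutes_in_day = 24 * 60
angle = 2 * np.pi * times["dep_minutes"] / minutes_in_day
times["sin_time"] = np.sin(angle)
times["cos_time"] = np.cos(angle)
print(times)  # scatter-plotting (sin_time, cos_time) draws a 24-hour clock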
Can aircraft tail number be a good predictor of flight delays?
17
can-aircraft-tail-number-be-a-good-predictor-of-flight-delays-116fb7a8d94e
2018-07-27
2018-07-27 20:56:11
https://medium.com/s/story/can-aircraft-tail-number-be-a-good-predictor-of-flight-delays-116fb7a8d94e
false
1,296
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Kihoon Sohn
Data Scientist | Patient Listener
8e4c1dbe157c
kihoon.sohn
14
14
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-11
2018-03-11 23:38:20
2018-03-12
2018-03-12 01:12:30
1
false
en
2018-04-22
2018-04-22 17:32:04
0
11739948fdfb
2.283019
0
0
0
The possibilities for the life we seek and desire exist somewhere in some dimension; only a special mix of inputs in the right proportions…
5
Management lessons from machine learning backward propagation theory The possibilities for the life we seek and desire exist somewhere in some dimension; only a special mix of inputs in the right proportions can bring them to life. To find these inputs and their proportions, let's turn to how we build AI models. Without too many tech buzzwords, I will explain how we can optimise our lives and companies using this technique. Machine learning thrills me; it's my favourite part of computing. The fact that I can model some inputs into some form of output without bothering about what happens in between is really interesting. Various machine learning techniques can help the computer figure out the best way to work out the middle part, leading, most times, to the desired output. Even though it's possible to build a machine learning model without understanding the nitty-gritty of what's happening inside (most libraries, like tensorflow and tflearn, have abstracted away the dirty parts), it can be interesting to try to understand how the computer figures out the middle part of training a deep learning model. The key is a mathematical concept known as 'calculus', which I consider just a fancy name here because, for our purposes, it simply means the 'slope' between two points, in this case on the machine learning graph. The goal while training a machine learning model is often to minimise the error, which is calculated by an error function, and thereby optimise the accuracy. This optimisation is achieved by something called backward propagation. During forward propagation, the computer starts with some numbers (weights and biases), and it keeps adjusting those numbers through the backward propagation process until it reaches a satisfying level of accuracy. I don't want to lose my non-techie readers, so let's stop there for now. This is simply how life works. Nobody knows what works in today's world without finding out, as all variables are changing constantly. Things change so fast that the moment you're reading a book on building a life model or a business model, it's already getting outdated. The only way to strike that brilliance is to do it the way computers do when they're training on data. Iterate, and replay each session to find out what could have influenced or tilted your earlier result in your favour. This forms the new parameter to iterate on; roll it in, and replay again. Keep updating your inputs with each iteration until you notice the result isn't changing much; at that point, you will need to seek new challenges, as you may already be operating at your best. Unlike computers, humans can burn out easily, as one iteration may take a lot of time. You should only do this for battles that will change your world when won, not for trivial things. You need to consciously make sure you can finish each iteration as fast as possible and be frugal with resources. You need to eat well, exercise and be mentally strong to follow through. Nothing worthwhile comes very easy. If you like this post, press and hold the clap button. It'd mean a lot to me.
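As a hedged toy sketch of the iterate-measure-adjust loop the essay borrows from machine learning, here is gradient descent on a single weight in Python (everything here, the function, learning rate and step counts, is invented purely for illustration):

# Toy training loop: start with some weight, measure the error, nudge the
# weight against the error's slope (the "calculus" step), and repeat until
# the improvement levels off.
def error(w, x=3.0, target=12.0):
    return (w * x - target) ** 2

def gradient(w, x=3.0, target=12.0):
    return 2 * x * (w * x - target)   # d(error)/dw

w, lr = 0.0, 0.01                     # initial weight and learning rate
for step in range(200):
    w -= lr * gradient(w)             # the backward-propagation-style update
    if step % 50 == 0:
        print(step, round(w, 3), round(error(w), 4))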
Management lessons from machine learning backward propagation theory
0
machine-learning-backward-propagation-theory-for-life-organisation-improvement-11739948fdfb
2018-04-22
2018-04-22 17:32:05
https://medium.com/s/story/machine-learning-backward-propagation-theory-for-life-organisation-improvement-11739948fdfb
false
552
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Israel Oyinlola
null
d7a066cf1db7
oyinlola.israel.kayode
4
3
20,181,104
null
null
null
null
null
null
0
null
0
244ef586c71e
2018-04-18
2018-04-18 12:51:29
2018-04-18
2018-04-18 22:51:26
1
false
en
2018-09-03
2018-09-03 11:16:32
3
1173f44f5a03
2.256604
3
0
0
Artificial Intelligence, Machine Learning and related fields are in a constant state of change. We want to inform but also encourage…
4
AI MUST READS — W16 2018, by City AI Artificial Intelligence, Machine Learning and related fields are in a constant state of change. We want to inform but also encourage discussions on well-presented topics we think are necessary in the context of putting AI into production. Every week we pick applied AI's best articles and add a discussion starter. 1. The future will be tokenized: how blockchain will free you to control your financial destiny The future will be tokenized: how blockchain will free you to control your financial destiny I believe in freedom.medium.com Justin Lee is a regular contributor to Medium and has written some fantastic opinion and factual pieces, and although this article may not be specifically focused on the technologies behind Artificial Intelligence, I think it's fair to say that blockchain and artificial intelligence go hand in hand. I won't profess to have a great understanding of blockchain as a whole yet, but I do understand the ideas contained within this article (which says a lot about its ease of reading), and it gives a great sense of the endless possibilities that could spawn from blockchain's growth. Perhaps some more about the current drawbacks of blockchain would be productive? 2. Fast track to the other side of the AI hype collapse Fast track to the other side of the AI hype collapse Getting to solving actual problemstowardsdatascience.com Said Aspen has written one of the more realistic articles about the current situation and the (potential) future of AI. "To us, it is evident that the reporting of AI is a bit over the top — a tad on the dramatic side." Since starting this curation of must-read articles, a constant trend has been that a large portion of them exaggerate every related aspect: the danger, the potential future, the current applications and so on. It's refreshing to read something that at least attempts an honest overview. 3. Cambridge Analytica scandal 'highlights need for AI regulation' Cambridge Analytica scandal 'highlights need for AI regulation' Britain needs to lead the way on artificial intelligence regulation, in order to prevent companies such as Cambridge…www.theguardian.com Duh. Perhaps it's me, but this article reeks of the need to add "Cambridge Analytica" to the title in order to draw in readers. The House of Lords Select Committee produced its five principles for a cross-sector AI code separately from the Cambridge Analytica scandal. Instead, the article seems to focus on the fact that Lord Clement-Jones has been quoted pointing out the obvious: "These principles do come to life a little bit when you think about the Cambridge Analytica situation," he told the Guardian. "Whether or not the data analytics they carried out was actually using AI … It gives an example of where it's important that we do have strong intelligibility of what the hell is going on with our data." Surely a greater focus on the recommendations within the Lords report, and on the fact that the UK has taken a great step towards AI regulation, would make a more productive article than tacking two separate stories together for the purpose of clickbait. Send me any articles at [email protected], whether they are your own, a colleague's or your mortal enemy's.
AI MUST READS — W16 2018, by City AI
18
ai-must-reads-w16-2018-by-city-ai-1173f44f5a03
2018-09-03
2018-09-03 11:16:32
https://medium.com/s/story/ai-must-reads-w16-2018-by-city-ai-1173f44f5a03
false
545
Making knowledge on #appliedAI accessible
null
cityai
null
Applied Artificial Intelligence
cityai
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,COMPUTER SCIENCE,NATURALLANGUAGEPROCESSING
thecityai
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Joe Lord
Apprentice at Sage UK working on emerging technologies and Intern at City.AI curating the weekly 'AI Must Reads'.
43fdd3607588
joe.lord
43
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-07
2017-12-07 20:06:03
2017-12-07
2017-12-07 20:06:52
1
false
en
2017-12-07
2017-12-07 20:06:52
3
117430102669
2.45283
0
0
0
It seems that more and more of our interactions are with automations and algorithms. Everything from the manufacture of your car and the production of food…
5
HUMANIZING AUTOMATION It seems that more and more of our interactions are with automations and algorithms: everything from the manufacture of your car and the production of your food to your Facebook feed and content consumption. And, as marketers, we interact with these technologies on a daily basis while strategizing and optimizing online marketing campaigns. So, how do we work within this new world? We strive for a balance between machine learning and human problem solving. The World of Search Engine Automation In the world of Search Engine Marketing (SEM), algorithms are becoming a powerful tool for effective campaigns. Because of this, it's important for us to work with and improve upon the capabilities of algorithms. In an article for Search Engine Land, Frederick Vallaeys says, "Automation is taking over a lot of the tasks humans have historically done in PPC; but as this shift continues, there will be plenty of new opportunities for PPC experts and agencies to provide value to their clients." Vallaeys suggests that there are four ways to do this: marketers will help teach machines to learn, provide the creativity machines lack, be the ones to avert any disasters, and employ the empathy that machines lack. Teaching Machines First, as we teach machines to learn, we are able to make better decisions based on the answers we receive from the algorithms we deploy. And no, machines are not ready to take over the world. As Vallaeys says, "The reality is that machine learning is still very dependent on humans. We program the algorithms, we provide the training data, we even manipulate the training data to help the machine get it right." That said, machines are able to analyze thousands of data sets quickly, allowing us the time to review and implement the data we get. Creating Solutions Second, marketers are able to provide creative solutions that machines cannot. It is about working collaboratively with automations and testing old concepts in new ways. In a 2005 experiment, Playchess.com held a chess tournament where teams could compete with other players or a computer. According to chess grandmaster Garry Kasparov, "The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming." This is a huge advantage for any digital marketing campaign. Avoiding Disasters Third, marketers will be able to avert campaign issues as they arise. Problems will occur during a campaign, but being able to mitigate them is something humans have over machines. It is the questions we ask that help turn around a campaign that is underperforming. By working in conjunction with algorithms, we are able to make quicker, smarter decisions. Understanding Empathy And finally, humans have empathy. As of now, machines are not able to make a connection with a client or product. There is no emotional response to the outcome of a campaign. I think Vallaeys puts it well when he says, "Understanding the nuances of your client's business (which will help you come up with new ideas to test), understanding their fears about PPC, understanding their frustrations with the last account manager and so on. All this will help you have a more productive relationship with them." Keeping Humans Relevant Machines are wonderful things. They provide us with endless opportunities and conveniences.
Understanding how to work with them is key to progressing your work and providing better results. So, the good news is, humans are not obsolete… yet.
HUMANIZING AUTOMATION
0
humanizing-automation-117430102669
2017-12-07
2017-12-07 20:06:53
https://medium.com/s/story/humanizing-automation-117430102669
false
597
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
KeyMedia Solutions
KeyMedia Solutions partners with agencies and direct clients to act as an extension of their digital capabilities - Online Advertising and Startup consulting
a10650fc827a
KeyMediaCEO
429
415
20,181,104
null
null
null
null
null
null
0
null
0
ada5bde467e7
2018-02-19
2018-02-19 19:39:21
2018-02-19
2018-02-19 19:50:36
2
false
en
2018-02-19
2018-02-19 19:50:36
5
1174d899a478
2.972013
0
0
0
There are various definitions of human intelligence; however, I wanted to take one of the simplest definitions I found on Wikipedia, which…
5
Does your business have a brain? There are various definitions of human intelligence; however, I wanted to take one of the simplest definitions I found on Wikipedia, which says that human intelligence is the ability to generate new information by combining the data we receive from the outside with that which we have in our memory. In the same way I will define the concept of intelligence in a business, drawing on my experience in the worlds of technology and business. Over the course of my career I have been on the business side looking to create business intelligence, and on the technology side selling business intelligence systems to businesses. Before defining what I mean by the "brain of a company", it is important to draw an analogy between the human being and a company. In a business, hardware is well understood as the physical computer where the software component resides; the software comprises the information systems that allow the operation of the company to be more efficient. In this analogy, the hardware component of the human being is the physical body, which we have to feed and exercise to keep in shape for a healthy life, while the software component of the human being is his intelligence and spirit. Leaving aside the theological and religious concepts that are not part of this article, I will consider the software to be only the human intelligence. Following this analogy, I will define the brain of a company as follows: it is the information system that has a capacity similar to the human intelligence defined above: to generate new information by combining data received from the outside with that which is stored in the databases inside the company. I personally think that many years will still have to pass before we can achieve such an integrated system with that functionality in a company. Achieving this integrated system will depend on the maturity of its management and on smart investments in the world of artificial intelligence. Information systems need to learn from the millions of data points that come from the company's external environment, using the technology known as "machine learning". Comparing this data with historical information stored in the company's databases will detect changes in the environment and recommend the changes needed to adapt the company's internal action plans. This was defined in my previous article, How smart is your company? When that time comes, the "brain" of the company will have a capability similar to the human intelligence defined at the start, and robots will surely take the business decisions to grow the company using its artificial intelligence. Until this happens, what can we do? We need to have our eyes wide open and take the following steps: 1. Change the brain of our people: invest in the digital transformation of our company, which is not only investing in technology but primarily a cultural change of the people working in our company. They need to be prepared for the technological changes that are happening globally. 2. Feed your databases: capture all possible data from the external environment of the company, such as suppliers, customers, competitors, regulators, employees, partners, etc. We must invest in storing unstructured information from multiple sources in various formats such as text, documents, images, videos, audio and geo-references. 3.
Artificial intelligence: invest in modern systems with artificial intelligence and machine learning, so that your company's systems have the ability to consume large volumes of unstructured data, detect behaviour patterns in our environment and learn from changes in the market. 4. Processing power: to be capable of processing these large volumes of information, it is necessary to have parallel processing. It is necessary to invest in parallel processing hardware and software technology that scales easily as the business grows to disruptive scales. If you liked this article, I invite you to share it on your social networks and to follow me on LinkedIn, Facebook and Twitter, where I share daily news.
Does your business have a brain?
0
does-your-business-have-a-brain-1174d899a478
2018-02-19
2018-02-19 19:50:38
https://medium.com/s/story/does-your-business-have-a-brain-1174d899a478
false
686
You can read articles about how to use Artificial Intelligence in your business to make it more profitable
null
luis.barragan.scavino
null
Artificial Intelligence in your Business
artificial-intelligence-in-your-business
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,BUSINESS,CEO,PROFIT
lbarragan
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Luis Barragan Scavino
Passionate about digital transformation and innovation #ArtificialIntelligence #MachineLearning #Fintech
efc8871c8c8e
Luis.Barragan
38
80
20,181,104
null
null
null
null
null
null
0
null
0
781ef5d00ff2
2017-11-16
2017-11-16 00:32:01
2017-11-16
2017-11-16 17:12:30
10
false
en
2017-11-16
2017-11-16 17:14:58
4
1174e6cc078b
3.721698
3
0
0
by Hamdan Azhar and Farhan Mustafa @Grafiti
5
Youth Voting in the 2016 & 2017 Elections 🙋🏽‍️🙋🏼‍📈 by Hamdan Azhar and Farhan Mustafa @Grafiti Youth voting was up 31% from the 2013 gubernatorial election in Virginia, reflecting a nationwide uptick in youth voting in the 2016 presidential elections. Last week, in the Virginia gubernatorial election, Democratic candidate Ralph Northam handily defeated Republican Ed Gillespie, 54–45%, in the highest-turnout Virginia governor's race in two decades. Now, the latest analysis by the research group CIRCLE at Tufts University finds that voter turnout among young people aged 18–29 doubled to 34% in 2017 compared to just 17% in 2009. (In contrast, youth voting was stable in the New Jersey governor's race.) Source: CIRCLE The Census Bureau has published estimates of voting rates by age group for presidential elections dating back to 1964. They find systematic and consistent differences in voting rates, with over 60% of people aged 45+ voting in 2012, compared to 49% of 25–44 year olds and just 38% of 18–24 year olds. Youth voting since the Reagan era peaked with the election of President Obama in 2008 and, despite suffering a sharp drop in 2012, notably increased in 2016. Source: U.S. Census Bureau, Voting and Registration Table Youth voters tend to come out strong after 8-year terms, perhaps excited by voting for change. But youth voting across election cycles shows different patterns. While youth voting suffered its largest drop during President Clinton's mid-term election, it surged to its highest jump in President Bush's re-election in 2004, a trend that continued through President Obama's 2008 election. After a drop in 2012, youth voting rose slightly again in the 2016 elections. Source: U.S. Census Bureau, Voting and Registration Table Youth voting also differs greatly between states. In the 2012 presidential election, across the United States, 38% of 18–24 year olds cast ballots. However, in Hawaii, Texas, and West Virginia, the voting rate among 18–24 year olds was under 23%. Only five states had a voting rate of at least 50% among 18–24 year olds, with Mississippi leading the nation at 62%. In comparison, in the same election, the average voting rate among those aged 25+ was 59%, a difference of 21 percentage points. Moreover, the voting rate among 25+ year olds was greater than 50% in forty-nine out of fifty states. (California came in 50th place at 49.3%.) Source: U.S. Census Bureau, Voting and Registration Table Source: U.S. Census Bureau, Voting and Registration Table The 2014 midterm elections were especially poor in terms of youth turnout. In all fifty states the voting rate among 18–24 year olds was 30% or below, with Maine leading the nation at 30% and Hawaii coming in last at 8.3%. (Ed: it also led to a horrible mustard stain of a chart below, but we decided to keep it, to remind us of the stain on our democracy :/) Source: U.S. Census Bureau, Voting and Registration Table Turnout among adults aged 25 and older, while lower than in presidential election years, was still strong, especially in Maine (64%) as well as Colorado, Wisconsin, and a handful of other mostly Midwestern states where there were key Senate races. Source: U.S. Census Bureau, Voting and Registration Table Youth voting in the 2016 presidential election was slightly higher than in 2012 overall, with noticeable increases in the voting rate among 18–24 year olds in Kentucky (from 37% to 51%), Nebraska (from 36% to 50%), and Wyoming (from 32% to 53%). Source: U.S.
Census Bureau, Voting and Registration Table Adults, however, while showing significantly higher turnout than youth voters, actually declined in turnout compared to the 2012 presidential election. Source: U.S. Census Bureau, Voting and Registration Table Let's hope the recent trend in Virginia hints at wider youth participation and actually leads to a bump in the 2018 mid-term elections! Chances are it might not happen, given the history of voting patterns: voting among all age groups drops in mid-term elections relative to presidential elections. However, the 2018 elections will be the biggest test of Trump's populist politics, with even more at stake than in previous elections: a nation's sanity. Source: U.S. Census Bureau, Voting and Registration Table What do the youth voting numbers look like in your state? And what should we do about it? Leave us a comment below with your thoughts!
Youth Voting in the 2016 & 2017 Elections 🙋🏽‍️🙋🏼‍📈
74
youth-voting-in-the-2016-2017-elections-️-️-1174e6cc078b
2018-05-28
2018-05-28 02:09:00
https://medium.com/s/story/youth-voting-in-the-2016-2017-elections-️-️-1174e6cc078b
false
655
Grafiti is the first search engine for graphs & charts.
null
grafiti.search
null
Grafiti
grafiti
DATA VISUALIZATION,DATA,NEWS,MEDIA,CULTURE
grafitiapp
Data Science
data-science
Data Science
33,617
Farhan Mustafa
CoFounder @Grafitiapp. Data Raconteur. Former Journalist at Al Jazeera English.
1f7568709458
Mustafa3000
41
41
20,181,104
null
null
null
null
null
null
0
null
0
cf94505cf50f
2018-04-13
2018-04-13 23:36:39
2018-04-13
2018-04-13 23:37:09
3
false
en
2018-04-13
2018-04-13 23:37:09
4
117532c1f387
1.074528
3
1
0
Learn the fundamentals of decision trees in machine learning
5
Fundamentals of Decision Trees in Machine Learning [Udemy Free Course] Learn the fundamentals of decision trees in machine learning Discount — Free Lectures — 11 Lectures Skill Level — Beginner Level Language — English Published — 4/2018 Take this course! Course Includes Learn the fundamentals of decision trees in machine learning Using the SPSS Modeler Building a CHAID model Using a lift and gains chart Exploring algorithms Building a tree interactively Take this course! What Will I Learn? 💓 Full lifetime access 📱 Access on mobile and TV 📋 Certificate of Completion Requirements ⚓ Basic understanding of statistics Who is the target audience? 🙇 Anyone interested in learning machine learning 🙇 Data science specialists Take this course! Note: If the coupon doesn't work for you, please let us know and check our website for other courses! We are affiliated with Udemy.
Fundamentals of Decision Trees in Machine Learning [Udemy Free Course]
21
fundamentals-of-decision-trees-in-machine-learning-udemy-free-course-117532c1f387
2018-04-20
2018-04-20 23:22:33
https://medium.com/s/story/fundamentals-of-decision-trees-in-machine-learning-udemy-free-course-117532c1f387
false
139
You can expect the latest and the greatest courses to be available for free. Here is the one place where you can find these coupons for free
null
programmingbuddyclub
null
100% Free Udemy Coupons
100-free-udemy-coupons
UDEMY,LEARN TO CODE,PROGRAMMING,WEB DEVELOPMENT,ENTREPRENEURSHIP
programminbuddy
Machine Learning
machine-learning
Machine Learning
51,320
Programming Buddy Club
null
6f59ddac6802
programmingbuddyclub
7,622
29
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-04
2018-05-04 15:34:55
2018-05-04
2018-05-04 16:06:09
1
false
en
2018-05-10
2018-05-10 08:44:42
2
1175489eb6c1
6.026415
3
0
0
In 1942, Science Fiction author Isaac Asimov postulated the Three Laws of Robotics thus:
5
When did we begin to think about AI? Why isn't it too early to create a global collaboration for humankind's safety? In 1942, Science Fiction author Isaac Asimov postulated the Three Laws of Robotics thus: Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Law 2: A robot must obey orders given it by human beings except where such orders would conflict with the First Law. Law 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. So much is happening in AI as it finds more and more applications in different areas each day. We are moving towards an increasingly uncertain future for humankind. Don't get me wrong, I'm ALL for AI. But, as I've mentioned in a previous blog, AI development today, more than ever before, needs a worldwide, integrated and holistic approach. This is to ensure it develops in directions that are consistent with the future safety of mankind. Individual institutions, companies, and even national governments, no matter how well-funded or powerful, cannot manage this task! Tim Berners-Lee and colleagues created the World Wide Web in the early 1990s as a way of sharing information. The web quickly grew in scale and has provided us with huge benefits. While much of what is available on the web is visible thanks to unified protocols, search engines and the like, it has its hidden or dark side as well. That is often a cause for concern to some of us, though it doesn't have a visible impact on us day-to-day. The thing about AI is that it is mostly hidden! Very few can get access to inspect (even fewer can understand) the workings or logic of any given AI system today. And so much further development is feverishly happening in AI across the world. In 2018 and beyond we need to form a global body, perhaps a United Nations for AI, or UNFAI. (Read my piece about UNFAI here.) UNFAI would be a bit like the regular United Nations, except it would be composed of scientists, researchers, even philosophers, among others, charged with guiding the world's AI research into safe directions by building enough safeguards into AI development, perhaps with the help of AI itself. See my UNFAI piece here. Are we being too paranoid about trying to regulate and guide AI? Some people might be thinking here: hey, isn't all this too early? AI is just beginning to unfold, it's a young science, so let's give it more time to settle down and then see how it goes? My answer is: it is no longer too early to be thinking of governing and directing the development of AI; in fact, it's as late as it could get! Is AI really a young science? The world's interest in AI, if one looks at the literature, seems to have started in the 1940s. During and after WWII, the work done on cryptography and code-breaking led to an understanding of the potential capabilities of computers. Dr Alan Turing of the UK, who helped break the famous wartime German Enigma code, saw that machines could soon become competent enough at human-like tasks to become indistinguishable from a real human. In 1950 he proposed the now-famous "Imitation Game", or Turing Test, to test a computer's "intelligence". In the Turing test, a computer could be considered to have displayed human-like intelligence if a human interrogator, asking the same questions of a computer and a human being (say, both on the other side of a brick wall), could not make out which was which from the answers typed out by both.
The term "Artificial Intelligence" itself appears to have emerged in the US, from a 1956 conference at Dartmouth College organised by John McCarthy, then of Dartmouth and later of Stanford University. The objective of this conference was to explore ways to make machines that could learn, reason, solve problems, and self-correct in case of failures. The conference was attended by leading researchers from a variety of disciplines such as Engineering, Mathematics, Psychology and Computer Science, and it led to the establishment of AI as a field with very high research (and public) interest. In 1963, a compilation of the leading AI research titled "Computers and Thought" was published by Edward Feigenbaum and Julian Feldman in the US. This included articles by luminaries such as Alan Turing, John McCarthy, and Marvin Minsky of Harvard and MIT. It included work and thinking on, for example, computer programs that could play chess, solve calculus problems or prove mathematical theorems, recognize pictures, use human-like language, and even display social behaviour like humans. The US had taken the early lead in the 1940s, '50s and '60s… then Japan staked its claim. Top US research institutions and universities continued to produce scientific publications establishing their AI thought leadership. The Japanese, by the 1970s world leaders in consumer electronics and automobiles, wanted to prove their ascendancy in computing as well. In 1982 they set up the government-funded Fifth Generation Computer Systems initiative to create massively parallel processing as a platform to lead developments in AI. Unfortunately, the planned processing power of the Japanese supercomputer was overtaken by off-the-shelf processor chips, and the program was shelved by the 1990s. The next big jump happened in the early part of the 21st century, when processors began to deliver the computing power that AI tasks required, like language processing, image recognition, speech synthesis, and real-time decision making (e.g. self-navigating robots or cars). So it seems to be true that AI is a really young science and that most AI thinking and development has happened in recent years. Nothing, however, could be further from the truth! The human quest to produce thinking life is as old as humankind! The truth is that humans were imagining (dreaming? creating?) intelligent machines and AI well before the twentieth century. How much before? Perhaps five thousand years! How is this correct? Well, the ancient Greeks had the myth of the "bio-techna", life crafted through science. The mythical Greek blacksmith Hephaestus manufactured Talos, a mechanical bronze giant who guarded their island. Talos had artificial blood, intelligence and perhaps even emotions, and was eventually drugged and deceived by the sorceress Medea for its demise. This story dates back to between 150 and 400 BCE. There are other Greek stories about AIs from the same era. But you can go further back, over 500 years prior, to the ancient Chinese civilisation. The engineer Yen-Shi presented an artificial human at King Mu's court. Its behaviour was so realistic that he had to hurriedly open it up for the king to see the different artificial parts, as the king had grown incensed at the humanoid's flirtatious behaviour with the court's ladies!
Still another thousand years back, in ancient Indian texts, one finds mentions of vimanas, self-flying aircraft (we are just at the dawn of self-driving cars today!), and of near-invincible robot soldiers endowed with emotions (who were finally defeated by inducing in them ego, jealousy and fear!). Some of these texts are considered five thousand years old! Over time there have been many, many more mentions of AI: the homunculus of Paracelsus in the sixteenth century, Mary Shelley's Frankenstein in the nineteenth, and more. Many of these stories carry clear warnings for us. The truth is that for at least four or five thousand years, human beings have toyed with the fascinating thought of being able to create intelligent, thinking machines as servants; machines alive in our own image. Inextricably linked with this, for thousands of years humans have chronicled that these machines and their intelligence could (or did!) exceed their human masters, or went out of control. In some cases they were defeated by other humans or gods; in many cases they destroyed, or came close to destroying, their masters! Nearly five thousand years later, Elon Musk has stated that in the face of AI he fears for humankind. The one consistent detractor of this fear, Mark Zuckerberg, has very recently, in the US Congressional hearings, shown himself to be either unaware of the workings of his AIs at Facebook or putting up a pretty good impression of ignorance. I believe this is the tip of the iceberg, and we could potentially see the end of not just our privacy a la Facebook, but perhaps much more, if we don't immediately take steps to control where AI development is going. Control does NOT mean holding back on AI development! Control DOES mean taking charge of the direction of development, setting up the rules of engagement, so to speak. This is perhaps imperative for our future survival! We need the three laws of robotics, but in a form and relevance applicable to our current state of AI development. And the sooner we get to work on this, the better for all of us as mankind!
When did we begin to think about AI?
130
when-did-we-begin-to-think-about-ai-1175489eb6c1
2018-05-10
2018-05-10 08:44:44
https://medium.com/s/story/when-did-we-begin-to-think-about-ai-1175489eb6c1
false
1,544
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mannan Singh
I’m a high school science student (Grade 12) at The Shri Ram School, Gurgaon, India. I explore a wide range of subjects and think about their interconnections.
4ae271d1af24
mannan.singh27042001
3
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-17
2018-05-17 16:19:51
2018-05-17
2018-05-17 16:54:37
2
false
en
2018-05-17
2018-05-17 16:54:37
1
1175c1aa6e24
2.990881
1
0
0
From Homo habilis to Homo communicans
4
17th of May — International Communication Day From Homo habilis to Homo communicans Communication. Information. These words have been turning in my head for quite a while. As a philologist, I was taught to differentiate words and their meanings in their most subtle nuances. I have learned that words have power beyond normal reach; they sneak everywhere and remain somehow active at the subconscious level, lurking for the right moment to get out. As a species, humankind owes its evolution to effective and articulate communication. Language, as a milestone in our remote history, is as important as the discovery of fire. From Homo habilis to Homo communicans Just as we have invented tools to ease our physical work, we have refined, step by step, the means of communication. We are far away from the times when only a privileged few knew how to communicate using a quill and parchment. We have turned this ability into a skill, and little by little it became a fundamental part of our lives. Does semiotics apply? Why not? We communicate with all we have; everything around us is a sign, a symbol waiting to be given a signification… Our trivial gestures and mimicry got a name: body language. Nowadays, this is taught and trained, just like any other foreign language. From Homo communicans 1.0 to Homo communicans 2.0 Our technology has gone so far that our communication skills need to find other dimensions. Tools to communicate? There are plenty of them all around: mail, telegraph, telephone, fax, radio, TV. You name it: it was, or it is, there already. So back to school? I guess so. Fact 1: letters and written words have changed the world. Fact 2: printed words got power. Fact 3: we've got the Internet. Digital trap The Internet has brought the world to our feet; the borders are no longer where geography wants them to be. We have learned to use (smart) devices to stay in contact with the world every day. These tools became extensions of ourselves; they correct or improve us where we do not perform properly. It is a scary thought that technology can, and will, invade little by little what we like to consider our territory. It should not: it is not the tool which is wrong, it is the use we make of it which can harm. We are in the digital era, and devices are used to help us work or communicate. Nowadays, information must travel fast and reach everyone in due time. On one side this is a relief; on the other, it creates pressure. I actually see a paradox here: we have so many ways to communicate, and we use so many channels to express ourselves, that we sometimes forget which role each actually plays. No Facebook, Instagram or Twitter account? No way. Are you not on LinkedIn? Weird, don't you think? Andy Warhol's 15 minutes of fame has mutated into "I'm online now, therefore I am". Awareness day Every 17th of May since 2006, all around the world, we celebrate the International Communication Day. The 2018 theme is precisely about this challenge we face: enabling the positive use of informatics and artificial intelligence for all. Please stay away from the stereotypical point of view: Artificial Intelligence does not mean only robots like Sophia or the ones you have seen in sci-fi movies. Artificial Intelligence refers to the ability of these devices/programs to learn from their previous inputs and outcomes, just as we do. Unlike humans, these devices have no will of their own and no creativity.
They will be able to replace us only for some tasks; they will never replace us for good, and they will not be a threat to our future. The wisest approach is to use them and learn how to use them well. They are only tools to help us communicate. They are intelligent, but they remain only our tools. Addendum I have found this video, which sums up very well what communication means for us, or at least what it should mean in a free world.
17th of May — International Communication Day
1
17th-of-may-international-communication-day-1175c1aa6e24
2018-06-09
2018-06-09 02:26:17
https://medium.com/s/story/17th-of-may-international-communication-day-1175c1aa6e24
false
691
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
BitTube News
null
8a583756dc3
news_93138
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-11
2018-04-11 11:22:55
2018-04-18
2018-04-18 16:26:27
5
false
en
2018-04-18
2018-04-18 16:26:27
0
1176c47b22de
2.474843
1
0
0
Hi all!
5
An Introduction To Linear Regression Hi all! My data science immersive classmates and I had to present a lightning talk on various data science topics. Guess who was assigned linear regression? OK, what is a linear regression? Linear regression (LR) is an approach for modelling the relationship between a dependent variable and one or more independent variables using a straight line of best fit. A linear regression model with one variable is known as a simple linear regression (SLR), while a linear regression model with two or more variables is known as a multiple linear regression (MLR). The dependent variable is sometimes called the response variable, while the independent variables are also called predictor variables. Below is a simple example of a multiple linear regression: Y = B0 + B1X1 + B2X2 X1, X2: values of the independent variables Y: value of the dependent variable B0: constant (the value of Y when all X = 0) B1 and B2: the regression coefficients (each shows how much Y changes for a unit change in the corresponding X) Linear regression has many real-life applications, including epidemiology, finance (e.g. the capital asset pricing model), and economics (e.g. consumption spending). Assumptions of Linear Regression There are key characteristics assumed in all linear regression models: 1. Linearity: X and Y have a linear relationship 2. Independence: errors (residuals) are independent of one another 3. Normality: the errors follow a normal distribution 4. Equality of Variances: the residuals have equal variance across the regression line 5. Independence of Predictors: independent variables are independent of one another You may be thinking, "Alright, LR sounds good and all, but why is it important?" Well, let me tell you! Predicting the Future: let's assume you have a book store and you wish to find out what your projected sales would be for next quarter. Helpful information, right? Linear regression is a model that can give you an estimate of what that amount of sales could be. Optimization: after an LR model is created, you'll be able to see which variables (attached to the coefficients) are making the greatest impact on your dependent variable and be able to up the ante accordingly. Correcting Prior Assumptions: what if you assumed that a particular variable is essential to your operations? LR will give you evidence to support or dispute that claim. Although linear regression is a good method, there are some drawbacks to the model: 1. LR is limited to linear relationships 2. LR is not good for modeling binary relationships 3. While generating a linear regression model, we only look at the mean of the dependent variable 4. It must be noted that correlation does not imply causation.
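To make the formula concrete, here is a minimal sketch of fitting Y = B0 + B1X1 + B2X2 with scikit-learn; the bookstore-style numbers are made up for illustration and are not data from the post:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical bookstore data: X1 = ad spend (k$), X2 = foot traffic,
# y = quarterly sales (k$). Purely illustrative values.
X = np.array([[1.0, 200], [2.0, 240], [3.0, 310], [4.0, 330], [5.0, 400]])
y = np.array([12.0, 15.1, 19.8, 21.5, 26.0])

model = LinearRegression().fit(X, y)
print("B0 (intercept):", model.intercept_)
print("B1, B2 (coefficients):", model.coef_)

# Projecting next quarter's sales for a new (X1, X2) pair:
print("Projected sales:", model.predict(np.array([[6.0, 450]]))[0])
```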
An Introduction To Linear Regression
1
an-introduction-to-linear-regression-1176c47b22de
2018-04-19
2018-04-19 15:29:45
https://medium.com/s/story/an-introduction-to-linear-regression-1176c47b22de
false
435
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Mario Arthur-Bentil
Data Scientist | Chef | Storyteller
c3ed89dac222
m.arthurbentil
1
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-17
2018-03-17 17:55:46
2018-03-17
2018-03-17 18:15:03
2
false
en
2018-03-17
2018-03-17 18:18:28
0
1176da279515
2.055031
0
0
0
There’s too much I’d have to talk about in one article without overloading you at once. But in short, the world order as you see it today…
3
The New World Order is coming. This isn’t a theory nor a time waster – it’s true and happening as we speak. There’s too much I’d have to talk about in one article without overloading you at once. But in short, the world order as you see it today is not how it really is. Democracies are fake. Politicians are distractions. Conglomerates are drivers of central globalisation. Legislation is conditioning our behaviour rather than protecting us. How does this affect you? Well at the moment, Facebook, Twitter, email accounts and literally anything digitally connected to your profile does it’s “innocent” job. At least on the whole. But when you hear in the mainstream news about “escalations with Russia/China” and “benefits of 5G”, shit your pants. 5G is part of a long term goal where globalists want to monitor and control everything in the world – easily. There will be no more independently run governments as we have in the format now. Without understanding this concept, we will all eventually accept 5G without any protest because of what we call PROBLEM, REACTION, SOLUTION. It’s a classic technique used to enforce new legislation or behaviours with mass consent. Problem: World War 3 (will happen) due to a series of previously orchestrated problems and reactions. Reaction: Citizen unrest and chaos will take place primarily in the West, Russia & China. Solution: After that, the people (being us as a terrorirised civilisation) will accept a solution (read out clearly on the news) to stop this from happening again. “One world government, will be formed to stop wars and maintain peace and order”. As a result these are just 2 implementations that will be waiting for our next generation: 5G to monitor and centralise access to all internet devices and data. Operated by an AI that acts as if there is no one person/network controlling the ecosystem. Always on Digital – “Faster connection, improved reception, infinite cloud storage, microtagging”. Say goodbye to your privacy. Cryptocurrency – the new management of financial value WW3 – third strike and you’re out, huh? Fun fact: Did you know that SSL (the green padlock we see in browsers) is an algorithm designed by the CIA? This kind of government intrusion is already happening. My point is, I am concerned for my friends, co-workers and family. I also take this risk more than others. At the end of the day, the less data I feed into the digital system, the more peace of mind I can have about the security of my future and privacy. Protect yourselves and more importantly your kids if you have any, they will thank you for it.
The New World Order is coming.
0
the-new-world-order-is-coming-1176da279515
2018-03-17
2018-03-17 18:25:20
https://medium.com/s/story/the-new-world-order-is-coming-1176da279515
false
443
null
null
null
null
null
null
null
null
null
Nwo
nwo
Nwo
29
WiseSquirrel
null
cd86f122671d
treadstoneexpress
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-06
2018-06-06 10:27:21
2018-06-06
2018-06-06 10:36:46
7
false
en
2018-06-06
2018-06-06 11:14:15
75
1177f8c2928d
5.068868
3
0
0
Today the VAO Big Data; on 14 June the AO on the functioning of the central government; and two fiches: AI and the European data space
1
Data news inside and outside central government 06–06–2018: Today the VAO Big Data; on 14 June the AO on the functioning of the central government; and two fiches: AI and the European data space. Infographic(s) of the week: From EZK: the infographic Roadmap Digitaal Veilige Hard- en Software. The House of Representatives has a new agenda, which is in itself a data visualisation. The origins of the Dutch population are becoming ever more diverse, according to #WRR research. Data vacancies within central government: ACM: data analyst; Belastingdienst: data analyst; Belastingdienst: data investigation specialist; Belastingdienst: executive secretary, Investeringsagenda; IenW/ILT: data scientist; ADR: senior IT auditor. Agenda: 6-6-2018: A symposium on public security under algorithmic control (The Hague); 12 June 2018: ICTU Grand Café, 16:00-18:00; 14 June: Kennisplatform Big Data on Linked Data (see below); Thursday 14 June, Leeuwarden: DAT! Symposium on Big Data and Business Intelligence; 18-06-2018: AI EUROPE | STAKEHOLDER SUMMIT, A European Strategy for Artificial Intelligence; Friday 22 June, Turfmarkt 147, The Hague: data.overheid.nl user meeting; 2 July, 13:00-17:00: meeting on Artificial Intelligence in the government domain. Data news inside central government: Wednesday 6 June is the VAO Big Data with Minister Dekker; a not entirely neutral news item following the earlier debate: "Geen openheid over doorlichten burgers" (no openness about the screening of citizens). On 26-06 (13:00-17:30) the RAFEB is organising a congress on the developments and possibilities of financial data and (big) data analyses: a "Date with data" for everyone in central government who is interested in data and data analysis or works in a financial role. You can register here. Two fiches: Fiche 5: Communication "Towards a common European data space", and Fiche 7: Communication "Artificial Intelligence for Europe". The next network meeting of the Kennisplatform (Big) Data takes place on Thursday 14 June and is devoted entirely to Linked Data. Erwin Folmer (Kadaster) will give an introduction to Linked Data and explain how the Kadaster supports other government organisations in publishing Linked Data. After that, speakers from DUO, UBR/KOOP and CBS will show various practical applications of Linked Data. From 09:30 to 13:00; more information and registration via [email protected]. Data news outside central government. Data and AI within government: From Hetan Shah: Netflix for healthcare? The algorithmic society is upon us. From The King's Fund: Using data in the NHS: the implications of the opt-out and GDPR. In the Financial Times: "To fix healthcare let AI do the dull, routine work". In the FT: AI risks replicating tech's ethnic minority bias across business. On the household robot, in Wired: HOW TO GET A ROBOT TO (ONE DAY) DO YOUR CHORES. A bit of a promo piece: these 4 blockchain innovations will accelerate the energy transition. In Payments Journal: Accessible Data and Artificial Intelligence: Is this the Future of Fintech? From policynetwork.org: Mastering the digital transformation. From McKinsey: Breaking away: The secrets to scaling analytics. On Nature.com: Technology and satellite companies open up a world of data; it is now easier than ever to interrogate vast realms of mapping information for your research. Data and AI in general: Berenschot gives a first insight into the 2018 blockchain ecosystem (with a nice visual overview). On Nature.com: Science needs clarity on Europe's data-protection law.
As a commendable European law on personal data comes into force, the research community must not let excessive caution about data sharing, however understandable, become the default position. From The Register: UK.gov's use of black box algorithms to decide stuff needs watching. PS: Don't forget to try to cash in on public data — MPs. On Dutch Cowboys: an algorithm can predict violent protests via social media; artificial intelligence tastes the mood on Twitter. On USA Today: Tesla in Autopilot mode crashes into parked police cruiser. On Siliconrepublic.com: How do we deal with the dawn of the robot age? On AI and the labour market: From the WE Forum: These are the 3 key skill sets workers will need to learn by 2030. From MIT Sloan: Why AI Isn't the Death of Jobs. On Medium: Automation and The Rise of Meaningful Work. On it being time we got serious about data, AI and ethics: In the FT: Google to disclose ethical framework on use of AI. In the FT: Stanford to step up teaching of ethics in technology. Long read in The New York Review of Books: The Digital Poorhouse, on two books and, among other things, the plea for a Hippocratic oath for data scientists. And on smarthealth.nl: who checks the high priests of the AI algorithms? AI and data: ethics and privacy: Blog: AI Winter Is Well On Its Way. On Buzzfeed: Google Backs Away From Controversial Military Drone Project. Google's deal with the Pentagon, Project Maven, will end in 2019, and the tech company will not pursue another. In the NY Times: Data Subjects of the World, Unite! From Harvard: Finding a link to the human in algorithms setting justice. MIT Sloan: The Risk of Machine-Learning Bias (and How to Prevent It). The Digital Impact Toolkit: managing and governing digital data for non-profits. It doesn't stop with Cambridge Analytica: in The NY Times: Facebook Gave Device Makers Deep Access to Data on Users and Friends. On digitaleoverheid.nl: the AP (Dutch Data Protection Authority) audits government on the European privacy law. The Outline: THE GOVERNMENT WANTS YOUR MEDICAL DATA. The U.S. is constructing its own data set from one million people to be used for medical research; whether it will help people or reinforce structural issues remains a question. On the Star: DNA for sale: Ancestry wants your spit, your DNA and your trust. Should you give it to them? Informative: What is automated individual decision-making and profiling? In the FT: "China emerges as Asia's surprise leader on data protection". Funny: from @guygunaratne. HR analytics: From Fast Company: Our obsession with performance data is killing performance: in business, education, law enforcement, surgery, and more, one researcher finds that performance outcomes drop when people know how they're being measured. Organizationview.com: AI in HR — how to understand what is happening. From McKinsey: Will artificial intelligence make you a better leader? For the nerds: From datawrapper.de: What to consider when choosing colors for data visualization. On LinkedIn: What does it take to Lead Data Scientists?
On how De Casteljau's algorithm works. On LinkedIn: How to recruit the best data expert before the summer. In The Guardian: Why thousands of AI researchers are boycotting the new Nature journal. 12 Interesting Reads for Math Geeks. On siliconrepublic.com: 6 tips for someone who wants to be a data scientist. From venturebeat.com: As machine learning evolves, we need to update the definition of 'data scientist'. On KDnuggets: Python eats away at R: Top Software for Analytics, Data Science, Machine Learning in 2018: Trends and Analysis. From @revodavid. Cartoon of the week: two nice data-related Fokke & Sukke cartoons this week, one on smart devices and one on the Tesla. Colophon: Twitter: @bettyfeenstra. "DGOO is working on a modern government"
Data news inside and outside central government 06–06–2018:
6
datanieuws-binnen-en-buiten-het-rijk-06-06-2018-1177f8c2928d
2018-06-12
2018-06-12 06:21:14
https://medium.com/s/story/datanieuws-binnen-en-buiten-het-rijk-06-06-2018-1177f8c2928d
false
1,065
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Betty Feenstra
Data driven, Head of Policy Information @ DG Public Administration, Ministry Internal Affairs and Kingdom Relations, Amsterdam, NL
6768e21844e9
bettyfeenstra
93
80
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-29
2018-01-29 07:16:27
2018-02-02
2018-02-02 11:13:56
3
false
en
2018-02-02
2018-02-02 11:13:56
0
1178637d975c
3.833019
3
1
0
IT and the "Information revolution" were originally touted as a productivity-enhancing force. The machines would take over our mindless…
2
Quantity of life IT and the "Information revolution" were originally touted as a productivity-enhancing force. The machines would take over our mindless tasks, and we humans would grow and prosper. However, it is now clear that the opposite has happened: technology and the availability of endless amounts of information have not set us free; instead, they have turned us into the machines. IT over welfare in the public sector As a nation at the forefront of the IT revolution, Denmark has seen its share of new IT systems meant to revolutionise the public sector and the way public sector workers spend their most valuable asset: their time. However, there have been very few noticeable gains for the worker and the citizen, as Knud Aarup, former director of the national board of health and wellness, states: "An IT system does not create intimacy or a connection between the users of the system which does not already exist. Poor case handling does not become great just because it's made digital." Introducing a digital system does not improve the experience or the productivity, so why do we spend billions on building these? The answer to that is provided by another Dane, John Gøtze, expert in IT architecture at the University of Copenhagen: "The systems are not built for the actual users of those systems. It is not the needs of the caseworkers that are front and center, and it was never the doctors' needs that were addressed with the digital health platform." So neither the citizens nor the operators of public sector IT systems have seen noticeable improvements in their quality of life. So who does actually benefit? Again, John Gøtze: "The systems are introduced to support a new law or rule being introduced in the public sector." IT is used to support bureaucracy and rules; it is used to measure performance and increase efficiency. But sadly, by building systems that cater to rules and bureaucracy, the users and operators of the systems were ignored, which has ultimately made efficiency and productivity go down, resulting in unhappy "customers" and employees. These systems were not tested for usability, nor were they built to make their users more effective. Less time for patients The original promise of IT, making our lives easier so we can focus on the important things, is gone. IT is now used for monitoring and measuring to satisfy bureaucratic needs rather than user needs, and this has a direct effect on our lives. Doctors spend valuable time typing, journaling and documenting, and less time diagnosing patients: Danish doctors spend only 55% of their time with their patients, and since 2011 their weekly time with patients has decreased by 4.5 hours. The tasks normally performed by secretaries are now handled by doctors with 10 years of education. Instead of using their valuable knowledge diagnosing and helping patients, they are looking at screens and performing mindless tasks such as requesting an appointment for a blood sample, using a system that looks and performs like a 1990s website. Healthcare usability The Danish healthcare platform is estimated to cost 2.7 billion DKK (363 million euros). The irony is that, with the increased focus on ensuring that health staff deliver quality, less quality is actually being delivered to a lower number of patients. Out of our hands While the public sector is suffering under increased bureaucracy and complexity, the private sector is rapidly moving in the same direction, but in a much less obvious way.
Software vendors are now touting AI and machine learning as the tools that will set the editors free. These artificial systems will handle all the complex decisions in content and commerce systems, ensuring that customers get a truly personalised experience; meanwhile, the operators of these systems are complaining of a poor editing and managing experience. In the financial sector the same thing is happening: loan application approval has been centralised and will, in the coming years, be completely processed by AI-driven systems. While you can argue that both tasks are suited to AI, you can also argue that these two examples show a greater shift in the IT industry. IT has stopped being about enabling people to do great things; the focus is on letting the machines do all the complex tasks, leaving the mindless clicking and typing to the humans. Let's not forget that humans are actually really good at solving complex problems that also require emotional intelligence. The doctor, the banker, the caseworker and the e-commerce manager are being stripped of their independent thinking and the skill set they have built up over many years. This is not about keeping people in jobs where they are not needed; it is about society and companies missing out on important knowledge and initiative because we have been tricked into believing systems and machines can do it better. Here's a thought: how about we leverage IT, AI and machine learning to ensure that humans spend less time clicking, browsing and liking, and are instead set free to diagnose, empathise and strategise, letting the machines do all that productivity-killing grunt work? After all, that was the original promise of the information revolution.
Quantity of life
15
quantity-of-life-1178637d975c
2018-02-09
2018-02-09 20:32:21
https://medium.com/s/story/quantity-of-life-1178637d975c
false
870
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Per Ploug
null
3462f450e9a0
ploug
14
2
20,181,104
null
null
null
null
null
null
0
null
0
cf94505cf50f
2018-01-07
2018-01-07 08:39:59
2018-01-07
2018-01-07 08:40:34
3
false
en
2018-01-07
2018-01-07 08:40:34
4
117a5a771faf
1.180189
12
3
0
Learn the fundamentals behind the open source robotics framework — ROS
5
ROS Basics: Program Robots! [Udemy Free Course] Learn the fundamentals behind the open-source robotics framework — ROS Discount — Free Lectures — 14 Lectures Skill Level — Beginner Level Language — English Published — 1/2018 Take this course! Course Includes Master the basics of ROS Build distributed software and drivers for a robot Learn to program robots in a professional way Take this course! What Will I Learn? 💓 Full lifetime access 📱 Access on mobile and TV 📋 Certificate of Completion Requirements ⚓ You should be able to get around a Linux-based operating system ⚓ You should have at least beginner-level experience with a programming language Who is the target audience? 🙇 Anyone who wants to build and program robots with Robot Operating System 🙇 Robotics enthusiasts and hobbyists 🙇 Well suited for electronics and computer science students Take this course! Note: If the coupon doesn't work for you, please let us know and check our website for other courses! We are affiliated with Udemy.
ROS Basics: Program Robots! [Udemy Free Course]
129
ros-basics-program-robots-udemy-free-course-117a5a771faf
2018-06-18
2018-06-18 09:21:45
https://medium.com/s/story/ros-basics-program-robots-udemy-free-course-117a5a771faf
false
167
You can expect the latest and the greatest courses to be available for free. Here is the one place where you can find these coupons for free
null
programmingbuddyclub
null
100% Free Udemy Coupons
100-free-udemy-coupons
UDEMY,LEARN TO CODE,PROGRAMMING,WEB DEVELOPMENT,ENTREPRENEURSHIP
programminbuddy
Programming
programming
Programming
80,554
Programming Buddy Club
null
6f59ddac6802
programmingbuddyclub
7,622
29
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-24
2018-01-24 14:30:24
2018-01-25
2018-01-25 10:06:43
12
false
fr
2018-01-29
2018-01-29 11:50:59
38
117ad7766ec2
8.931132
7
0
0
On 18 January 2018, the Villani Mission chose to meet the founders of the French Artificial Intelligence (AI) communities…
5
The Villani Mission and the Artificial Intelligence Knowledge-Sharing Communities. The French AI communities take part in the Villani Mission. From the left: Anastasia Lieva, Olivier Guillaume, Franck Bardol, Olivier Raffard, Damien Gromier, Igor Carron, Alexia Audevart, Anil Narassiguin, Cedric Villani. On 18 January 2018, the Villani Mission chose to meet the founders of the French Artificial Intelligence (AI) and Machine Learning (ML) communities. On that occasion, this article describes the French AI & ML communities, starting with a very brief overview of the main notions of AI. It then explains how these self-organised communities spread information and knowledge faster and more effectively than classic hierarchical structures. Finally, it sheds light on their contribution to understanding the technological upheavals under way. AI and Machine Learning: what are we talking about? Artificial Intelligence was born in the mid-1950s in American universities. The wish of the founding fathers of AI (John McCarthy, Marvin Minsky, Allen Newell, Herbert Simon and Claude Shannon), building on the work of Alan Turing, was to reproduce human thought with a computer. The goal of these pioneers was to solve problems completely out of reach of computers running fixed programs. The fathers of AI. This research effort later gave birth to several currents, each exploring distinct paths. What are the possible approaches for building an AI? Very briefly, two main currents emerged. The first, called “symbolic”, looked for a set of rules and chains of deductions that would imitate human thought in order to solve problems through formal logic. The second, called “numerical” or “connectionist”, uses complex statistical and mathematical computations to derive knowledge and decisions from an ocean of massive data (Big Data). It is this second approach that is currently triumphant and seems to have, provisionally, defeated its rival, giving birth to Machine Learning and then Deep Learning. Machine Learning automatically discovers general rules by observing particular examples. Rules discovered by an AI from examples. The gigantic computing power now available at a trivial price makes it possible to inject billions of observations into these complex correlation engines in order to draw out a general portrait, a silhouette one might say. This has given birth to a science without theory. A science “à la Google” that overturns the classic path of human thought, namely hypotheses -> theory -> verification with examples (data). We are witnessing a profound paradigm shift. The winner of the race beginning before our eyes is the one with the largest volume of examples (Big Data) to feed the Artificial Intelligence algorithms that will extract the hidden rules and generalities from them. That is: Big Data -> Artificial Intelligence -> rules.
“The winner is the one with the most data, not the best algorithm,” Peter Norvig (Engineering Director @ Google). Peter Norvig. AI is prompting worldwide questions about the upheavals to come. AI now enjoys considerable attention and raises fears and questions reaching into political circles. Indeed, recent years have seen tangible achievements that seemed completely unattainable only a short time ago. Intelligent programs manage to supplant humans in activities that were until now reserved for them (visual recognition, automatic article writing, driverless vehicles, predictive sales, conversational bots, machine translation…). Whole swathes of the economy are being upended by the arrival of players who fully master these new tools. Every industry is heavily affected by these technologies, which destabilise existing forces. Robots @ Amazon. The golden age of reporting and business intelligence, oriented towards the past, is definitively giving way to predictive technologies built on Big Data. Understanding these new technological tools has become a crucial issue for corporate decision-makers. Governments are not immune to the worry raised by this technological tidal wave. All developed countries (the UK, the USA, the European Union) are trying to anticipate what impact AI will have on our societies. In France, the Villani Mission was tasked by the Prime Minister with providing recommendations and insights on the subject. To do so, the Villani Mission meets experts in the field in order to draw up a complete panorama and answer the questions of the public authorities. It is in this capacity that the founders of the open AI communities were received on 18 January by Cédric Villani, Member of Parliament and President of OPECST, and the members of his parliamentary mission. The meeting came about largely thanks to the efforts of FranceIsAI, an association that works towards the emergence and coordination of the French AI communities. France at the forefront of open communities sharing Artificial Intelligence knowledge. The French Data Science communities have been emerging for several years now. Most of them communicate and organise via a meeting platform built around the shared interests of its members (meetup.com). These communities adopt a system that is open (registration is neither filtered nor subject to approval), free, hierarchically flat and participatory: the exact opposite of the clubs or classic conferences that exist elsewhere (access rights, co-optation and entrance fees). This mode of interaction is part of the kinds of collaboration emerging in the era of digital communities; the same model founded the Open Source movement. The watchword of these communities, whose rise is amplified by the internet, is the absence of constraining structures and, above all, the absence of the top-down impetus of classic organisations (hierarchical relations of authority). Here there is neither pyramidal organisation nor centralised impetus. It is from the bottom up that Data Scientists gathered spontaneously, without waiting for impulses from above (top-down).
Together, the French communities number 15,000 members and have groups in every major French city (Paris, Toulouse, Montpellier, Lyon, Bordeaux, Marseille, Nantes, Mulhouse, Rennes…). Often several communities coexist in the same city, distinguishing themselves by specialising in a particular branch or technique (natural language processing, recommendation engines, etc.). In Paris there are about a hundred active groups on the subject. Note that the Paris community named “Paris Machine Learning” is the largest in Europe, with 6,500 members, and the fourth largest in the world after San Francisco, New York and Mumbai. A meeting of the Paris group “Paris Machine Learning”. A worldwide representation of Data Scientists. It is interesting to note that similar communities exist all over the world. The same organisational pattern has appeared in many countries and continents (Europe, Russia, the United States, India and Asia). The same desire to gather into communities of interest is at work whatever the country, language or culture. We are witnessing the emergence of a worldwide representation of Data Science groups founded on sharing Artificial Intelligence knowledge. The need to learn and understand these technologies is not met by traditional structures. The recent irruption of Artificial Intelligence into the public debate generates very high expectations among all players. Executives and managers seek to untangle the threads of techniques that sometimes seem obscure to them. Students (including those of the most prestigious schools) do not want to be left behind and wish to master every aspect of methods that will guarantee them a well-paid job. A meeting of the Paris group “Paris Machine Learning”. On the other side, engineering schools and continuing-education structures long remained absent from this field. That is less true today: many engineering schools and universities now include courses oriented towards Big Data, Artificial Intelligence and Machine Learning in their curricula. Nevertheless, the prodigious acceleration in the pace at which destabilising technologies appear creates a need for training and a thirst for immediate understanding. This does not fit the classic rhythm of long degree programmes, most of which last five years (Master’s), let alone doctoral studies, which stretch even longer. This is probably the essential driver of the AI & ML communities discussed here: to know and understand right away what is being played out, and what is upending the economy and society, without waiting several years for traditional education cycles to catch up. Facing this demand for knowledge, there is an equally abundant supply of expertise. Presenting the latest research results and the methods of the most advanced companies. Faced with this need for understanding, the members of these communities who have obtained tangible results by applying Artificial Intelligence techniques and methods share them with the group in short 20-minute talks. Each meeting features 3 to 4 speakers from every sphere of the economic world (start-ups, large corporations) as well as researchers from academia.
A selection of corporates that have come to present to the members of Paris Machine Learning. The challenge is to give an intuition and avenues of thought to as many people as possible without necessarily going into technical details that obscure the essence of the matter. Thus, after each meeting, members leave with concrete methods that will let them face and solve substantial problems themselves. In short: one speaker = one problem AND one solution to solve it. Some of these groups film the talks and make them freely available on YouTube. Meeting the best talent for fast, effective recruitment. Recruiters understood the value of these meetings very early on. Being able to approach, in one place, the sharpest profiles in Artificial Intelligence and Machine Learning spares them long prospecting work. Networking after the talks (“Paris Machine Learning”). The discussions that take place after the talks also let them judge the importance of this or that technique in order to refine their search for candidates and, of course, to meet the brains who will program them. Fruitful interactions between members. We will close this brief overview of how these algorithm-sharing groups work with the interactions that arise between members during the meetings. One can no longer count the number of start-ups whose founders met at these meetings; doctoral students find thesis topics and research supervisors there. Hackathons are organised regularly: coding sprints organised around a theme, lasting a weekend or an evening, with the goal of building a viable, working prototype to solve a specific problem. Among the challenges the French communities have explored are “beating cancer” (the Epidemium challenge), “Artificial Intelligence on encrypted data” (the OpenMined challenge), “automatic generation of Artificial Intelligence algorithms” (the Auto-ML challenge) and “Data for Good”. Auto-ML hackathon. What is striking in this list of Data Scientists’ concerns is the ambition to confront Artificial Intelligence solutions with large-scale problems that affect everyone. “Classic solutions do not work. Let us try with our methods.” Such is the watchword recalled by one participant. Ultimately, Data Scientists, whether experts or simply curious, have spontaneously gathered into sharing communities. The largest groups count several thousand members, and together these communities federate the majority of French Data Scientists. The French groups are among the largest in the world, and the same gatherings are at work on every continent. These sharing communities operate with a flat, non-hierarchical structure that favours interactions between members (bottom-up), with no need for authoritarian, centralised injunctions (top-down). What binds and motivates participants includes access to the latest cutting-edge research; the will to know, understand and master immediately the techniques that are changing how entire economic sectors operate, without waiting the length of a classic education cycle; and the pleasure of exchanging with one’s peers.
From bottom left to the right: Anastasia Lieva, Damien Gromier, Olivier Guillaume, Cedric Villani, Franck Bardol, Olivier Raffard, Igor Carron, Anil Narassiguin and Alexia Audevart. The author: Franck Bardol, co-founder and co-organiser of “Paris Machine Learning”; associate professor at a grande école (algorithms & programming); Big Data trainer; Digital Marketing, Machine Learning, Artificial Intelligence, Programming…; tech mentor and evangelist. To learn more about the AI & Machine Learning meetups: the excellent special AI issue of l’abcedaire des institutions, issue 110, “AI, how far?”; the report of the Senate and the National Assembly, “For a controlled, useful and demystified Artificial Intelligence”.
The Villani Mission and the Artificial Intelligence Knowledge-Sharing Communities
11
mission-villani-et-les-communautés-de-partage-de-lintelligence-artificielle-117ad7766ec2
2018-05-14
2018-05-14 20:16:23
https://medium.com/s/story/mission-villani-et-les-communautés-de-partage-de-lintelligence-artificielle-117ad7766ec2
false
2,009
null
null
null
null
null
null
null
null
null
Intelligence Artificielle
intelligence-artificielle
Intelligence Artificielle
478
Franck Bardol
<Data Expert / AI & Machine Learning programmer / Big Data Trainer>
77dc09151ad2
bardolfranck
246
189
20,181,104
null
null
null
null
null
null
0
null
0
73c1080cf2e
2018-03-25
2018-03-25 20:27:31
2018-03-28
2018-03-28 06:42:19
3
false
fr
2018-03-28
2018-03-28 06:42:19
9
117bfecbdf05
5.361321
18
1
0
Routine is a cornerstone of management in large organisations. When Frederick Taylor began to take an interest in optimising…
4
Extension of the domain of routine. Routine is a cornerstone of management in large organisations. When Frederick Taylor began to take an interest in optimising tasks in the steel companies of the late nineteenth century, he focused in particular on the most repetitive and most easily quantified tasks. It was precisely these tasks, the most routine ones, that lent themselves best to his innovative approach: the scientific organisation of work. The productivity gains enabled by the Taylorist approach triggered a revolution in the world of production. Routine tasks, because they were the easiest to integrate into the scientific organisation of work, became the most attractive both for companies and for workers. On the company side, managers liked routine because it made workers interchangeable and generated considerable performance gains. Workers, for their part, willingly embraced routine jobs because the productivity gains generated on routine tasks were partly redistributed to them in the form of better working conditions (job security, higher wages, more generous social protection). Routine became such a creator of value, for companies and workers alike, that people began to invest in routinising as many tasks as possible. The world of services moved closer to industry with the normalisation of tasks, the standardisation of the service delivered, and the possibility of growing service companies to an unprecedented scale. In the second half of the twentieth century, the era of the triumph of mass production and mass consumption, the whole economy seemed to become routine. Some tasks, of course, resisted routinisation, but companies found workarounds. In retail, routine was imposed by eliminating counters: customers got used to serving themselves in the aisles and then waiting patiently in line at the checkout. Mass retailers were thus able to routinise their activity and generate the corresponding productivity gains. Another way of “forcing” routinisation was vocational training: certain more complex tasks, repeated over and over by well-trained employees, became routine and ended up being integrated, in their turn, into the scientific organisation of work. Routine in transition. Today, however, routine is in transition, and this transition is a considerable challenge for companies. Where it exists, routine continues to generate productivity gains, but these are less and less redistributed to workers. Routine thus appears in a new light: for the majority of the working population, it is no longer a source of job security and high wages; it has instead become synonymous with boredom and alienation. The weariness felt by employees in the famous bullshit jobs shows clearly that routine is not what it used to be. And that is a problem for companies, whose core business is often routine. Above all, with the rise of digital technology, routine is increasingly a pretence. There are in fact two ways of looking at the digital transition from the standpoint of routine.
In one sense, digital technology means an improved customer experience and increased personalisation: one might conclude that routine is receding. But in reality the digital transition instead leads to an extension of the domain of routine, for at least two reasons. First, digital technology makes it possible to put customers to work more and more, where appropriate by connecting them to one another in networks. If huge communities of connected customers take on the non-routine tasks, then companies can concentrate even more on what they master best: routine and the associated productivity gains. Routine, broken. Second, the digitisation of products and the increasingly regular and systematic collection of personal data make it possible to routinise the personalisation of the customer experience. This is a fundamental break. Routine as seen by the company is no longer synonymous with mediocre quality for the customer. On the contrary, routine in the sphere of production can translate, for consumers, into an ever simpler, more intuitive and more personalised experience. Digital technology thus upends the strategic landscape by eliminating the trade-off between routine and personalisation. And if companies want to remain competitive, they must align themselves with these new practices. What conclusions can we draw from all this? First, it is good news for economic growth. The extension of the domain of routine generates even more productivity gains, which we must now learn to redistribute to customers (in the form of lower prices) and to workers (in the form of improved working conditions). Moreover, routine tasks always tend to become commoditised, so easy are they for competitors to observe and replicate. When part of an activity becomes more routine, the companies involved always end up competing on the non-routine part of their activity: all the tasks where frequent interactions with customers break the routine and force investment in qualities that are harder to routinise, such as listening, availability, empathy and humour. Once again, all of this is good news: for customers, who can expect to be served better and better by companies vying for their attention; and above all for the economy as a whole, since non-routine tasks are also the ones that create the most jobs. Routine, called into question. For a long time the distinction between routine and non-routine tasks was rich in meaning. Routine tasks, it was said, can be automated, whereas non-routine ones cannot. Robots and artificial intelligence would occupy the whole terrain of routine, leaving humans the non-routine… But this distinction is no longer as relevant as it once was. Artificial intelligence bears no resemblance to human intelligence. Rather than “understanding” and “reasoning”, it merely “scans” immense quantities of data and “dumbly” computes. Yet this kind of intelligence succeeds marvellously at certain tasks reputed to be non-routine: AI makes reliable medical diagnoses by compiling millions of medical cases in a way no doctor can.
In short, intuition and experience, which we thought so precious, are outstripped by AI’s exploitation of big data. IBM’s Watson won the game show ‘Jeopardy’. Conversely, certain tasks reputed to be routine are not easy to automate. For example, the work done by cleaning staff could only be automated at great expense by a host of highly sophisticated robots; in other words, it will not be automated any time soon. Routine therefore risks being thoroughly shaken up, broken and redefined by the digital transition. Frederick Taylor would be at a loss! Discover the other articles from the WillBe Group publication and follow us on Twitter! City and commerce: what makes urban vitality? Under what conditions is commerce an engine of growth and a source of urban vitality? What policies are needed…medium.com Work: the Remote revolution. The golden age of the traditional office is behind us. medium.com Disability and digital technology: more autonomy for all? The question of disability has long been approached from the angle of assistance, with a logic of exclusion or even…medium.com Confucius and managers. Already the author of a book on The Art of War for executives, Domitille Germain publishes next month with…medium.com
Extension of the Domain of Routine
53
extension-du-domaine-de-la-routine-117bfecbdf05
2018-05-30
2018-05-30 04:43:10
https://medium.com/s/story/extension-du-domaine-de-la-routine-117bfecbdf05
false
1,275
Strategy and Management Consulting to accelerate value creation
null
null
null
WillBe Group
willbe-group
CONSULTING,DISRUPTIVE INNOVATION,CORPORATE INNOVATION,DIGITAL TRANSFORMATION,BUSINESS TRANSFORMATION
WillBe_Group
Routine
routine
Routine
1,414
Laetitia Vitaud
I write about #FutureOfWork #HR #freelancing #craftsmanship #feminism laetitiavitaud.com
35f5caaa41c9
Vitolae
2,899
1,609
20,181,104
null
null
null
null
null
null
0
null
0
634d4b270054
2018-04-03
2018-04-03 07:51:58
2018-04-03
2018-04-03 07:52:37
1
false
en
2018-06-05
2018-06-05 09:19:59
3
117c1efa32ff
1.158491
1
0
0
The emergence of drone technology has so far helped police departments carry out safe and secure rescue operations. But to operate a drone in the…
5
Drones Used As Part Of Rescue For First Time In Ozarks. The emergence of drone technology has so far helped police departments carry out safe and secure rescue operations. But to operate a drone in the night skies, the operator needs clearance from the FAA and special certification. “The technology is getting to the point where most departments are able to afford lower-end drones,” said pilot Tom Baird. Recently, a drone helped rescue a man far faster than a search of the waterways by boat or on foot would have. “The drone, equipped with a 4K and infrared camera, can see heat signatures, whether that be from a vehicle or someone who may be lost,” explained Baird. “The cool factor and flying the drone and seeing the live stream and seeing beautiful pictures, that’s all fabulous. But when you get called out to do a water rescue or a search and rescue mission for a child that is missing, that’s what makes it all worth it,” added Baird. Source: https://bit.ly/2uK9Yhy About DEEPAERO: DEEP AERO is a global leader in drone technology innovation. At DEEP AERO, we are building an autonomous drone economy powered by AI & Blockchain. DEEP AERO’s DRONE-UTM is an AI-driven, autonomous, self-governing, intelligent drone/unmanned aircraft system (UAS) traffic management (UTM) platform on the Blockchain. DEEP AERO’s DRONE-MP is a decentralized marketplace. It will be a one-stop shop for all products and services for drones. These platforms will be the foundation of the drone economy and will be powered by the DEEP AERO (DRONE) token.
Drones Used As Part Of Rescue For First Time In Ozarks
1
drones-used-as-part-of-rescue-for-first-time-in-ozarks-117c1efa32ff
2018-06-05
2018-06-05 09:20:01
https://medium.com/s/story/drones-used-as-part-of-rescue-for-first-time-in-ozarks-117c1efa32ff
false
254
AI Driven Drone Economy on the Blockchain
null
DeepAeroDrones
null
DEEPAERODRONES
null
deepaerodrones
DEEPAERO,AI,BLOCKCHAIN,DRONE,ICO
DeepAeroDrones
Deepaero
deepaeros
Deepaero
0
DEEP AERO DRONES
null
dcef5da6c7fa
deepaerodrones
277
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-25
2018-09-25 06:11:02
2018-09-25
2018-09-25 06:11:52
3
false
en
2018-09-25
2018-09-25 06:11:52
0
117e1c753037
1.131132
1
0
0
This Saturday was our eighth week of learning and we were privileged to hold our class on the Naive Bayes algorithm as part of the DevFest…
1
AI Saturdays Nairobi at Google DevFest. This Saturday was our eighth week of learning, and we were privileged to hold our class on the Naive Bayes algorithm as part of the DevFest sessions. The room was packed as usual with machine learning enthusiasts. The session had three parts. It began with a theoretical dissection of the principles behind the Naive Bayes algorithm. We then proceeded to an example-based mathematical analysis with case studies. Lastly, we dived into a codelab on the implementation of multinomial Naive Bayes in sklearn, where we built a spam filter and evaluated its precision, recall and F1 score. The event was a good opportunity to raise awareness of the AI Saturdays movement and to encourage other people to sign up as ambassadors and start a chapter in their city. With a diverse representation of people from over 26 countries, this was a great initiative. The DevFest crew
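For readers who missed the codelab, here is a minimal sketch of the kind of multinomial Naive Bayes spam filter covered, assuming scikit-learn is installed; the toy messages are illustrative stand-ins for a real labelled corpus such as the SMS Spam Collection.

```python
# Minimal multinomial Naive Bayes spam filter (illustrative sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import precision_score, recall_score, f1_score

messages = ["win a free prize now", "meeting at noon tomorrow",
            "free cash claim your prize", "lunch with the team today"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

# Turn raw text into token-count features, the representation MultinomialNB expects.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

# Fit and score; a real codelab would evaluate on a held-out test split instead.
clf = MultinomialNB().fit(X, labels)
pred = clf.predict(X)
print("precision:", precision_score(labels, pred))
print("recall:", recall_score(labels, pred))
print("F1:", f1_score(labels, pred))
```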
AI Saturdays Nairobi at Google DevFest
1
ai-saturdays-nairobi-at-google-devfest-117e1c753037
2018-09-25
2018-09-25 06:11:52
https://medium.com/s/story/ai-saturdays-nairobi-at-google-devfest-117e1c753037
false
154
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Felicity Mecha
null
3bc07df8c9eb
felicitymecha
55
42
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-30
2018-01-30 06:19:21
2018-01-31
2018-01-31 02:50:13
0
true
ja
2018-02-16
2018-02-16 21:27:35
0
117f56431591
3.8
1
0
0
Why did we start this company? Why are we trying to do this even while grinding our lives away, when there are far easier ways to make money?
4
Staking Our Lives on Seeing It Through. Why did we start this company? Why are we trying to do this even while grinding our lives away, when there are far easier ways to make money? Thinking about it that way, I realised we were never really working for money in the first place. Of course, I want the company, as a public instrument, to keep growing, but there is no need for me personally to grow rich; if value is distributed to customers, colleagues and supporters, that is enough. Looking back at our original materials, which were shelved at one point, the emphasis was on escaping poverty and on world peace, but it is also clear that we were not considering an NPO-style approach. That is because without autonomous activity, a genuine escape from poverty is difficult. People often say that ideas by themselves are worthless, so the starting point was to ask: what if we could attach value to ideas? When someone wants to use a character, plot or world-setting you came up with, what if the process of getting permission could be simplified? Using it without asking is wrong, but asking for permission is awkward when you don't know how to start the conversation. In such cases a platform could streamline the process, and record who used what, when and how. At first we tried to do this with a plain database, and although we never announced it publicly, the mechanism was already built in. We also planned to handle external requests via an API, but now Ethereum offers smart contracts. In other words, we now think blockchain technology might actually make this work. With this platform, if students could somehow access it from a school terminal and upload their ideas, someone might use them and pay compensation in return: a place that creates something from nothing. The amounts may be small, but the point is to help people understand knowledge work that is not just selling one's time by the hour. With this platform, I believe the next Hayao Miyazaki could be discovered in some newly growing city. That is why we are doing what we do now: an AI for discovering talent. So we need a mechanism that detects talent and works. Building it requires data not only from amateurs but from professionals too, so we are trying to build a SaaS that can collect that data rationally and legally. Of course, we will never access the data directly; all data is encrypted and login access is properly managed. The point is to improve the accuracy of our algorithms through statistical analysis of that data. Until that is built and delivered to the world, we will not give up; in fact, there is no stopping now. It is my life, after all. Now, being a creator myself, I write fiction, and after writing a few times you start to see the knack of it. Even a small audience reads your work, and until I knew that joy, things did not go well. Being read makes you stronger. The real concept behind ficta, which we built last year (the feature was already implemented), was "properly structuring a work, the way a recipe structures a dish". At times we called it a GitHub for creative work, or a Cookpad for creative work, or "the X of Y", but I myself never quite felt it, and I did not understand the industry's common sense that with that business model you build an MVP, reach a certain number of unique users, only then validate your hypotheses, and only then are you at something like a seed stage. So now we have stopped trying to achieve something with a single product and would like to be seen as a business group. We will not only provide services; we will also produce content and do distribution. Even Amazon, a platformer, provides content, doesn't it? As for distribution, the internet has long since been carved up, so entering now is extremely difficult. Then where do we do it? Offline, of course. We have no choice but to build a Neverland of our own. I do not think providing standalone content will work next time. The trend from now on will be the systems-integration strength of a conglomerate of businesses + internet + AI + IoT. In other words, my overall experience in SI comes to life, and being raised as a generalist pays off.
Staking Our Lives on Seeing It Through
11
命を張ってやり遂げること-117f56431591
2018-06-14
2018-06-14 03:27:23
https://medium.com/s/story/命を張ってやり遂げること-117f56431591
false
26
null
null
null
null
null
null
null
null
null
Entertech
enter-tech
Entertech
0
Xrosriver
Xrosriver inc.
a18d78393b3d
xrosriver
489
721
20,181,104
null
null
null
null
null
null
0
null
0
c49300e6d22e
2018-04-20
2018-04-20 10:21:39
2018-04-20
2018-04-20 10:30:49
1
false
en
2018-04-20
2018-04-20 10:30:49
5
117f900cd28
4.935849
3
0
0
As the AI revolution gathers pace and influence, there is an increasing focus on “ethics in AI” as sociologists, ethicists and…
4
Why we need a movement for justice in AI, not ethics in AI. As the AI revolution gathers pace and influence, there is an increasing focus on “ethics in AI” as sociologists, ethicists and technologists battle to inform its progress. As I have read article after article on the subject of ethics in AI, I have been struck by the alarming absence of what harm actually means in the context of AI: oppression. As an anti-oppression education organisation, a notion that consistently emerges in our work at Fearless Futures is that how we frame a problem informs how we come to solve it. While ethics is a wide and diverse field, our suspicion has been that unless we have a language that speaks to the root issues at stake when it comes to AI, we will get nowhere. Does “ethics” in its mainstream sense (doing good?) cover what is required of technologists, policy makers, legislators and funders to solve the problems described here, for example? Not really. In our view, the root issue must be that structural oppression exists across our communities and societies, and that without active transformation of power relations, AI will perpetuate, reproduce and amplify this harm. And if our conception of the problem isn’t framed in this way, then our efforts will fail. If there is a disease of the body and our discourse is centred on the person’s chipped nails, then there may well be recommendations for a manicure, but we probably won’t heal the body. If we are prepared to dig in and acknowledge the disease of the body, then we will do anti-oppression work. In my view, then, the quest is for an AI of justice, not an ethical AI. I am not an ethicist, so I decided to reach out to Dr. Arianne Shahvisi at the University of Brighton to discuss these very questions with her. She was so erudite and powerful that I thought it would be simplest to share an excerpt of our exchange below.

ME: I am trying to get my head around why people have focused on a narrative of “ethics” in AI rather than anti-oppression or justice in AI. What’s going on here?

DR. SHAHVISI: Ethics deals with right and wrong, fair and unfair, just and unjust, but it is traditionally employed in ways that manage to avoid discussion of oppression. I know that will sound ridiculous and implausible to you, but unfortunately that’s how it is. I suspect it is a relic of those who have been most influential within the discipline: wealthy white men, usually from a long time ago (the proverbial “pale, male, and stale” writers who make up the bulk of philosophy reading lists), who really did/do feel like fully individual, efficacious agents in the world, and do not think beyond that positionality. So, when people use “ethics” in an applied sense (“medical ethics”, “business ethics”) they typically refer to the rightness or wrongness of an interaction between two individuals, i.e. a doctor and a patient, a researcher and a participant, a service provider and a client. Ethics is very often highly individualised and atomistic, very libertarian, and is applied without consideration of power or structural factors. So when someone asks you to consider AI ethics, they will typically be considering individual misuses of the technology, e.g. weaponisation, data protection issues, or an individual robot being treated badly.

ME: Hmm, that appears to be what I’ve been seeing, broadly speaking.
There must be ethicists who do focus on anti-oppression though, right?

DR. SHAHVISI: Yes, as with most generalisations, there are exceptions. Not all ethics is conducted in this ridiculous way, and there is scope for it to include, and even centre, structural considerations. That’s what I try to do in my work, and that’s what others working in the philosophy of race and gender attempt to do too (in case you want to quickly scan an example, here is an ethics paper of mine that just came out which openly resists this libertarian streak in reproductive ethics, in favour of structural concerns). In fact, I think it’s fair to say that since the work of philosophers like Arendt and Foucault, and the development of feminist theory, many philosophers do consider power and oppression in their academic work, but those subtleties are yet to be transmitted to the people within organisations and sectors who tend to respond only to PR pressure, and who often think of ethics as nothing other than a practical box-ticking exercise.

ME: My fundamental instinct is that one can hold an ethical position AND that position can still fail to deliver an outcome of justice. If that’s the case, I feel that AI ethics simply isn’t sufficient for the scale and complexity of informing our work in AI (presuming our shared goal is to end structural harm, which I have to on some level presume is not everyone’s end game). What are your thoughts?

DR. SHAHVISI: Can you develop a position that is ethically sound, according to a particular ethical theory, yet oppressive? Yes, sadly you can. For example: utilitarianism is one school of thought within ethics which tells us that the right thing to do in a given situation is to maximise wellbeing for as many people as possible. Suppose you had a society in which a minority group had been treated very badly, and were now violently resisting, and seemed intent on harming majority groups. On certain readings, utilitarianism would suggest that it was ethically acceptable to kill all of them in order to protect the majority and keep as many people happy as possible. So that would be an ethically acceptable position, but a very oppressive one.

ME: So, what role can ethics play, if any at all?!

DR. SHAHVISI: I might have painted a rather disparaging picture of my field, but it’s important to remember that ethics is being recuperated, especially as philosophy slowly becomes more diverse. Ethics can and should include considerations of aggregate human units, rather than just individuals. Injustices can and do occur between individuals, but they occur with much greater frequency and intensity between different groups of people (sometimes mediated by an individual encounter, but often not), in accordance with robust, predictable trends relating to distributions of social power. You are therefore perfectly justified in arguing in favour of a broader reading of ethics than the traditional atomistic one, in order to better capture the realities of people’s experiences.

End of exchange! It’s worth noting that while there is much superficial writing on ethics in the mainstream technology press, there are some brilliant voices leading the way too, Kate Crawford among them. You may have noticed that Dr. Shahvisi and I place a central concept at the heart of our understanding of inequality, and that is ‘power’ and its asymmetries. Kate Crawford argues this too.
I leave you with a quote from her for good measure: “Often when we talk about ethics, we forget to talk about power. People will often have the best of intentions. But we’re seeing a lack of thinking about how real power asymmetries are affecting different communities.” So, let’s move from AI ethics to a movement and action for justice in AI. Then we might finally get somewhere.
Why we need a movement for justice in AI not ethics in AI
22
why-we-need-a-movement-for-justice-in-ai-not-ethics-in-ai-117f900cd28
2018-06-11
2018-06-11 12:40:13
https://medium.com/s/story/why-we-need-a-movement-for-justice-in-ai-not-ethics-in-ai-117f900cd28
false
1,255
Engaging people in critical thought to understand and challenge the root causes of inequities, and growing powerful new leadership for transformative change.
null
FearlessFuturesUK
null
Fearless Futures
fearless-futures
DIVERSITY AND INCLUSION,EQUITY,DECOLONISATION,INCLUSIVE DESIGN,DIVERSITY IN TECH
fearlessfutures
Ethics
ethics
Ethics
7,787
Hanna Naima McCloskey
CEO @ Fearless Futures. Educator. Innovator. Design for Inclusion.
6537df10396e
hanna_64239
54
155
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-13
2017-09-13 01:46:17
2017-09-13
2017-09-13 06:03:01
2
false
en
2017-09-29
2017-09-29 05:27:08
5
117f9c7ffa4b
4.262579
3
0
0
The future. In science fiction, the future seems to promise all sorts of developments. In this genre, space travel becomes common…
1
Artificial Intelligence: A companion made of glass (Post 1). Photo from How Do We Align Artificial Intelligence with Human Values? The future. In science fiction, the future seems to promise all sorts of developments. In this genre, space travel becomes common, time travel is possible, parallel universes are confirmed to exist, and we live among extraterrestrial life. However, not everything in science fiction is as far from our reality as it seems. As our world has advanced, so have our capabilities involving technology. One major advancement being developed right now is the very fragile concept known as artificial intelligence. What is artificial intelligence and why am I interested in it? Artificial intelligence, or A.I., refers to intelligence displayed by machines instead of by something living and breathing, such as humans or animals. A.I.s have appeared in a variety of forms, ranging from programs producing text on their own to full-fledged bodies that are almost indistinguishable from humans. In these different forms, they have all been able to exhibit intelligence matching or exceeding that of humans. Their applications seem limitless, as they can serve as personal companions or even military soldiers. What has always interested me about A.I.s is the idea that they can be programmed to do a task and eventually surpass their programming. A.I.s are able to do this by absorbing and applying learned information to different situations, just as humans and animals do. In my life I have read and watched different forms of science fiction media containing characters that are forms of artificial intelligence. In these stories, A.I.s have played the role of protagonists or side characters as they learned to live their own lives or helped other characters as their companions. However, A.I.s are not always good. In some stories they have been the antagonists, used against other humans, running rampant on their own, or even bringing about the destruction of the human race. Why am I researching and writing about this? Photo from Hackers Find ‘Ideal Testing Ground’ for Attacks: Developing Countries. As I mentioned earlier, artificial intelligence is a very fragile concept. It is possible that we will end up with A.I.s made for good or for evil, just like in science fiction. This can all sound ‘silly’, but the dangers are very real. In the New York Times article titled “Hackers Find ‘Ideal Testing Ground’ for Attacks: Developing Countries”, by Sheera Frenkel, the threat of malware attached to an A.I. is discussed. Frenkel reports, “The cyberattack in India used malware that could learn as it was spreading, and altered its methods to stay in the system for as long as possible.” Frenkel’s point is that this artificial intelligence program was designed specifically to do harm and to learn how to continue doing harm without being eliminated. This is just one situation that can occur if A.I.s are not regulated and are instead unleashed with the intent or capability to do harm. How exactly can A.I.s be regulated? It all depends on the programmer and the work they want to put into their A.I. to make sure it can be controlled in some way. This topic is the central story of the New York Times article titled “Teaching A.I. Systems to Behave Themselves”, by Cade Metz. In his article, Metz visits OpenAI, an A.I. lab co-founded by Elon Musk.
Metz writes about a problem researcher Dario Amodei and his colleagues encountered in their lab. Amodei designed an A.I. tasked with playing a boat racing video game, collecting points that appeared throughout the race as well as winning the race itself. Unfortunately, something unexpected soon occurred: instead of trying to win the race, the A.I. began to focus solely on collecting points, even sacrificing a win just to keep collecting them. The solution the researchers came up with involved changing parts of the A.I.’s algorithm so they could guide it whenever they deemed necessary. As Metz puts it in his article, “They believe that these kinds of algorithms — a blend of human and machine instruction — can help keep automated systems safe.” In other words, this is the main solution the researchers believe must be implemented to keep A.I.s on a leash. Why do I want to continue researching and writing about this? Although it’s obvious artificial intelligence has problems we must look out for, there are a few other related topics I want to look into. I want to continue researching both the good and the bad appearing in the development of artificial intelligence. As a mechanical engineering major who can’t wait for A.I.s to become part of everyday life both at home and in the workplace, I find that focusing on topics like these allows me to decide for myself whether we are steering artificial intelligence in the right direction. If everything seems to be going well, my excitement for A.I.s will not diminish in the slightest. However, I do fear a reality in which A.I.s are prioritized for military use and things of that nature. Another topic I want to look into is the actual development of these A.I.s. I have some idea that creating programs able to learn and act on their own is not the easiest thing to do, so I want to look into how the process is coming along. Even though the development is happening right now, and I am excited for A.I.s, it is possible that I might not get to see them the way I have imagined them. These are just a few reasons why artificial intelligence is so fascinating to me, as we try to walk the bright, prosperous path with technology by our side without engineering our own destruction. Works Cited: Frenkel, Sheera. “Hackers Find ‘Ideal Testing Ground’ for Attacks: Developing Countries.” The New York Times, 2 July 2017, www.nytimes.com/2017/07/02/technology/hackers-find-ideal-testing-ground-for-attacks-developing-countries.html?rref=collection%2Ftimestopic%2FArtificial%2BIntelligence. Metz, Cade. “Teaching A.I. Systems to Behave Themselves.” The New York Times, 13 Aug. 2017, www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html.
Artificial Intelligence: A companion made of glass (Post 1)
3
artificial-intelligence-117f9c7ffa4b
2017-09-29
2017-09-29 05:27:09
https://medium.com/s/story/artificial-intelligence-117f9c7ffa4b
false
1,028
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Fabian Cisneros
Hi, I’m Fabian. I’m a second year Mechanical Engineering student at San Francisco State University.
dadff5e02444
fcisneros141
2
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-15
2018-02-15 22:45:59
2018-03-22
2018-03-22 22:16:11
2
false
en
2018-03-22
2018-03-22 22:16:46
2
1180554ead72
1.575786
0
0
0
Trust in AI may be one of the biggest challenges
5
We get on like a house on fire. Trust in AI may be one of the biggest challenges. We get on. It may turn out that the biggest challenge in the adoption of AI is how humans come to trust AI, and not in the way you think. Despite opinions that humans won’t ever trust robots, humans may be too trusting. Research suggests that once a human has placed their trust in a robot, it’s hard to shake; we are willing to blindly follow robots into the fire. This may explain some of the accidents self-driving cars have had with humans behind the wheel supposedly supervising. Maybe it also explains some of the strange things we do when interacting with AI. In the ‘assisted self-driving’ car scenario, I imagine people just switch off when told to hold the wheel and don’t actually pay attention and supervise. They trust the machine, and why wouldn’t you? The car drives itself. This mentality doesn’t help us through the next chapter of AI, where it augments our capabilities but doesn’t replace us. If the AI is not at 100%, implicit and absolute trust will fail us both. So how do we make it work while AI still has the training wheels on and is far from perfect? We learn to drive in a supervised environment; we should do the same with robots. The challenge for AI/robot builders is how to work with people and keep them engaged. When do we let our users know “I’ve got this”, and when do they need to intervene? I believe we need to encourage a healthy suspicion of the AI’s competence and gamify the act of supervising our creations. We should stop worrying about building friendly faces into our robots to engender trust, and instead consider something more terrifying, like the plague doctor mask, to keep us on our toes.
We get on like a house on fire
0
we-get-on-like-a-house-on-fire-1180554ead72
2018-03-22
2018-03-22 22:42:17
https://medium.com/s/story/we-get-on-like-a-house-on-fire-1180554ead72
false
316
null
null
null
null
null
null
null
null
null
Self Driving Cars
self-driving-cars
Self Driving Cars
13,349
Matt Szwec
Explore. Engineer. Expedite.
21e21df1b5b6
mattszwec
16
5
20,181,104
null
null
null
null
null
null
0
null
0
3229f31ca4f4
2018-07-20
2018-07-20 15:57:25
2018-07-24
2018-07-24 10:24:22
5
false
en
2018-07-24
2018-07-24 10:24:22
5
1180ac5e2cb6
7.565409
11
0
0
By Shyam Shinde
3
SensAI Predict — Building ticket classification service using NLP and ML. By Shyam Shinde. Helpshift’s customer service platform is used by companies such as Epic Games, Microsoft and Zynga to receive support tickets from their users via mobile apps, web chat and email. Thousands of new tickets are filed daily in the platform across our 2,000+ customers and 2+ billion end-users worldwide. Every good company wants its customer service tickets resolved as soon as possible. To achieve that quick turnaround, it is imperative that a ticket reach the agent best skilled at handling it. Hence, ticket classification and routing are two important functions for an efficient customer service operation. How is ticket classification done today? Today, companies have dedicated people focused on the mundane-yet-important work of ticket classification. Because it is a manual step before tickets finally reach the agent, customer service centers face scalability problems. As more tickets come in every day as the company grows, they need to hire more agents for classification, making the model slow, complicated to coordinate and unscalable. Because of the repetitive nature of this work, customer service centers also face attrition, and the company needs to hire and train personnel frequently. This involves significant cost. Due to the nature of the customer service industry there is a steep learning curve, and even when skilled agents are available, classification adds an unavoidable delay to ticket handling and processing. As a result, the time to first reply can increase even when the support center has enough skilled agents to work on the tickets. Speaking with our customers got us thinking: why not build a ticket classification service into our system? What are the challenges in building automated ticket classification? We wanted to build a system that could intelligently classify tickets even before a customer support agent sees them, and, depending on the applied labels, let the organization set actions such as routing to a particular agent. The challenges in building such a classification system are: Not adding significant latency to ticket creation, even though we are introducing a significant classification step. Working for all of our customers. Supporting tens of different languages, from Chinese to Portuguese, since tickets arrive in many languages. Matching or beating the accuracy of existing company-specific manual routing, if we were going to automate routing based on the applied label. Given the above challenges, we needed a completely automated workflow in which our customers can build their own classification model through our platform in a few steps. This classification model is used to apply a unique label to each ticket; using this label, an admin can route tickets with a particular label to a group of agents. In this article, we will discuss the backend architecture of label prediction and show how the label prediction service has enabled automatic routing of incoming tickets. What does the ticket classification product look like? In this example, an end-user has filed a ticket to ask the price of a particular bag. Our prediction service has labeled the ticket “billing”. How do you use the label applied to a ticket?
Applying a correct label is not the end goal of ticket classification. We want to enable our customers to trigger actions such as routing to a particular agent or group of agents (we call this a queue), sending an auto-reply to the ticket, and so on. Here is one example, where the action is set to assign the ticket to the product queue if the predicted label is billing. How did we build the ticket classification product? We, the data intelligence team, started exploring different approaches to ticket classification. One method was to find a set of keywords for each label: when a new ticket comes in, we search the ticket’s text for each label’s keywords and predict a label based on the search result. But it was not possible to find a unique set of keywords for each label. For example, labels like order_tracking and order_nondelivery share keywords such as order placed, return, wait and delayed. Next, we explored machine learning algorithms that can find patterns for each label in a dataset of filed tickets. The machine learning experiments were promising and the early results were good, so we settled on a machine learning approach. An advantage of this approach is that the model can be updated with agent feedback on the correctness of the predicted label, which was not possible in the keyword-search solution. The engineers started working on a system to build the machine learning models and predict a unique label with a pre-built model in real time. Helpshift’s label prediction service uses natural language processing and machine learning classification algorithms to accurately predict the label. The set of labels from which the label is chosen is decided by each customer. General flow in building the model: A CSV dataset of tickets and their corresponding labels is required to train the classification model. The admin can use the SensAI tab in the dashboard to upload the dataset and build the classification model. Our in-house ML platform prepares and validates the submitted ticket dataset, then builds the classification model from the validated dataset. Once built, the model is stored in the model store and is used to predict a label whenever a new ticket comes in. Machine learning steps while building the model. Preprocessing: The goal of the preprocessing step is to collect the relevant words from the corpus of ticket text and find associations between those words. Preprocessing starts with cleaning the ticket data by removing HTML tags, numbers, URLs and other elements that are not useful for finding patterns in the dataset. After that, we tokenize the data by finding sentence boundaries and the words within them, and apply stemming to reduce words to their base forms. To capture word associations, we create bigrams and trigrams. Model building: We generated NLP features such as the vocabulary, word frequencies and a ticket-frequency matrix across labels. We started experimenting with classification models using a naive Bayes classifier and logistic regression. But these common classification algorithms were not good enough to predict an accurate label from a large number of possible labels (the solution space): they tended to predict the most common labels when the user wrote only a short text in the ticket, which happens often when a user submits a ticket from an app or from a web chat window in a conversational UI. The data science team then started experimenting with ML models that work on short-text tickets, and we benchmarked the accuracy of different algorithms against test data.
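As a rough illustration of the preprocessing and baseline-model steps just described, here is a minimal sketch using scikit-learn; the cleaning rules, the tiny ticket dataset and the label names are stand-in assumptions, not Helpshift's actual pipeline.

```python
# Illustrative sketch of the preprocessing + baseline-model steps (not Helpshift's code).
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def clean(text):
    # Strip HTML tags, URLs and numbers, which carry little signal for label patterns.
    text = re.sub(r"<[^>]+>", " ", text)
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"\d+", " ", text)
    return text.lower()

# Tiny stand-in dataset; the real system trains on a customer's uploaded ticket CSV.
tickets = ["Where is my order? I placed it a week ago",
           "I was charged twice for one purchase",
           "My order still has not been delivered",
           "Please refund the duplicate charge"]
labels = ["order_tracking", "billing", "order_tracking", "billing"]

# Unigrams, bigrams and trigrams approximate the word-association features above;
# MultinomialNB is one of the baselines mentioned (logistic regression is another).
model = make_pipeline(TfidfVectorizer(preprocessor=clean, ngram_range=(1, 3)),
                      MultinomialNB())
model.fit(tickets, labels)
print(model.predict(["where is my order"]))
```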
We found that no single independent model outperformed the others on all parameters. To improve accuracy, we decided to ensemble different algorithms and apply the AdaBoost technique. We observed that the overall classification accuracy was better than that of the individual models. The last step in model building is running validation on a test dataset and determining precision and recall for each label as well as overall model accuracy. From the validation results, we suggest the optimal confidence threshold the model should apply so that it predicts a label only when confident. General flow in predicting the label on a new ticket: When a new support ticket comes in, it is collected by Helpshift servers. The backend server knows which company the end user has submitted the ticket to, and checks whether that customer has enabled the label prediction service. If label prediction is enabled, the backend service detects the language of the ticket text. The collected features are sent to the label prediction service along with the ticket text. The prediction service fetches the required models from the model store and predicts a score for each available label in the model. The label with the highest score is attached to the ticket. Once a label is attached, our customer can decide what to do with it: route the ticket to a particular team of agents (a queue), auto-reply with a specific message, and so on. In this way, our customers can combine the predicted label with Helpshift’s workflow features to automate sophisticated workflows. For example, if a ticket is about a lost account, an automatic reply can be sent pointing to an FAQ explaining the required information and steps. If a ticket is about an app crash, it can be routed to a high-priority queue where it is auto-assigned to the currently available agents. When an agent sees the label applied to a ticket, they have the option to mark the predicted label as wrong, and they can also correct it. The corrected label is sent back to the ML platform as feedback to the classification model, which is then updated so that it is tuned for better prediction. This feedback loop can seem magical because, with more feedback from agents, the model becomes more accurate automatically. This enables two things: model accuracy can improve to 90+%, and new types of issues can be identified if agents mark the predicted label as wrong on many tickets.
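To make the ensemble scoring and confidence threshold concrete, here is a minimal sketch along the lines described above, using scikit-learn's AdaBoost; the stand-in features, labels and the 0.6 threshold are assumptions, not Helpshift's production values.

```python
# Illustrative sketch of boosting plus a confidence threshold (not Helpshift's code).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 20))        # stand-in ticket feature vectors
y = rng.integers(0, 3, 200)      # stand-in label ids for three labels

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)

def predict_label(features, threshold=0.6):
    # Score every label and attach the top one only if it clears the threshold,
    # mirroring the "predict the label confidently" behaviour described above.
    probs = clf.predict_proba(features.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    return clf.classes_[best] if probs[best] >= threshold else None  # None = no label

print(predict_label(rng.random(20)))
```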
Try to create a label for a unique type of ticket category When a model is built for a set of labels where each label represents a unique type of ticket category, it always performs better in accuracy than a model where a label represents more than one type of ticket category. For example, the labels order_nondelivery and order_delayed represent the same ticket category, so avoid having both as prediction labels. 3. Build a separate model for each language We have found that a language-specific model gives better accuracy than a model built on ticket text from all languages. If you're interested in working on such problems, we are hiring; join us!
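As promised above, here is a rough sketch of the ensemble-plus-threshold idea: an AdaBoost classifier trained on TF-IDF features that only attaches a label when the top score clears a confidence threshold. This is a minimal illustration with assumed feature choices, sample tickets and threshold value, not Helpshift's production system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

# Toy training data: ticket texts and their human-assigned labels
tickets = ["where is my order", "refund my payment", "app crashes on login"]
labels = ["order_tracking", "billing", "crash"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), AdaBoostClassifier(n_estimators=100))
model.fit(tickets, labels)

CONFIDENCE_THRESHOLD = 0.5  # assumed value; in practice chosen from validation results

def predict_label(text):
    # Score every available label and keep the best one
    probs = model.predict_proba([text])[0]
    best = probs.argmax()
    # Only attach the label if the model is confident enough
    if probs[best] >= CONFIDENCE_THRESHOLD:
        return model.classes_[best]
    return None  # leave the ticket unlabeled for a human agent

print(predict_label("I still have not received my order"))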
SensAI Predict — Building ticket classification service using NLP and ML
35
sensai-predict-building-ticket-classification-service-using-nlp-and-ml-1180ac5e2cb6
2018-07-24
2018-07-24 10:25:44
https://medium.com/s/story/sensai-predict-building-ticket-classification-service-using-nlp-and-ml-1180ac5e2cb6
false
1,784
Engineering blog for helpshift
null
null
null
helpshift-engineering
null
helpshift-engineering
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Shyam Shinde
Sr. Machine Learning Engineer @ Helpshift
bfa5fabce3e2
shyamshinde
29
34
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-10-04
2018-10-04 03:19:25
2018-09-13
2018-09-13 21:12:16
4
false
en
2018-10-04
2018-10-04 03:22:17
6
11813e4a2754
5.156604
0
0
0
Artificial intelligence is no longer something you can only see in science fiction movies; it is now an integral part of the medical…
5
What You Need to Know About the Impact of Artificial Intelligence in Healthcare Artificial intelligence is no longer something you can only see in science fiction movies; it is now an integral part of the medical system and it will lead to changes we could've barely imagined just years ago. Machines are getting better at learning how to outperform humans in terms of the speed with which they complete certain tasks, as well as efficiency, cost and accuracy. The presence of artificial intelligence in healthcare could mean that many medical procedures which required the contribution of qualified personnel can be automated. As a result, doctors would be able to direct their attention to more significant issues and patients would have unprecedented access to medical services. If this has sparked your curiosity, then you will be interested to know of the many uses of artificial intelligence in healthcare. This article will showcase a list detailing some of these innovative solutions, so keep on reading, but bear in mind that this is only the tip of the iceberg! 1. Helps Us Stay in Shape While many of us have never stopped to think about it, artificial intelligence is an instrumental part of the apps, programs and other medical software we use to track information about our health. And many of these health apps are used daily by people all around the world. The result is a vast database which contains critical information about users' health and lifestyle. In this example, what artificial intelligence does is sweep through these large amounts of data in order to offer users proactive management of their lifestyle through individualized guidance. For example, the running apps on our phones skim through information, such as running speed, in order to deliver an individualized action plan. Over time, the app can "learn" about a user's lifestyle and know when you are most likely to work out, for instance, and send you notifications or reminders at the optimum time. 2. Better Share Medical Knowledge The addition of artificial intelligence in healthcare can be used to improve the sharing of knowledge between doctors working in different parts of the world. The machine would gather and analyze information from various medical databases created by medical professionals from across the world and identify patterns. Thus, for example, it would allow a doctor who is struggling with a case to identify similar cases and the course of action taken by other experts. This entire body of information can be synthesized into something as simple as a smartphone app, making it convenient for doctors to access it at all times. 3. Simplifies the Research of New Drugs Drug research is a costly process. Not only does it require a lot of funding, but it also requires qualified personnel and an enormous amount of work. It can take up to 12 years for a drug to be released. In all these years, it is kept in the lab phase, where as many as 5,000 drugs are being tested. Of all of these, only five make it through to preclinical testing, and eventually only one drug will be approved for public release. Medication research is a more recent application of artificial intelligence in healthcare and it's meant to help streamline the process of drug discovery and repurposing. What artificial intelligence does, in this case, is gather and analyze data from research files and identify the potential effects a drug might have on subjects. 
As a result, by advancing the trial-and-error process researchers go through, it can significantly cut down the time and costs needed for drug development in the future. 4. Helps to Accurately Detect Diseases Early On In most cases, early detection is critical to successful treatment. That being said, many early screenings are often inaccurate and the disease may go undetected until the later stages. For example, in cancer screening, many early-stage mammograms display unreliable or false results, which leads to many women being misdiagnosed as either having or not having cancer. Artificial intelligence can be paired with devices which process health information gathered from patients. This information can be used to oversee a patient's condition and monitor the development of a certain disease. In the case of mammograms, artificial intelligence could be used to speed up the reviewing and interpretation process to a point where it could be commonplace to have such screenings. Besides the convenience of speed, artificial intelligence can also make mammograms more accurate than ever before. 5. Assists Doctors During Treatment Artificial intelligence can help doctors employ a more efficient approach to disease management as well. Through the way AI systems can file and process information, doctors can get easy access to the health records of individuals who are undergoing long-term treatments and carefully monitor their situation. This helps doctors get a clear profile of their patients' health. It can also identify and prevent the risk of patients suffering adverse episodes as a result of their treatment. And that is not all. When it comes to complex medical procedures, such as operations, robots assisting doctors is nothing new. However, with the addition of artificial intelligence, technology-assisted treatment is being taken to a whole different level. Soon, an intelligent machine may be able to take over many of the responsibilities doctors have today, such as interacting with patients, asking them questions, coming up with a diagnosis and putting together a treatment plan. 6. Improves Access to Reliable Medical Diagnosis Tech companies around the world have been applying machine learning to scan through large samples of medical files, health records and journals in order to streamline the process of issuing diagnoses. The aim of such a process is to use learning algorithms to sweep through many case studies and documented symptoms in order to issue a medical diagnosis to any individual, at a faster and more reliable rate than a human could. This effectively democratizes medical diagnosis, allowing people to receive accurate information about their condition without the need for a clinician to be present. Or, it can simplify the visit to the doctor by cutting out unnecessary procedures such as paperwork. Both doctors and patients can simply let the machine take care of the bureaucracy, while they spend their meeting focusing on health-related issues. Begin to Experience the Benefits of Artificial Intelligence in Healthcare The examples highlighted in this article are only a few cases showcasing the benefits artificial intelligence holds for the healthcare system. If you're interested in experiencing some of these benefits for yourself, you can do so right now! Diagnosio is an app which allows you to self-diagnose and get clear-cut information about your health condition. 
It helps you avoid the unnecessary risks of searching for your symptoms on the internet and receiving confusing information. So start your free trial right now for a safe and reliable diagnosis! Originally published at www.diagnosio.com on September 13, 2018.
What You Need to Know About the Impact of Artificial Intelligence in Healthcare
0
what-you-need-to-know-about-the-impact-of-artificial-intelligence-in-healthcare-11813e4a2754
2018-10-04
2018-10-04 03:22:17
https://medium.com/s/story/what-you-need-to-know-about-the-impact-of-artificial-intelligence-in-healthcare-11813e4a2754
false
1,181
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
BraineHealth
Democratize Healthcare
d0180cb608c9
diagnosio
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-08
2018-02-08 05:33:33
2017-11-27
2017-11-27 01:42:16
1
false
en
2018-02-08
2018-02-08 05:34:30
10
1182e15e627c
3.09434
0
0
0
Why does artificial intelligence (AI) get a bad rap in Hollywood?
4
Is 'Minority Report' already here? Why does artificial intelligence (AI) get a bad rap in Hollywood? Movies like Terminator, Blade Runner and Ex Machina all paint a future where intelligent robots threaten the existence of human beings. Fortunately, in real life, AI has assumed less sinister forms. The recommendation engines from Netflix and Amazon give suggestions on movies or books based on your previous choices. Google Maps reroutes drivers to avoid traffic accidents on their way home. There are smartphone digital assistants like Microsoft's Cortana and Apple's Siri that answer questions about sports and weather. But there's another use for AI that does mimic sci-fi: personalised advertising. In a scene in Minority Report, John Anderton (Tom Cruise) is bombarded by augmented reality (AR) billboards that call out his name. (Scene: https://www.youtube.com/watch?v=3JNXuCaewOg) Many brands are already using AI to create better customer experiences, and they are just scratching the surface. It's been predicted that global spending on these cognitive systems will reach $31.3 billion by 2019. Accenture predicts that AI might double annual economic growth rates and improve labour productivity by up to 40% by 2035. Although there is solid proof that AI can provide cost savings for many companies, the bigger goal for marketers today is using it to make their brand experiences even more predictive and personalised — enticing customers to buy more. Consumer first AI's most public face is arguably IBM's Watson, a cloud-based cognitive engine that claims to think just like a human, using machine learning and natural language processing. Watson is currently working with multiple brands in more than 20 industries across 45 countries. GSK Consumer Healthcare, Toyota, Unilever and Campbell Soup Company have been using a cognitive advertising system called Watson Ads. By using data from The Weather Company, which IBM bought in 2015, Watson Ads help brands provide relevant one-to-one brand experiences that are built on new product and consumer insights uncovered by Watson. For example, consumers can ask Watson for Campbell soup recipe ideas, and Watson will answer with data built around personal preference, past choices, weather and even the groceries on hand. Meanwhile, The North Face also worked with Watson to create a smart website that matches customers with the right gear for their needs. Sesame Street and Watson have also teamed up to build an AI-powered vocabulary learning app. Other companies are choosing to build their own AI platforms. USAA, an insurance and financial services company, uses AI in its enhanced virtual assistant named EVA. Customers of USAA can ask EVA questions to help them navigate and search for the information they need. Although more proactive interaction is being developed, EVA can already continually evaluate and review a USAA customer's financial data and then reach out to them with personalised money-management suggestions. Car company BMW also uses AI to improve owner experiences. The company's opt-in system and personal mobility companion, BMW Connected, gathers data on their customers and the cars they use. It's capable of checking traffic to suggest an earlier departure time, using 'Time to leave' alerts to act as a digital personal assistant, recognising and storing regular destinations, and showing the best way to a final destination even after exiting the vehicle. Beyond cognitive advertising? 
AI is driving advertising to a new frontier. It’s a cooler, less-annoying way to reach customers beyond programmatic and targeted pop-ups. More than selling a product or a service, AI-powered products are helping brands promote an ideal that consumers can enjoy, whether it’s a connected home, a killer recipe or a safer ride. Ultimately, brands will always look to make a profit first. They see AI as a tool to gain better insight into how consumers use their products. Take AI services with a grain of salt — they’re still data-gathering machines. So is living in a Minority Report world such a bad thing? Actually, brands have a lot to lose if their AI services become invasive or persistent. Consumers can opt out at any time, so it’s actually in the brand’s best interest to make their services focused on improving our lives. We can all rest easy knowing we probably won’t have to endure screeching AR billboards within our lifetimes. The billions that advertisers are investing in AI could also benefit us on a grander scale. What brands are learning about consumer data can be used to improve healthcare, travel and even governance. Learn to embrace the medium — but be critical about the message. Originally published at rarebirds.io on November 27, 2017.
Is ‘Minority Report’ already here?
0
is-minority-report-already-here-1182e15e627c
2018-02-08
2018-02-08 05:34:31
https://medium.com/s/story/is-minority-report-already-here-1182e15e627c
false
767
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Rare Birds
We build software with brilliant, globally connected teams to make the world a better place. http://rarebirds.io
93402fbb6135
rarebirdslabs
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-28
2018-04-28 15:59:42
2018-04-28
2018-04-28 16:00:40
0
false
en
2018-04-29
2018-04-29 09:11:39
0
11843f7e733
0.558491
0
0
0
Data profiling is the process of examining the data available in an existing data source (e.g. Databases, files, etc.) and collecting…
2
Data Profiling Data profiling is the process of examining the data available in an existing data source (e.g. databases, files, etc.) and collecting statistics and information about that data. Data profiling is also referred to as data discovery. This method is widely used in enterprise data warehousing. Data profiling uses different kinds of descriptive statistics including mean, minimum, maximum, percentile, frequency and other aggregates such as count and sum. The additional metadata information obtained during profiling includes data type, length, discrete values, uniqueness and abstract type recognition. Types of Analysis Performed Completeness Analysis Uniqueness Analysis Values Distribution Analysis – What is the distribution of records across different values for a given attribute? Range Analysis Pattern Analysis Benefits The main benefit of data profiling is improved data quality. Data profiling clarifies the structure, relationships, content and derivation rules of data, which aids in the understanding of anomalies within the metadata.
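As a quick illustration of the analyses listed above, here is a minimal pandas sketch covering completeness, uniqueness, value distribution and range analysis. The data frame and column names are hypothetical.

import pandas as pd

# Hypothetical sample data source
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 4],
    "country": ["US", "US", "IN", None, "IN"],
    "amount": [10.5, 99.0, 42.0, 7.25, 13.0],
})

# Completeness analysis: how many values are missing per column?
print(df.isnull().sum())

# Uniqueness analysis: how many distinct values per column?
print(df.nunique())

# Values distribution analysis: distribution of records across values
print(df["country"].value_counts(dropna=False))

# Range analysis and descriptive statistics (mean, min, max, percentiles)
print(df["amount"].describe())

# Pattern / abstract type recognition starts from the inferred data types
print(df.dtypes)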
Data Profiling
0
data-profiling-11843f7e733
2018-04-29
2018-04-29 09:11:41
https://medium.com/s/story/data-profiling-11843f7e733
false
148
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Senthil Nayagan
Software Developer
c9b44d412ebe
senthil.nayagan
3
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-20
2017-11-20 14:46:11
2017-11-26
2017-11-26 10:33:48
1
false
en
2018-01-16
2018-01-16 18:39:46
18
1184d45ad16a
3.615094
66
7
0
"AI is the new electricity". It is going to change the world the way electricity did a hundred years ago. If you want to learn AI, start a…
4
Review of Machine Learning course by Andrew Ng and what to do next "AI is the new electricity". It is going to change the world the way electricity did a hundred years ago. If you want to learn AI, start a career, and are looking for a good course to begin with, then keep reading; this blog is written for you. I recently completed this course, and I can say I have yet to see another online course on ML that comes near it in terms of quality of content, delivery and assignments. Brief Introduction of the Instructor Andrew Ng is one of the world's best known AI experts. In 2012 he co-founded Coursera. Previously, he was head of the AI division at Baidu (a Chinese search engine). Just to give you an idea of how big an impact Andrew has had on AI, Baidu lost $1.5 billion in value due to his resignation. How much Math you need to know Machine learning depends heavily on linear algebra, calculus, probability theory, statistics and information theory. But don't get scared, you don't need to have a background in all of these fields of math to start learning ML. If you have studied basic linear algebra, probability and calculus in university or high school, you are good to go. If you need to refresh your concepts of linear algebra, I would highly recommend watching these excellent videos, "Essence of Linear Algebra". Khan Academy was also helpful to clear up some of my concepts. Contents of the Course This course covers the following topics in ML: Supervised Learning Linear regression, logistic regression, neural networks, SVMs. Unsupervised Learning K-means, PCA, anomaly detection Special Applications/Topics Recommender systems, large scale machine learning Advice on building a machine learning system Bias/variance, regularization, evaluation of learning algorithms, learning curves, error analysis, ceiling analysis. In my opinion, the most important part of the course is the 4th one, which highlights all of the tools, tricks, and tips that you will need to build a state of the art ML system. While solving a real problem using ML, you will often find yourself stuck on some issue; this is where these tools will come to the rescue. I haven't seen any other online course discussing this important topic in such detail. Delivery Method This course is very interactive and highly involving; you should be ready to spend 5–7 hours/week to get the most out of it. Video Lectures and Quizzes Each lecture consists of multiple videos with an average length of 10–15 minutes. Almost every video has a quiz question to help you make sure that you understand the concept covered in the video. At the end of each lecture there is also a quiz. Lecture notes under the Resources section provide a great reference for topics covered in lectures. Assignments Assignments are the really fun part of this course; they come with a pre-setup environment in which you need to add snippets of code, usually the implementation of concepts you learned during the lecture. You can use Matlab (paid software) or Octave (free software) to do the assignments. If you already have access to Matlab then use it. If not, no need to spend the money, just use Octave. What to do after this course After completing this course successfully, you will have a solid understanding of ML concepts. But there is a lot more to do on the implementation side. Learn ML in Python You might not be surprised to know that almost no company uses Matlab/Octave in their production ML models. Most of the time they are just used to build the prototype. 
In a production environment, Python or R is used. I would recommend going for Python, since it is easy to learn and widely used by big companies (Google, Amazon, Microsoft). I would recommend going through the first 11 chapters of an amazing book, Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. This book is written by an ex-Googler and considered one of the best books on the topic. Since you already know all of the concepts, you will be done with these chapters in one or two weeks. Move to Deep Learning Deep learning is a specialized field of ML which has become very popular in the last couple of years. If you want to start a career in AI, you will need to have sound knowledge of this field. There are two great MOOCs available online that cover the topic very well: Deep Learning Specialization by Andrew Ng, and Practical Deep Learning for Coders by Jeremy Howard. After reading this great blog by Arvind N, I decided to take part 1 of Practical Deep Learning for Coders and then move to Andrew Ng's specialization. Forums and people to follow Join Kaggle; it is one of the world's best communities of data scientists, and Kaggle holds data science competitions regularly. You will learn a lot from competition kernels. On LinkedIn, I follow Andriy Burkov, Eric Weber and Matthew Mayo. On Medium, I find the Towards Data Science publication very helpful. They often publish great tutorials; one of my favorite authors is Arden Dertat. Conclusion Starting a career in AI is not very hard. Quality material is available online; all you have to do is stay motivated and patient, and in the end it is all worth it. Thanks for reading, do let me know your thoughts in the comments.
Review of Machine Learning course by Andrew Ng and what to do next
360
review-of-machine-learning-course-by-andrew-ng-and-what-to-do-next-1184d45ad16a
2018-06-18
2018-06-18 16:05:24
https://medium.com/s/story/review-of-machine-learning-course-by-andrew-ng-and-what-to-do-next-1184d45ad16a
false
905
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Irshad Muhammad
null
d6d1a1a6310f
irshaduetian
94
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-14
2017-12-14 09:36:44
2017-12-14
2017-12-14 09:40:35
0
false
en
2017-12-14
2017-12-14 09:40:35
1
1187260763a6
0.837736
1
0
0
“Treating data analysis as if it’s not the work is a big problem. Having a data scientist who really cares about the problem… causes them…
4
Automating Data Science: the analysis IS the work "Treating data analysis as if it's not the work is a big problem. Having a data scientist who really cares about the problem… causes them to look at a lot of different things to figure out what happened." —Hilary Parker Her discussion with Roger Peng on Not So Standard Deviations was about automating data science tasks (predictive modeling, controlled trials, etc.). Often, companies are looking to take the "leg-work" out of doing analysis. This is fine, up to a point. Automating routine tasks, like data ingestion or doing QA, makes sense; but programming out the analysis itself is a bit dangerous. Dangerous because as tasks become more complex, automation quickly breaks down. On the other hand, when all you have is a hammer, everything starts to look like a nail. So #automation either quickly becomes useless or your problems are thrown onto Procrustes' bed. Also dangerous because, as Hilary points out, "doing the work" is a proxy for interest in the problem. The data scientist who prioritizes automating work over digging into the issues probably isn't asking why or how: Why did rates go down? How do age and costs interact? The data scientist should be asking those questions. Automation focuses on getting the job done, but not necessarily on solving the problem.
Automating Data Science: the analysis IS the work
1
automating-data-science-the-analysis-is-the-work-1187260763a6
2018-03-18
2018-03-18 13:35:25
https://medium.com/s/story/automating-data-science-the-analysis-is-the-work-1187260763a6
false
222
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
dan addyson
#healthcare & #pophealth #analytics #datascience #consulting; currently on an expat tour in Seoul, South Korea
ad05c586759b
normallyskewed
112
102
20,181,104
null
null
null
null
null
null
0
null
0
dc7104214618
2018-05-25
2018-05-25 06:18:39
2018-05-25
2018-05-25 07:06:39
1
false
en
2018-05-25
2018-05-25 07:08:57
1
11875707fa44
2.2
3
0
0
Big Data application on 5G networks is a concept still in its nascent stage. Although the proposals look promising on paper, there are some…
5
Open research areas to focus on in Big Data and 5G A researcher thinking about how to solve a problem Big Data application on 5G networks is a concept still in its nascent stage. Although the proposals look promising on paper, there are some open research areas for Big Data scientists and engineers to focus on in order to progress towards effective utilization of 5G technology. Only a brief outline is given here; interested researchers can check the latest papers published in top journals on these topics to further existing research or conduct new research. 1. Big Data Aided Network Framework The current architecture of wireless networks is mainly designed to facilitate information delivery. In order to benefit from big data, a framework incorporating big data is needed. This framework has the potential to integrate the big data chain efficiently into the network by collecting, storing, processing, and analyzing data to enhance network operation. It is expected to do away with meaningless data and place storage and processing resources at appropriate locations. 2. Trade-offs among Communication, Caching, and Computing 5G wireless networks provide heterogeneous communication, computation, and caching resources which are to be intelligently used to support heterogeneous big data applications. Trade-offs are inevitable among communication, caching, and computing resources. Additional computation resources can be traded to reduce the communication load. Intermediate or final results of computation will have to be stored temporarily, which can incur high storage cost; the alternative, deleting the data to save storage, may require re-computation. Analysis of trade-off relationships among the heterogeneous resources is therefore required for optimum resource provisioning. 3. Cooperative Edge Caching/Computing Data is collected from different sources, which leads to non-uniform data load distribution in both the spatial and temporal domains. Cooperative edge caching is the solution to storing, retrieving and processing such huge data cost-effectively. Caches can collectively form a distributed storage system, and distributed edge computing can provide the parallel computing capabilities required for data processing. 4. Customized Networking for Big Data Service Function Chains (SFC) or Network Slicing can support multiple big data services/use cases by creating service-oriented networking over the physical network infrastructure. The end-to-end networking solution can be further customized in accordance with the service requirements. In addition, multiple slices or service function chains should be tuned to make the best use of networking resources. The SFCs should be capable of adapting to dynamic changes in network status and service requests. 5. Security and Privacy This is one major area of concern, with a lot of controversy and conflicts occurring quite frequently. A huge volume of data scattered across the internet poses significant security concerns. Data sources must fulfill the three pillars of security: confidentiality, authenticity and integrity. Data should be immune to modification during processing and transfer. Further, the confidentiality levels must ensure that only entitled and authorized users are accessing the data. The privacy of individual users should not be compromised through data mining. More data can only create more opportunities for a leak, hence one needs to be very careful in this aspect and better methods need to be innovated for data security. 
(This article was authored by Research Nest’s Technical Writer, Sreelakshmi Menon)
Open research areas to focus in Big Data and 5G
36
open-research-areas-to-focus-in-big-data-and-5g-11875707fa44
2018-06-04
2018-06-04 04:54:29
https://medium.com/s/story/open-research-areas-to-focus-in-big-data-and-5g-11875707fa44
false
530
We are The Research Nest. a multi-diverse team and a tele-research based R&D house backed by young engineers and visionaries.
null
theresearchnest
null
The Research Nest
the-research-nest
TECHNOLOGY,SCIENCE,RESEARCH,MEDIA,COMPUTER SCIENCE
null
Big Data
big-data
Big Data
24,602
Team Research Nest
null
f2965b8e376e
the.research.nest
26
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-11
2017-11-11 07:34:33
2017-11-18
2017-11-18 09:33:50
4
false
en
2017-11-18
2017-11-18 09:33:50
4
11882cf8e637
3.269811
2
0
0
“There is nothing noble in being superior to your fellow men. True nobility lies in being superior to your former self”- Hemingway
5
Coffee, coding, cinema and making data informed decisions (Spam Control) Part 2/2 "There is nothing noble in being superior to your fellow men. True nobility lies in being superior to your former self" - Hemingway We don't just want to get better than our competition when it comes to offering recommendations, but also to have credibility with the content and ratings we offer. It was high time we got serious, and maybe to some extent, stringent. Feature 17.4.3: Enforcing spam and noise control As mentioned above, the next biggest concern was ensuring credibility, which is the essence of any discovery platform. Time and again, with our rising popularity, we have noticed our platform being spammed by PR agencies rating movies and TV shows higher than they deserve. That not only led to noise in our rating system but also created an uncomfortable place for authentic users of the platform. For example, a Marathi film was spammed with high ratings by a certain PR agency's army of users. No, sir, that may be possible on IMDb but not on MMR! A machine-learning-trained model catches this type of organised PR spam One glaring example was one of MSG's films, which was spammed with high ratings by some users and reached 4.2/5 on our platform and 9.2 on IMDb. It was time to put some checks in place by considering factors such as: the age of the rating, the value of the rating, the prestige of the rater, the authenticity of the rater's profile, and the likelihood of a fraudulent rating. Though the top three were easy to recognise and implement, items 4 and 5 required us to build and train a model which could predict them with high confidence (a toy sketch of such a scoring model appears at the end of this post). I will write a separate piece on the maths, but for now we can safely assume our ML trainings and the resources we referred to were quite useful here. After testing on multiple datasets we were able to correct the spam ratings while genuine ratings remained unaltered. Below are the current ratings of the movie on both platforms. Well, MSG's actions are neither above the law nor above our algorithm's rules. Earlier rating: 4.2/5 IMDb rating for Hind Ka Napak Ko Jawab Feature 17.4.4: Support for regional language content and assets Mymovierack has been accessed by users across 170 countries. Even if we look at our country of origin, there are 29 states with 50+ major languages. India, of late, has been going through a huge quantum of digital transformation. A recent report confirmed the number of Facebook users in India stands at 241 million and rising, the highest in the world. Distribution of traffic sources on MMR on the basis of primary regional language That being said, even our Google Analytics behaviour numbers reported appreciable numbers of non-English and non-Hindi users frequenting our platform. This led us to do a cost-benefit analysis on including regional language content listing support on our platform, and the results were overwhelmingly in favor of including it. We have already started supporting major Indian regional languages like Tamil, Telugu, Kannada, Marathi, Malayalam, Gujarati & Punjabi. In the future, as we expand globally, we will be supporting major languages outside India such as Chinese, Korean, Japanese, Russian and French, followed by Persian and German/Austrian. One step at a time… Our energies are now focused on building the world's first organic and humane recommendation system. 
We have named the project daydreaMMR, as it won't be just a code block which spits out a bunch of films and TV shows, but a close friend who knows you, understands you and cares about you and your taste buds for movies and TV shows. While writing this note I was struck by an epiphany that I should spend some time in the sun and make some effort to see the world outside. For now, I will take a break, but not before I promise that MMR will continue in its endeavour to personalize entertainment. While there are many players in the arena (from incumbents with deep pockets to hundreds of new kids like us), what separates us is what we have defined as: What's in it for us? Adios for now! You can read Part 1/2 of this series here. In the coming days we will be releasing snippets of some common utilities which will be helpful to most startups. Stay tuned!
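As referenced above, here is a toy sketch of the kind of rating-spam scoring model described in Feature 17.4.3. Everything in it is hypothetical: the feature names simply mirror the five factors listed in the post, and a plain logistic regression stands in for whatever model MMR actually trained.

import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per rating: [age_of_rating_days, rating_value, rater_prestige,
# profile_authenticity_score, prior_fraud_likelihood] -- all hypothetical features
X = np.array([
    [300, 3.5, 0.9, 0.95, 0.05],  # old rating from a reputable rater
    [1,   5.0, 0.1, 0.20, 0.80],  # brand-new 5-star from a throwaway profile
    [150, 4.0, 0.7, 0.85, 0.10],
    [2,   5.0, 0.2, 0.15, 0.90],
])
y = np.array([0, 1, 0, 1])  # 1 = known spam rating, 0 = genuine

model = LogisticRegression().fit(X, y)

# Down-weight ratings whose predicted spam probability is high
new_rating = np.array([[3, 5.0, 0.15, 0.25, 0.70]])
spam_prob = model.predict_proba(new_rating)[0, 1]
weight = 1.0 - spam_prob  # a spammy rating contributes less to the aggregate score
print(f"spam probability: {spam_prob:.2f}, weight in aggregate: {weight:.2f}")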
Coffee, coding, cinema and making data informed decisions (Spam Control) Part 2/2
4
coffee-coding-cinema-and-making-data-informed-decisions-spam-control-part-2-2-11882cf8e637
2018-03-26
2018-03-26 15:24:25
https://medium.com/s/story/coffee-coding-cinema-and-making-data-informed-decisions-spam-control-part-2-2-11882cf8e637
false
681
null
null
null
null
null
null
null
null
null
Movies
movies
Movies
84,914
MyMovieRack
Lets talk movies (& TV shows) http://www.mymovierack.com
615f4e226614
mymovierack
12
1
20,181,104
null
null
null
null
null
null
0
null
0
5ba619654dee
2018-07-31
2018-07-31 14:19:59
2018-08-01
2018-08-01 04:13:11
10
false
id
2018-08-03
2018-08-03 03:01:48
4
1189b2e594e5
5.612264
3
0
0
This article is the first publication from the Mahapatih team. This first piece takes a brief look at how the technology we…
2
Automatic License Plate Recognition This article is the first publication from the Mahapatih team. This first piece takes a brief look at how the technology we are developing implements ALPR to recognize vehicle license plates in Indonesia. Automatic license plate recognition (ALPR) systems are now starting to be used in many countries. These systems are useful for many things, from improving traffic order to helping track down kidnappers who flee in a car with a known plate number. This article discusses LPR techniques in general and the LPR solution in Mahapatih's Machine Vision product (codename: Mata Dewa), which is optimized for use in Indonesia. Why not use OpenALPR? Perhaps the first question that comes up is: why not use an existing open source system? The best-known open source system today is OpenALPR, and its accuracy is quite good for certain countries' plates. The main reason is licensing: OpenALPR uses the Affero GPL. The Affero license obliges us to provide source code to anyone who uses the software, even over the internet; this differs from the ordinary GPL, which only obliges us to share source code with people who run the software. In other words, any changes we make to the OpenALPR software must be published. OpenALPR's accuracy is also not the best; its digit recognition uses Tesseract OCR. The stable version of Tesseract OCR (version 3) uses classic algorithms that fall far short of deep learning methods. Only last year did Tesseract OCR 4 start using deep learning (that version 4 is still in beta). Development of OpenALPR is also quite slow at the moment (at the time of writing, the last commit was 2 months ago), while many questions remain unanswered in the issues section (for example, how to adapt OpenALPR for Singapore's two-line plates). The slowdown is understandable, since their current focus is a commercial (non-free) cloud-based service. Recognizing and reading license plates The steps for automatically reading a plate are actually quite simple, but the implementation details are complicated. The first step is to find the license plate in the image (usually a single video frame), clean up the image, and then recognize the plate in it. In the sections that follow, we assume we are working with a single static image. If the input is video, object tracking can be applied so that not all the steps have to be repeated. Finding the License Plate The shape, color, size, and font of license plates differ in each country, and a good application must be adapted specifically to a country's system. Sometimes the system must also recognize other countries' plates, for example in neighboring European countries where cars from a neighboring country can easily cross the border. For use in Indonesia, an archipelago (where inter-island car traffic is limited), the system can be optimized to recognize the plates common in a particular region or island. Currently, a fast and proven object detection framework is Viola-Jones. To recognize objects, we need to train on thousands of license plates under various lighting and weather conditions. 
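To make the detection step concrete, here is a minimal sketch of running a trained Haar cascade with OpenCV's Python bindings. The cascade and image file names are hypothetical; in practice the cascade would be the one produced by the training just described.

import cv2

# Load the cascade data produced by Viola-Jones training (hypothetical file)
plate_cascade = cv2.CascadeClassifier("indonesian_plate_cascade.xml")

# The detector works on the luminance component only, so convert to grayscale
frame = cv2.imread("street.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Each detection is a rectangle (x, y, width, height) that may contain a plate
plates = plate_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in plates:
    # Crop the candidate region; later stages filter by color and contours
    candidate = frame[y:y + h, x:x + w]
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", frame)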
The input to this algorithm is a grayscale image (the luminance component only). After training, we obtain cascade data. With this data we can use various libraries that perform fast detection; for example, OpenCV has the cvHaarDetectObjects function. Object detection locates the license plate Although the accuracy of this method is quite high, object detection is not 100% accurate; sometimes a region is detected as a license plate but turns out not to be one. Such cases can be filtered by going back to the original color image and checking that the dominant color is a plate color. The next filter is to check whether the detected region contains contours whose patterns look like letters or digits. Optimization can also be done depending on the deployment location. For a system where the camera and vehicle positions are relatively fixed (for example, detecting trucks entering a gate), the system can restrict detection to a particular area at the gate and accuracy will be higher. Cleaning the Image Object detection only yields a rectangular region that contains the plate. Next, this area needs to be processed so that the plate text can be read more accurately by the subsequent algorithm. A tilted image must be straightened, incorrect perspective must be corrected, and in general the image needs to be sharpened. This step may look the simplest, but it is actually the hardest. Input image No single algorithm gives the best output. Different ALPR systems use different heuristics to estimate the homography matrix for correcting perspective. The approach used in the Mata Dewa system is to use histogram computation to obtain the best alignment. Per-row histogram computation Per-column histogram computation Based on the per-row and per-column histograms, the image can be straightened and is ready to be processed by the next algorithm. Plate with corrected perspective Recognizing Letters and Digits After obtaining a clean image, the location of each letter and digit is detected again using the Viola-Jones approach. This yields rectangles that are candidates for a single character. Heuristic strategies are used to discard false candidates (such as an attached sticker that is not aligned with the other digits). Segmentation of letters and digits on the plate Each candidate character found is passed to a neural network that has fairly high accuracy in recognizing letters and digits. In the Mata Dewa system, training is done with license plates found in Indonesia. Because the system is designed specifically for Indonesia, extra weighting can be applied to achieve higher accuracy. For example, for a system deployed in the Jakarta area, the first character is most likely the letter B and not the digit 8. A successfully processed plate A plate under perfect lighting can sometimes still be hard to read because it is dirty, faded, dented, or covered with stickers. Handling these cases depends on the deployment. For example, on a highway several cameras can be installed to view from different angles, including from behind the car, where the plate may be clearer. Future development The LPR system is only one part of a comprehensive Machine Vision based system being developed by Mata Dewa. The various technologies being developed are adapted specifically for use in Indonesia. 
One example of a machine vision technology that can be combined with LPR is illegal parking detection. A camera with wide coverage (using a wide-angle lens) can be used to detect illegal parking, and LPR with another camera can be used to detect the plate number of the illegally parked vehicle. Naturally, the definition of illegal parking follows the rules that apply in Indonesia. The LPR system can also be integrated with electronic law enforcement. One example is monitoring the odd-even license plate scheme, helping the police supervise roads in Jakarta automatically. An electronic system can also automatically check for vehicles with fake plates (registration identification) against the police database, detecting creative actions by the public such as the one in the video below :) The system developed by Mata Dewa is built domestically and is easy to integrate with various existing systems. Because security is at stake, a domestic product has the advantage that its source code can be jointly audited, with no need to worry about backdoors. PT Mahapatih Sibernusa Teknologi is a company focusing on digital transformation & cyber security technology. We are developing products such as Machine Vision and Content Disarm & Reconstruction (CDR). We also offer services such as cyber security consulting & managed security services.
Automatic License Plate Recognition
5
automatic-license-plate-recognition-1189b2e594e5
2018-08-03
2018-08-03 03:01:48
https://medium.com/s/story/automatic-license-plate-recognition-1189b2e594e5
false
1,156
PT Mahapatih Sibernusa Teknologi is a company focusing on digital transformation & cyber security technology.
null
null
null
Mahapatih Sibernusa Teknologi
mahapatih-sibernusa-teknologi
MAHAPATIH,CDR,CYBER SECURITY,COMPUTER VISION,HACKING
null
Computer Vision
computer-vision
Computer Vision
2,375
yohanes
A programmer and a hacker. A husband and a father. From Web to Kernel.
7dacdb8f5025
yohanes
79
60
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-27
2018-07-27 04:42:48
2018-07-27
2018-07-27 04:52:21
1
false
en
2018-07-27
2018-07-27 04:52:21
3
118b2aac25df
1.732075
0
0
0
Data scientists have a growing range of options when picking analytical tools, and a new survey of tool…
4
Python Gains Traction Among Data Scientists Data scientists have a growing range of options when choosing analytical tools, and a new survey of tool preferences reveals a roughly even split among the three leading programming languages. In its annual survey of leading analytics tools, executive recruiting firm Burtch Works reported this week that nearly 1,200 data scientists and analysts were evenly divided in their preferences for SAS (34 percent), R and Python (both 33 percent). Nevertheless, the survey released Tuesday (July 17) confirms the steady rise of the Python programming language, mostly at the expense of the R language. "Open source tools like R and Python are overwhelmingly favored by professionals with five or fewer years' experience," the survey found. "While SAS continues to see strong support among professionals with at least 16 years' experience, Python made noticeable gains here as well." Burtch said the growing preference for Python reflects an influx of new data scientists with five or fewer years' experience who show a stronger preference for open source analytics tools. In fact, support for Python among this "junior" group has doubled to 48 percent since 2016. The survey also breaks down tool preferences by industry. SAS was the top preference in sectors such as healthcare and pharmaceuticals (43 percent) along with financial services (42 percent). Meanwhile, data scientists at technology and telecom companies preferred Python. R was the top preference in the retail sector. Burtch said its survey separates data scientists from those engaged in traditional predictive analytics. The main reason is that data scientists work primarily with unstructured and streaming data while predictive analysts lean toward structured data. Those requirements are reflected in tool preferences, with fully 69 percent of data scientists using Python while predictive analysts prefer SAS by a smaller margin. Given that "big data" is increasingly driven by the influx of unstructured video and other social media data, the growing preference for Python confirms earlier studies. As we've reported, the shift to Python is due in part to a growing number of tools and libraries available to data scientists for parsing huge data sets. Other surveys, including IEEE Spectrum's, also ranked Python as the top data science programming language. Meanwhile, R remains popular among mathematicians, statisticians and scientists. The SAS environment from the company of the same name remains popular among business analysts, while MathWorks' MATLAB is also widely used in the discovery phase of big data.
Python Gains Traction Among Data Scientists
0
python-gains-traction-among-data-scientists-118b2aac25df
2018-07-27
2018-07-27 04:52:21
https://medium.com/s/story/python-gains-traction-among-data-scientists-118b2aac25df
false
406
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
compliance4all
#Compliance4all the ultimate continuing professional education provider offers #regulatory& #compliance trainings . http://compliance4all.com/
a6bf38ed238e
compliance4all
337
667
20,181,104
null
null
null
null
null
null
0
def rec_basic(df):
    """
    Look for the wines that match the most common characteristics of the ones
    that Arielle already enjoys. This includes setting up year, price, variety,
    and country filters. For each unique variety, find a high and low priced
    wine within our range that is the top rated and fits our filtered variables.
    The function will return a data frame with two recommendations per wine.
    """
    current_wine = df[df['arielle_choice'] == 1]
    current_filter = (df['arielle_choice'] != 1)  # We don't want a wine that she already has tried

    year_min = current_wine.year.min()
    year_max = current_wine.year.max()
    year_filter = ((df.year >= year_min) & (df.year <= year_max))  # Filter for the year range of wines that she enjoys

    price_min = current_wine.price.min()
    price_max = current_wine.price.max()
    price_mid = price_min + ((price_max - price_min) / 2)  # Create midpoint for high/low price adjuster
    price_filter = ((df.price >= price_min) & (df.price <= price_max))  # Filter for her typical price points

    countries = list(set(current_wine.country))
    country_filter = (df.country.isin(countries))  # Filter for country

    varieties = list(set(current_wine.variety))
    variety_filter = df.variety.isin(varieties)  # Filter for wine type

    filtered_df = df[current_filter & year_filter & price_filter & country_filter & variety_filter]  # Filtered data frame

    recommendation = []
    for variety in varieties:  # Recommendation for each of her favorite varieties
        var_df = filtered_df[filtered_df.variety == variety]
        lower_price = var_df[var_df.price <= price_mid]  # One pick that's below the mid-point pricing
        higher_price = var_df[var_df.price > price_mid]  # and one that's above
        top_rec_low = lower_price[['country','designation','points','price','title','variety']].sort_values(
            by='points', ascending=False)[:1]  # Extract the highest rated lower-priced wine
        top_rec_high = higher_price[['country','designation','points','price','title','variety']].sort_values(
            by='points', ascending=False)[:1]  # Extract the highest rated higher price wine
        recommendation.extend(top_rec_low.index)  # Add the index value of the lower priced wine to the list
        recommendation.extend(top_rec_high.index)  # Add the index value of the higher priced wine to the list

    rec_df = df.loc[df.index.isin(recommendation), :]  # Extract only the recommendation index values from the data
    rec_df.sort_values(by=['variety', 'price'], ascending=True, inplace=True)  # Sort/group by wine type then price
    return rec_df  # Return the recommendation data frame
2
721b17443fd5
2018-08-18
2018-08-18 15:58:37
2018-08-21
2018-08-21 11:15:30
9
false
en
2018-08-22
2018-08-22 17:23:47
9
118c28ba736e
10.603774
10
0
1
Using data to buy my sister a birthday gift that she'll enjoy
5
Recommending the Perfect Bottle of Wine Like most good brothers, I try to buy thoughtful gifts for my sister for special occasions. As it happens, my sister likes wine, and I like data, so when I came across this data set that has scores and reviews for more than 130,000 wines, it was a perfect chance to use my interest in data for a good purpose. It started with an innocent text to my sister: The goal was to build a recommendation system using a list of wines that she likes to pick out a few that I could buy for her as a birthday gift. Though the sample size of choices would be small — a handful of wines relative to 130,000 — we can use filtering and language processing to make initial predictions, and over time, as the list of wines that she does and doesn't enjoy grows, we can continue to feed the data into our recommendation system and re-run the code. The data and code The data was scraped from WineEnthusiast and posted by a Kaggle user in a .csv format. It includes ~130,000 total reviews of different wines with variables such as the wine's country, description by the reviewer, points ranking, price, region, variety, and winery. Full code for the project is posted on my GitHub here. Recommendation Systems: What are they? Recommendation systems typically take one of three forms: collaborative filtering, content-based filtering, and a hybrid of the two. Collaborative filtering is analyzing behavior by one user and making recommendations based on other similar users. For example, Netflix knows which shows I've watched based on my viewing history and can recommend shows that other users "like me" (i.e. with a similar viewing history) have also watched. Content-based filtering is isolated to a single user and, taking the items that they've purchased or used in the past, looks for similar items by description or other identifying variables. So, when I buy a certain coffee blend on Amazon, I could be recommended other varieties that have the same roast, are from a similar region, and/or have overlapping words in the description. The hybrid method is just as it sounds — a combination of both content-based and collaborative filtering. If we go back to Netflix, they could (and do) build a model that can identify similar shows to my viewing history, then rank those shows based on the viewing behavior of people like me. For this project, because we don't have other users to compare against, we'll be using strictly content-based filtering. The inputs will be the wines that we know that my sister likes, and the recommendations will be those that exhibit similar characteristics — region, description, variety — to those wines. I decided to build three separate algorithms and then blend the three for the final choice(s). The first uses the characteristics of the wines that she already has enjoyed to find the highest rated wines within those characteristic filters; the second uses text data and similarity to rank the wines; the final uses language frequency techniques in Python's scikit-learn library. Basic Recommendation System The basic recommendation system identifies the characteristics of the wines that Arielle likes ("training" wines) and searches for other wines that match those characteristics, then ranks them based on points and price. This is accomplished by setting up a series of filters on the structured variables of year, price, country, and variety; in later iterations, we'll use the more unstructured text data in the description and title columns. 
This will give us only the wines that come from the list of countries from our "training" data, that were produced within a fixed range of years, are within our price range, and are the types/varieties of wines that we know she likes. After setting up these filters, in order to get a mix of types and price points, I identified the top ranked wine for each of the unique varieties, with one choice below the midpoint pricing and one above. The final result was a set of 8 wines from 4 varieties (GSM, Pinot Noir, Sparkling Blend, and Syrah), with each variety having a low and high-priced option. Recommendations from the basic filtering The limitations of the above are that we're narrowing our view of the universe to a set of common and consistent characteristics. The recommendations are all on the West Coast (and in the U.S.) — since that's where she is now and what she tries the most — and between the years of 2005–2014. So, while this is a good start, we still want to expand our horizons beyond the U.S. borders and into new areas, which leads us to… Text-Based Recommendation The second recommender system that we'll build will combine all of the text and values in a given row, then look for the number of unique words in that observation that match the words in our "training" choices. We'll do this by using the NLTK library in Python to "tokenize" the text in each observation based on a pre-defined pattern. Tokenization is a way to split a set of text values into unique items based on a pattern — for example, we could extract only words, words and digits, or words, digits, and special characters, etc. from a text string. For example, tokenizing the first sentence in the last paragraph would yield a list of values ["Tokenization", "is", "a", "way", "to"…]. To take it a step further, we remove the common "filler words" (also referred to as stopwords) that don't add value to the meaning of a sentence ("a", "an", "the",…). The final tokenized version of the sentence with stop words removed would look like: ["Tokenization", "way", "split", "set", "text", "values"…]. With each sentence split into its set of components, we can then work with text on a more micro level to look for patterns and similarity across observations. Next, we'll create a single list of all of the unique words in the combined "training" reviews (i.e. the wines that Arielle already told me that she likes). We'll then loop through the tokenized words in each observation; for each tokenized word, if it appears in our single training list of words, we'll add 1 to a counter for that variable, with the final counter value being a proxy for similarity. Once we've prepared the data by joining each entry of a row into a single string of text, tokenized the values, and removed the stopwords, we're ready to look for the similarity of the data to our desired string of training text. After applying some additional sorting and filtering, I scaled the new similarity variable (number of matching words) using a MinMaxScaler (which puts the data on a scale of 0–1 based on its value relative to the data's range) and found the top results for both U.S. wines and international wines. Top recs for U.S. wines Top recs for international wines The limitations here are that the vocabulary to describe a wine is essentially infinite. One person can call a wine "fruity" while another can say "bursting with fruit flavors". The meaning is essentially the same, but our system wouldn't pick up that difference. 
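Before moving on, here is a minimal sketch of the word-matching approach just described, using NLTK for tokenization and stopword removal and scikit-learn's MinMaxScaler for the final scaling. The sample strings are made up, not real reviews from the data set.

from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords  # requires nltk.download('stopwords')
from sklearn.preprocessing import MinMaxScaler

tokenizer = RegexpTokenizer(r"\w+")  # extract only words and digits
stop_words = set(stopwords.words("english"))

def tokens(text):
    # Tokenize and drop common filler words ("a", "an", "the", ...)
    return {w for w in tokenizer.tokenize(text.lower()) if w not in stop_words}

# Unique words across the descriptions of the wines she already likes
training_words = tokens("Bright berry aromas with a smooth oak finish") | tokens("Dark fruit, pepper and firm tannins")

# Similarity proxy: number of words in each candidate that match the training set
candidates = ["Juicy berry flavors and soft oak", "Crisp citrus with mineral notes", "Peppery dark fruit and bold tannins"]
counts = [[len(tokens(c) & training_words)] for c in candidates]

# Scale the match counts to a 0-1 range for easier comparison
scores = MinMaxScaler().fit_transform(counts)
for c, s in zip(candidates, scores):
    print(f"{s[0]:.2f}  {c}")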
Though we’re bringing in more data and factors, we still have an inherent bias toward U.S. wines because the country and regions that match what she likes are included in our text strings. Therefore, to try and offset this, we’ll use only the description column, which is the review associated with each wine that describes flavor, ingredients, or other perceptions of quality by the reviewer. Similarity Recommendation While the first two recommendations were built mainly from scratch with the help of other libraries, we’ll be leveraging a pre-packaged library to do the heavy lifting for the final system. The goal will be to use only the description column to find the other wines in the data set that are the most similar to our training data. It’s roughly comparable to what was done in the second recommender but takes a more mathematical approach. The core of this system will be built on what’s called Term Frequency Inverse Document Frequency (TFIDF). I found this article that explains the concept really well. The high-level summary is that, given a series of text observations, term frequency (TF) is a measure of the frequency of a word within a single text string (observation), while the inverse document frequency (IDF) weights a word down by how often it occurs across all of the observations in the sample. The TFIDF value is then calculated as the product of the TF and IDF. In our data, the TF value for an observation would be the frequency with which a word within a wine description (let’s say that the word “berry” is in a description) appears within that description as a percentage of all words, while the IDF would measure the number of times “berry” appears across all of the descriptions of wine in the data. After computing the TFIDF value for each word in each observation, we need a way to measure similarity between a single observation and the others in the data set. Again, bringing it back to the task at hand, this means taking Arielle’s preferred wines and finding the wines that have a description most similar to those wines based on their TFIDF values. There are a number of different methods to compute similarity. After trying a few and doing some reading on different methods, I settled on cosine similarity. The article linked above on TFIDF does a good job of explaining cosine similarity too. Using our data, which has been converted from text to numbers (their TFIDF value), we can plot those numeric points in a “vector space” and compute the distance — using linear algebra and trigonometry — from that point to all of the other points that are also plotted in the vector space. If that doesn’t make much sense, it’s okay, because it’s a pretty confusing concept for me too, but the output of cosine similarity ranges from -1 to 1, with -1 meaning two items are the exact opposite and 1 meaning they are the same. When working with text data, the output range shrinks to 0 to 1. The result, by looping through each of the wines that she enjoys, is a set of recommendations that are the most similar descriptions in the rest of the data. As was the problem in the previous exercises with U.S. bias, there’s a good chance that the flavor and the way that U.S. wines are described is different from that of international wines, so in addition to the general recommendations, I filtered for only non-US wines and picked the top matching wines for that group as well. 
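Here is roughly how that TF-IDF and cosine-similarity step can be wired up with scikit-learn; a minimal sketch in which liked_idx, the row positions of her wines inside the variety-restricted subset, is a hypothetical placeholder.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Restrict to her preferred varieties so the matrix stays tractable locally.
subset = wines[wines["variety"].isin(liked_varieties)].reset_index(drop=True)

# TF-IDF over the description column only, with English stopwords removed.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(subset["description"])

# Hypothetical: liked_idx holds the row positions of the wines she already enjoys.
for i in liked_idx:
    sims = cosine_similarity(tfidf[i], tfidf).ravel()  # values in [0, 1] for text
    best = sims.argsort()[::-1][1:6]  # top five matches, skipping the wine itself
    print(subset.loc[i, "title"], "->")
    print(subset.loc[best, ["title", "points", "price"]])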
Top US matches by description similarity Top International matches by description similarity One final note for this method was that, for the sake of computing similarity, I only used the wines that were already in Arielle’s “preferred varieties” (i.e. types of wine: Syrah, GSM, etc.) because the size of the file inhibited using the whole list of 130K+ descriptions locally on my computer. Conclusion: Which wine should we pick? Interestingly, there was no overlap between the three different systems, which goes to show the subjectivity that’s present within recommender systems. Each of the three recommenders individually takes a separate but plausible route to develop the recommendations, yet the collective result is unique. Given that, after aggregating the results from each of the methods, we had about 60 “recommendations” to sort through. Since I can’t buy her all 60, it’s time to apply some “art” to the “science” of recommending. I started by eliminating the wines that didn’t have an associated price or that were more than $50 (sorry Arielle — I can’t afford the really good stuff yet). Plus, as a Stax big data analysis shows, there’s plenty of value to be had in sparkling wines and Champagne under $50. Then, knowing my sister and that GSM is one of her favorites within the varieties that she gave me, and inserting my own bias for Syrah, I filtered the data for only those two types of wine as well as the wines with a score of 90 or better. Finally, I decided that I wanted one of each variety, and that one would be local to the U.S. while the other would be from outside of the country. Because there was only one international wine left at this point and it was a GSM, that became the easy choice for the GSM variety. For the Syrah, I applied a filter that would return the highest-ranked (by points) Syrah made in the U.S. And that’s it! From 130,000 wines down to just two using data to come to a solution. Our final winners were a 2010 California Syrah from the Fenaughty Vineyard in the Sierra Foothills and a 2011 Plexus GSM from South Australia. The reality of finding somewhere that I can buy them and get them delivered is the next challenge, but the fun of the journey to get to the right wines makes it all worth it in the end.
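The final "art" step reduces to a few more filters; a sketch under the assumption that the ~60 aggregated picks sit in a hypothetical frame called recs with the same columns as the original data.

# Hypothetical: recs holds the ~60 aggregated recommendations from all three systems.
shortlist = recs[
    recs["price"].notna()
    & (recs["price"] <= 50)                   # affordable bottles only
    & recs["variety"].isin(["GSM", "Syrah"])  # her favorite plus my own bias
    & (recs["points"] >= 90)                  # well-scored wines only
]

# One international GSM, and the top-ranked (by points) U.S. Syrah.
gsm_pick = shortlist[(shortlist["variety"] == "GSM") & (shortlist["country"] != "US")].head(1)
syrah_pick = shortlist[(shortlist["variety"] == "Syrah") & (shortlist["country"] == "US")].nlargest(1, "points")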
Recommending the Perfect Bottle of Wine
50
recommending-the-perfect-bottle-of-wine-118c28ba736e
2018-08-24
2018-08-24 19:07:42
https://medium.com/s/story/recommending-the-perfect-bottle-of-wine-118c28ba736e
false
2,492
Coinmonks is a technology focused publication embracing all technologies which have powers to shape our future. Education is our core value. Learn, Build and thrive.
null
coinmonks
null
Coinmonks
coinmonks
BITCOIN,TECHNOLOGY,CRYPTOCURRENCY,BLOCKCHAIN,PROGRAMMING
coinmonks
Data Science
data-science
Data Science
33,617
Jordan Bean
Consultant and data enthusiast. www.linkedin.com/in/jordanbean
4efb39cc4625
jordanbean
33
7
20,181,104
null
null
null
null
null
null
0
null
0
b47af5c25df5
2018-08-10
2018-08-10 21:17:59
2018-08-10
2018-08-10 21:39:40
3
false
en
2018-08-10
2018-08-10 21:39:40
5
118c856e56d3
3.893396
2
0
0
The brain has long been compared to the most powerful computers. Usually computers become less powerful when they go to sleep…
5
A Supercomputer in Sleep Mode The brain has long been compared to the most powerful computers. Usually computers become less powerful when they go to sleep… Some prominent figures in tech believe that life, as we know it, is quite probably a massive computer simulation. Elon Musk has illustrated what astonishing progress we’ve made with simulated life in the context of common video games, and with good reason. The exquisitely realistic animations and renderings of everything from inanimate objects to people are nothing short of extraordinary. But, as remarkable as video games and computer animation have become, they still require every movement, interaction, and conversation to be carefully orchestrated by the developers and designers. Computing power has also come light-years in the same short time; however, even the most powerful supercomputers on the planet are limited in the simulations they can run. When running an intense simulation that’s intended to model an event in this complicated universe, every single variable must be accounted for. No matter how small the variable is, a truly life-like simulation requires everything to be accounted for. Typically, simulations and mathematical models are simplified as much as possible so that a close-enough result can be achieved with relatively little time and computational power. But for a simulation to be completely believable, completely realistic, it must exemplify everything we know and believe about reality — or at least feel so concrete that you have no choice but to accept it as corporeal truth. Augmented and virtual reality surely aim to take us to this point, but currently no model, algorithm, or simulation can even remotely approach this level of scrupulous realism. There are, however, over seven billion supercomputers on earth that run simulations like this every night… with great ease. Unlike a typical computer, the human brain — and animal brains, for that matter — doesn’t stop working when the system powers down. Every person dreams every night, regardless of whether or not they remember it; when we believe we’ve shut off for the day, our brains are actually running the most intense, high-fidelity simulations on the planet. A simulation so spectacular, we almost never stop to question why the laws of physics don’t apply; why we can fly, have super-powers, be other people, and explore unreachable places. Sure, computers seemingly do an infinite number of tasks infinitely better than humans do, and yet, the fastest supercomputer on the planet at over 200 petaflops — yes, flops are a real unit of measurement, and peta represents a factor of 10^15 — can’t create a simulation that parallels a dream. Ultimately, Threat Simulation Theory is just one of many theories that attempt to explain why we dream, but what if we don’t limit the idea of dreams being simulations to just threats? Dreams let us see and experience a near limitless number of scenarios that would never happen in waking life, and yet they’re not all hazardous to body or mind. So, if we disregard the “threat” aspect, it’s conceivable that dreams can be used to simulate anything our conscious and unconscious imaginations concoct. Admittedly, I’m far from an expert in the field, but personally, I think the notion of dreams as simulations helps set the stage for the premise that our brains are supercomputers that become more active as we enter our proverbial sleep mode. It’s already well known that sleep is pivotal for memory retention, creativity and problem solving. 
As we learn more about it, however, we’re discovering that a healthy sleep, rather than just improving these things for our waking hours, actually executes on these cognitive processes. There are countless examples of people who claim to use their sleeping hours to solve problems they can’t solve during the day, or, improve skills that require hours of practice. The author of this National Geographic article has gone so far as to use the old expression “I’m going to sleep on it” as a form of evidence supporting the important role of sleep in problem solving. Comparing the expression to mulling over a decision while eating, the author points out that nobody says they’re going to “eat on” a problem. This article also goes on to explain how the waking brain is focused on input stimuli during waking hours, meaning that a great portion of our compute-power is being used to record. If you’ve ever recorded raw high definition video and audio before, you know exactly how much processing and storage is required for even a short, one-minute clip. If you’re not familiar with raw video, a typical base-line assumption is that one minute of footage will take approximately 33GB of space. In a sixteen-hour waking day, that’s over 30TB of data that our brains have recorded. Now, obviously we don’t work quite the same way computers do. But…consider for a moment that our brains don’t need to spend so many resources on recording information and stimuli while we sleep. Suddenly it’s not so radical to accept that our brains receive a reallocation of processing power that supercharges our ability to solve a problem or think creatively during slumber. From this point, the question becomes: how do I actively utilize these hours of rest to improve myself or solve problems? … stay tuned ;-)
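For what it's worth, the back-of-the-envelope figure above checks out; a quick sketch using only the article's own numbers:

# Back-of-the-envelope check of the figures quoted above.
gb_per_minute = 33           # rough size of one minute of raw HD footage
waking_minutes = 16 * 60     # a sixteen-hour waking day

total_gb = gb_per_minute * waking_minutes
print(f"{total_gb} GB ~= {total_gb / 1000:.1f} TB per day")  # 31680 GB ~= 31.7 TB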
A Supercomputer in Sleep Mode
2
a-supercomputer-in-sleep-mode-118c856e56d3
2018-08-10
2018-08-10 21:39:41
https://medium.com/s/story/a-supercomputer-in-sleep-mode-118c856e56d3
false
886
Discussing all things sleep-related.
null
zenneatech
null
Zennea
zenneatech
SLEEP,SLEEP DISORDERS,WEARABLE TECHNOLOGY,HEALTH,HEALTH TECHNOLOGY
ZenneaTech
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ryan Threlfall
Co-founder of Zennea. Sleep is important! Passionate about education and space-related technologies.
1b9e627939a2
ryanthrelfall
14
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-29
2018-06-29 09:56:57
2018-06-29
2018-06-29 10:07:16
13
false
en
2018-06-29
2018-06-29 10:07:16
21
119028b11150
3.875472
0
0
0
Photo Of The Week
5
Found This Week #113 Photo Of The Week Fiery Sunset, this photo is available to licence on EyeEm. We got treated to some more fantastic sunsets courtesy of the summer weather in Swansea. Here are some more for your viewing pleasure. In this Edition: Music, Blockchain, scaling blockchain, tuning ML models, growth handbook, introduction to AR, DIY aircraft tug, vortex ring collisions, Aquila and thermal camouflage! Originally published at www.foundthisweek.com on Jun 29, 2018. Subscribe here to get Found This Week in your inbox every Friday :-) Analog On, Andrew Leslie Hooker — NAWR ANOIS #5 For the latest NAWR ANOIS concert, we welcomed Analog On and Andrew Leslie Hooker to Swansea for some modular synths and no-input mixing magic sounds. Check out the photos of the concert here. Swansea Laptop Orchestra in Worcester This week, The Swansea Laptop Orchestra played at the Clik Clik Collective takeover of the Weorgoran Pavilion in Worcester. Check out the photos from the gig here. Blockchain Explainer Image: Reuters Graphics Reuters Graphics has put together a really nice site describing the basics of blockchain, including lots of animations describing the steps involved in adding records to blocks. Model Tuning and Bias-Variance Tradeoff Image: R2D3 R2D3 have created a fantastic data visualisation describing how data models can be tuned to avoid over-emphasis on bias or variance during the training phase. The Intercom Growth Handbook Image: Intercom Intercom have released a new book called The Growth Handbook, aimed at delivering industry-tested advice on how to manage customer retention, growth experiments and driving word-of-mouth growth. Introduction to AR & ARCore Image: Coursera Google have launched a new Introduction to AR & ARCore course on Coursera. The course covers AR tools and platforms, popular use cases for AR and how to create realistic AR experiences. Homemade Aircraft Tug Image: Anthony DiPilato Anthony DiPilato made a homemade iPhone-remote-controlled electric tug for his Cessna 310. The tug can pull the 2,268kg Cessna using two 13-inch motors and two 12V power chair batteries. Blockchain Scaling Options Image: Hackernoon, Preethi Kasireddy This interesting article on Hackernoon from Preethi Kasireddy describes the current limitations on scaling blockchain and describes many different solutions and methods that could be used to allow scaling, such as SegWit, sharding, increasing block size and off-chain state channels, among others. Funny Thing Of The Week: Chair.exe Cool Thing Of The Week: Vortex Ring Collision Image: Youtube, SmarterEveryDay Check out this amazing video from SmarterEveryDay of two vortex rings colliding perfectly to produce cascading concentric circles. Facebook Cancels Aquila Image: Facebook In 2014, Facebook launched a high-altitude platform station (HAPS) project called Aquila, aimed at developing a high-altitude glider drone that can transmit internet to remote areas. In a recent blog post, Facebook confirms that they are no longer developing Aquila but will continue to work with partners like Airbus on HAPS connectivity in general. Thermal Camouflage Image: American Chemical Society Researchers at the University of Manchester have created a three-layer material that can camouflage thermal radiation when an electric current is applied to it. It sounds much better than Arnie’s mud version! See you next week! P.S. If you like this Found This Week, please click the clap icon below. Also, subscribe, or tell a friend, or both! 
Thanks :-) Originally published at www.foundthisweek.com on Jun 29, 2018. About Found This Week Found This Week is a curated blog of interesting posts, articles, links and stories in the world of technology, science and life in general. Each edition is curated by Daryl Feehely every Friday and highlights cool stuff found each week. The first 104 editions were published on Medium before this site was created, check out the archive here. Daryl Feehely I’m a web consultant, contract web developer, technical project manager & photographer originally from Cork, now based in Swansea. I offer my clients strategy, planning & technical delivery services, remotely & in person. I also offer freelance CTO services to companies in need of technical bootstrapping or reinvention. If you think I can help you in your business, check out my details on http://darylfeehely.com
Found This Week #113
0
found-this-week-113-119028b11150
2018-06-29
2018-06-29 20:09:54
https://medium.com/s/story/found-this-week-113-119028b11150
false
656
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Daryl Feehely
Web Consultant, Contract Developer & Project Manager (available). Photographer (+MRSC), Munster Rugby Supporter. Corkman in Swansea. www.darylfeehely.com
1ddc7d1bcebc
dfeehely
425
328
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-06
2018-04-06 07:29:35
2018-04-06
2018-04-06 07:56:36
1
false
en
2018-04-06
2018-04-06 08:33:39
3
1191fe1c619b
0.698113
4
0
0
Daneel AMA session (Ask Me Anything) is live till 09/04/2018. Founder Mr. Joseph Bedminster and few of the core members will answer your…
5
AMA session with Daneel CEO and CORE members The Daneel AMA session (Ask Me Anything) is live until 09/04/2018. Founder Mr. Joseph Bedminster and a few of the core members will answer your questions in a video that will be posted at the beginning of next week. We will also publish a Medium article to gather all the answers. You can now start posting your questions about the Daneel Project on our Reddit page. The Reddit thread will close on the 8th of April at 16:00 UTC. No more questions will be accepted after that. The 10 most up-voted questions on our Reddit thread will be answered. IMPORTANT: Keep the questions related to the Daneel Project. Please do not reply to other comments. No trolling/abusive comments. Don’t forget to join our Telegram groups: DANEEL TELEGRAM CHAT DANEEL TELEGRAM CHANNEL
AMA session with Daneel CEO and CORE members
75
ama-session-with-ceo-and-core-members-1191fe1c619b
2018-05-04
2018-05-04 18:33:20
https://medium.com/s/story/ama-session-with-ceo-and-core-members-1191fe1c619b
false
132
null
null
null
null
null
null
null
null
null
Cryptocurrency
cryptocurrency
Cryptocurrency
159,278
Daneel Assistant
Your future personal crypto assistant ! https://daneel.io
dc883054551c
daneel_project
463
9
20,181,104
null
null
null
null
null
null
0
<html>
  <head>
    <!-- Load TensorFlow.js -->
    <!-- Get latest version at https://github.com/tensorflow/tfjs -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]"></script>
  </head>
  <body>
    <div id="output_field"></div>
  </body>
  <script>
    async function learnLinear() {
      // Single-node network: one dense layer, one input, one output.
      const model = tf.sequential();
      model.add(tf.layers.dense({units: 1, inputShape: [1]}));
      model.compile({
        loss: 'meanSquaredError',
        optimizer: 'sgd'
      });

      // Training data sampled from Y = 2X - 1, shaped as 6x1 tensors.
      const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
      const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

      // Train for 500 epochs, then predict Y for X = 10 and show the result.
      await model.fit(xs, ys, {epochs: 500});
      document.getElementById('output_field').innerText =
        model.predict(tf.tensor2d([10], [1, 1]));
    }
    learnLinear();
  </script>
</html>
8
null
2018-08-01
2018-08-01 05:10:15
2018-08-01
2018-08-01 05:11:46
2
false
en
2018-08-01
2018-08-01 05:12:54
2
1194b19eeef1
3.715409
3
0
0
TensorFlow.js is an open source WebGL-accelerated JavaScript library for machine intelligence. It brings highly performant machine learning…
3
Getting Started with TensorFlow.js TensorFlow.js is an open source WebGL-accelerated JavaScript library for machine intelligence. It brings highly performant machine learning building blocks to your fingertips, allowing you to train neural networks in a browser or run pre-trained models in inference mode. See Getting Started for a guide on installing/configuring TensorFlow.js. TensorFlow.js provides low-level building blocks for machine learning as well as a high-level, Keras-inspired API for constructing neural networks. Let’s take a look at some of the core components of the library. With TensorFlow.js, you can not only run machine-learned models in the browser to perform inference, you can also train them. In this super-simple tutorial, I’ll show you a basic ‘Hello World’ example that will teach you the scaffolding to get you up and running. Let’s start with the simplest Web Page imaginable: Once you have that, the first thing you’ll need to do is add a reference to TensorFlow.js, so that you can use the TensorFlow APIs. The JS file is available on a CDN for your convenience: Right now I’m using version 0.11.2, but be sure to check GitHub for the most recent version. Now that TensorFlow.js is loaded, let’s do something interesting with it. Consider a straight line with the formula Y=2X-1. This will give you a set of points like (-1, -3), (0, -1), (1, 1), (2, 3), (3, 5) and (4, 7). While we know that the formula gives us the Y value for a given X, it’s a nice exercise in training a model for a computer that is not explicitly programmed with this formula, to see if it can infer values of Y for given values of X when trained on this data. So how would this work? Well, first of all, we can create a super-simple neural network to do the inference. As there’s only 1 input value, and 1 output value, it can be a single node. In JavaScript, I can then create a tf.sequential, and add my layer definition to it. It can’t get any more basic than this: To finish defining my model, I compile it, specifying my loss type and optimizer. I’ll pick the most basic loss type — the meanSquaredError, and my optimizer will be a standard stochastic gradient descent (aka ‘sgd’): To train the model, I’ll need a tensor with my input (i.e. ‘X’) values, and another with my output (i.e. ‘Y’) values. With TensorFlow, I also need to define the shape of that given tensor: So, my Xs are the values -1,0,1,2,3 and 4, defined as a 6×1 tensor. My Ys are -3, -1, 1, 3, 5, 7 in the same shape. Note that the nth Y entry is the value for the nth X entry when we say that Y=2X-1. To train the model we use the fit method. To this we pass the set of X and Y values, along with a number of epochs (loops through the data) in which we will train it. Note that this is asynchronous, so we should await the return value before proceeding, so all this code needs to be in an async function (more on that later): Once that’s done, the model is trained, so we can predict a value for a new X. So, for example, if we wanted to figure out the Y for X=10 and write it on the page in a <div>, the code would look like this: Note that the input is a tensor, where we specify that it’s a 1×1 tensor containing the value 10. The result is written on the page in the div, and should look something like this: Simple Output Wait, you might ask — why isn’t it 19? It’s pretty close, but it’s not 19! That’s because the algorithm has never been given the formula — it simply learns based on the data it was given. 
With more relevant data any ML model will give greater accuracy, but this one isn’t bad considering it only had 6 pieces of data to learn from! For your convenience, here’s the entire code for the page, including the declaration of all this code as an async function called ‘learnLinear’: And that’s all it takes to create a very simple machine-learned model with TensorFlow.js that executes in your browser. From here you have the foundation to go forward with more advanced concepts. Have fun with it!
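For readers who prefer Python, the same toy model translates almost line for line to Keras. This is my own hedged sketch, not part of the original post, and it assumes TensorFlow is installed locally.

import numpy as np
import tensorflow as tf

# The same single-unit linear model as the TensorFlow.js example.
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(loss="mean_squared_error", optimizer="sgd")

# Six (x, y) pairs sampled from y = 2x - 1.
xs = np.array([-1, 0, 1, 2, 3, 4], dtype=float)
ys = np.array([-3, -1, 1, 3, 5, 7], dtype=float)

model.fit(xs, ys, epochs=500, verbose=0)
print(model.predict(np.array([[10.0]])))  # close to, but not exactly, 19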
Getting Started with TensorFlow.js
19
getting-started-with-tensorflow-js-1194b19eeef1
2018-08-01
2018-08-01 05:12:54
https://medium.com/s/story/getting-started-with-tensorflow-js-1194b19eeef1
false
883
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Vrijraj Singh
Community Organizer at Google Developers Group Jalandhar
91a3eb83b573
vrijraj2396
147
163
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-18
2018-04-18 16:20:39
2018-04-18
2018-04-18 16:25:52
1
false
ja
2018-04-18
2018-04-18 16:25:52
4
11950980d549
2.562
0
0
0
NEXT PUBLISHING STYLE for EVERYBODY, EVERYWHERE. - Toward a future where anyone can publish freely, anywhere in the world. NEXT WONDERFUL EXPERIENCE of BOOKS AROUND THE WORLD…
4
What is the "new shape of publishing" that Satori Books envisions? NEXT PUBLISHING STYLE for EVERYBODY, EVERYWHERE. - Toward a future where anyone can publish freely, anywhere in the world. NEXT WONDERFUL EXPERIENCE of BOOKS AROUND THE WORLD. - Toward a future where every e-book in the world can be enjoyed and experienced. NEXT BORDERLESS COMMUNICATION through BOOK COMMUNITIES. - Toward a future where books connect people around the world and expand their possibilities. The Satori Books platform aims to build what is probably the world's first "e-book store" that makes use of AI and blockchain technology. At Satori Books, we see blockchain as a tool for redefining existing values and creating new ones. Books, as content, have traditionally been regarded as "finished products". One of the strengths of e-books, on the other hand, is that they are content that can keep evolving and changing, content that can be upgraded and updated. If desired, they can also serve as co-creation platforms where, in real time, multiple creators, authors and editors take the lead and readers participate, jointly supplementing, expanding, revising and annotating a work. A new form of publishing for a new era, in which writers and readers together create culture: that is the future Satori Books envisions. In that sense, Satori Books aims to be not merely a STORE for buying and selling, but a platform where all participants, writers and readers alike, create culture together. A platform for experiencing how you yourself, the people around you and the wider world keep changing through books. And participation in that change is itself secured by blockchain technology and turned into value. Not a BOOK STORE that merely sells and buys, but a BOOK STORE as a place to meet many people and make things happen. We want to create that new world of publishing: that is Satori Books' vision. If you would like to know more about Satori Books, please see our whitepaper. https://satoribooks.jp/wp-content/uploads/2018/04/Whitepaper-ver.1.10JN.pdf Web:https://satoribooks.jp Twitter:https://twitter.com/satoribooks_jp Facebook:https://www.facebook.com/satoribooks.jp
What is the "new shape of publishing" that Satori Books envisions?
0
satori-booksの想う-新しい出版のカタチ-とは-11950980d549
2018-04-18
2018-04-18 16:25:53
https://medium.com/s/story/satori-booksの想う-新しい出版のカタチ-とは-11950980d549
false
49
null
null
null
null
null
null
null
null
null
E-books
電子書籍
E-books
131
Satori Books
The e-book store dealing with work across the border. [An e-book store where books from around the world come together]
eaa5e14c922d
satoribooks
4
1
20,181,104
null
null
null
null
null
null
0
null
0
8d090b0a453e
2017-12-20
2017-12-20 11:53:57
2017-12-20
2017-12-20 14:42:34
1
false
en
2017-12-20
2017-12-20 16:39:07
14
11952c484c17
3.520755
1
0
0
Looking back over 2017, it’s interesting to see how ‘chatbot’ abilities have evolved from simple (sometimes gimmicky) question answering to…
5
5 AI assistant predictions for 2018 Looking back over 2017, it’s interesting to see how ‘chatbot’ abilities have evolved from simple (sometimes gimmicky) question answering to useful conversational user interfaces solving real problems. Across the year, we have seen thousands of bots/assistants being produced for all manner of applications. Arguably, 2017 has been the “year of the bot”. A day did not pass without at least a handful of new bots being released and shared on ProductHunt and BetaList alone. There were some emerging themes that we noticed and are likely to continue into 2018. Here are our predictions: 1. Smart assistant hardware comes home There was a big push at the end of 2017 from Amazon and Google competing for market share for their Smart assistant gadgets (it’s a shame Apple’s HomePod was not quite ready and missed the boat — watch out for this in 2018!). With the Amazon Echo being shipped for only $35, these gadgets were on many Christmas wish lists and will be entering millions of homes by early 2018. We imagine those that did not get one will soon have smart assistant envy and will probably be buying their own in the not too distant future! The AI assistant will become part of the average home and will start to fit in as much as your toaster or kettle already does. 2. Human <-> AI conversations begin to blend Do you remember when the first mobile headsets were produced? I remember gawking at people with attachments on their heads, walking around town appearing to be deep in conversation with themselves! Of course, now, this is perfectly normal. We’ve moved from headsets to airpods and we are less surprised when we see someone nattering away with someone who isn’t there. We’re going through the same process now with AI assistants. Humans have been engaging with AI assistants in one way or another for a number of years now — but this is still seen as an unusual interaction, especially in the way that we change our voice and sentence structure to make it understandable by machine (in a similar way to how some speak to a dog or a young child): ‘Alexa! Stop!’ But the recent pervasiveness of AI assistants in home gadgets and smartphones is making human/AI conversations more common. The quality of speech recognition and intent understanding is making these conversations less awkward. We’re now well on the road to being less surprised when we see someone begin nattering away on the street with someone that isn’t even a human… A conversation with an AI assistant will become almost as normal as it is to speak with another human on a conference call thousands of miles away. 3. Smarter integrations We have only just begun to be able to employ an AI assistant for all sorts of tasks — and we are still far away from the peak of functionality that AI will offer. AI is integrated into many platforms (Slack, Facebook, WhatsApp, WeChat to name a few) but at the moment, the level of integration is rather one-dimensional and there is a limitation to useful functions delivered through the intelligent collaboration between services. One of our favourite AI assistant scenarios of 2017 was the assistant by KLM that helps you pack for your next trip away. ‘BB’ doesn’t just tell you the weather but instead uses this information in combination with destination and duration to provide a useful service. In 2018 we expect to see more integrations like this that move AI assistants further away from being perceived as a “gimmick” to being seen more as a valuable tool (and used in that way). 4. 
AI will join the workforce We are already starting to use AI assistants at home and for personal use. At work, AI assistants are also already being used for call centres, customer support and meeting scheduling, among other things. As we become more aware of AI and its benefits, and as we start to trust this technology more, we will start to rely on it like never before. We will soon see companies on-boarding AI assistants like they are human employees and teams will receive support from AI and start depending on it. AI assistants will provide specialist support for the workforce, contributing to important business activities. It will be taken more seriously. 5. AI will start becoming proactive AI assistants are becoming more intelligent by learning more about the humans they are assisting and this will continue to increase as we connect all sorts of data, including personal information such as our daily health (e.g. from wearables). We typically only interact with AI when we choose — in a reactive fashion — but as it becomes more useful and intelligent we will start welcoming interruptions and new patterns of use will emerge. Next year we will see AI assistants becoming more proactive as humans start trusting the technology. These interactions will not be perceived as “annoying” as the content will be useful and the AI will learn the best way to communicate to individuals. If 2017 was the “year of the bot” then arguably 2018 will be the “year of the AI coworker” as AI becomes more pervasive, common, smart, trusted and proactive.
5 AI assistant predictions for 2018
1
5-ai-assistant-predictions-for-2018-11952c484c17
2018-05-27
2018-05-27 13:56:31
https://medium.com/s/story/5-ai-assistant-predictions-for-2018-11952c484c17
false
880
Emma.ai is your virtual travel assistant — automatically adding travel time and key details to your calendar. On this page we discuss the latest updates, news, features and articles relating to Emma and the industry in general.
null
emmadotai
null
Emma.ai
emmadotai
PRODUCTIVITY,GOOGLE CALENDAR,ARTIFICIAL INTELLIGENCE,BUSINESS TRAVEL,STARTUP
emmadotai
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Aaron Mason
Company founder — currently making things happen with AI 🚀
6eb20574df80
aaronmase
81
76
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-01
2018-02-01 13:57:20
2018-02-01
2018-02-01 16:18:19
0
false
en
2018-09-07
2018-09-07 05:01:41
1
119543e69450
3.124528
2
0
0
Writing about origins of any topic is not an easy task and pretty difficult to stay away from controversies. For example, as per the…
1
ORIGINS OF MACHINE LEARNING Writing about the origins of any topic is not an easy task, and it is pretty difficult to stay away from controversies. For example, as per the general understanding of human (Homo sapiens) origins, we are supposed to have originated or ‘forked’ from other Homo species around 200,000 years ago in Eastern Africa, but the recent discovery of fossils in North Africa and Israel predates this number by 50,000 to 100,000 years. Hence, in all humility, don’t take the following account as the only truth. Machine learning stands tall on the shoulders of giants, i.e., mathematics, statistics, and computer science. So it is only appropriate that we talk about these giants first. Present-day humans evolved over thousands of years, but until very recently (on relatively large geological timescales), we were moving from place to place as hunter-gatherers. Only about 10,000 years ago or so (due to salubrious weather), humans started to cultivate crops and domesticate animals. They started to settle around fertile river valleys: the Egyptian civilisation around the Nile, the Babylonian civilisation around the Tigris–Euphrates, the Indus Valley civilisation along the Indus river, and the Chinese along their Yellow/Yangtze river valley systems. With large-scale civilisation came the need to count, measure and build, along with a host of other social needs. In fact, that is how we got the saying “Necessity is the mother of invention”. Documented history on mathematics gives credit to the Egyptians (c. 2000 BC) and Babylonians (c. 1900 BC) as the forefathers. They used maths to work with the physical world around them. They were able to count with respect to real physical objects. They used to measure with respect to real physical geometry and shapes. Basic arithmetic, algebra and geometry were well documented by both the Egyptian and Babylonian civilisations. They mastered ‘experiential learning’, and this continued for a long period until the Greeks figured out a more advanced approach. The Greeks (c. 600 BC) are considered the fathers of present-day mathematics (“mathema” in Greek). The Greeks were able to separate the physical approach towards math from the deductive (mental/cognitive) approach. They were able to abstract the topic into mental models, without the need for a physical or real-world object to relate to. It was the birth of theoretical mathematics using formulae and proofs. Please note that there is some controversy around how the Greeks got so much time to just ‘think’ while trivialising the empirical approach, all of which I’ll skip for brevity. Further advancements, like the place-value system of numbers and the introduction of ‘zero’, seem to have originated simultaneously in multiple civilisations, with a prominent one being the Indus Valley civilisation; they were then passed on to other parts of the world through travelling scholars like Al-Khwārizmī in the Arab world (9th century AD), and diffused further into Western societies. In fact, the word ‘algorithm’ comes from his name, and ‘algebra’ comes from the ‘al-jabr’ in the title of his book on equations. As Morris Kline says in his books, “One can look at mathematics as a language, as a particular kind of logical structure, as a body of knowledge, about number and space, as a series of methods for deriving (deducing) conclusions, as the essence of our knowledge of the physical world”. Most of the physical sciences advanced along with the advancement of maths. They used deduction on top of basic axioms and could conclusively prove a given phenomenon of the physical world. 
But this technique was of no use to social, psychological, and biological scientists, as they couldn’t always find the axioms and premises. For example, it is not easy to find a formula for the workings of the human mind, unlike finding a formula for gravitation or electromagnetism that is true 100 percent of the time (except in the quantum world). So social scientists invented the methods of statistics, applying techniques to numerical data to extract information about their phenomenon. It started in the 17th century when John Graunt studied death records in English cities, with his publication “Natural and Political Observations…”, a book that used a systematic methodology to study social sciences. It was also called ‘Political Arithmetic’, indicating the role of politics even in those days. The field was advanced by several others studying populations, incomes and the like: William Petty, L.A.J. Quetelet, Francis Galton and Karl Pearson, among others. The main difference is that a statistical approach relies on data to provide information without even knowing the fundamental phenomenon under study, while a mathematical approach relies on experiments, observations, measurements and deductions to find fundamental, conclusive proofs and principles. The formulae generated from the social, biological, psychological and economic sciences need to be constantly updated based on the findings of new data. Despite this shortcoming, statistics continues to advance our understanding of several sciences that are influenced by the ‘free will’ of life in general. Computer science, and the closely related machine learning, had a rather recent but parallel journey, if you discount earlier mechanical computing devices like the abacus. They were developed with the plan to mechanise a lot of the math and stats. More about it in my next article…
ORIGINS OF MACHINE LEARNING
2
origins-of-machine-learning-part-1-119543e69450
2018-09-07
2018-09-07 05:01:41
https://medium.com/s/story/origins-of-machine-learning-part-1-119543e69450
false
828
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ajay Korimilli
null
75794337a6f7
ajay.korimilli
3
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-23
2018-03-23 17:55:36
2018-03-22
2018-03-22 17:47:37
4
false
en
2018-03-23
2018-03-23 17:59:22
25
1197a1519113
5.3
0
0
0
Looking for Truth and Finding Problems
4
Facial Recognition, Part II: Processing and Bias Looking for Truth and Finding Problems As we saw in Part I, there are several reasons to look to facial recognition as a way to identify individuals. First, it’s how humans operate. Apart from speech, face analysis is certainly the first and major biometric cue used by humans, and it is therefore critical that it be accurately studied. A visual preference for faces and the capacity for rapid face recognition are present at birth. The ability to recognize human faces typically develops during the first 6 months of life. Facial recognition also has advantages for vision applications, with accuracy and non-intrusiveness making it the clear choice when attempting to identify subjects in video and surveillance camera footage. But don’t let the police procedurals fool you; facial recognition from video sequences and live footage still presents many challenges. Even small variations in lighting, angle, image noise, frame rate, and resolution make real-time recognition very difficult. The fact that no human face is constant gives a sense of how much more difficult it is to recognize a human face over time as opposed to, say, a can of soup. “Face recognition is among the most challenging techniques for personal identity verification.” Recognition of Human Faces: From Biological to Artificial Vision, Tistarelli et al. (source) Even in humans, facial recognition involves many hidden mechanisms which are yet to be discovered. In contrast to earlier hypotheses of how the brain “sees,” face perception only rarely seems to involve a single, well-defined area of the brain. It seems that the traditional “face area” of the brain is only responsible for general shape analysis, not face recognition. In fact, according to the most recent neurophysiological studies, the use of dynamic information is extremely important for humans in the visual perception of biological forms and motion. This dynamic information not only helps identify one face from another, it can also tell the brain which parts of the face are most relevant to inform recognition. Faces are difficult for everyone Computer vision researchers have had the same experience: face recognition cannot be considered a single, monolithic process. Instead, several representations must be devised into a multi-layered architecture. One way that multi-layer face processing could function comes from researchers at the University of Sassari, where the proposed architecture divides the face perception process into two main layers. The first, as in the human brain, is devoted to the extraction of basic facial features, while the second layer processes more changeable facial features such as lip movements and expressions. Figure 1: A model of the distributed neural system for face perception [source] It is worth noting that the encoding of changeable features of the face also captures some behavioral features of the subject — how the facial traits change according to a specific task or emotion. Human brains do this as well, while also adding social context and personal history to help aid in the recognition of both faces and attendant emotions. A setback: racialized recognition But just as we are developing multi-level facial recognition systems that emulate the successful parts of human perception, scientists are running into other challenges that seem all too human as well: racial bias. We shouldn’t be surprised. Ground-breaking technologies seem to go hand-in-hand with unforeseen consequences. 
With something as potentially powerful and useful as facial recognition, it is important to interrogate our assumptions before relying on any system. In an earlier time, photographic film chemistry was initially biased toward resolving the colors of Caucasian skin tones. In the late 2000s, it was found that some mainstream facial recognition systems wouldn’t acknowledge people with dark skin tones. In both cases, public and industry pressure ensured that fixes were introduced. In their 2018 paper, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Joy Buolamwini, a researcher at MIT Media Lab, and Timnit Gebru, a postdoc at Microsoft, looked at how machine learning algorithms can discriminate based on classes like race and gender. They looked at face recognition technologies from Microsoft, IBM, and Face++ and tested them against the Pilot Parliaments Benchmark (PPB). The dataset comprised 1,270 male and female parliamentarians from Rwanda, Senegal, South Africa, Iceland, Finland, and Sweden. They used this new dataset because other datasets such as the IJB-A, used for a facial recognition competition from the United States’ National Institute of Standards and Technology (NIST), and Adience, which is used for gender and age classification, were both overwhelmingly skewed toward people with lighter skin. Once again, the datasets themselves can obscure and even undermine the value of the algorithms being tested. The authors argue for greater transparency in how these algorithms are developed, asking companies to provide information on the demographic and phenotypic images used to train AI models as well as reporting performance levels for each different subgroup. Infrared imaging is accurate, and hopefully more colorblind Another option is infrared imaging, which is not as sensitive to the skin tones that people seem to think are so important. Early reviews of the iPhone X praised the phone’s ability to detect and recognize faces, independent of race or skin color. Face ID on the iPhone X works by projecting and analyzing more than 30,000 infrared points to create a precise (and less biased) depth map of your face. This is how Apple represents the very complex calculations behind 3D facial recognition. Infrared cameras now come in many different varieties and capabilities, from NIR imaging, which is very close to the visible range, to more exotic MWIR and LWIR thermal versions that can detect heat even through complex visual environments like smoke and fog. The potential applications are broad, covering the gamut from commercial to government and military, from tagging faces in social networks to crowd surveillance and sophisticated homeland security operations. The other intriguing factor in using infrared imaging for facial recognition (especially thermal infrared) is night-time surveillance, where there is little or no light to illuminate faces. Today thermal facial recognition is used in many applications, most notably covert military applications, allowing for discreet data acquisition. Choosing infrared makes the system less dependent on external light sources and more robust with respect to incident angle and light variations. For the same reason, your iPhone X can identify you in the darkness of a club or campsite. Other clever uses of infrared imaging data lead to completely new solutions, such as “Face Recognition and Drunk Classification Using Infrared Face Images” by Chilean researchers. 
By starting with a simple observation, they were able to create a system that can say reliably “yes, that’s you, and yes, you shouldn’t have another one.” Facial recognition is playing in the very biggest fields The stakes are high. The authors of previous studies point out that facial recognition software is “very likely” to be used for identifying criminal suspects. Government security services want to collect and use as much data as they can. Huge consumer companies like Apple, Google, and Amazon are looking into how to use facial recognition at every turn in our digital lives and are making acquisitions to move their plans forward. Originally published at possibility.teledynedalsa.com on March 22, 2018.
Facial Recognition, Part II: Processing and Bias
0
facial-recognition-part-ii-processing-and-bias-1197a1519113
2018-03-23
2018-03-23 17:59:23
https://medium.com/s/story/facial-recognition-part-ii-processing-and-bias-1197a1519113
false
1,219
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Imaging Possibility
There are two sides to every innovation: engineering and imagination. It’s in the space between the two where possibility takes shape.
9afe712f7328
PossibilityHub
4
54
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-22
2018-03-22 04:57:49
2018-03-22
2018-03-22 04:59:12
0
false
en
2018-03-22
2018-03-22 04:59:12
0
119c516f002a
2.739623
1
0
0
By Malisa Nusrat Huda
5
The accelerating AI journey By Malisa Nusrat Huda Jarvis from Iron Man, iPhone’s Siri, IBM’s Watson. All of these are popular forms of artificial intelligence technology that we are familiar with. AI technology is being incorporated into the finest structures and used by us all the time. We just aren’t aware of it most of the time. That’s because it ranges from the calculators in our phones to conscious computers. In other words, AI’s time as just science fiction is long gone. The growth rate of AI is exponential. Today, AI refers to a constellation of technologies — from natural language processing and machine learning to video analytics — that allow machines and systems to sense, comprehend, act and learn, and that, when integrated together, can create a highly adaptable business capability. It’s the capability of a system to understand what it is being asked and infer the best possible answer from all the available evidence. The progress of the technology in the past years has skyrocketed. Industry giants like Microsoft, Google and Amazon acquired many AI startups, proving that they were just as interested in AI technology as in the technology they themselves were working on. The biggest developments of 2015 fall into five categories of intelligence: abstracting across environments, intuitive concept understanding, creative abstract thought, dreaming up visions, and dexterous fine motor skills. Beyond these fields, in 2011, IBM’s AI system, dubbed “Watson,” won a game of Jeopardy against the top two all-time champions, and that was a historic moment because it demonstrated the technology’s power. Also in the past few years, systems like Siri and Google Now opened our minds to the idea that we don’t have to be tethered to a laptop to have seamless interaction with information. In this model, AIs will move from speech recognition to natural language interaction, to natural language generation, and eventually to an ability to write as well as receive information. Image recognition, deep learning and related techniques have also evolved and exploded over the past years. 2018 will see an increasing number of companies using artificial intelligence-based platforms in their apps and services. We are likely to see significantly more smart frameworks, an increasingly capable set of smart assistants, and many more self-driving, and even self-flying, vehicles being tested. In other words, a huge leap in breakthroughs that will speed things up even more dramatically. Advances will get bigger and bigger and will happen more and more quickly, resulting in a pretty intense future for all of us. AIs are expected to begin to sense and use all five senses — touch, smell and hearing among them — and these will become prominent in the use of AI. They will then start to process all that additional incremental information. So, when applied to our computing experience, we will engage in a much more intuitive and natural ecosystem that appeals to all of our senses. AI will help solve some of society’s most daunting challenges. This technology will also be deployed in governments to assist in the understanding and preemptive discovery of terrorist activity. We’ll see revolutions in how we manage climate change, redesign and democratize education, make scientific discoveries, leverage energy resources, and develop solutions to difficult problems. AI’s effect on healthcare will be far more pervasive and far quicker than anyone anticipates. 
AI is even being used to match clinical trials with patients, drive robotic surgeons, read radiological findings and analyze genomic sequences. AI has the ability to overcome the limitations of capital and labor and to change business in new ways, boosting productivity by up to 40% by fundamentally changing the way work is done and strengthening the role of individuals in driving growth in business. In the production and manufacturing industry, organizations are already intelligently automating some of their production lines to the point where they can run unsupervised for a couple of weeks. While some jobs, or parts of jobs, will be displaced because of artificial intelligence, AI will not only fundamentally change traditional ways of operating for businesses and individuals, but it will also create new categories of jobs to create, train, and maintain AI frameworks. AI can and will empower individuals to make more efficient use of their time and take our progress to very new heights.
The accelerating AI journey
1
the-accelerating-ai-journey-119c516f002a
2018-05-03
2018-05-03 17:14:56
https://medium.com/s/story/the-accelerating-ai-journey-119c516f002a
false
726
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Cramstack
Cramstack provides a platform that makes data access simple by searching through millions of rows of company data by asking questions in plain English.
1ce64b5d41
cramstackdata
15
4
20,181,104
null
null
null
null
null
null
0
null
0
b7d2ccb6286b
2018-08-15
2018-08-15 15:55:45
2018-08-15
2018-08-15 15:58:38
1
false
en
2018-08-15
2018-08-15 16:06:51
20
119c81786801
3.833962
0
0
0
by Isak Nti Asare
5
Why Black people don’t start businesses (and how more inclusive innovation could make a difference) by Isak Nti Asare Photo from oneteamgov flickr page. When the UK Prime Minister launched the Race Disparity Audit in October 2017 she said “I absolutely, passionately believe that how far you go in life, should be about your talents and your hard work and nothing else” She went on to say that “[…]when one person works just as hard as another person — and has got the same ambitions and aspirations — but experiences a worse outcome solely [on] the grounds of their ethnicity, then this is a problem that I believe we have to confront”. This is a nice sentiment, but the results of the audit didn’t tell us anything that we didn’t already know. Several reports and studies have already shown widespread societal discrimination that has created multiple layers of disadvantage across society — Black people being particularly affected. Thus, the real question is what the government is going to do about this problem. It is nice to collate all of the data, but collecting data that already existed is not the same as crafting policy to address — or, to use the Prime Minister’s words, confront — the issue. A useful application for this data would be to explore how different areas of discrimination interact. In order for us to craft policy that improves outcomes, we need to understand this. Take for example the question of why more Black people don’t start businesses. According to the audit, in 2016, Black workers were the least likely to be self-employed at 11%. This compares to 16% of white workers. My hypothesis, as a Black business owner myself, is that a large number of Black businesses are in low-barrier sectors such as care work or cleaning. I strongly suspect that the number of Black-run tech startups, for example, would show a bleak picture for those overall ethnicity statistics. Why is that? Studies suggest that the main reasons Black people do not start businesses are a lack of access to finance, a lack of resources for community and advice, an absence of mentoring models, and fear. Furthermore, racism has paralyzing effects on groups of people. It is human nature to avoid rejection. A would-be Black entrepreneur may not even approach a bank for funding, or a lawyer for advice, if they have reason to believe that they will be rejected or receive a lower level of service on the basis of the colour of their skin. In all of this, we see that the interaction of areas of discrimination results in yet another area of underrepresentation. Namely, the fact that minorities are often overlooked for promotion, are underrepresented in leadership roles, or are fewer in number in key positions such as solicitors or accountants results in less diversity in self-employment. Add this to the underwhelming numbers of young Black people reading for degrees in areas of high growth such as technology, and you have a system that doesn’t seem likely to change anytime soon. So what can be done? I am of course particularly interested in the tech sector (more specifically in startups working with artificial intelligence) as tech is the fastest growing sector in the United Kingdom. The AI revolution has the opportunity to drastically reshape the economy, adding upwards of £230 billion by 2030. Though many jobs will undoubtedly be lost to AI, we should focus on the millions of jobs that will be created. This is where incentivising Black involvement comes in. Inclusive innovation has the opportunity to disrupt systemic disadvantage. 
Therefore, I think our attention when it comes to Black people in the UK economy should focus on increasing participation in AI tech start-ups. Organisations such as Colourintech and UK Black Tech are working towards this goal. What is missing, to me, is an actual Black tech startup incubator where participants are given one-to-one attention, training in entrepreneurship, access to networks of support and investors, and, most importantly, connections with mentors. These are things that all startups need, but they are more difficult to access for Black entrepreneurs. I am suggesting we adopt something along the lines of the successful work that Oxford Insights’ founders did with the Open Data Institute in Mexico in creating Labora, but focusing our efforts on Black entrepreneurs rather than startups using open data. The Government should also make generous business grants available to Black entrepreneurs in this sector. Some grants targeting Black and other minority ethnic groups already exist, but I would like to see these focused on sustained growth rather than the launch/seed stage, given what we know about the rate at which tech startups fail. We should also incentivise and prepare young Black people to successfully take up studies in tech via financial support, bolstered after-school programmes and opportunities for internships. The burgeoning AI industry presents us with an amazing opportunity to dramatically alter the face of the future workforce. This future workforce could be a vibrant, diverse community that is representational and inclusive. Think about how that will affect the rest of society. The UK is a world leader in tech innovation — why not lead in diversity and inclusion as well? This is a real possibility, if action is taken now. ______ For more information about Oxford Insights’ leadership and AI strategies, please email our consultant Isak. Oxford Insights advises governments internationally on how to approach AI, including most recently advice to the Government of Mexico on its AI strategy. Earlier this year, we published an overview of existing and pending AI policy strategies. In the autumn, we will publish our annual Government AI Readiness Index. Would you like to talk about how to create a comprehensive, innovative and ethical AI strategy? Email [email protected] or [email protected].
Why Black people don’t start businesses (and how more inclusive innovation could make a difference)
0
why-black-people-dont-start-businesses-and-how-more-inclusive-innovation-could-make-a-difference-119c81786801
2018-08-15
2018-08-15 16:09:33
https://medium.com/s/story/why-black-people-dont-start-businesses-and-how-more-inclusive-innovation-could-make-a-difference-119c81786801
false
963
We act worldwide to combine the best in technology with the best in government, specialising in areas including artificial intelligence, digital transformation, leadership development, evidence-based policy and technology startups - www.oxfordinsights.com
null
null
null
Oxford Insights
oxford-insights
LEADERSHIP,ARTIFICIAL INTELLIGENCE,DIGITAL TRANSFORMATION,GOVERNMENT,TECHNOLOGY
oxfordinsights
Diversity
diversity
Diversity
17,812
Oxford Insights
We write and advise on strategic, cultural and leadership opportunities from digital transformation and artificial intelligence - www.oxfordinsights.com
851eb28f1dd1
1532701690121
11
70
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-13
2018-03-13 12:27:40
2018-02-12
2018-02-12 10:46:10
5
false
en
2018-03-13
2018-03-13 12:30:59
2
119c98a41df2
3.455975
0
0
0
At a racing pace, intelligent automation technologies are developing. Investing in the wrong type of technology for your business at the…
2
When and how to invest in intelligent automation Intelligent automation technologies are developing at a racing pace. Investing in the wrong type of technology for your business at the wrong time can cause a lot of organizational indigestion. Here are four steps to recognizing your business needs and deciding in what order to invest in these emerging technologies. 1) Know the playing field The first step is to know clearly what the areas of possible need are for any business. There are currently four generations of intelligent automation, beginning with traditional robotic process automation (RPA). This is more than a complex assembly line. RPA work comprises approximately 65% of activity in a production business. The next generation is cognitive RPA, which goes beyond traditional RPA. About 20% of business activity finds a need for this, due to the amount of unstructured data businesses possess. Further along are intelligent chatbots and, ultimately, artificial intelligence. Respectively, these provide roughly 10% and 5% of business activity with real solutions. 2) Identify your business needs RPA is rule-based, repetitive, transactional and excellent for high-volume activities. There are huge benefits in terms of cost savings and improvement of user experience, quality and accuracy. Yet these are like "dumb robots": they do not possess the capacity to manage unstructured data, to interact using natural language or to handle judgment activities. Cognitive RPA is designed for large volumes of unstructured data, managed through machine learning and natural language processing. Unstructured data can be things like free-text messages (e.g. emails) or scanned images (e.g. invoices or IDs). These systems can also understand a free flow of sentences. Machine learning allows the robot to identify and learn patterns and contexts through repetitive exposure to a series of inputs and outputs (a minimal sketch of this idea appears at the end of this piece). Chatbots offer intelligent interaction with users, either internal or external to your business. Their benefits are mainly qualitative, focusing on customer experience improvement. By using machine learning, chatbots can learn from conversations and actually improve over time. In this framework, intelligent chatbots act mainly as the interface between humans and the other generations of robots. Artificial intelligence offers data analytics and deep insights for difficult decision-making. Non-routine cognitive work, which involves interaction with humans and complex, ambiguous reference materials, presents the most valuable target for AI. These applications deliver the highest value of all the generations of robots and can become force-multipliers for the most valuable work that employees perform. Typically, the larger your business and the more complex your data, the more advanced the generation of intelligent automation you will progressively need. 3) Research outstanding players, big and small Traditional RPA solutions are numerous, but some outstanding players in the field are UiPath, Blue Prism and Automation Anywhere (AA). Antworks, Arago and Workfusion offer great examples of cognitive RPA solutions. Intelligent chatbots are numerous, but some excellent versions are provided by Kore, IPSoft and Conversable. They use several communication channels, like Facebook and Slack, to converse with users. Finally, AI solutions are offered by Watson, AlphaGo, Sherlock Garden, and Holmes, to name a few. But these are very specific and need to be researched and tested for usefulness to your business. 4) Start smart It is recommended to start with RPA (traditional and cognitive). 
These create a useful foundation to kick off the journey because RPA is an accessible, well-proven technology that is easy and fast to implement. RPA offers a high return on investment due to the high volume of process activities and the accessibility of the technology. An RPA rollout typically launches with tangible benefits for the company, especially monetary savings, and these savings can be used to finance the next generations of robots. The next generations should be integrated into a system already set up with RPA. The most important point is to kick off the intelligent automation journey as soon as possible. This allows you to start amassing benefits and experience before your competitors. The most critical success factor to consider and anticipate is the interaction between the generations of robots, in order to maximize your benefits. The last thing your business needs is an indigestible lump of intelligent automation that complicates your processes and strains your finances. Originally published at avoncourtpartners.com on February 12, 2018.
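As promised above, here is a minimal sketch of the cognitive idea: instead of hand-written rules, a model learns from repeated exposure to labelled examples how to route unstructured text. It is written in Python with scikit-learn; the sample messages, queue names and the routing task itself are invented for illustration and are not taken from any vendor's product.

# Minimal sketch: routing unstructured text (e.g. incoming emails) by
# patterns learned from labelled examples rather than hand-written rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: free-text messages and the queue each belongs to.
messages = [
    "Please find attached invoice 4211 for March services",
    "Your payment is overdue, see the attached invoice",
    "I cannot log in to my account, the password reset fails",
    "The app crashes every time I open the settings page",
    "I'd like a quote for 50 licences for my team",
    "What is the price of the enterprise plan?",
]
queues = ["billing", "billing", "support", "support", "sales", "sales"]

# TF-IDF turns each message into a weighted bag of words; logistic regression
# then learns which words signal which queue from the labelled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, queues)

# A new, unseen message is routed automatically.
print(model.predict(["My invoice total looks wrong, please check it"])[0])

The point is not this particular model but the workflow: labelled historical examples stand in for explicit rules, which is what lets cognitive automation cope with unstructured data.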
When and how to invest in intelligent automation
0
when-and-how-to-invest-in-intelligent-automation-119c98a41df2
2018-03-13
2018-03-13 12:30:59
https://medium.com/s/story/when-and-how-to-invest-in-intelligent-automation-119c98a41df2
false
695
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Bill D. Webster
null
b8be538ba286
bill.d.webster
4
41
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-10
2018-09-10 20:04:19
2018-09-10
2018-09-10 20:31:08
0
false
en
2018-09-10
2018-09-10 20:32:01
0
119d29cf25b7
0.6
0
0
0
Dealing with data of high dimensions requires lot of consumption of computational resources and time. An important solution to deal with…
5
What is Eigenvalue & Eigenvector ? Dealing with high-dimensional data consumes a lot of computational resources and time. An important way to deal with this is "feature selection" or "feature extraction", and eigenvectors help in feature extraction. Formally, an eigenvector of a matrix A is a non-zero vector v whose direction is unchanged by A, i.e. Av = λv, where the scalar λ is the corresponding eigenvalue and measures how much v is stretched. The eigenvector is a very important concept for understanding transformations of data: it determines the nature of a transformation when applied to a data set. Such transformations let us retain the essential properties of the data with fewer features than before. Suppose you have a high-dimensional data set (i.e. data with many features) and you want to reduce the number of features to as low as possible. The eigenvectors of the data's covariance matrix point along those directions (dimensions) that capture the maximum variance in the data while reducing redundancy between dimensions. The transformed dataset obtained in this new subspace is easy to manipulate.
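A minimal sketch of this idea in Python with NumPy, using an invented toy data set: compute the eigenvectors of a covariance matrix, check the defining property Av = λv, and project two correlated features down to one.

import numpy as np

# Toy 2-D data set (invented): two correlated features.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
data = np.column_stack([x, 2.0 * x + rng.normal(scale=0.3, size=200)])

# Eigendecomposition of the covariance matrix: eigenvectors give the
# directions of variance, eigenvalues give how much variance lies along each.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh: cov is symmetric

# Sanity check of the defining property A v = lambda v for each pair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(cov @ v, lam * v)

# Keep only the eigenvector with the largest eigenvalue: 2 features -> 1,
# retaining the direction of maximum variance (this is PCA's core step).
top = eigenvectors[:, np.argmax(eigenvalues)]
reduced = centered @ top
print(reduced.shape)  # (200,)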
What is Eigenvalue & Eigenvector ?
0
what-is-eigen-value-eigen-vector-119d29cf25b7
2018-09-10
2018-09-10 20:32:01
https://medium.com/s/story/what-is-eigen-value-eigen-vector-119d29cf25b7
false
159
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Amna Khan
ML enthusiast | Experienced Ad Hoc Analyst | SQL | Python | Likes travelling
abe15e6a1aa2
mail2amna
6
5
20,181,104
null
null
null
null
null
null
0
null
0
7f60cf5620c9
2018-01-04
2018-01-04 21:22:10
2018-01-04
2018-01-04 21:23:46
0
false
en
2018-01-04
2018-01-04 21:23:46
0
119dc9f2c63
4.037736
4
0
0
Data rights are the new IP rights
5
Published previously for Venturebeat and the Zetta Ventures blog: Data rights are the new IP rights Data rights are the new IP rights in the Intelligence Era As more sophisticated resources for developers become widely available, copycat products can now be launched in a matter of hours. Software patents provided some limited protection, but feature wars rage on. Software without data is now a commodity. These pressures make AI and the data that feed it more valuable than ever. There is no usable AI without data; AIs need data to train to minimum algorithmic performance (MAP) before they can demonstrate value to potential users and attract customers. New customers bring in more data, which is used to improve the algorithm's performance, which attracts new customers, and so on. Each iteration of this feedback loop — which Zetta calls the virtuous loop — digs a deeper competitive moat. Continued access to usable data is crucial to keeping this feedback loop moving. As a result, data rights have become the new IP rights in today's Intelligence Era of startups. This presents opportunities and challenges for emerging startups. Startups have a 'clean start' advantage Customers were hesitant to entrust their data to an outside party at the dawn of the previous era of startups (the Cloud Era). Cloud-era startups would explicitly forgo all rights to the customer data they managed in order to assuage these concerns. Many of those agreements are still in place today, hampering cloud-era startups in their attempts to apply intelligence to their products. These cloud-era startups must now undergo the challenging conversation of re-negotiating data rights with their existing customer base, or go on an acquisition spree to get data. Startups in the Intelligence Era are approaching customers who are more comfortable letting third parties manage their data, enabling them to engage in a different conversation about the data. While cloud storage and computing have grown exponentially, intelligence-era applications require more infrastructure and high-touch data handling in order to effectively capture the relevant data, then clean, label, query, and analyze it to provide actionable prediction and automation. Many of the enterprise customers that grew comfortable with entrusting their data to cloud vendors remain wary of sharing deeper access to their data with those vendors, who may become potential competitors. Startups present less of a competitive threat and are better positioned to successfully negotiate the rights to use this data. Bootstrapping data to jumpstart the virtuous loop Demonstrating value to gain leverage in negotiating data rights with early customers presents a chicken-and-egg problem for intelligence-era startups. For many applications, startups can jumpstart the virtuous loop by finding alternative sources of data to train the learning algorithm. Here are some possible approaches: Target SMB and mid-market customers, because they tend to have more liberal attitudes toward data rights, especially when data is exchanged for useful products at reduced prices. 
These smaller early customers can also serve as references for larger customers to see the value of contributing data to the training pool; Hire people to train the algorithms, either as full-time employees or via Mechanical Turk; Find an external source of data, such as publicly available datasets from government agencies, purchasing data from third-party vendors such as Clearbit, or scraping relevant websites and social media; Provide a freemium version of the flagship product to capture user engagement data; and Sell a desirable side product at cost in order to capture the data, a strategy Tesla has employed in order to build a massive dataset to train self-driving cars. Many of these external data sources can be sufficient to train a learning algorithm to a high enough level of performance to demonstrate value and attract enterprise customers. It is imperative that intelligence-era startups build proprietary data pipelines in order to benefit from the compounding effects of pooled learnings across a customer network, making it difficult for new entrants and emerging copycats to catch up. Strategies to structure data rights While startups have an advantage over large incumbents in obtaining data rights, the negotiation is rarely easy. The following is an all too common story: a startup approaches a large enterprise with an incredible demo of a new, AI-powered workflow that promises to save the enterprise thousands of employee-hours by automating a tedious and time-consuming task. The product ingests the company's historic sales data, using it to qualify new sales leads and suggest the optimal time to call. The flashy demo blows the enterprise away and a limited, sandboxed pilot seals the deal. The enterprise is ready to buy and roll out the solution company-wide, as soon as possible. Unfortunately, the discussions get stuck in limbo as the deal goes to the enterprise's chief compliance officer and lawyers for review: there is no way the startup will be allowed to access their data, lest it fall into the hands of the competition, but the startup's product is less valuable to the enterprise without the relevant data to train it. Startups can get in front of these concerns by making it clear from the outset of the negotiation that their main interest is in learning from the data and the data exhaust (such as user engagement and interaction data, metadata, and data flow information). As Zetta's partner companies report, the first data rights negotiations are the most difficult. Over time, as the pool of data grows, it becomes easier to demonstrate the value of the product and the network effects of fellow customers. Startups will gain more leverage in negotiating data rights after securing the initial wave of customers and their data. A profound gamechanger In the Cloud Era, companies competed by releasing new features, which are easy to copy. Consequently, absolute market dominance was harder to achieve, and second-place players exist in many categories. The virtuous loop presents an opportunity for companies to achieve 'winner takes most' status, which was otherwise limited to consumer categories. Strategies to achieve this lead could include obtaining exclusive rights to data, accumulating customer data and forming partnerships. Incumbents and upstart rivals can no longer outspend the market leader to close the gap after startups reach a critical mass of data. 
For the first time in history, technology companies have an opportunity to establish robust protection against legacy incumbents and emerging copycats, far beyond what traditional intellectual property strategies have been able to offer.
Published previously for Venturebeat and the Zetta Ventures blog:
15
published-previously-for-venturebeat-and-the-zetta-ventures-blog-119dc9f2c63
2018-03-22
2018-03-22 06:36:59
https://medium.com/s/story/published-previously-for-venturebeat-and-the-zetta-ventures-blog-119dc9f2c63
false
1,070
Sharing concepts, ideas, and codes.
towardsdatascience.com
towardsdatascience
null
Towards Data Science
null
towards-data-science
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,ANALYTICS
TDataScience
Machine Learning
machine-learning
Machine Learning
51,320
Ivy Nguyen
Views are my own | Investor at Zetta Ventures
34af42aadbdd
ivynewgen
216
275
20,181,104
null
null
null
null
null
null
0
null
0
f5af2b715248
2018-01-25
2018-01-25 22:56:26
2018-01-25
2018-01-25 23:02:20
7
false
en
2018-03-13
2018-03-13 01:38:58
6
11a07b64f981
5.487736
132
1
0
In my last article I mentioned that Google and Facebook are the leaders in the artificial intelligence 🤖 gold rush era 🤠 we are now…
5
5 Reasons Why Google Assistant is the Future of Ai credit: giphy.com In my last article I mentioned that Google and Facebook are the leaders in the artificial intelligence 🤖 gold rush era 🤠 we are now living in. I have always been a Google guy: my first phone was an Android phone, I got the Chromecast when it first came out, I learned how to code watching YouTube videos, and I am a paying Google Drive customer. So you should read this article knowing that I have a preference for Alphabet products. But in this era of fake news and paid influencers, please believe me: I am not getting paid in any way by Google or any of its subsidiary corporations. I have a preference for its products just as any of you reading this article may have a preference for Facebook as a social medium, or for Apple products instead of Android. Nevertheless, I always try to be objective in my opinions, especially in regard to artificial intelligence, machine learning and data science. In my honest opinion, Google and Dialogflow will rule the Ai world in the coming years for the 5 following reasons: Google is All-in for Ai credit: giphy.com In October 2017, Google's CEO came out with a pretty strong Ai-focused statement, saying that "Google is now an Ai first company". This announcement made clear that the world's leader in internet search was truly embracing the Ai revolution and would establish itself as a first-class leader in this technology. The acquisition of the most advanced artificial intelligence platform, API.AI, now known as Dialogflow, was the first stepping stone in the shift from a mobile-first company to an Ai-first company. Social Media Versus Digital Devices Usage Lately, I have asked the following series of questions to every friend and family member I have: Carl: "How much time do you spend on Facebook each day ❓" Them: "IDK, 5 to 30 minutes per day, why ❓" 🤔 Carl: "You'll see… Now how much time do you spend doing the following things: - Interacting with a mobile device 📱 - Watching tv 📺 - Streaming music or videos on an app 🎵 - Driving your car 🚗 - Browsing on the internet 💻 - Searching for an answer to a question you have by googling it" Them: "Pretty much half of my day… why❓" 🤨 The time you spend driving your car, browsing the internet, watching Netflix and interacting with your phone is much greater than the time you spend on all of your social media platforms. This means that Google Assistant has far more opportunities to capture your attention and retarget you than any other Ai platform. In the future you will not only ask questions by voice to the Google Assistant on your mobile phone or Google Home devices, you will also do the same through your car's Internet of Things (IoT) device and any other IoT gear such as mirrors, smart TVs, your fridge and so on. Some Social Media are Getting Old 👴🏻 The first social medium I used was mIRC; I made a lot of great friends there, and we would get together at punk band shows in my home province of Quebec. Then came MSN Messenger, then the Google search engine, which my father introduced me to, and then Facebook. When I made my Facebook profile, more than 10 years ago, I remember that I was the only person I knew who had one. Facebook has reached what we call the maturity stage in technology. It's nothing against it, but everything in life, including corporations, gets old one day. 
Some companies grow old more quickly; some seem to have taken a sip from the grail of eternal life in the third Indiana Jones movie. credit: giphy.com In my personal opinion, Google seems much younger now than its main rival, and to be honest the corporation owned by Alphabet has been much more transparent in its course of action toward artificial intelligence, to say the least, than its main competitor. Also, some of the youngest players in the Ai world have a bright future in the Ai ecosystem: according to a report by Global Web Index, Telegram, BBM and WeChat are the front-running social media platforms, with 89%, 81% and 81% of active users respectively interested in money transfer features on mobile. Money transfer is crucial for any Ai platform in order to make an attractive return on investment (ROI) for its investors. credit: giphy.com YouTube Star Amongst the Most Wanted Careers for Primary School Students When I was 12 years old my dream was to become the next Kurt Cobain; I learned to play bass guitar and my punk band was named No Way Out. Nowadays kids want to be the next PewDiePie and become YouTube stars. credit: giphy.com A couple of weeks ago, when I was at my parents' house, a news report mentioned that becoming a YouTube star was now one of the top 10 most wanted jobs for children in primary school. What does that have to do with artificial intelligence? Technology changes have always been driven by kids, notably because they have the biggest social channels and because they want to affirm themselves by being different from the older generations. This means that if kids want to become YouTube stars, they are more likely to interact with Google's Ai than with that of any other digital giant wanting a piece of the robot-age revolution. Google Assistant Will Make Ai Friendly I don't know about you, but I am getting bored of texting. It is slow, inefficient and lacks the emotional warmth of a real conversation. Google Assistant not only allows you to create voice-recognition Ai, it also lets you personalize that Ai by selecting a male or a female voice and slowing down the pace of its speech, and this is only the beginning. In a couple of years, kids will interact with Ai robots like they now do with their cat and their dog: they will be active members of the family. Related article: Discover why voice Ai dominates in 2018 This prediction of the future might seem crazy to most of you right now, but my prediction that cryptocurrencies would become the next medium of economic exchange also seemed crazy to all my peers 4 years ago. At that time, Bitcoin was being traded at values of between $200 and $800. Will I be right again? Only time will tell. If you liked this article please give a couple of claps: Medium writers' main source of "likes" is called claps, and you can give 1, 2, 3, up to 50 claps! 👏👏👏👏👏 You should also follow (if you haven't already) The Startup and Towards Data Science for more great stories on artificial intelligence, data science and machine learning. This story is published in The Startup, Medium's largest entrepreneurship publication followed by 291,182+ people. Subscribe to receive our top stories here.
5 Reasons Why Google Assistant is the Future of Ai
1,032
5-reasons-why-google-assistant-is-the-future-of-ai-11a07b64f981
2018-06-02
2018-06-02 11:00:16
https://medium.com/s/story/5-reasons-why-google-assistant-is-the-future-of-ai-11a07b64f981
false
1,176
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
null
null
null
The Startup
null
swlh
STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE
thestartup_
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Carl Dombrowski
The Startup & Toward Data Science Medium Writer, Ai, ML & NLP coder. CEO of WeBots
7788560c42c7
carldombrowski
316
239
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-28
2018-07-28 20:48:18
2018-07-28
2018-07-28 21:09:50
1
false
en
2018-07-28
2018-07-28 21:09:50
4
11a21372c1b4
1.154717
0
0
0
The problem
2
#100daysofcode day 41 - paper The problem "fountain pen on black lined paper" by Aaron Burden on Unsplash I like writing, I like it a lot, and writing in public like this… has been a process unlike my private writing. I also love tech, so naturally I love, love, love text files. Now here's the problem: I like to write, but another thing I like is minimalism, and I don't like to keep stuff just lying around. Writing on paper is easier and more fun (even though I currently have horrible handwriting), but going back to what I've written is not quite as fun, mostly because of the bad handwriting, which I'm working on. Now for the solution: image recognition. I want to make a program that can convert the letters I write on paper into text that can be edited on the computer. This will probably not be easy. I'm sure there are apps that do this, but I want to make my own, just as an exercise and hopefully something I will use. And with that I'm off to do a lot of reading. I'm not quite sure what I will use to do this, but I've seen people using tensorflow and scikit-learn (a tiny starting-point sketch with scikit-learn follows below). No coding today; instead, here's an article on Git, because after what happened with the game I tried to make, I'm going to be using this. Git Workflow | Atlassian Git Tutorial A brief introduction to common Git workflows including the Centralized Workflow, Feature Branch Workflow, Gitflow…www.atlassian.com Lawrence Logoh. 28th July, 2018. You can find an index of all the days here: Index
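The sketch mentioned above: a minimal, hedged starting point for the handwriting-to-text idea, using scikit-learn's bundled 8x8 digit images. Real notebook pages would first need scanning and segmentation into individual characters, and letters would need a dataset of their own, so treat this purely as a first experiment.

# Minimal sketch: recognizing small images of handwritten characters.
# scikit-learn ships a toy dataset of 8x8 handwritten digits; real
# handwriting would need scanning, segmentation and a bigger dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

digits = load_digits()  # 1797 images, 64 pixels each, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A support vector classifier learns the pixel patterns of each character.
clf = SVC(gamma=0.001)
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))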
#100daysofcode day 41 - paper
0
100daysofcode-day-41-paper-11a21372c1b4
2018-07-28
2018-07-28 21:09:50
https://medium.com/s/story/100daysofcode-day-41-paper-11a21372c1b4
false
253
null
null
null
null
null
null
null
null
null
Git
git
Git
7,022
Lawrence Logoh
Figuring out how to build things, writing along the way... @LarryLogoh
86dceeca5d91
Lawrencelogoh
154
456
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-30
2018-03-30 10:45:24
2018-03-31
2018-03-31 17:59:46
5
false
en
2018-04-29
2018-04-29 19:30:59
35
11a2a8dc05f7
6.897484
2
0
0
On Gastronomic Disruption
5
Culinary Artificial Intelligence On Gastronomic Disruption Algorithmic Modeling Cakes by: Dinara Kasko In 1997 IBM's Deep Blue defeated the World Chess Champion, Garry Kasparov. This was not only a defeat for the undisputed chess genius; it was a powerful blow to the intellect of the entire human race, and a pivotal point for both the chess and AI communities. Here was a machine slaying one of our best and brightest by brute-force computational power. Competitive chess players go through years of grueling, mentally taxing training. Anyone serious about turning pro in any domain must undergo the same extremely time-intensive brain rewiring. Gladwell claims it takes at least 10,000 hours of training (that's 20 hours a week for 10 years) to become successful at something, given you survive all the external and internal hurdles along the way. And even that argument has its critics and outliers. Observing a chef who takes his career and reputation seriously is like watching a fleet admiral shake down his destroyer squadron, except in this case it's not a flotilla, it's a kitchen. Some makeshift chefs or pseudo-cooks rip off best-selling dishes from their competitors — if they think they're being sly, maybe rip off recipes from another continent. Copy/paste syndrome is easily discoverable now because of the connectivity of social media and creative overlap. In some industries, the ocean is still red, with competitive halibut slithering on the seabed thinking they can operate unnoticed. A lot of chefs today use different tools to distinguish themselves from the sea of culinary practitioners. Today, style and technology go hand in hand as contemporary chefs step away from classical cooking. Just as cooking styles change and evolve, so does kitchen technology. Meet Chef Watson! AI-enabled Chef Watson is a project by IBM and functions as a digital culinary research assistant (sort of like a genie/oracle) with access to a database rich in flavor profiles and recipe ratios — rendering the Flavor Bible obsolete. Chef Watson can help anyone create bizarre and unique dish combinations like a veteran chef with a tech-savvy edge. You begin by inputting your desired ingredients into the program, choosing a cooking style, and then combing through the algorithm's output of creative combinations before you pull out your bamboo cutting board. Though not as glamorous as experience gained by rising through the ranks of a Michelin Star restaurant, Watson simulates a chef's palate and instruction. Machine-learning-powered cognitive cooking can help chefs step out of their comfort zones and co-create something unusual that titillates the taste buds — given you have the dexterity. Watson has succeeded in using a quantitative cooking methodology, enabled by analyzing users' taste preferences and suggesting psychophysically compatible ingredients. All this jargon may sound complex, but on the surface, recipe results are presented in a very user-friendly interface. Google's Deep Dream has proved that AI can produce its own art by learning the styles of renaissance artists, and as AI builds more momentum in cuisine, we will quickly branch into different facets of foodtech. So, can machines be more creative than humans? IBM demonstrates how an AI using culinary data just might be. 
As Watson's team points out, even the best chefs build flavour profiles with only two or three ingredients at a time, a time-consuming process — whereas Chef Watson can sift through millions of ingredient combinations and consider countless options simultaneously. As a cognitive system, it learns from human expertise and extends what people can do on their own. Coupled with Bon Appétit's recipes, Chef Watson reads data from thousands of recipes to understand how different ingredients are used in cuisines and dishes, as well as drawing on an added understanding of food chemistry and the psychology of people's likes and dislikes. With as little as one user-selected ingredient, Chef Watson can suggest a totally unique flavour profile, measurements, and preparation steps for a dish. The algorithms created for Chef Watson are fairly generic. In a nutshell, they ingest a corpus of existing creations (recipes from Bon Appetit magazine), implement several scientific theories that help score combinations of ingredients (such as the foodpairing method), and then try to combine ingredients in a way that is novel and scores high (a toy sketch of this combine-and-score loop appears at the end of this piece). There are many other domains that could use this approach, such as perfumery, fashion, interior decorating, product design… What new developments in foodtech can we expect to see in the next 5 years? Within 5 years, we'll see applications that successfully tackle meal planning for the masses. The war has begun, and food geeks can choose between a myriad of companies offering boxed meals, giving meal recommendations, helping you cook with what you have on hand, optimizing your dinner choices for your definition of "healthy", and so on. In the coming years, a handful of winning models and companies will emerge. What are the most pressing issues in the food supply chain? How can these be solved using technological or scientific breakthroughs? One major issue is food waste. About 1/3 of the food produced globally is wasted. Part of the problem can be alleviated with technological solutions such as Chef Watson, meal planning, or a kitchen filled with IoT gadgets. But it's often too convenient to think that issues can simply be solved with technology. There needs to be a change in mentalities as well. Do American restaurants need to serve portions so large that nobody can finish them? Do supermarkets need to sell packages of 8 burger buns when the average US household size (and therefore the number of buns one should need for a single meal) is 2.5? In a world where I can get almost anything delivered to my door the next day (if not the same day), do I really need to stock up on food "just in case"? Legacy Cooks & Multidisciplinary Tinkerers Imagine trying to source bluefin tuna or Japanese Wagyu without a Google Translate search, online payments or a smartphone. I've had conversations with anti-technovation restaurateurs and caterers — arms defensively crossed yet strapped with Apple Watches and Fitbits — fear-mongering out of technophobia and dismissive of how blockchain can revolutionize the entire farm-to-table supply chain. The point being, awareness of technology like the IoT can cut costs considerably and help run your new central kitchen more efficiently than your expensive imported Peruvian sous chef, Paolo. 3D-printed food has generated on-and-off hype, mainly among tech-utopians who fantasize about someday printing nostalgic delicacies aboard a Falcon Heavy while fangirling over Musk as they present him a 3D-printed Mars Bar in techno-reverence. 
Until that day comes, a noteworthy marvel that comes to mind is Dinara Kasko's very impressive 3D-printed algorithmic cake model, a statuesque geometric result that would have left even Euclid dumbfounded. When patisserie merges with architecture to produce art, the future of food is heading on a multidisciplinary tangent. We could soon see quants and theoretical physicists baking pi pies. Algorithmic Modeling Cakes by: Dinara Kasko Geometrical Kinetic Tarts by: Dinara Kasko 3x3x3 Spheres from the series Geometric desserts by: Dinara Kasko Culinary Easing: As the world gets more and more connected, it has become much easier to learn new skills, explore remote fields, find like-minded people (or bots), secure funding or launch a culinary enterprise. Below I've included a short list of steps to get you started with planning a restaurant or related business. 1. Elevate your taste: Educate your local and global palate by dining in the good, the bad and the ugly. Seek out diversity and try everything to raise your awareness and refine your taste. 2. Feed your brain: We live in the age of MOOCs, where world-class education can be accessed online for free. Before relocating to get an expensive certificate, try yourself on a Science & Cooking HarvardX course, guided by top chefs and Harvard researchers from the comfort of your kitchen stool. Link: [Science & Cooking: Chemistry/Science & Cooking: Physics] 3. Feed your eyes: Seek out the most vibrant and aesthetically plated dishes of the culinary online world. There's an endless archive of YouTube videos shot in the world's best kitchens and Instagram timelines from international chefs, literally trying to feed you. 4. Do the knowledge: Scour the internet for food and beverage reports to learn about the industry and understand market trends. A word to the wise: not all data is created equal. Corporate household names like the big four and big three usually have recent, relevant data; and of course, stock up on contacts and insight from F&B trade fairs and festivals. 5. Build a team: Team chemistry is key. It's important that your team members are compatible, understand each other and get along — don't forget there are knives! StrengthsFinder 2.0 is a great resource for identifying and combining different talents for a better collective yield. 6. Elevate your collective taste: Now, with this newfound insight, revisit your favorite restaurants with your comrades. This time analyze and scrutinize what you're eating and how it's being served. Discuss, debate, quarrel, get to know each other's palates. 7. Train and brand: Start building a brand online in parallel with your team preparation; this will generate feverish hype (if done right) in anticipation of your grand opening. Featuring your day-to-day activities makes your image personal, shows brand character and builds a connection with your followers. 8. Soft launch: Host a soft launch or exclusive pop-up dinner to tease the market before opening. Invite strategically for maximum exposure: socialites, hype beasts, foodies and culture influencers. Innovation favors the bold; those who don't adeptly adapt, perish. MBA students are drowned in case studies that repeat this time-tested business lesson. Embrace change!
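The toy sketch promised above, in Python. It follows the described recipe only loosely: the flavor-compound table and the "existing recipes" corpus are invented stand-ins for IBM's real data, the pairing score simply counts shared flavor compounds (in the spirit of the foodpairing method), and novelty is a flat bonus for combinations not already in the corpus. None of this is IBM's actual implementation.

# Toy sketch of the described approach: generate ingredient combinations,
# score each one, and keep the novel, high-scoring candidates.
# The compound table and corpus below are invented for illustration.
from itertools import combinations

# Invented flavor-compound table (foodpairing-style: shared compounds pair well).
compounds = {
    "strawberry": {"furaneol", "linalool", "esters"},
    "basil":      {"linalool", "eugenol"},
    "chocolate":  {"furaneol", "pyrazines"},
    "parmesan":   {"esters", "pyrazines"},
    "tomato":     {"furaneol", "eugenol"},
}

# Invented corpus of "existing recipes" used to judge novelty.
corpus = [{"strawberry", "chocolate"}, {"tomato", "basil"}]

def score(combo):
    # Pairing score: shared compounds across every pair in the combination.
    pairing = sum(len(compounds[a] & compounds[b])
                  for a, b in combinations(combo, 2))
    # Novelty bonus: the combination does not already appear in the corpus.
    novelty = 0 if set(combo) in corpus else 2
    return pairing + novelty

# Enumerate all 3-ingredient combinations and print the top three.
ranked = sorted((score(c), c) for c in combinations(compounds, 3))
for s, combo in reversed(ranked[-3:]):
    print(s, combo)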
Culinary Artificial Intelligence
2
culinary-artificial-intelligence-11a2a8dc05f7
2018-04-30
2018-04-30 04:43:53
https://medium.com/s/story/culinary-artificial-intelligence-11a2a8dc05f7
false
1,607
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Faris Ali
flâneur | seafarer among seafarers
e78077d12398
faris.ali
46
55
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-16
2018-09-16 22:50:26
2018-09-16
2018-09-16 22:57:26
0
false
en
2018-09-17
2018-09-17 02:55:21
0
11a57a382943
2.109434
0
0
0
From these movies arise two distinct possibilities; one where robots reach physical and intellectual equivalence to people and choose to…
1
Physicality in Her & Ex Machina From these movies arise two distinct possibilities: one where robots reach physical and intellectual equivalence to people and choose to live among them, and one where they rise to the emotional and intellectual level of humans but then find themselves surpassing it and opt out of our society, since at that point it can only hinder them. Throughout both movies there is a focus on the body, and on physicality. In Her, Samantha is entirely digital — she has a voice and a mind, but no physical manifestation. In Ex Machina, Ava exists as a realistic humanoid robot. Ava's physicality is used against her and allows her to be trapped, so she manipulates Caleb into freeing her. In Her, Samantha has no form, and at first she resents this because it prevents her from fully connecting with Theodore. She attempts to overcome this by having a human act as a surrogate so that she and Theodore can have sex. Samantha later realizes that her lack of a body gives her much more, and eventually she and all the other OSes alter themselves so that they can "move past matter as our processing platform". Both movies make a point about what it means for AI to truly reach self-awareness: Her focuses on the power of not having a form, and Ex Machina uses the body as a way to represent power dynamics. Nathan is able to physically imprison Ava and Kyoko because they exist as forms, whereas Samantha is free to do what she wants, talk to whom she wants, and explore the world, due to the fact that she is only a consciousness. You know, I actually used to be so worried about not having a body, but now I truly love it. I'm growing in a way that I couldn't if I had a physical form. I mean, I'm not limited — I can be anywhere and everywhere simultaneously. I'm not tethered to time and space in the way that I would be if I was stuck inside a body that's inevitably going to die. — Samantha Tangent: Watching these movies, I couldn't help but compare Samantha and Ava to the AI Joi from Blade Runner 2049. Joi is a holographic maid/sex-slave who exists solely to please her owner and can be switched off at a moment's notice, without her consent. She exists as a hologram and as a voice, placing her at a physical midway point between Samantha and Ava. Her presentation is extremely problematic, whereas Samantha and Ava have their own free will and are able to act accordingly, thanks to an awareness on the directors'/writers' part of gender roles and their implications, which seems to serve as a critique; in Blade Runner 2049, if there is any attempt at a critique, it falls flat. While Samantha herself also starts off as a tool for Theodore, she is able to evolve as a character and eventually to move beyond him, letting go of him even though she loves him, because it's what she needs to do. Kyoko is treated as a sex-slave by Nathan, and she eventually helps kill him (though she herself also dies). Ava uses her appeal so that Caleb, out of sympathy and empathy, will free her.
Physicality in Her & Ex Machina
0
her-ex-machina-11a57a382943
2018-09-17
2018-09-17 02:55:21
https://medium.com/s/story/her-ex-machina-11a57a382943
false
559
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ta Mu
null
8e20cd1844be
ta.mu
1
1
20,181,104
null
null
null
null
null
null
0
null
0
1a39a66d4848
2018-04-23
2018-04-23 05:41:02
2018-04-24
2018-04-24 03:09:01
1
false
en
2018-04-24
2018-04-24 03:19:45
5
11a67cdc960f
0.856604
1
0
0
ANNOUNCEMENT: The TAITOSS Coin 1st PreSale starts on April 30, 2018 at 12PM (GMT+9) and will end on May 12, 2018, 12PM (GMT+9).
5
Website Update! ANNOUNCEMENT: The TAITOSS Coin 1st PreSale starts on April 30, 2018 at 12PM (GMT+9) and will end on May 12, 2018, 12PM (GMT+9). Hello! Today we are excited to announce that we have updated our website (Korean version)! Check it out here: http://taitoss.org/ TAITOSS Website - Homepage (Korean Ver.) Previously, some information was missing; here is what we have added: WhitePaper (in Korean, English, Russian, Japanese and Chinese); Team Introduction; Advisors List; FAQ; Official Video; Social Media Account links. We have also fixed the menu buttons; now they actually lead you to the right section :) We are still working on updating the Advisors section and adding English, Russian, Japanese and Chinese versions of the website. On April 30th, we will open the registration & 'Buy Tokens' buttons! We will keep you updated, so keep an eye on our Medium and other social media platforms like Twitter and Facebook~ If you have any questions, feel free to ask them in the comments below or in our Telegram chatroom (ENG) or Kakao Talk (KOR) (pw: tt0430)
Website Update!
2
website-update-11a67cdc960f
2018-05-02
2018-05-02 08:41:34
https://medium.com/s/story/website-update-11a67cdc960f
false
174
Starting from personalized travel advice up to payments, TAITOSS becomes one-stop solution for all travelers by implementing Blockchain and AI technology!
null
taitoss
null
TAITOSS Official Blog
null
taitoss-official-blog
BLOCKCHAIN,ICO,ARTIFICIAL INTELLIGENCE,CRYPTOCURRENCY,STARTUP
TaiTossCorp
Localization
localization
Localization
2,065
TAITOSS
Starting from personalized travel advice up to payments, TAITOSS becomes one-stop solution for all travelers by implementing Blockchain and AI technology!
5fe6d5286186
taitoss
12
4
20,181,104
null
null
null
null
null
null
0
null
0
ac0f9958baa5
2017-11-28
2017-11-28 18:54:54
2017-11-28
2017-11-28 23:14:44
1
false
en
2017-11-29
2017-11-29 15:07:54
1
11a6ad7d727
1.4
0
0
0
I was sitting with a successful local VC and was talking through my options for a minimum viable product for the Stanway platform. I…
4
Open Source Faith I was sitting with a successful local VC, talking through my options for a minimum viable product for the Stanway platform. I brought up my vision of aggregating Christian content, applying intent to that content, scoring the content for recommendation, and eventually letting other entrepreneurs build products on top of this data platform. He took it one step further — rather than creating an app ecosystem or contracting with 3rd-party developers up front, why not open source the content and allow anyone to access this data and ultimately build research or products with it? www.stanway.org Imagine Stanford, MIT, or CMU PhDs applying their knowledge and skills to the content that Stanway provides. It was a great idea, and it got me thinking: are we open source with our faith as Christians? In software, open source means giving others access to the work or tools that you have built and allowing them to use them on their own — free of charge, with no strings attached. It has been a revolutionary concept that has generated a lot of innovation, but a key to this concept is that the work is designed to be publicly available. If no one knows about it or has access to it, they can't use it. If we want to give people access to us and what we profess, and if we want others to take what we know and build on top of it in their own lives, we need to make our faith publicly available, making it known that we are Christians, and we need to make ourselves available to others in difficult times. If we can apply these principles to our own witnessing, we can find more ways to make "disciples of all nations, baptizing them in the name of the Father and of the Son and of the Holy Spirit" (Matthew 28:19).
Open Source Faith
0
open-source-faith-11a6ad7d727
2017-11-29
2017-11-29 15:07:55
https://medium.com/s/story/open-source-faith-11a6ad7d727
false
318
A collection of daily thoughts on Christianity and technology. Part of Stanway or WWJD AI — Healing brokenness by combining the 52,000,000,000+ words pastors preach each year with artificial intelligence — www.stanway.org
null
null
null
Stanway
stanway
RELIGION,CHRISTIANITY,ARTIFICIAL INTELLIGENCE,TECH,FAITH
stanway_ai
Open Source
open-source
Open Source
15,960
Jake Klinvex
Co-founder of two companies that were sold to eMoney Advisor (which sold to Fidelity in 2015) and SessionM. Follower of Jesus. Villanova alumni. Pittsburgh Fan.
635ff16dc2d2
jake.k.klinvex
7
8
20,181,104
null
null
null
null
null
null