Dataset columns (name — dtype — value range / string length / distinct values):

audioVersionDurationSec — float64 — 0 to 3.27k
codeBlock — string — length 3 to 77.5k
codeBlockCount — float64 — 0 to 389
collectionId — string — length 9 to 12
createdDate — string — 741 distinct values
createdDatetime — string — length 19
firstPublishedDate — string — 610 distinct values
firstPublishedDatetime — string — length 19
imageCount — float64 — 0 to 263
isSubscriptionLocked — bool — 2 classes
language — string — 52 distinct values
latestPublishedDate — string — 577 distinct values
latestPublishedDatetime — string — length 19
linksCount — float64 — 0 to 1.18k
postId — string — length 8 to 12
readingTime — float64 — 0 to 99.6
recommends — float64 — 0 to 42.3k
responsesCreatedCount — float64 — 0 to 3.08k
socialRecommendsCount — float64 — 0 to 3
subTitle — string — length 1 to 141
tagsCount — float64 — 1 to 6
text — string — length 1 to 145k
title — string — length 1 to 200
totalClapCount — float64 — 0 to 292k
uniqueSlug — string — length 12 to 119
updatedDate — string — 431 distinct values
updatedDatetime — string — length 19
url — string — length 32 to 829
vote — bool — 2 classes
wordCount — float64 — 0 to 25k
publicationdescription — string — length 1 to 280
publicationdomain — string — length 6 to 35
publicationfacebookPageName — string — length 2 to 46
publicationfollowerCount — float64
publicationname — string — length 4 to 139
publicationpublicEmail — string — length 8 to 47
publicationslug — string — length 3 to 50
publicationtags — string — length 2 to 116
publicationtwitterUsername — string — length 1 to 15
tag_name — string — length 1 to 25
slug — string — length 1 to 25
name — string — length 1 to 25
postCount — float64 — 0 to 332k
author — string — length 1 to 50
bio — string — length 1 to 185
userId — string — length 8 to 12
userName — string — length 2 to 30
usersFollowedByCount — float64 — 0 to 334k
usersFollowedCount — float64 — 0 to 85.9k
scrappedDate — float64 — 20.2M to 20.2M
claps — string — 163 distinct values
reading_time — float64 — 2 to 31
link — string — 230 distinct values
authors — string — length 2 to 392
timestamp — string — length 19 to 32
tags — string — length 6 to 263
0
null
0
null
2018-05-21
2018-05-21 08:55:58
2018-05-21
2018-05-21 09:01:34
6
false
en
2018-05-21
2018-05-21 09:01:34
9
1478b96dd693
4.742453
15
0
1
Have you ever wondered what impact everyday news might have on the stock market? In this tutorial, we are going to explore and build a model that…
5
Simple Stock Sentiment Analysis with news data in Keras

Have you ever wondered what impact everyday news might have on the stock market? In this tutorial, we are going to explore and build a model that reads the top 25 voted world news items from Reddit users and predicts whether the Dow Jones will go up or down for a given day. After reading this post, you will learn: how to pre-process text data for a deep learning sequence model; how to use pre-trained GloVe embedding vectors to initialize a Keras Embedding layer; and how to build a GRU model that can process word sequences and take word order into account. Now let's get started — read till the end, since there will be a secret bonus.

Text data pre-processing

For the input text, we are going to concatenate all 25 news items into one long string for each day. After that we are going to convert all sentences to lower case and remove characters such as numbers and punctuation that cannot be represented by the GloVe embeddings later. The next step is to convert all your training sentences into lists of indices, then zero-pad all those lists so that their length is the same. It is helpful to visualize the length distribution across all input samples before deciding the maximum sequence length. Keep in mind that the longer the maximum length we pick, the longer it will take to train the model, so instead of choosing the longest sequence length in our dataset, which is around 700, we are going to pick 500 as a tradeoff that covers the majority of the text across all samples while keeping training time relatively short.

The embedding layer

In Keras, the embedding matrix is represented as a "layer" and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pre-trained embedding. In this part, you will learn how to create an Embedding layer in Keras and initialize it with the 50-dimensional GloVe vectors. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. I will show you how Keras allows you to set whether the embedding is trainable or not. The Embedding() layer takes an integer matrix of size (batch size, max input length) as input; this corresponds to sentences converted into lists of indices (integers), as shown in the figure below. The following function handles the first step of converting sentence strings to an array of indices. The word-to-index mapping is taken from the GloVe embedding file so we can seamlessly convert indices to word vectors later. After that, we can implement the pre-trained embedding layer like so: initialize the embedding matrix as a numpy array of zeros with the correct shape (vocab_len, dimension of word vectors); fill the embedding matrix with all the word embeddings; define the Keras embedding layer and make it non-trainable by setting trainable to False; and set the weights of the embedding layer to the embedding matrix. Let's have a quick check of the embedding layer by asking for the vector representation of the word "cat". The result is a 50-dimensional array. You can further explore the word vectors and measure similarity using cosine similarity, or solve word analogy problems such as: Man is to Woman as King is to __.

Build and evaluate the model

The task for the model is to take the news string sequence and make a binary classification of whether the Dow Jones close value will rise or fall compared to the previous close value. It outputs "1" if the value rises or stays the same, and "0" when the value decreases.
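As a minimal sketch of the pre-processing and frozen GloVe embedding layer described above (the file path, helper names, and the simple whitespace tokenizer are assumptions for illustration, not the post's actual code):

```python
import numpy as np
from tensorflow.keras.layers import Embedding
from tensorflow.keras.initializers import Constant

def load_glove(path="glove.6B.50d.txt"):
    """Build word->index and word->vector maps from a GloVe text file (path is an assumption)."""
    word_to_index, word_to_vec_map = {}, {}
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            parts = line.rstrip().split()
            word_to_index[parts[0]] = i + 1                     # reserve index 0 for padding
            word_to_vec_map[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return word_to_index, word_to_vec_map

def sentences_to_indices(sentences, word_to_index, max_len=500):
    """Convert sentence strings into zero-padded arrays of word indices (naive whitespace split)."""
    X = np.zeros((len(sentences), max_len), dtype=np.int32)
    for i, sentence in enumerate(sentences):
        for j, word in enumerate(sentence.lower().split()[:max_len]):
            X[i, j] = word_to_index.get(word, 0)                # unknown words fall back to padding
    return X

def pretrained_embedding_layer(word_to_index, word_to_vec_map, dim=50):
    """Keras Embedding layer initialized with GloVe vectors and frozen (trainable=False)."""
    vocab_len = len(word_to_index) + 1                          # +1 for the padding index
    emb_matrix = np.zeros((vocab_len, dim))
    for word, idx in word_to_index.items():
        emb_matrix[idx] = word_to_vec_map[word]
    return Embedding(vocab_len, dim,
                     embeddings_initializer=Constant(emb_matrix),
                     trainable=False)
```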
We are building a simple model that contains two stacked GRU layers after the pre-trained embedding layer. A Dense layer generates the final output with softmax activation. GRU is a type of recurrent network that processes sequences and takes their order into account; it is similar to the LSTM in functionality and performance but less computationally expensive to train. Next, we can train and evaluate the model. It is also helpful to generate the ROC curve for our binary classifier to assess its performance visually. Our model is about 2.8% better than a random guess of the market trend. For more information about ROC and AUC, you can read my other blog — Simple guide on how to generate ROC plot for Keras classifier.

Conclusion and further thoughts

In this post, we introduced a quick and simple way to build a Keras model with an Embedding layer initialized with pre-trained GloVe embeddings. Some things you can try after reading this post: make the Embedding layer weights trainable, train the model from scratch, then compare the results. Increase the maximum sequence length and see how that might impact model performance and training time. Incorporate other inputs to form a multi-input Keras model, since other factors might correlate with stock index fluctuation — for example, the MACD (Moving Average Convergence/Divergence oscillator) and the Momentum indicator are worth considering. To have multiple inputs, you can use the Keras functional API. Any ideas to improve the model? Comment and share your thoughts. You can find the full source code and training data here in my GitHub repo.

Bonus for investors

If you are new to the whole investment world, as I was years ago, you may wonder where to start — preferably investing for free with zero commissions. By learning how to trade stocks for free, you'll not only save money, but your investments will potentially compound at a faster rate. Robinhood, one of the best investing apps, does just that: whether you are buying only one share or 100, there are no commissions. It was built from the ground up to be as efficient as possible by cutting out the fat and passing the savings on to the customers. Join Robinhood, and we'll both get a stock like Apple, Ford, or Sprint for free. Make sure you use my shared link. Originally published at www.dlology.com.
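For reference, a sketch of the two stacked GRU layers and the ROC evaluation described in this post, assuming the frozen embedding layer and padded index arrays from the previous sketch (the layer sizes, dropout rate, and training settings are illustrative assumptions, not taken from the post):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense, Dropout
from sklearn.metrics import roc_curve, auc

def build_model(embedding_layer):
    model = Sequential([
        embedding_layer,                      # frozen GloVe embeddings
        GRU(128, return_sequences=True),      # first GRU returns the full sequence
        Dropout(0.5),
        GRU(128),                             # second GRU returns only the final state
        Dropout(0.5),
        Dense(2, activation="softmax"),       # class 1 = rise/flat, class 0 = fall
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Illustrative flow (X_* are padded index arrays, y_* are 0/1 labels):
# model = build_model(pretrained_embedding_layer(word_to_index, word_to_vec_map))
# model.fit(X_train, y_train, validation_split=0.1, epochs=10, batch_size=32)
# probs = model.predict(X_test)[:, 1]           # probability of the "rise" class
# fpr, tpr, _ = roc_curve(y_test, probs)
# print("AUC:", auc(fpr, tpr))
```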
Simple Stock Sentiment Analysis with news data in Keras
64
simple-stock-sentiment-analysis-with-news-data-in-keras-1478b96dd693
2018-06-20
2018-06-20 18:44:25
https://medium.com/s/story/simple-stock-sentiment-analysis-with-news-data-in-keras-1478b96dd693
false
1,005
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Chengwei Zhang
Programmer and maker. Love to write deep learning articles.| Website: https://DLology.com | GitHub: https://github.com/Tony607
dc7e7f0185be
chengweizhang2012
416
19
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-08
2018-06-08 03:22:33
2018-06-11
2018-06-11 04:03:15
9
false
en
2018-06-11
2018-06-11 17:02:54
6
1478c27e65e
5.69434
3
0
0
Forking a DSX project in GitHub creates a greater separation between users and allows for more flexibility in merging assets. A greater…
3
DSX, GitHub and forking

Forking a DSX project in GitHub creates a greater separation between users and allows for more flexibility in merging assets. A greater separation allows DSX users to commit to their linked GitHub repository frequently without immediately impacting other users of the project. Assets are merged between users explicitly using GitHub pull requests, and this can only be done when and if the owner(s) of the repository agree. Resolving merge conflicts can be performed outside of DSX, either on GitHub or using any of the available git tools on your local workstation. This is a follow-up to my post on DSX and GitHub, where I discussed how to use GitHub as a code repository for DSX. In this post I'll be discussing how you can set up and use GitHub forking with your DSX projects.

Why forking?

Forking creates a level of separation between projects while still maintaining a link to allow for updates back and forth. Suppose there is a DSX demo, maintained by a colleague, that you would like to extend for your own DSX presentation. However, while you are working on your extensions, your colleague may need to present the original demo, so you would like to avoid pushing your day-to-day changes to the original repository. This is a common situation for GitHub repositories and the reason for the concept of forking a repository. Instead of linking the original GitHub repository to your DSX project, you first fork the repository on GitHub and then link the forked repository to your DSX project. In GitHub, create a forked repository by selecting the Fork button.

How to fork

In GitHub, navigate to the 'original' project you would like to fork and select the 'Fork' button. Change the name of the repository (Settings -> Repository name); make sure it is different from the original repository, otherwise in DSX you will not be able to have a project based on both the fork and the original repository in your DSX user account. Copy the URL. In DSX, create a project from the GitHub fork using the URL. Now you can start developing your own features in the project and push these changes to your forked repository on GitHub, just like with any other GitHub-based DSX project. If this is all you want to do, then you are all set. However, forking becomes interesting if: your colleague has updated the original repository — perhaps they have refined some notebook or dataset — and you would like to update your project with these changes; or you have added a feature to the demo that would be valuable as an improvement to the original demo.

Use-case 1: Update your forked repository from the original

There are 2 ways to do this: the easy way (via GitHub) or the hard way (via your local workstation).

The easy way

The 'easy' way is via a pull request in GitHub: Go to the original repository. Create a pull request: Pull requests -> New Pull Request. Use as base fork your forked repository, and as head fork the original repository. (For both, leave the branch at master, assuming you haven't introduced other branches in the project.) GitHub will display the commits that were made to the original repository since you last synced. And most importantly, it will test whether the changes can be merged automatically. If so, you can create the pull request. From your forked repository, as the owner, accept the pull request. If this worked, you are done with use-case 1: you have updated your fork with the latest changes from the original repository. However, if there are merge conflicts, i.e. merges
that git cannot do automatically, you will need to resolve those merge conflicts, and that cannot be done from within GitHub. In GitHub, create a pull request to merge changes from the original repository into the forked repository. The compare screen will show the changes, whether they can be merged automatically, and thus whether this can be completed from GitHub. If this merges without conflict, you're done. In GitHub, if the pull request compare shows a merge conflict, cancel the pull request and resolve the merge conflict using your local workstation.

The hard way

The 'hard' way is via your local workstation. Before you can start resolving merge conflicts, you need to make sure that: you have cloned the forked repository to your local workstation; and you have added the original GitHub repository as a remote named 'upstream' (see also configuring-a-remote-for-a-fork, are-git-forks-actually-git-clones and GitHub Forking). In GitGui: 'Remote -> Add..', use 'upstream' as the name and the SSH (.git) URL as the Location. In the forked GitHub workflow, your local repository relates to two remote repositories: the regular forked repository and the original 'upstream' repository. In GitGui, add the remote original repository as the 'upstream'. (I like using GitGui on Windows, so this tutorial shows screenshots of how to do the steps using GitGui. But the same things can be done using the Git command line or other Git clients.) In order to do the merge: Update the forked repository via the usual fetch/merge process. (This simply updates the copy of the fork on your local workstation and should not cause conflicts.) Fetch the updates from the original (i.e. 'upstream') repository — in GitGui: Remote -> Fetch From -> upstream. Initiate a local merge — in GitGui: Merge -> Local Merge -> Tracking Branch -> upstream/master. GitGui will now show a list of merge conflicts that you have to resolve. The easiest way to resolve merge conflicts in GitGui is to right-click in the top-right pane with the text of the file and select which version you want to keep: either the local (i.e. the version in your fork) or the remote (i.e. the version in the original repository). If this doesn't work, you will need to manually edit the file and remove the conflicts with your favorite text editor. (For notebooks, there are dedicated tools that can help resolve conflicts, for instance nbdime, but that is a topic for another blog post. For a deeper discussion of the challenges of using git with notebooks, see also making-git-and-jupyter-notebooks-play-nice.) After you have resolved the conflicts, you'll need to commit the changes into your local workstation's version of the forked repository. Push the changes from your local workstation to the GitHub forked repository. From your forked DSX project, pull the changes from the GitHub fork. In GitGui, fetch the upstream remote. In GitGui, do a local merge from the remote upstream repository. In GitGui, the merge will fail due to the expected merge conflict. In GitGui, if possible, resolve the conflict by selecting either the remote or local version of the file; if not, use a text editor to resolve the conflicts in the file.

Use-case 2: Update the original repository from your forked repository

First, always do the above use-case: update your fork from the original repository. In GitHub, from your forked repository, create a pull request: Pull requests -> New Pull Request.
Use as base fork the upstream/original repository, and as head fork your forked repository. If you have done step 1 properly, you should not see conflicts and you should be able to create the pull request. If not, cancel the pull request, complete the previous use-case and resolve all conflicts. The owner of the original repository will receive a notification that a pull request was created. The owner then reviews the pull request in GitHub and can accept it to merge the changes.

Summary

In summary, forking with DSX projects makes sense if: you would prefer a certain separation between the original and forked repositories, such that the individual DSX projects can sync with their remote GitHub repositories without immediately interfering with other collaborators on the same original project; or you require fine-grained control over resolving merge conflicts, allowing you to use a variety of tools on your local workstation, like the Git command line, GitGui, text editors and/or other 3rd-party tools.
DSX, GitHub and forking
3
dsx-github-and-forking-1478c27e65e
2018-06-19
2018-06-19 17:28:30
https://medium.com/s/story/dsx-github-and-forking-1478c27e65e
false
1,191
null
null
null
null
null
null
null
null
null
Github
github
Github
7,846
Victor Terpstra
Senior Data Scientist — Prescriptive Analytics, IBM Data Science Elite Team. The opinions expressed are my own and don’t necessarily represent those of IBM.
9fade287b3bb
vjterpstracom
10
1
20,181,104
null
null
null
null
null
null
0
null
0
9e5f55ca0eff
2018-06-01
2018-06-01 13:04:31
2018-06-01
2018-06-01 13:07:13
3
false
en
2018-06-01
2018-06-01 13:07:13
3
147ab949b0af
2.500943
0
0
0
We sat down in April 2018 with Spoon Cereals, a challenger brand within the breakfast category. Spoon are on a mission to make breakfast…
5
THE IMPORTANCE OF STANDING OUT ON-SHELF We sat down in April 2018 with Spoon Cereals, a challenger brand within the breakfast category. Spoon are on a mission to make breakfast better, and their flavoursome and healthy (but certainly not boring) granolas and mueslis have been developed over the past few years including a successful appearance on Dragon’s Den. Their products now sit in Waitrose and Wholefoods, amongst other retailers not beginning with “W”. Their biggest issue now: “We’ve got a great product that people love when they try it… how can we get more people picking up our brand?” Sampling solves some of this issue but it’s too localised… you can’t commit to sampling in every store to reach every potential shopper as it’s too labour intensive (read: expensive). Then, despite being localised to a store, it’s completely untargeted. You will end up with people trying the product who had no intention of ever buying from your category, let alone your product (though maybe this could be offset by the one in a thousand chance of converting a new-to-category shopper). Spoon wanted to see if changing packaging would help. It’s not enough just to ask customers if they like a pack design. How does behaviour change when shoppers are faced with a purchasing decision at shelf with different packaging? If behaviour does change and make a positive impact for the brand, is it worth changing the packaging? What’s the ROI? Even if it’s worth us doing it, will it be launched by our retailers if we can’t prove that it will benefit the category overall? We could access these kind of results through a traditional in-store trial, assuming we can get agreement from a retailer, enough stores to participate and be compliant, enough stores to act as control stores, enough people to implement it, a spare 3 months to monitor the trial and a cool £250,000 to conduct all of the above. Or we could run a Virtual Store Trial. We built a virtual environment of the Granola category as an example of a Waitrose store with three different scenarios. Control — How the Spoon brand currently looks Scenario 1 — A change in Spoon branding Scenario 2 — An alternative change in Spoon branding In under a week, we ran a Virtual Store Trial with 1,500 shoppers (500 per scenario) to measure the impact of changing the packaging compared to what currently exists in store. The best of the new packaging options resulted in: 51% increase in sales for Spoon 2% increase in sales to the category Spoon’s on-share neighbours lose value share as shoppers are attracted by Spoon’s new packaging Unsurprisingly, Spoon Cereals are changing their packaging to this option, which we’re looking forward to seeing hit stores in August. They said: “The results from VST more than justified the investment delivering an outstanding ROI, creating an excellent category argument and an uplift in sales.” For more information and to understand how we can test changes to your brand or category, please contact Nick Theodore at [email protected] or visit www.storetrials.com.
THE IMPORTANCE OF STANDING OUT ON-SHELF
0
the-importance-of-standing-out-on-shelf-147ab949b0af
2018-06-01
2018-06-01 13:07:14
https://medium.com/s/story/the-importance-of-standing-out-on-shelf-147ab949b0af
false
517
Stories about bringing retail trials into the 21st Century.
null
null
null
Store Trials
store-trials
CASE STUDY,ANALYSIS,RETAIL,RETAIL INDUSTRY
storetrials
Insights
insights
Insights
4,430
Nick Theodore
null
49b9b63b09d2
nickjtheodore
5
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-13
2018-03-13 16:21:49
2018-03-13
2018-03-13 17:08:47
5
false
en
2018-03-13
2018-03-13 17:09:52
4
147baa33bee7
4.618239
2
0
0
Part of my series on building a bot to manage my fantasy baseball team
5
Dawn of a New ERA: Predicting Starting Pitcher Performance Part of my series on building a bot to manage my fantasy baseball team As a fantasy baseball novice, one big question I had was the extent to which I could trust “predictions.” Fortunately, at the start of last years’ draft I saved as many sources of predictions as possible. The intent was to check the reliability of predictions and see if other factors are more useful indicators. In particular, I focused on starting pitchers with an ADP in the top 1000. Bludgeon yourself in the head with a blunt object and it might resemble an American flag Along the x-axis are factors you might know going into the draft, primarily pre-season projections averaged from as many sources as possible including ESPN, Yahoo!/Oath, Razzball, Steamer. Along the y-axis are the actual season results. They are bucketed into season-long counting stats (ie Games, Games Started, Quality Starts…), batted ball stats (Ground Ball %, Fly Ball %…), pitching averages (Earned Run Average…), per-game averages (Quality Starts per Game Started, Innings per Game…), and miscellany (Average Draft Position, handedness and age for pre-season, Wins Above Replacement and Fantasy Value for post-season). Counting Stats Projections are ~50% Accurate The diagonal allows for straightforward comparison of the accuracy of pre-season predictions and post-season results. The boxed stats along the diagonal highlight the category stats in our league (IP, W, L, K, ERA, WHIP). The blue wave in the upper left demonstrates the counting stats are about 50% reliable. The strongest correlation was the Quality Starts predictions, which was almost 60% correlated. Wins and Losses fell close to but just under this mark. If you include the undrafted pitchers (ADP ≥ 999), these counting stats rise to nearly 80% correlation. The reason is because most of the belly-itchers do in fact play very few games. Breakouts are rare, and you can get more value hunting in the lower ADP range. Generally speaking, my interpretation is that most counting stats are a function of time on the mound. Consider how the post-season counting stats correlate against other post-season counting stats: Note: This blue wave is not a 2018 election map Takeaway 2: Pitching Averages are a Mixed Bag Pitching averages were a bit trickier. Our league uses ERA and WHIP, which were only 31% and 41% correlated with their actual value. This is not terrible, but perhaps not worth staking your season on these stats based on projections alone. Other pitching averages also fared poorly. The FIP projections were 39% correlated with the actual results, although it ended up predicting ERA results marginally better than the ERA projections. BABIP projections were just 20% accurate and were essentially useless at predicting anything except, curiously, ground ball rates. On the bright side, SIERA (Skill-Interactive ERA) projections looked quite good. The SIERA projections held up not just in terms of predicting the actual outcome of the SIERA (50% correlation), but also proved a far better predictor of ERA than even the ERA projections (SIERA projections were 39% correlated with actual ERA, versus 31% correlation with ERA projections). Not all average stats ended up so poorly correlated. The ground-ball and fly-ball rates were the most accurate predictions across the board, coming in at 69% and 72% respectively. Unfortunately, these stats proved essentially useless at predicting other stats. 
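As a sketch of how the projection-vs-results correlations discussed above could be computed, assuming a hypothetical DataFrame with paired `proj_`/`actual_` columns (the column names and sample values are made up for illustration, not the author's data):

```python
import pandas as pd

# Hypothetical layout: one row per pitcher, pre-season projections next to post-season results.
df = pd.DataFrame({
    "proj_QS":    [22, 18, 10, 25, 5],
    "actual_QS":  [20, 12, 14, 23, 3],
    "proj_ERA":   [3.4, 3.9, 4.5, 3.1, 5.0],
    "actual_ERA": [3.8, 4.6, 4.1, 2.9, 5.6],
})

for stat in ["QS", "ERA"]:
    r = df[f"proj_{stat}"].corr(df[f"actual_{stat}"])   # Pearson correlation
    print(f"{stat}: projection vs actual correlation = {r:.2f}")
```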
Strikeouts per nine also proved 58% accurate, and also showed some correlation with useful stats like total strikeouts (23%), ERA (29%), and WHIP (28%). Not terribly strong indicators, but aspiring stats geeks should give it stronger consideration. Takeaway 3: Age Is Something But a Number I had expected the numbers to favor younger pitchers, but veteran status is actually a slight benefit. The pitching averages showed little correlation with age, but there was in fact a slight correlation (20–30%) between age and the counting stats. The explanation is primarily that older pitchers face more batters: Mr. Colon, well north of 40, would probably tip the scales on an age vs weight scatterplot as well This isn’t terribly actionable, although somebody could do an interesting follow-up analyses on a few trends. To the naked eye it looks like the progression is that <24 year olds get tested, by 26 they get locked in, and those who survive into their 30s are generally reliable workhorses. Takeaway 4: ADP Proves the Wisdom of the Crowds Almost across the board, ADP is the best predictor of nearly every major stat. Draft position was the single best predictor of key fantasy stats including WHIP (54%), ERA (55%), Wins (57%), Quality Starts (60%), Total Strikeouts (66%). Looking at post-season indicators, the draft position showed a whopping 81% correlation with Wins Over Replacement and a 75% correlation with Fantasy $ Value. In other words, if you take nothing away from this article, it’s that you will likely be better served by simply autodrafting your starting pitchers than by any other strategy. Bonus: Modeling ERA Given the poor performance of ERA calculations, I did a little regression work to see if I could come up with a better formula for modeling ERA. After a few scripts and some trial and error, here’s the best formula I could find: The results are pretty simple to interpret. The Innings per Game is a corrector for SPARPS (Starting Pitchers as Relief Pitchers), like the Brad Peacocks of the world finished the season with a 3.0 ERA on an average of 3.8 IP / G. It wasn’t feasible to try to sort out relief innings from starting innings for this exercise, but it wasn’t terribly necessary given the low draft position of such pitchers. Otherwise, the rule of thumb is a starting pitcher who chews up 6 innings will have a baseline ERA of 3.3. Then roughly add 0.3 for every hundred draft positions — so for a pitcher drafted around the hundred slot you’d expect an ERA of 3.6, whereas a pitcher drafted 300 should generate closer to 4.2 Checking this quick and dirty equation against 22 holdout pitchers looks good. The sum of the square of the residuals for our formula landed a tidy 14.37, whereas the residuals for the experts’ predictions were an ungainly 22.38. The experts nailed Mr. Kershaw (bottom left), but never gave much higher than a 4.0 ERA to pitchers who would skew much higher.
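A tiny sketch of the rule-of-thumb version of the ERA model described above (a roughly 6-inning starter baselines around a 3.3 ERA, plus about 0.3 per hundred draft positions); the exact fitted formula, including the innings-per-game corrector, is not given in the post, so this only reproduces the stated rule of thumb:

```python
def rough_era(adp: float) -> float:
    """Rule-of-thumb ERA estimate for a starter who pitches roughly 6 innings per start."""
    return 3.3 + 0.3 * (adp / 100.0)

for adp in (100, 200, 300):
    print(adp, round(rough_era(adp), 2))   # 100 -> 3.6, 300 -> 4.2, matching the post's examples
```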
Dawn of a New ERA: Predicting Starting Pitcher Performance
2
in-the-big-innings-predicting-starting-pitcher-performance-147baa33bee7
2018-03-19
2018-03-19 16:31:25
https://medium.com/s/story/in-the-big-innings-predicting-starting-pitcher-performance-147baa33bee7
false
1,003
null
null
null
null
null
null
null
null
null
Sports
sports
Sports
129,960
Gerrit Hall
@RezScore, @SandHillX, @MolocoVAN, @GoBithub
52eb7b771331
gerrithall
180
195
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-14
2017-11-14 12:01:21
2017-12-04
2017-12-04 12:24:18
5
false
en
2018-05-26
2018-05-26 09:40:30
3
147baa5b094a
6.38805
10
0
0
By Yoni Goldwasser and Elad Walach
5
AI in healthcare: medicus ex machina By Yoni Goldwasser and Elad Walach Artificial Intelligence (AI) has emerged as a solution to some of the most important problems facing healthcare today. There is clearly a lot of value in applying AI to medicine, but who will reap the value created? And what does this mean for entrepreneurs and investors? This article is a first in a series of 3. Here, we examine the progress AI has made into healthcare thus far. In Part 2, we will look at how startups can position themselves strategically in this space. In Part 3, we will discuss what this means for investors, partners, and acquirers. Healthcare challenges that digital health technologies can help address broadly fall into two categories: patient centricity, and new care models. Patient centricity refers to a shift in how practitioners think of patients and their care. This means focusing on practicing health care, not sick care: both preventative care and better, faster, cheaper diagnosis and prognosis. Secondly, it includes personalized healthcare, which sees patients not as averages or consensus sequences, but as individuals who may respond very differently to care. New care models are driven by the unsustainability of current healthcare models. Western economies are reaching their capacity to invest more resources in healthcare, which means we need to do more with less. Specifically, the growing strain on medical personnel — arguably our most scarce resource — is the result of two factors: aging of the Baby Boom generation and overflow of information in modern healthcare systems. Firstly, Boomers represent not only an older, sicker population; they also include thousands of retiring physicians and nurses, leading to a shortage in caregivers. Secondly, medical staff is inundated with new information about diseases and best practices, clinical data, and ballooning data management tasks. Practitioners are in desperate need of tools to help them focus on what they do best and to reduce the burdens on them, so they can provide patients with better care. Medicus ex machina: AI to the rescue? AI-based systems can help solve both the patient centricity problem by mining insights from reams of patient data in a way never before possible; and by shouldering an increasing part of both clinical and non-clinical work. The degree of progress made in integrating AI systems into clinical practice varies across medical fields, driven by the balance between the driving force of new developments in AI technology, on the one hand; and on the other hand, user (physician/provider, and to some degree, payer) and regulatory resistance due either to conservatism, risk aversion, or both. Resistance to the adoption of AI technologies is firstly due to the inherent high adoption barriers that slow down the adoption of any new technology in the medical field. These barriers are both informal barriers erected by the medical establishment and regulatory ones set up by agencies (notably, the FDA), to prevent adoption of technologies that risk patients unnecessarily. We expect AI-based systems being integrated into healthcare in 3 conceptual steps. Assistive AI Assistive AI systems At the low end of what can be considered AI, we find relatively simple, rule-based systems. A classic example is an Excel sheet. Note that in such systems, the perception, decision, and action steps are external and performed by users or other devices. 
The model does not really change much in response to performance unless a programmer changes the rules manually. Assistive AI also includes more complex systems that require higher levels of abstraction, for example, a system that identifies types of flowers in digital images. In medicine, a device that can make simple diagnoses based on a set of lab tests is a good example of Assistive AI. Israeli startup MeMed is building a simple diagnostic device that determines whether a patient is suffering from a bacterial or viral infection. The goal is to guide better treatment and reduce the development of MDR bacteria. Augmentative AI Augmentative systems augment the abstraction models they are initially programmed with (in some cases, in ways that are enigmatic even to the system designers). These systems may take a part in the decision making process (where they are referred to as diagnostic devices, as opposed to decision support systems). However, they keep the “Human in the Loop”: a person is ultimately responsible for translating the decision into action. Naturally, a user can override the system recommendation. Feedback regarding the user-made corrections is fed back into subsequent rounds of learning. Hence, the model is continuously improved. Most imaging-based AI systems fall into this category. Aidoc for example, uses AI to augment the radiology workflow. Aidoc’s solution detects visual abnormalities in CT scans and thus helps the radiologist focus on the areas that matter the most first. Augmentative AI systems Surrogate AI Surrogate systems attempt to “close the loop” and exclude humans from the cycle. Often such systems integrate the perception and action functions. As a result, the machine can act independently, and learn from its mistakes (and mistakes of other machines). A fully self-driving car is an example of this. In medicine, digital bots such as those proposed by Woebot and Babylon Health aim to replace practitioners (doctors, nurses, assistants, technicians), at least in some situations. Surrogate AI systems Where do we stand today? Natural language processing-based applications are only beginning to emerge. The fact that these AI applications are lagging behind the others testifies to the complexity of language. Good examples include the multiple bot-based chat agents that attempt to take over certain provider-patient interactions. EHR (Electronic Health Records) “data crunching” and genomics were an early target for Big Data applications. They are more amenable to machine learning, thanks to the availability of structured data. These applications rely on machine learning techniques that mine mountains of data and find patterns that would be invisible to the human observer. Imaging-based diagnosis has received a huge push thanks to Deep Learning advances. Constructs such as convolutional neural nets (CNNs), have made it possible for machines to perform imaging-based diagnoses at accuracy rates similar to those of radiologists, albeit thus far in narrow applications (e.g., identify hemorrhagic stroke from CT images). Speech recognition is a low-level abstraction application of AI, but one that has been surprisingly elusive. However, presently, it reaches near-human capabilities (under ideal conditions). It’s very likely that in the next couple of years speech recognition will become a purely machine-based, “human-free” application, allowing, for example, free dictation. The work of tech giants in this area — notably Apple, Google, and Amazon — makes this ever more likely. 
Progress of different AI technologies in healthcare We expect all applications to eventually reach the augmentative and eventually, the surrogate stage. Conceptually, medicine is a complex field that feels very fuzzy, but more and more applications are yielding to AI, and given enough data and progress in development, most functions performed by practitioners today should be executable by machines down the road. In fact, the key element keeping AI at bay will be our choice to keep certain elements of clinical practice in the hands of humans. Notably, as the line between augmentative and surrogate technologies blurs, we expect another source of resistance: while augmentative technologies aim to ease the burden on providers, surrogate technologies could threaten physician and other providers’ livelihoods, or at least their position as the ultimate sources of knowledge in the healthcare system. Furthermore, they could result in the shifting of work (and thus, income) from one type of skilled provider (e.g., a specialist, or a surgeon) to a less skilled provider (e.g., an interventionalist, a GP, or even a nurse). Thus, physicians may resist the adoption of these technologies or even the research required for their development in order to protect their livelihoods. What does this mean to investors and entrepreneurs? AI-based medical entrepreneurs are now faced with two new challenges. The first stems from unproven business models. While it’s clear that AI startups are creating value, willingness to pay for their services still needs to be proven in the market. We — like most investors and entrepreneurs in the field — feel that this issue will be resolved over time. However, we do foresee an adjustment period that could prove fatal to some innovations. The second challenge is how to maintain a relative advantage in a field where core technologies are becoming commoditized at an exceptional rate. The emerging landscape is one where the most scarce resource is innovative talent. Our next article (Part II) will discuss this second challenge in greater depth. Yoni Goldwasser is a Principal at Goldmed, investing in medtech and digital health. Elad Walach is the CEO of Aidoc, utilizing deep learning based AI to analyze medical images and patient data in order to optimize the radiology workflow. This article was published in a slightly modified Hebrew version on Geektime
AI in healthcare: medicus ex machina
119
ai-in-healthcare-medicus-ex-machina-part-i-147baa5b094a
2018-05-26
2018-05-26 09:40:32
https://medium.com/s/story/ai-in-healthcare-medicus-ex-machina-part-i-147baa5b094a
false
1,472
null
null
null
null
null
null
null
null
null
Deep Learning
deep-learning
Deep Learning
12,189
Yoni Goldwasser
Investor, entrepreneur, geek. Fresh in Berlin
9da642052433
yoni.goldwasser
51
55
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-19
2017-12-19 22:21:22
2017-12-20
2017-12-20 02:38:32
6
false
en
2017-12-20
2017-12-20 09:46:39
6
147cf670ca0d
4.583962
2
0
0
This post follows my previous post: Building a Deep Learning rig that pays for itself — you should definitely go read that one if you’re…
4
Building a cost-effective Deep Learning dream machine This post follows my previous post: Building a Deep Learning rig that pays for itself — you should definitely go read that one if you’re interested in finding out how I justified spending the money 💰💰💰. Source With a solid justification under my belt, I had to figure out what I was going to build. Given the approximate payback rate of $1,500 per year I really wanted to keep it below that figure. Practical constraints drove it closer to $2,000, but given that the asset will have a residual value of more than $500 at the end of the year I think I’m in the right ballpark. GPU This is arguably the whole point of the exercise, and as I mentioned in the previous post I chose an nVidia GTX 1080, as it’s significantly cheaper than the market leading GTX 1080 Ti yet it performs almost as well. I spent a fair bit of time trying to understand why so many different companies are making products for nVidia, and I’m still a bit confused, but it seems like they’re either building cards around an nVidia chip or they’re building chips under licence. Either way, everything I read online suggests that it doesn’t really matter which manufacturer you use, and that it comes down to taste. I disagree with the taste bit though — for me it comes down to price and noise. This thing is going to run 24/7 so it needs to be quiet. Luckily the cheapest card on the market ($699, made by GALAX) happens to be one of the more quiet designs. Sold. CPU This is less important for deep learning and mining profitability, but most ML work is done on the CPU so I still need a solid performer here. The AMD Ryzen chips are getting really good reviews, and seem to be much better value than Intel’s latest offerings. The AMD Ryzen 5 1600 in particular seems excellent value, giving you 6 cores (which means 12 parallel processing threads with hyperthreading, i.e. I can train models in 12x parallel) and performing almost as well as the top-of-the-range 1600X. It also comes with a cooler, which means it’s WAY cheaper and gives you about 90% of the performance of the 1600X. Same as the GPU, I’ve tried to pick the elbow of the price-value curve, getting the most powerful thing available before the prices start going up sharply. Motherboard, RAM and SSD This was way easier than I expected. There is a magical new site called PC Part Picker which lets you select components based on a search (which means you can sort based on price), and checks to make sure all of your parts are compatible. The site suggested the ABRock AB350M Pro4 motherboard (because it was the cheapest compatible board), 2x8GB RAM chips which I can’t remember what brand they are (you’ll definitely want 16GB or more of RAM), and a Samsung SSD (which I chose to be 500GB). It’s worth pointing out that an SSD is almost essential for the ML side of things, because the speed at which you can read data into RAM is important for training when you’re using large datasets. This means that you don’t actually need an SSD for Windows, and it might be worth picking up another cheap, smaller drive for Windows if you don’t already have one lying around. Case and Power Supply Again, PC Part Picker to the rescue — it does all the hard work and checks part compatibility for you. I chose a Corsair Carbide 400C because it’s relatively cheap, looks pretty slick, and doesn’t have all of that gross “gamer” stuff all over it. 
The power supply just needs to be large enough to cover PC Part Picker’s power estimate (329W in my case), and you’ll probably want to pick one which has some sort of precious metal in the name as that corresponds to energy efficiency. I chose a Corsair 550W “Gold” power supply. Other things I forgot to buy the first time I already had a monitor, but I had forgotten that non-laptop-people have to buy keyboards and mice. I also forgot that desktops typically don’t come with WiFi by default, so I had to buy a card for that too. If you would like some alternative opinions about how to select parts, I also liked the approach taken in this guy’s post. The full list of equipment in the build is here, and the total cost of the build (without a monitor) is $1,798. Not bad! The build process I won’t document the whole build process here as I’m anything but an expert on building PCs (I followed this video), but I will include a few photos to prove that I actually did it. I was pretty happy with my component selection, but I found the following things hard or surprising: The GPU card is huge. I saw the measurements and checked that it would fit, but I still wasn’t really expecting it to be so massive. The case is great and has some really excellent features, but it was pretty tight getting the back cover on after hiding all the cables back there. Closing the back cover was the hardest part of the whole build. I forgot that I had to put an operating system on it, so I had to use Windows Boot Camp on my Macbook to create a “Bootable Windows Installer USB” so that I could install Windows on the new machine. Not difficult, it just delayed launch by about an hour. Everything comes with a CD. I’m not sure why — all the software downloads automatically. Photo time! I think it looks pretty good. All the expensive bits, the RAM installed in front of the CPU fan, and the motherboard installed in the case. The GPU card next to a (very large) screndriver for scale, and the completed box sitting under my desk.
Building a cost-effective Deep Learning dream machine
2
building-a-cost-effective-deep-learning-dream-machine-147cf670ca0d
2018-05-29
2018-05-29 13:16:01
https://medium.com/s/story/building-a-cost-effective-deep-learning-dream-machine-147cf670ca0d
false
963
null
null
null
null
null
null
null
null
null
Tech
tech
Tech
142,368
Perry Stephenson
null
9c1d39a3c2a1
perrystephenson
20
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-26
2017-10-26 05:22:01
2017-10-26
2017-10-26 05:22:48
0
false
en
2017-10-26
2017-10-26 05:22:48
1
147d7cf436af
0.267925
0
0
0
The best institute for IAS coaching in Chandigarh is DELHI CAREER GROUP. In this institute you will find highly experienced teachers…
4
IAS COACHING IN CHANDIGARH The best institute for IAS coaching in Chandigarh is DELHI CAREER GROUP. In this institute you will find highly experienced teachers who will prepare you for the IAS entrance exam using the simplest methods. We provide the best study materials and we help students build their confidence. The classrooms are interactive and have a pleasant atmosphere. DELHI CAREER GROUP is offering some seats for new batches. https://plus.google.com/113669923370764997690
IAS COACHING IN CHANDIGARH
0
ias-coaching-in-chandigarh-147d7cf436af
2017-10-26
2017-10-26 05:22:49
https://medium.com/s/story/ias-coaching-in-chandigarh-147d7cf436af
false
71
null
null
null
null
null
null
null
null
null
Coaching
coaching
Coaching
18,603
dcg academy
null
d36e437af8ff
dcgacademy64
6
17
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-01
2017-11-01 19:46:27
2017-11-01
2017-11-01 20:50:57
5
false
en
2017-11-01
2017-11-01 20:50:57
2
147d990ea381
5.055975
20
0
0
null
5
Deep Reinforcement Learning: From Toys to Enterprise

Reinforcement learning is an increasingly popular machine learning technique that is particularly well suited for addressing problems within dynamic and adaptive environments. When paired with simulations, reinforcement learning is a powerful tool for training AI models that can help increase automation or optimize the operational efficiency of sophisticated systems such as robotics, manufacturing, and supply chain logistics. However, moving from the games commonly used to demonstrate these techniques into real-world applications isn't always straightforward. Structuring solutions to move beyond purely data-driven training introduces all sorts of new complexity, requiring you to consider things like how to use simulations to target your learning objectives, what kinds of simulations are applicable, how to deal with long-running simulations, how to incorporate ongoing training refinement once deployed, how to account for scaling and performance, and ultimately how to bridge from simulation to the real world. I was recently able to talk about how to effectively leverage reinforcement learning in real-world use cases at the O'Reilly AI conference in San Francisco. You can see my talk in full below, or keep reading to learn more about deep reinforcement learning and the problems it can solve.

What is Deep Reinforcement Learning?

Let's first understand what we mean when we talk about deep reinforcement learning. Deep reinforcement learning (DRL) is different from supervised learning in that you have an agent interacting with an environment. Once it interacts with the environment, it gets an assessment from a reward function for its interaction with that environment, and that then drives subsequent behaviors. The challenge with DRL is different because you don't know what the correct answer is. With supervised learning, the model learns because you're telling it the right answer. But RL models learn by exploration. The system has to explore the environment and understand what moves it can make in order to achieve the outlined reward objective. You don't tell the system, "at this point in time, the right move to make is X." Instead, you ask the system, "Did you achieve the overall end objective that I set out for the agent to accomplish?"

Deep Reinforcement Learning + Games

It's very natural to think of games when you think of deep reinforcement learning. Games are, by construction, environments where the players have to interact with the game. You've probably seen reinforcement learning models playing games like Lunar Lander. Training a DRL model to play Lunar Lander is actually part of the getting-started tutorial in the Bonsai Platform. Games are a great way to get a feel for reinforcement learning technology and understand how it works. The Bonsai Platform used DRL to train an AI model to play Lunar Lander. Enterprises, however, face different problems when trying to apply this technology to real-world systems. That's what I want to talk about today: how we make the leap from games to the environment of the enterprise.

What is Industrial AI?

If we're going to talk about "Industrial AI", we should define what we mean by that term. Industrial AI techniques help enterprise companies, both commercial and industrial, build control and greater optimization into their physical operations or systems.
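Going back to the Lunar Lander example, here is a minimal sketch of the agent–environment loop that distinguishes reinforcement learning from supervised learning, using OpenAI Gym with a random policy as a stand-in for a trained agent (this is not the Bonsai Platform tutorial; the environment name and the classic pre-0.26 Gym API are assumptions):

```python
import gym

# Classic Gym API; LunarLander-v2 additionally requires the Box2D extra to be installed.
env = gym.make("LunarLander-v2")
obs = env.reset()

total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()           # a trained agent would choose the action here
    obs, reward, done, info = env.step(action)   # the environment returns the reward signal
    total_reward += reward                       # no "correct answer", only the reward objective

print("Episode reward:", total_reward)
env.close()
```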
There are a number of use cases where industrial AI techniques are applicable — but if you look at the chart below you'll see a few things that are different from the pure database scenarios you run into with supervised learning. For example, you have a lot of devices. Frequently the environment you're working in is not a single device; it's a whole set of devices that need to interact. Another thing to highlight is that these use cases typically require reinforcement learning technology and simulations or digital twins.

The AI Use Case Spectrum

The AI Use Case Spectrum seen above shows the broad spectrum of problems to which AI technology is being applied today. But when we talk about industrial AI, we focus on the business problems on the left side of the chart. These problems tend towards optimization and automation of control systems, and away from the pure data analytics and prediction problems further to the right. Industrial AI use cases are rarely scenarios in which you go in with a large, curated, and labeled dataset. Instead, you'll have physical equipment that you actually want to control or whose behavior you want to optimize.

Unique Requirements and Challenges of Industrial AI

Through this lens, you start to see a progression of how AI is being applied to these systems, and what that means for the business. It progresses from monitoring to maintenance, then to optimization and ultimately automation of those systems. This is a sequence regularly followed as enterprise engagements build more sophisticated AI into their industrial systems and net greater return from its capabilities. But building AI into systems requires unique techniques and technologies, as industrial AI applications are fundamentally different in a lot of ways: The state spaces are inherently large — sufficiently large that the reason enterprises turn to AI is that traditional programming techniques, even traditional dynamic programming techniques, are insufficient to solve these problems. There's a huge reliance on subject matter expertise. Every organization has domain expertise; it doesn't make any sense to ignore that and make systems learn from scratch. Having techniques and capabilities that enable you to capture and use that subject matter expertise becomes highly important. These systems are highly regulated. Safety matters. Downtime is a huge expense. The stakes are incredibly high.

From Games to Industry

If you want to watch streaming video and the recommendation is not to your liking, that's not the end of the world. But if the AI system monitoring your airplane maintenance system to gauge when to replace engines gives you a false positive, you've cost yourself $200,000. The predictions of these systems are high stakes. On top of that, you don't want systems to get into states where things break. That's equally expensive and equally damaging. You can't simply deploy live AI models straight to the real system. First, you need to set up and connect a simulation or digital twin to build reinforcement learning models that are capable of solving real-world problems.
I’ll talk more about the landscape of simulators, and why they’re so important for enterprises building AI into industrial systems, in subsequent posts. Until then, you can view my O’Reilly AI talk below or download our whitepaper, “AI for Industrial Applications.”
Deep Reinforcement Learning: From Toys to Enterprise
34
deep-reinforcement-learning-from-toys-to-enteprise-147d990ea381
2018-04-29
2018-04-29 03:15:16
https://medium.com/s/story/deep-reinforcement-learning-from-toys-to-enteprise-147d990ea381
false
1,119
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Bonsai
Empowering enterprises to build intelligent systems| http://bons.ai/
b5e7f244a49c
BonsaiAI
1,544
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-18
2018-05-18 19:07:54
2018-05-18
2018-05-18 19:54:07
34
false
cs
2018-06-14
2018-06-14 12:02:48
16
147d9e8156eb
15.19717
4
0
0
This blog was created to document the journey through our project (our final assignment) and its outputs, which we are working on as part of the Digital…
5
The European migration crisis through the eyes of Czech journalists

This blog was created to document the journey through our project (our final assignment) and its outputs, which we are working on as part of the Digital Academy with Czechitas. The two of us joined forces on the project: Kristýna Mlatečková and Ema Hlaváčová. Pavel Chocholouš from Avast signed up to mentor us, alternating technical supervision with Petr Šimeček from Keboola. We frequently consulted Petr Hamerník from Geneea on the analytical soundness of the work.

How we came up with the topic

To be honest, our duo had a somewhat complicated start. We got together with the idea of a relatively easy-going topic with a practical benefit from the e-commerce field (web analytics for the e-shop where Kristýna works as a marketing assistant), but during Meet your Mentor this did not go down well, and we had to come up with a replacement topic on the fly. Fortunately, we soon realized that our thinking also met in a completely different area. On the way home from class, Ema once mentioned that at the time she left to restart her life in New Zealand (summer 2015), the topic of refugees and migration began to be intensively discussed in Czech society, but because she had stopped following events at home by then (different world, different joys), this information reached her only by chance and in fragments. Kristýna, on the other hand, as a sociologist by training, had followed the development of the crisis with keen interest. And so the new topic was born: we decided to look at how Czech media wrote, and still write, about the migration crisis. Specifically, within the project goals we set out to map how the topics connected with the migration crisis changed and developed in the articles of four selected online media outlets from the beginning of the crisis to the present day (roughly from spring 2015 to spring 2018). For each outlet: we will evaluate the frequency of published articles over time; we will find out which topics were covered by articles published at the strongest moments of the media "boom" around the refugee crisis (through tags and other entities); we will trace the emotional tone (sentiment) of the articles dealing with the most interesting topics; we will find out which topics and entities (e.g. people, places, etc.) were closest to each other; and beyond that, whatever occurs to us during the analysis as interesting to visualize… We selected the following sites as the media to follow: Novinky.cz (long one of the most-read online media in the Czech Republic), Echo24.cz (a representative of opinion media, describing itself as a "right-wing and liberal-conservative medium"), Reflex.cz (the online version of a magazine devoted mostly to social, political and cultural journalism), and Parlamentnilisty.cz (a representative of disinformation media, which according to the Křišťálová lupa 2017 poll received an anti-award for stealing content).

The devil is in the details

Once we had finally defined the project topic, the hunt for data began. It was clear that the data existed; the question was how to get it. It was actually a battle comparable to the analysis itself and cost us a great deal of time and energy. The first choice was downloading articles from the individual sites via their APIs, but our mentor talked us out of it, saying it did not seem like a fully workable option to him, and a paid one on top of that… The second choice was the GDELT database, but in this case it turned out not to be a suitable source for our project, because it does not go into enough detail… Another option was scraping web content via the Apify platform. That would have meant learning to fully navigate JavaScript, regular expressions and HTML literally within a few days, and writing separate scripts for each monitored site.
It even looked like we would go down that path (and beg the Apify support team to help us with it)… An excerpt of the conversation with our mentor about Apify …until colleagues from the DA who work with Apify on their own project pointed out that if we wanted articles up to three years old, we would have to add access credentials for the paid part of the news archives to the scripts, otherwise the crawlers would not get in anyway… The devil is in the details! So we had come full circle back to the start, and time was starting to press hellishly. Figuring out what HTML is… (hunting for "clickable elements" for Apify) An example of using a ready-made crawler from the Apify library — Novinky.cz (readers' opinions) The final choice fell on the paid Anopress database, where we negotiated terms of access to 6,000 articles. Hooray, 5 days before the first version of this blog was due, we had access to the data! A side note: Anopress roughly estimated that if we really wanted every article on the topic, across the circa 7 media of our original plans and regardless of section, there would be up to 40,000 of them! The project's process steps Because we really did not have much time left for the analysis itself, we tried to find the simplest possible path to the first charts (taking inspiration from the question: "What would this look like, if it were easy…"). And the solution more or less presented itself: we would use Keboola Connection (KBC), a.k.a. the "octopus apparatus", which integrates storage, transformations, calls to external applications and direct extraction into other tools, all in one place. The first, or really the zeroth, step was filtering the articles we needed out of the Anopress database. Everything was entered through a form, which was an advantage and a disadvantage at the same time. The apparent comfort ended the moment we needed to fit within the required limit of 6,000 articles. That meant trial and error to work out how to define the keyword search conditions and restrict the sections being searched. Without SQL it is an endless process of clicking through variants and hunting through drop-down menus… But we are not complaining; the reward was relatively nice data with no need for much cleaning (who wouldn't want that :-)). In the end we set the conditions like this: an article has to contain the word "uprchlíci" (refugees), or a variation of it, at least twice, in combination with at least one further occurrence of "migranti" or "imigranti" (or their variations). We limited the sections searched to two types — news articles and expert opinion pieces — and aggregated them under our own categories "Home & World" and "Commentary". We set the filtering period to 1 March 2015–1 March 2018 and raised the required minimum relevance of an article to the search terms to 90%. Under these constraints we filtered and downloaded 5,618 articles, distributed as follows: Power BI visualisation Note: reflex.cz and parlamentnilisty.cz do not have the traditional split into domestic and foreign news. For Parlamentní listy in particular this division is problematic, because it splits its content into many sections and it is not always entirely clear by what key. In the end we included the following sections under the categories "Home & World" and "Commentary": Visualisation — Power BI We downloaded the articles in batches per medium (or rather per section) in CSV format and imported them into KBC via Google Drive. A preview of the contents of one of the imported tables — KBC The next step in KBC was launching text analysis through the external Geneea application (NLP Analysis).
As Czechitas — students with no prior experience (i.e., at the start of the DA you define tools for processing your project data whose existence you often do not even know about; does that sentence make no sense? that is exactly how it works, though, it is a paradox :-) ) — we decided not to join the source tables into one whole just yet, and had the sections analysed separately, piece by piece. That kept the process a little more under our control and let us keep checking what Geneea was returning. Running the analysis for one source table, including getting the results back, took a few minutes at most, partly thanks to the fact that we had consulted the whole setup with Petr Hamerník a few days earlier. For each input file we thus obtained 4 output files which identified, for every article, things such as: tags — topics and other entities found there (people, places, organisations, significant phrases, etc.), the overall sentiment of the article, and so on. A preview of the table of detected entities for echo24.cz (Commentary) — KBC With this step the comfortable automation ended, and in the project we had to return to good old manual work: joining all the source tables into one whole, and likewise the output tables from Geneea. So we moved to the "Transformations" tab within KBC, where we prepared several SQL queries (first debugged in the practice sandbox environment in Snowflake). Creating the aggregated media table — KBC Creating the aggregated "entities" and "sentences" tables from the outputs of the Geneea analysis — KBC We then also joined the aggregated source table with the aggregated "entities" output table, using the article URL as the join key (this could certainly have been done in a later step outside KBC, but since each of us had decided to work in a different visualisation tool, this was more convenient…). Joining the aggregated source table with the analysis results (entities) — KBC Note: we directed the transformed table towards visualisations in Power BI and Tableau. For Tableau it could have been exported directly within KBC with the ".tde" extension, but that format can only be opened in Tableau Professional, and we needed Tableau Public so that we could back the visualisations up in the cloud (that would also work in Professional, just not in the trial version :-)). So we exported to CSV. First results of the analysis, or once more and better For demo day (i.e. the dress rehearsal for the final presentation) we prepared visualisations analysing two media — echo24.cz and parlamentnilisty.cz. First we were interested in how these media compare in terms of the average sentiment of their published articles, depending on the period studied (which we broke down into months), with the sections "Home & World" and "Commentary" shown separately. Tableau Public visualisation We were quite surprised that the sentiment of articles in the news section of Parlamentní listy ("Home & World") is mostly neutral to slightly positive (the stacked blue columns on the right), especially compared with the sentiment of the equivalent section at Echo24.cz (the orange and red columns on the left). In our initial thinking about the project we had, after all, picked Parlamentní listy as a representative of media with a lower standard of language. We even went back into KBC because of it and checked whether we had joined the right tables :-). We had. What followed was speculation about whether Geneea was correctly "tuned", and on that note we also received a recommendation at demo day to extend the project with machine learning. So we went to see Petr Hamerník, with the idea in our heads that Geneea needed our adjustments.
Except that… Petr did not exactly talk us out of our ideas, but he did explain to us how Geneea actually arrives at the average sentiment per article. It is the average of the sentiment of the individual sentences in the article, which in turn is given by the sentiment of the individual (selected) words in each sentence. The overall average sentiment is therefore the more telling ("truer") the shorter the text we have analysed. (In other words, when we have an article of 180 sentences, i.e. 2,000 words, analysed, the individual sentiments practically cancel each other out. And that is how a situation can arise where, at the aggregate level, Parlamentní listy end up with a neutral to slightly positive emotional colouring of their articles…). So we were advised that before throwing ourselves into machine learning we should adjust the goals of the analysis and, on the question of sentiment, ask concrete questions rather than looking at it so generally. The second phase of the project Okay, we will not look at sentiment so vaguely; we will look at how linguistically friendly (or quite the opposite) each medium was in connection with the most interesting tags and other entities (people, phrases, locations…). Moreover, we should not put entities into the context of whole articles, but go into more detail and link the entities to the specific sentences in which they occur. Only this sentence-level sentiment can then, with a clear conscience, be treated as the source for the average sentiment. In practical terms this meant going back into KBC and calling Geneea NLP Analysis again so that it would extract for us the entities and the sentiment tied directly to the sentences (note: luckily we already had the articles split into sentences in the results of the first analysis, so we did not have to do an extra transformation). Then we again joined the original aggregated source table with these results, but this time we set the sentences themselves as the join key… and we got a table with over 40 million rows, which was an order of magnitude more than we expected… and at that moment an alarm went off in our heads: what on earth is going on in there? Of course, it was duplicates. Only thanks to this step did we notice that a sizeable share of the articles we had downloaded (above all from echo24.cz) contain the same sentences two or more times (as a result of the summary being attached inside the body of the main text). It was a "detail" which meant that, after joining the tables through sentences, the numbers of assigned entities exploded. Getting rid of the duplicate sentences already at the input to the analysis would have been disproportionately demanding (they were part of the source articles), so we got rid of the duplicates only after the fact: KBC And after re-joining with the entities table we finally got realistic row counts (1,046,280).
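For readers curious what this de-duplication and re-join step looks like outside of KBC, here is a minimal pandas sketch of the same idea. The column names and the toy sentences are invented stand-ins, not the actual Geneea/KBC export fields.

```python
import pandas as pd

# Invented, simplified stand-ins for the exported tables; the real Geneea output
# has different column names and many more fields.
sentences = pd.DataFrame({
    "url":       ["a", "a", "a", "b"],
    "sentence":  ["Uprchlici dorazili.", "Uprchlici dorazili.", "Vlada jedna.", "Kvoty padly."],
    "sentiment": [-0.4, -0.4, 0.0, -0.1],
})
entities = pd.DataFrame({
    "sentence": ["Uprchlici dorazili.", "Vlada jedna.", "Kvoty padly."],
    "entity":   ["uprchlici", "vlada", "kvoty"],
})

# Drop sentences repeated within one article (e.g. a summary pasted into the body);
# without this, the join below multiplies the entity rows.
deduped = sentences.drop_duplicates(subset=["url", "sentence"])

# Re-join the entities to the de-duplicated sentences, using the sentence text as the key.
joined = deduped.merge(entities, on="sentence", how="inner")
print(len(sentences.merge(entities, on="sentence")), "rows before de-duplication")
print(len(joined), "rows after de-duplication")
```

The point is only the order of operations: de-duplicate on (article, sentence) first, then join, otherwise every repeated sentence multiplies its entity rows.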
That finally brought us to the last phase of the project, where "only" the visualisations were left. Judging by a quick preview in Power BI, there was plenty of information waiting to be visualised… A preview of the table in Power BI Analysis In this part we use visualisations to fulfil the concrete goals of the analysis that we set at the very beginning of the project in the DA. As the first task we said that: We will evaluate the frequency of published articles over time. Ta-da: Fine, now a little more clearly. If we look at the concrete numbers and curves over the individual months: From the chart we can see that the largest number of articles on the refugee topic over the whole period falls in September 2015, and that for all four media (specifically 274 articles for Parlamentní listy, 196 for Novinky, 138 for Echo24.cz and 96 for Reflex). From September 2015 onwards the curve then gradually declined. Note: the records for Parlamentní listy in the Anopress database end abruptly at the end of February 2017. After that date we do not find a single article in any section that matches the search criteria. This could affect our analysis when comparing the media against one another, but given that from 2017 onwards the activity of the other media is not particularly pronounced either… Are you wondering what was happening in September 2015 to put journalists' activity at its peak? Well, at the beginning of September 2015 a considerable number of refugees arrived in Budapest, Hungary, and occupied the whole square in front of the railway station. At the end of August 2015, 71 refugees crammed into a small lorry near Parndorf died of suffocation; this tragedy was then widely discussed by the media throughout September. September also saw the start of the so-called marches of hope, in which refugees walked from Hungary towards Austria. At the same time, on the night of 4–5 September 2015, Angela Merkel was said to have issued the so-called invitation for refugees to Germany and Austria, and the member states began intensively discussing the distribution of refugees among the individual EU countries according to quotas. The only exception to the sharp rise in September 2015 is Parlamentnilisty.cz, which already shows a high increase in the number of articles in June 2015, at a time when the other media were still "half asleep". We also see a significant rise in April 2016, when the number of articles on the topic grew on novinky.cz as well. It is also interesting to look at the curve of article counts for Reflex. For the other three media we see the number of articles on the topic rising from February/March 2015 onwards, but for Reflex this rise is much more gradual. The end of 2015 is then marked by a significant drop in journalists' activity, down to a third of the previous values. The first half of 2016 has two peaks — in February and then again in April — which are clearly visible only for Parlamentní listy and Novinky (and even then the count does not get above 108 articles, roughly half of the September 2015 level). A third peak in August already sits within the continuing decline of the topic's popularity, but it is pronounced for almost all the media; only Reflex.cz seems even more tired of the refugee topic (publishing a mere 30 articles). 2017 and the beginning of 2018 show no notable rise or fall in activity for any of the media; by then only a handful of articles are being published per month. Next we wanted to find out: 2. What topics were covered by the articles published at the strongest moments of the media "boom" around the refugee crisis (through tags and other entities). In the end we preferred to look at the whole period 3/2015–3/2018 and searched for differences in article topics at all the more significant milestones, for all the media together (the visualised word clouds are based on the most frequent entities of type "phrase" (adjective–noun collocations) and "relation" (verb–noun collocations)): Visualisation — Tableau Public Ha, here certain shifts in the keywords are already visible! While in June 2015 mandatory quotas were being discussed intensely, in September the refugee crisis moves fully to the foreground, and by February it is being called the migration crisis. In April the "Christian refugee" comes to the fore — as we found out, in April 2016 sixteen Iraqi Christians left the Czech Republic for a better life in Germany, which was widely discussed in the media. In August the German Chancellor played the leading role, as she visited Prague towards the end of that month. The phrase "terrorist attacks" also stands out strongly in that month: on 17–18 August a series of three terrorist attacks took place in eastern Turkey. It is no surprise that in 2018 journalists' main interest was the presidential election, where Miloš Zeman's candidacy in particular was often linked with the claim that he is the one who will "protect the Czech Republic from refugees".
We then looked at how the individual topics changed over the whole period 3/2015–3/2018 for each of the media (visualised on entities of type "phrase" and "relation"): But we do not find any striking differences; for all four media the key phrase "refugee crisis" dominated, which, given the keywords in our search criteria, is probably no surprise :-). The word clouds do not differ much between the individual media. Perhaps the only thing worth noting is Parlamentní listy, where this phrase is slightly more prominent than in the other three media we followed. To complete the second goal we also looked at entities of type "person" and "location": Power BI visualisation The next task was: 3. To track what emotional tone (sentiment) the articles dealing with the most interesting topics carried. From the individual word clouds above we picked the most frequent entities and paired them with the emotional tone with which our selected media wrote about them. (Note: we outlined the methodology for computing the average sentiment right at the beginning of the chapter "The second phase of the project".) Sentiment takes values from 1 to -1 (1 = positive sentiment, 0 = neutral sentiment, -1 = negative sentiment). Ideally, then, a medium that we want to call objective with respect to a chosen entity should have an average article sentiment around 0. Power BI visualisation For the phrases "refugee crisis" and "migration crisis" the sentiment came out markedly more negative than for the other entities, as expected. The reason is obvious: the phrase itself contains the word "crisis", which on its own produces negative sentiment in almost any context. Power BI visualisation It has to be pointed out that the sentiment can also be slightly distorted: Angela Merkel, for example, is referred to on Parlamentnilisty.cz as "merciful Angela Merkel" and the like, which does produce positive sentiment but is in fact a rather mocking label. What stands out in the data is the strongly negative sentiment for Erdogan in Reflex. As we found, Reflex wrote about Erdogan in a largely critical tone. Parlamentnilisty.cz, by contrast, show rather positive sentiment in his case. And as the last goal within the project we set ourselves that: 4. We will find out which topics and entities (e.g. people, places, etc.) were closest to each other. For this question we ultimately concentrated on visualising the relationships between the most frequently mentioned people and entities of type "relation" and "phrase". (Note: we measure the strength of a relationship by the total number of times the given entities occur together in the same article.) Right — Angela Merkel in connection with the identified phrases (Power BI) The coloured chord diagram reveals that, over the three years studied, the articles of all four media frequently mentioned, alongside the German politician, the top figures of the Czech political scene (Miloš Zeman, Andrej Babiš and Bohuslav Sobotka); unluckily for us, however, it is visible at first glance that there are no phrases that any of these figures sit noticeably closer to than the others. The only exception is the wider turquoise band, which shows the stronger relationship between the entities "Angela Merkel x German Chancellor" (guess why :-)). In the second diagram we again focus on Czech and European politicians, but this time we take aim at the "relations" close to them (collocations of the type verb + object). Right — Miloš Zeman in the spotlight (Power BI) The relationships here are stronger for two collocations, "to have the right" and "to be right"; the other expressions ("to have a problem", "to be afraid", "to apply for asylum", etc.)
appeared in the same article as the figures mentioned above to essentially the same degree. And as the sweet finishing touch of our analysis we choose to highlight the relationships of President Zeman, who, compared with the other expressions, was found in the same text in the vicinity of the mentioned phrases about rights and being right most often. The end. So what did we find? In brief: journalistic activity peaked in September 2015 and has, apart from a few swings, been declining ever since; in 2015 journalists wrote twice as many articles on the "refugee crisis" theme as in 2016, 2017 and 2018 combined; the most frequent phrases included "refugee crisis", "mandatory quota", "migration crisis", "German Chancellor" and "external border" (nothing surprising given the chosen topic :) ); among the people with the most positive sentiment across all 4 media we found Vladimir Putin; among the people with the most negative sentiment across all the media we find Václav Klaus Sr. Conclusion The main contribution of our project is obtaining and passing on information. It may be useful, for example, to media watchdogs or to anyone who reads news on the internet and wants to know how the media they have chosen actually write. The project output may also be of use to media outlets or political actors. Determining the sentiment of the media can help indicate what strategies the individual media pursue, i.e. whether, for example, parlamentnilisty.cz really is a tendentious site and so on. A further possible continuation of the project would be pairing the number of articles with the number of refugees arriving in Europe by month/year, to find out whether the number of published articles grew in parallel with the growth in the number of arriving refugees, or trying to use GDELT to find out how the most widely read media of our closest neighbours (Poland, Germany, Austria, Slovakia) wrote about refugees. Acknowledgements We thank everyone who accompanied us through the Digital Academy and helped us bring the project to a successful conclusion. It was a ride and it was worth it! :-)
The European Migration Crisis Through the Eyes of Czech Journalists
7
evropská-migrační-krize-z-pohledu-českých-novinářů-147d9e8156eb
2018-06-14
2018-06-14 12:02:49
https://medium.com/s/story/evropská-migrační-krize-z-pohledu-českých-novinářů-147d9e8156eb
false
3,378
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Kristyna Mlateckova
null
d4323104d3cd
mlateckova.kristyna
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-30
2018-01-30 18:12:11
2018-01-30
2018-01-30 18:21:31
1
false
en
2018-10-12
2018-10-12 18:15:44
2
147dd4e783aa
3.6
12
0
0
by Stephanie Glass, Head of Product Marketing for Cognitive Skills at Aera Technology
5
AI and the Evolution of Demand Forecasting by Stephanie Glass, Head of Product Marketing for Cognitive Skills at Aera Technology The complexities of demand forecasting have bedeviled businesses for decades. Consider these lessons from 1970s and ’80s, at the dawn of enterprise computing, as outlined in the Harvard Business Review: · U.S. electric utilities lost millions in the 1970s and ’80s after investing in new power plants based on forecasts that demand would rise 7 percent a year. In fact, demand grew a mere 2 percent a year. · In 1980–81, the petroleum industry invested $500 billion in infrastructure and services, only to suffer massive losses when projections for higher oil demand didn’t materialize, triggering an industry-wide price collapse. · In the early 1980s, personal computer makers built millions of PCs on forecasts of explosive growth. When demand fell far below projections, many manufacturers abandoned the market or went bankrupt. Decades later, businesses have at their disposal vast volumes of data scarcely imaginable in the early ’80s. Theoretically, demand forecasting should be easier and more accurate. While gains have been made, demand forecasting remains a high-stakes guessing game based on outdated historical information and software applications that rely on simplistic business rules. The problem is that businesses aren’t equipped to leverage the volume and diversity of today’s big data. They can’t account for real-time changes that can make the difference between profit and loss. Though financial and demand planners collaborate closely, manual work and spreadsheets complicate the problem. The result is forecasts that don’t adapt to constantly changing variables, and the risk of a multimillion-dollar impact from one small oversight or miscalculation. How AI Improves Demand Forecasts Artificial intelligence and machine learning are poised to revolutionize demand forecasting. AI gives demand and financial planners breakthrough capabilities to extract knowledge from massive datasets assembled from any number of internal and external sources. The application of machine learning algorithms unearths insights and identifies trends missed by traditional human-configured forecasts. Using big data and massive compute power in the cloud, AI can simultaneously test and refine hundreds of advanced models, far beyond what is possible with traditional demand forecasting. The optimal model can then be applied at a highly granular SKU-location level to generate a forecast that typically improves accuracy by more than 10 percent. User organizations have flexibility in how AI is applied in demand forecasting, from insights, recommendations and predictions to autonomous actions based on cognitive automation. At Aera Technology, we see the biggest impacts of AI in demand forecasting in data impact, real-time processing and decision-making: Leverage more data. AI for demand forecasting helps planners optimize decisions by aggregating massive datasets from internal ERP, CRM, IoT and other systems, as well as external information such as partners, market intelligence, social media or weather forecasts. With more data, AI uses sophisticated algorithms supported by massive compute power to recommend optimal forecasts. Real-time adaptation to change. Rather than relying on historical data, AI adapts in real time to account for changes that can range from a supply chain disruption to competition, new channels or products, or natural disasters. 
Incorporating this information, AI constantly learns and can course-correct to minimize risk and capitalize on opportunity. Strategic decision-making. AI can tell users which materials, plants and SKUs to focus on, and recommends actions to close gaps between unit-level and financial forecasts. Planners are able to spend less time on manual work and focus on value-added strategic decisions. Real-world Success at Merck Merck, the global biopharmaceutical company, is among the businesses putting the power of AI to work to optimize demand forecasting. Previously, planners spent inordinate time in manual efforts to improve forecast accuracy. Merck dealt with inefficiencies across demand predictions and service levels, and could end up with costly excess inventory. “Aera displays real-time measures of supply chain performance down to the stock keeping unit, drawn from sensors on factory machines and data collected from the company’s SAP SE enterprise software,” said Alessandro de Luca, CIO of Merck KGaA. “Using Aera’s software, Merck KGaA was able to increase its level of service of pharmaceuticals to hospitals from around 97 percent to 99.9 percent.” Incorporating AI into demand forecasting delivers the benefits that planners have chased for decades, with mixed results. With AI, businesses can fine-tune production and optimize inventory to reduce working capital and improve customer service levels. They can craft data-driven sales plans by reps, regions and products. And companies can accurately predict demand for new products and the associated sales uplift based on sales history, similar products and market intelligence. AI offers what-if scenario analysis to help users understand and predict that tradeoffs between inventory and service when deciding inventory policies. Demand forecasting has traditionally been heavy on art, lighter on science. AI elevates the (data) science aspect of the equation to pave the way for a more efficient and profitable business. It’s time to go beyond traditional forecasting and leverage transformative AI technology to solve real business problems. — Stephanie Glass is the Head of Product Marketing for Cognitive Skills at Aera Technology. She is responsible for product strategy and product marketing for Cognitive Skills, a critical component of Aera’s AI solutions. She brings to Aera extensive experience in AI, machine learning, planning and analytics, with past roles at Anaplan, GoodData and Jive Software.
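To make the "backtest many candidate models and keep the best one per SKU-location" idea described above a little more tangible, here is a minimal sketch. It is an illustration only, not Aera's implementation: the candidate models, the demand series and the error metric are all assumptions chosen for brevity.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error of a series of one-step forecasts."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)))

# Candidate "models": each maps a demand history to a forecast for the next period.
CANDIDATES = {
    "naive":        lambda h: h[-1],
    "moving_avg_3": lambda h: np.mean(h[-3:]),
    "linear_trend": lambda h: np.polyval(np.polyfit(np.arange(len(h)), h, 1), len(h)),
}

def best_model_for_sku(history, holdout=6):
    """Backtest every candidate on the last `holdout` periods and keep the winner."""
    scores = {}
    for name, predict in CANDIDATES.items():
        forecasts = [predict(history[:t]) for t in range(len(history) - holdout, len(history))]
        scores[name] = mape(history[-holdout:], forecasts)
    return min(scores, key=scores.get), scores

# Hypothetical monthly demand for one SKU at one location.
demand = [120, 130, 125, 140, 150, 160, 158, 170, 180, 175, 190, 205]
winner, scores = best_model_for_sku(demand)
print(winner, scores)
```

In practice the candidate pool would include far richer models and external signals, but the selection loop (score each candidate on held-out history per SKU-location, then promote the winner) is the part the article describes.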
AI and the Evolution of Demand Forecasting
62
ai-and-the-evolution-of-demand-forecasting-147dd4e783aa
2018-10-12
2018-10-12 18:15:44
https://medium.com/s/story/ai-and-the-evolution-of-demand-forecasting-147dd4e783aa
false
901
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Aera Technology
Aera is the cognitive operating system that enables the Self-Driving Enterprise.
9549a91f387c
Aera_Technology
86
15
20,181,104
null
null
null
null
null
null
0
null
0
7384ee9b7ff5
2018-06-14
2018-06-14 15:41:56
2018-06-28
2018-06-28 10:01:21
1
false
en
2018-06-28
2018-06-28 10:01:21
21
147fb77482de
6.649057
9
0
0
While immersed in the hype surrounding emerging technologies at CogX in London, with discussions around the broad range of possible…
5
Emerging Technologies: Testing Challenges & Opportunities While immersed in the hype surrounding emerging technologies at CogX in London, with discussions around the broad range of possible applications of the convergence of AI and Blockchain, I found it easy to lose sight of the software testing challenges and indeed the opportunities presented by this changing landscape. Photo by Dominik Scythe on Unsplash Now that I have had some time to decompress and fire up my healthy dose of tester cynicism, I’ve collated some thoughts on where I think we need to focus our attention in the coming months and years: data, environments, usability, and tooling. Test Data & Environments Whether it is for the purposes of testing form validation, script injection, API calls or load testing, data is a fundamental building block of any test effort, and as Prof. AC Grayling of the New College of the Humanities reminded us at CogX: “AI is data hungry”. If the predicted trend is for the software we develop to utilise increasing amounts of AI, and the ability of those systems to perform valuable learning is only as good as the quantity and quality of the data they consume, then we as testers need to consider the structure, growth and maintenance of our test data sets to ensure that we are always exercising the system under test in a way that is loyal to real-world usage while still unearthing those elusive edge cases. At a previous employer, the accountants maintained the books for a fictitious company, in an attempt to simulate the growth of the books and to expand the coverage the data-set provided. While this is a sensible attempt to simulate an organic data set that will exercise the system under test in realistic ways, it is not really scalable. To scale data sets in parallel with the application AI’s hunger for data, we too will need to harness AI. In the aforementioned example, this might mean creating a number of AI-managed “businesses”, trading with each other and contributing transactions to each other’s accounts. But this raises the question of reproducing failure conditions. It is important to be able to reproduce failure conditions, and in a world where AI will be generating the test data, we need to find a way to seed that data set or take a snapshot of the system under test, which would be expensive in terms of storage. We could also try to build re-playable or reversible state into the system under test by leveraging some auditing mechanism. In the “AI at Scale” talk by Chris Wigley of QuantumBlack, he suggested that there is no gentle approach to doing AI at scale: “… if you do not do AI at scale now, you will never do it”. I think this is also true for testing — where the opportunity arises to use AI we need to seize it. When the application we are testing begins to use AI we also need to embrace it in our test framework or quickly find ourselves under-tooled for the complexity of the task. So what does this mean for our test environments? According to the World Quality Report 2017–18 from Capgemini, test environments are an increasing concern for testers, with 46% of those asked citing them as a major problem — up 3 points on the previous year’s report. This highlights that environments are already a pain point in the industry. Traditionally, test environments are light-weight representations of the production environment, getting less and less light the nearer the environment is to production.
If we need to maintain larger (and growing) data sets on these environments, the storage and computational power required for AI to make use of this data will see the provisioning for test environments get closer to that of production. UX & Tooling At CogX, Sarah Gold of Projects by IF raised the prospect of AI-generated customised user experience. This could negatively impact usability when the designer cannot predict the journey of the user. Similarly, a tester or automated flow that emulates the user’s UI experience in order to highlight potential blockers and impairments will quickly find the non-deterministic nature of the AI-generated flow hinders progress. Multiple traversals through the UI will potentially produce different content, presentation or change the entire outcome based on the decisions made by the AI in response to what it learns from the user’s interaction. To combat this, we’ll need to re-invent our current automated UI tooling, which is often quite rigid and brittle, to deal with this new non-deterministic and dynamic user experience. The Page Object Model (POM) allows us to model a page which makes it easier to manage brittleness. When the Document Object Model (DOM) changes, we have a single point of failure in the POM, which can be updated and fix all reliant specs. If pages are no longer as easily definable, then the POM approach needs to make more use of a Component Object Model. In the world of Behaviour-Driven Development (BDD), Gherkin allows us to document functionality and drive automated tests. But if the flows within a system are more fluid and each interaction does not necessarily produce the same outcome, can the functionality be documented successfully in this traditional manner? BDD should still be a perfect fit for AI — writing Given/When/Then statements that avoid the implementation technicalities and maintain a business language level description of the actions performed and outcomes expected. This focus on the measures of success rather than the means of achieving it, are highlighted in the article entitled What’s The Score? Developing The Right Measurement Capability Is Critical To AI Success written by Steven Gustafson of Maana. Consider the following video that was presented by Mike Hearn of r3 at CogX. He explained that without adequate measures of success, AI will strive to achieve success by whatever means possible, hitting the Local Minimum — the 2D HalfCheetah found that it could achieve its goal by means of grinding along on it’s back. If AI has control of the priority, placement or even inclusion/exclusion of content and features on a page, we need to consult it’s inner workings to make sure that our test framework knows what to expect next. If AI has a more complex control of UX, for example: the application learns that a user prefers certain input types to others (a star rating input represented by a widget, slider or drop-down) — will our definitions in the POM need to be loose and will our means of interacting with the UI require more wrappers to handle the increased diversity of interactions with non-deterministic input types? The non-deterministic elements of the application will have to be tested using some of the inner workings of the system. This doesn’t sit well with me, since I have always advocated avoiding the use of the internals of a system to test the system. 
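As a concrete, deliberately simplified illustration of moving from a rigid Page Object Model towards a looser Component Object Model, the sketch below wraps a single "rating" component whose concrete widget is only known at runtime. The page abstraction, selectors and method names are hypothetical stand-ins, not any real driver's API.

```python
class RatingComponent:
    """Component object for a rating control whose widget type is chosen at runtime."""

    # Map of possible renderings (hypothetical selectors) to handler methods.
    WIDGETS = {
        ".star-rating":   "_set_stars",
        ".rating-slider": "_set_slider",
        "select.rating":  "_set_dropdown",
    }

    def __init__(self, page):
        self.page = page  # whatever driver/page abstraction the framework provides

    def set_rating(self, value):
        """Find whichever rating widget was actually rendered and drive it appropriately."""
        for selector, handler in self.WIDGETS.items():
            if self.page.exists(selector):
                getattr(self, handler)(selector, value)
                return
        raise AssertionError("No known rating widget was rendered")

    def _set_stars(self, selector, value):
        self.page.click(f"{selector} [data-value='{value}']")

    def _set_slider(self, selector, value):
        self.page.set_value(selector, value)

    def _set_dropdown(self, selector, value):
        self.page.select_option(selector, str(value))


class FakePage:
    """Tiny stand-in for a driver, only so the example runs end to end."""
    def __init__(self, rendered): self.rendered = rendered
    def exists(self, selector): return selector in self.rendered
    def click(self, selector): print("click", selector)
    def set_value(self, selector, value): print("set", selector, value)
    def select_option(self, selector, value): print("select", selector, value)


RatingComponent(FakePage({".rating-slider"})).set_rating(4)  # -> set .rating-slider 4
```

The test's intent stays at the level of "rate the item 4 out of 5"; only the component decides how that intent maps onto whatever widget the AI chose to render.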
“The test infrastructure would need to support learning expected test results from the same data that trains the decision-making AI” — Moshe Milman & Adam Carmi, co-founders of Applitools An interesting take on AI-led QA is presented by InfoSys where the suggestion is that Machine Learning can be used to identify problems in the system by using existing data in the form of defects, tests cases, logging and the codebase itself. The historical test resources and results is consumed by AI and it learns how to predict similar faults. Conclusion Prof. AC Grayling of the New College of the Humanities suggested that something similar to the Antarctic Treaty is required to ensure that AI is only used for good. He went on to highlight the 3 main types of human-AI relationship in combat systems: “in the loop” where the decision is presented to the human, “on the loop” where the human steps in at escalation points and “out of the loop” where the human is removed from the equation entirely. Gil Tayar of AppliTools goes one step further and defines “6 levels of AI-based testing: Have no fear, QA pros”. The “out of the loop” (Grayling) approach is reminiscent of the film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. I believe that we are a long way off being fully out of the loop (Level 5, Tayar) in terms of testing, but I think we’ll soon be developing test systems in an “in the loop” (Level 2) or “on the loop” (Level 3) style to help us keep up with the growth of AI in the software we test. We need to lay the foundations on which to build the testing approaches of the future. We need to leverage the technologies that will be used in our products, to create frameworks fit to test them. In a world where systems are capable of learning and changing; data is king and deterministic flows cannot be taken for granted, we need to be prepared to build test frameworks that harness AI capable of generating growing test data sets, accept that our test environments will become less light-weight in order to handle the processing required and acknowledge that the predictable and reproducible user journeys through our systems may soon be a thing of the past. Sources & Further Reading: CogX 2018 — https://www.youtube.com/c/cognitionx 5 Ways AI Will Change Software Testing — Paul Merrill — https://techbeacon.com/5-ways-ai-will-change-software-testing Artificial Intelligence-led quality Insurance — InfoSys — https://www.infosys.com/IT-services/validation-solutions/service-offerings/Documents/machine-learning-qa.pdf World Quality Report 2017–18: State of QA & Testing — Ericka Chickowski — https://techbeacon.com/world-quality-report-2017-18-state-qa-testing 6 levels of AI-based testing: Have no fear, QA pros — Gil Tayar — https://techbeacon.com/6-levels-ai-based-testing-have-no-fear-qa-pros Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964) — https://www.imdb.com/title/tt0057012/ HalfCheetah: Local Minimum — Patrick Coady — https://www.youtube.com/watch?v=2-cU-_bdfHQ What’s The Score? Developing The Right Measurement Capability Is Critical To AI Success — Steven Gustafson — https://www.forbes.com/sites/forbestechcouncil/2018/01/22/whats-the-score-developing-the-right-measurement-capability-is-critical-to-ai-success/#3db01bc51898
Emerging Technologies: Testing Challenges & Opportunities
218
emerging-technologies-testing-challenges-opportunities-147fb77482de
2018-06-28
2018-06-28 10:01:21
https://medium.com/s/story/emerging-technologies-testing-challenges-opportunities-147fb77482de
false
1,709
The Acorn Collective is using blockchain to provide crowdfunding that is accessible, transparent and more likely to succeed. Visit https://aco.ai/ to learn more
null
TheAcornCollectiveICO
null
theacorncollective
theacorncollective
CROWDFUNDING,BLOCKCHAIN,ENTREPRENEUR,STARTUP,FUNDRAISING
acocollective
Software Development
software-development
Software Development
50,258
Steven Markey
SDET @ The Acorn Collective
8159f0ee8fc8
stevenmarkey
8
6
20,181,104
null
null
null
null
null
null
0
null
0
14b136a9a60c
2018-09-26
2018-09-26 23:33:49
2018-09-28
2018-09-28 15:56:59
2
false
en
2018-09-28
2018-09-28 15:56:59
8
1480b755a8b6
1.372013
3
0
0
Will AI take over the world? The jury is still out…
5
All About the Future of AI Will AI take over the world? The jury is still out… 🤖 YES, OUR ROBOT OVERLORDS ARE COMING AI will take over the world - and that's a good thing The impending rise of intelligent machines and "Super AI" systems has been heralded in many headlines this year. Many…www.cio.com So, workers, experts say artificial intelligence will take all of our jobs by 2060 If any of us are still alive in the year 2100, we'll likely look back on artificial intelligence as the definitive…www.newsweek.com Why an AI takeover may not be a bad thing February 23 For years now, some of the smartest and most influential people on Earth have been warning about the…www.washingtonpost.com 👩🏻 NO, HUMANS ARE STILL THE BEST Don't Believe The Hype: Why AI Won't Take Over The World Yup, I said it: Robots and artificial intelligence are not going to take over the world. Am I a pariah now? What's…www.corporatemonkeycpa.com Will AI Take Over The World? Will the machine become so powerful that it begins to think to a point where it is more capable than the humans…www.forbes.com Will ‘AI’ Take over our Jobs? An AI Enthusiast’s vision of the ‘Future’hackernoon.com 🎛 BUT SERIOUSLY FOLKS… Economists worry we aren't prepared for the fallout from automation Will a robot take your job? Maybe. Perhaps. But, a new paper argues, we should worry less about the capabilities of new…www.theverge.com Read more of our thoughts on the industry, check out our work, or drop us a line at our website. We’d love to hear from you!
All About the Future of AI
150
all-about-the-future-of-ai-1480b755a8b6
2018-09-28
2018-09-28 15:56:59
https://medium.com/s/story/all-about-the-future-of-ai-1480b755a8b6
false
262
Industry news, agency life, and more from the creative marketing team at QUANGO.
null
quangodesign
null
Quango Inc.
null
quango-inc
MARKETING,TECH,AGENCY,PORTLAND,DESIGN
quangoinc
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Quango Team
Brands bring their ambitions to us.
2d88a549b6bc
quangoinc
15
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-13
2017-10-13 19:59:39
2017-10-13
2017-10-13 20:03:58
6
false
en
2017-10-16
2017-10-16 05:09:13
2
14822fa43a22
2.700943
4
0
0
When it comes to digital transformation it can seem like there are too many areas to invest in initially. But if it’s a movement you…
5
The 5 key digital transformation areas that enterprises need to be adopting now When it comes to digital transformation it can seem like there are too many areas to invest in initially. But if it’s a movement you ignore, your enterprise risks falling behind rivals. Instead, you should focus your digital transformation efforts on that could really have a positive impact on your daily operations and how your business is developing. With the right digital technology in place, your business can get ahead of the curve. Not sure which digital transformation areas you should be adopting now? We’ve identified five key areas that you should be considering. 1. Mobile development This is a trend you should already have embraced in an ideal situation. Did you know that mobile has overtaken desktop traffic? Your customers today want to be able to do everything on the go, from browsing emails to making a purchase. If your business isn’t keeping up with the times and providing a simple way to do that through responsive web design and apps, they’ll go elsewhere. 2. Big data Every business should be collecting, storing, and correlating data on a daily basis. The information that can be uncovered from large, varied data sets can prove invaluable for a range of business operations. From giving you the insights to understand which marketing efforts are working the best to identifying the areas where you could be making savings, the value of big data shouldn’t be underestimated. It’s a tricky balance of sourcing the right tools, finding an effective team, and creating correct methodologies. 3. Machine learning Machine learning might be seen as something of a buzzword but it holds enormous potential for forward thinking businesses. It can make personalizing customer service simple, taking into account historical data and using natural language processes, or automating finance tasks. Those that embrace machine learning across operations now are well placed to take advantage of the next step too — artificial intelligence. 4. Internet of things Interconnected technology is taking over our homes and over the next few years it’s going to represent a real opportunity for businesses in some sectors. It gives you a chance to not only create efficient, smart products that consumers will want but you should also be thinking about how you can create more opportunities. The internet of things is going to shake up the digital marketing area as we know it and it’ll provide you with masses amount of data that could be used. 5. Cyber security As with any opportunity, there are challenges too. Cyber security should be an essential part of your development. It’ll have an impact on the other aspects of your digital transformation too, it’s not an area you can afford to miss. Learn more: techsmartdigital.com | thomashsain.com
The 5 key digital transformation areas that enterprises need to be adopting now
4
the-5-key-digital-transformation-areas-that-enterprises-need-to-be-adopting-now-14822fa43a22
2018-01-23
2018-01-23 00:07:54
https://medium.com/s/story/the-5-key-digital-transformation-areas-that-enterprises-need-to-be-adopting-now-14822fa43a22
false
464
null
null
null
null
null
null
null
null
null
Digital Transformation
digital-transformation
Digital Transformation
13,217
Thomas Hsain
CEO & Founder at TechSmart Digital, Innovation & Change Catalyst, Growth Coach, Speaker, Author & Entrepreneur. Learn more at www.thomashsain.com
41d3fcf66e69
thomashsain
571
430
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-10
2017-11-10 05:19:57
2017-11-10
2017-11-10 05:26:54
1
false
en
2017-11-10
2017-11-10 05:26:54
0
14827f1bff80
1.935849
1
0
0
Fujitsu held its Fujitsu World Tour — Asia Conference Singapore on 2 November 2017 at Suntec Singapore where partners such as ConnectedLife…
5
ConnectedLife co-creates with Fujitsu Fujitsu held its Fujitsu World Tour — Asia Conference Singapore on 2 November 2017 at Suntec Singapore where partners such as ConnectedLife were invited to join in the event. The Fujitsu World Tour centers on the theme of Digital Co-creation, focusing on how Fujitsu works in tandem with customers to connect the digital dots and support business growth. Mr Wong Heng Chew, Country President, Fujitsu Singapore, comments: “Digital co-creation has become even more essential as we are faced with an increasingly widening gamut of technologies and solutions. By working together and leveraging on the expertise of both public and private sector partners, Fujitsu hopes to work hand in hand with enterprises, both MNCs and SMEs, to stand out among the competition, locally and in the region.” In the healthcare vertical, there is a global increase in demand for new services to support independent living while delivering quality care. ConnectedLife has achieved an added dimension to its solution by incorporating Fujitsu’s cutting-edge Sound Sensing Technology to look at coughing, snoring, loud sounds as “events” to provide monitoring while respecting the individual’s privacy, enabling access to important health data. It is so important to be proactive around the well-being of our older adults rather than to be reactive only after an unfortunate incident, resulting in hefty healthcare costs and sometimes the lost of independence. Fujitsu’s “Resident Monitoring Solution” extracts events from the sounds in the older adult’s life, accumulates the results on the cloud and enables family and caregivers to remotely view the older adult’s status in real-time, online or from a mobile device. For example, the algorithm recognizes continuous coughing by older adults as something unusual and will send an alert to the mobile phones of their family and caregivers. In case of an emergency, older adults can also directly speak to a 24/7 call center or to their family and caregivers by pressing the emergency button on the device. It can detect sounds such as snoring to analyze sleep quality as well. Mr Daryl Arnold, Chairman, ConnectedLife, added: “It has been great pleasure working with Fujitsu. We have both learnt so much from each other. Our creativity and innovation are well-received by Fujitsu which helps in creating a competitive solution together. Fujitsu’s extensive research and knowledge about ageing communities globally can provide us with important insights on how we can adapt our solution to further improve the quality of life for older adults. At the same time, Fujitsu’s global network of partnerships can help us grow our ecosystem, scale and access customers. All in all, Fujitsu’s ubiquitous hardware and software is an excellent fit with our whole vision, and we are so excited about our future together with Fujitsu.”
ConnectedLife co-creates with Fujitsu
50
connectedlife-co-creates-with-fujitsu-14827f1bff80
2018-01-08
2018-01-08 05:13:17
https://medium.com/s/story/connectedlife-co-creates-with-fujitsu-14827f1bff80
false
460
null
null
null
null
null
null
null
null
null
Wellness
wellness
Wellness
16,403
ConnectedLife
Be connected. Live well. Enjoy Life. ConnectedLife provides you with round-the-clock peace of mind at home and on-the-go | http://connectedlife.io
17ce9b5bb104
connectedlife
90
97
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-10
2017-12-10 22:32:14
2017-12-11
2017-12-11 20:58:06
11
false
en
2017-12-11
2017-12-11 22:26:58
7
148332c028da
8.360377
3
0
0
This post is another tutorial on genetic algorithms with a sprinkling of DFO about halfway through! If you would like to see the other blog…
5
Mr Miyagi’s Metaheuristics This post is another tutorial on genetic algorithms with a sprinkling of DFO about halfway through! If you would like to see the other blog posts then you can find them here. The No Free Lunch Theorem Optimising Neural Nets With Biomimicry Fine Tuning Dispersive Flies Optimisation Introducing Stochastic Diffusion Search Evolving Solutions With Genetic Algorithms This post tackles various challenges with metaheuristics, and I see it as a set of bread-and-butter exercises. Hopefully you’ll enjoy working through them too. Warming Up Whilst I’d rather take some time out to binge Stranger Things on Netflix, my professor has had the good grace to pose the following problem to us lazy students. I have 10 cards, numbered 1–10. You are asked to divide the cards into two groups where the sum of the numbers on the cards in the first group is as close as possible to 36 and the product of the numbers in the second group is as close as possible to 360. Find a way of obtaining good solutions. Rolling up our sleeves around our skinny nerd arms, let us examine the problem. Given the set of ten unique cards, there are 10! possible permutations. In other words, the search space has 3,628,800 potential solutions. This is obviously a lot of potential solutions, and it will take a very long time with a greedy search. A random search would likely speed things up a little bit, but approaching this with a metaheuristic to help guide the stochasticity, such as a Genetic Algorithm, will perform even better. Each genotype in this context will be a list (size ten) of unique integer values in the range one to ten. We can create a random, fixed-size population of shuffled cards as follows: Some solutions will be better than others, as the problem states we need two groups of cards from the set with two different targets. To simplify the problem, we can make the groups of cards equal sizes. The first half of the genotype will be used for the first group, and the second half for the second group. For the first group, we take the absolute difference (L1) between the sum of the group and 36. For the second group, we take the absolute difference between the product of the group and 360. We consider this problem solved when the combined L1 differences sum to zero. Mutation and crossover are next. For mutation, we loop through the genotype and roll the dice against the mutation rate probability. If we need to mutate, the card’s position in the list is swapped with another randomly selected card. Crossover takes a random selection of cards for the first deck and takes the remaining needed cards from the second deck to complete the new dna’s genotype, preserving order where it can, and concatenates the two together. We can then take all of this and run the GA. The iterative code to do this is as follows. Note the arguments to the method. The population size and mutation rate should be fairly obvious, but note the elitism, amount-of-iterations and amount-of-experiments parameters. Elitism is a parameter the user can tune as they solve their problems with a GA, and helps to choose a balance between exploitation and exploration. Elitism reserves a percentage of the fittest genotypes and prevents any crossover or mutation being applied to them for the next round of optimisation. The amount of iterations in this case is the maximum number of rounds of optimisation that should be done in any one experiment, and the amount of experiments denotes the number of times we should repeat the experiment to ascertain how well on average the optimisation is going.
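The original post links out to the full code; as a stand-in, here is a minimal sketch (mine, not the author's exact implementation) of the fitness function and the swap mutation described above. Swapping two positions, rather than overwriting one, is what keeps each genotype a valid permutation of the ten cards.

```python
import numpy as np

def fitness(genotype):
    """Combined L1 distance: sum of the first five cards vs 36, product of the last five vs 360."""
    genotype = np.asarray(genotype)
    sum_error = abs(genotype[:5].sum() - 36)
    product_error = abs(np.prod(genotype[5:]) - 360)
    return sum_error + product_error  # zero means the problem is solved

def mutate(genotype, mutation_rate=0.05, rng=None):
    """Swap mutation: each position may swap its card with another randomly chosen position."""
    rng = rng if rng is not None else np.random.default_rng()
    genotype = np.array(genotype)
    for i in range(len(genotype)):
        if rng.random() < mutation_rate:
            j = rng.integers(len(genotype))
            genotype[i], genotype[j] = genotype[j], genotype[i]
    return genotype

print(fitness([2, 10, 7, 9, 8, 1, 3, 5, 4, 6]))  # 0 -> the solution quoted later in the post
print(fitness(np.arange(1, 11)))                 # a poor genotype scores far from zero
```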
We can run this GA in two modalities, a generational GA and a steady state GA. This is just a fancier way of saying there is or isn’t elitism in the GA — a generational GA has no elitism and a steady state GA has some elitism, although be wary of applying too much otherwise you will only find local minima! First we run 50 steady state experiments. Running the above yields a graph and a variance of 24.57 for how many iterations it took each experiment to find an optimal solution. Each experiment found an optimal solution before the maximum number of iterations was up. The first solution had this order [2 10 7 9 8 1 3 5 4 6]. We can verify it manually as a quick sanity check by summing the first five elements and checking that they equal 36. And of course 2 + 10 + 7 + 9 + 8 = 36! We then can compute the product of the final five elements and check if this equals 360. Naturally, we can see that 1 * 3 * 5 * 4 * 6 = 360. Due to the large population size, 30% of the experiments (15/50) found an optimal solution in the first round of optimisation. Running the function again but this time as a generational GA (no elitism) yields a variance of 22.12 and we can see in the histogram that the generational GA had more lucky starts with zero iterations to find an optimal solution. It also had some outliers that took a little while longer than the steady state to find an optimal solution of zero difference, which is more typical of what we expect with the generational GA that favours exploration over exploitation. Wax On, Wax Off We can also use another metaheuristic to solve the same card problem. Although DFO was designed for continuous search spaces, we can neatly make it suitable for this problem with a few small edits. The most interesting part is the fitness function; how do you take a continuous vector of floats and turn it into a vector of non-repeating integers? The code documents the method, so I won’t bother going into any detail. From there we apply the usual DFO algorithm and get the output left in the comments at the bottom of the code. A Little Bit Of Manual Labour Never Hurt Nobody Having stretched out our unathletic limbs with the prior warmup, we move to the next problem. Returning to Genetic Algorithms, we will initially do this problem by hand to get an intuitive understanding of the underlying mechanics. The problem has a scenario as follows. We have chromosomes of eight elements, where each element can take on any integer between zero and nine (inclusive). The fitness of each genotype can be calculated as a function of its elements as noted in the third line below. This is a maximisation problem, so the higher the fitness, the better. Let us presume we have the four following genotypes. We can compute the fitnesses by passing each genotype into the fitness function. In a genetic algorithm, once we know which are the fittest individuals, we can apply crossover. We can take the two fittest individuals, 1 and 2, and apply different types of crossover. The most basic form of crossover would be uniform crossover. We split each chromosome down the middle and cross the resultant halves with one another, resulting in two new genotypes. Let us then assume that during selection, the second and third fittest chromosomes (genotype one and genotype three) are selected (it is good to promote diversity for exploration of the feature space) and that this time we use a two-point crossover. Let us assume we cross after the second and sixth element.
Finally, let us ignore the fact that we are making more chromosomes than the initial round’s population size, which classically Genetic Algorithms do not do, and do another uniform crossover of genotypes one and three. We now have a population of size six. We then calculate the fitnesses of each new genotype, six to ten, as follows. In the first round of fitnesses, for genotypes one to four, there was a mean fitness of -0.75. Now in the second round, for genotypes five to ten, there is a mean of 3, which is a good move in the right direction. But how far in the right direction are we actually? To understand this, we need to look at the fitness function and the values x can take on, and work out the chromosome that would be the optimal solution with the maximum fitness, which would be this: So with this in mind, is it possible through our current methodology to reach this global optimum? The answer is no, because no crossover splits at an odd interval, which means that we can never propagate the zeros and nines (which are all situated on even indices in the population) to the odd indices in the genotype. This could be sorted with a random crossover operator, and/or a mutation operator. If you want the code for this exercise, running this will return the correct results. Note I have adjusted the crossover. NP Complete Problems — The Knapsack Problem So let's move to a meatier challenge — the knapsack problem, which is a problem in combinatorial optimisation. It is NP complete, which means there is no known algorithm both correct and fast (polynomial-time) in all cases. The problem is posed as follows: Imagine you move out and only have a backpack (or knapsack) of fixed capacity. You have a set of objects which each have a weight and a value. You can put a subset of the objects in your backpack as long as you do not exceed the total weight you can carry. Your aim is to maximise the value of the subset of items that you will carry. The datasets of item weights and their values used in the code below were hard coded from the datasets found here. This time, I’ll construct a class for the GA, as in my previous blog post on GAs. This class works as a manager of the population; it instantiates a population at the start of an experiment, and then passes the genotypes externally to be evaluated in terms of fitness. The population is a binary list of ones and zeros. The stochasticity is a little different this time, whereby the mutation works by looping over the genotype and simply randomly resetting the element if the random sample is within the bounds of the mutation rate. The crossover copies one genotype, and randomly shuffles in another genotype’s values to make the new dna. So the important question is fitness — how can we evaluate the knapsack challenge? Whilst it makes for fairly ugly code, I have hard-coded the challenge datasets into a function that takes a genotype and the current challenge (I solved challenges one to five from the aforementioned website) and then returns a fitness. Note that if we go over the threshold weight then our fitness automatically becomes negative one. We can then run this code and get the solution for each challenge as follows, and we can see a condensed version of the output in the comments of the code at the bottom. And that’s me done for the day — thanks for reading. Let me know if you have any questions in the comments below, or hit me up on twitter!
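The author's full knapsack GA lives in the linked code; the sketch below is only a stand-in for the fitness rule described above (total value of the selected items, or minus one as soon as the weight limit is exceeded), run against a small made-up instance rather than the hard-coded challenge datasets.

```python
import random

def knapsack_fitness(genotype, weights, values, capacity):
    """Total value of the selected items, or -1 if the weight limit is exceeded."""
    total_weight = sum(w for gene, w in zip(genotype, weights) if gene)
    if total_weight > capacity:
        return -1
    return sum(v for gene, v in zip(genotype, values) if gene)

def random_population(size, n_items):
    """Binary genotypes: a 1 means the item goes into the knapsack."""
    return [[random.randint(0, 1) for _ in range(n_items)] for _ in range(size)]

# Small made-up instance, not one of the challenges solved in the post.
weights  = [12, 7, 11, 8, 9]
values   = [24, 13, 23, 15, 16]
capacity = 26

population = random_population(50, len(weights))
best = max(population, key=lambda g: knapsack_fitness(g, weights, values, capacity))
print(best, knapsack_fitness(best, weights, values, capacity))
```

Capping over-capacity genotypes at -1 keeps them strictly worse than any feasible selection, so selection pressure alone pushes the population back inside the weight limit.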
Mr Miyagi’s Metaheuristics
105
mr-miyagis-metaheuristics-148332c028da
2018-03-13
2018-03-13 23:35:40
https://medium.com/s/story/mr-miyagis-metaheuristics-148332c028da
false
1,871
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Leon Fedden
Wizard nerd summoning TensorFlow, C++ and Python black magic from the void.
80bcde7d6e7
LeonFedden
376
13
20,181,104
null
null
null
null
null
null
0
null
0
9603d63b25e7
2018-04-24
2018-04-24 12:26:17
2018-04-24
2018-04-24 12:45:14
7
false
en
2018-04-24
2018-04-24 12:45:14
1
14834bb38d76
8.717925
0
0
0
Use of the ‘Function-Behaviour-Structure’ and ‘Design Process Unit’ perspectives for system design
5
CX-data: a look at the technology Use of the ‘Function-Behaviour-Structure’ and ‘Design Process Unit’ perspectives for system design Asito’s clients are asking to add elements of Customer eXperience (CX) to the cleaning services. This project designs a framework that combines different methods and technologies. The goals of the datafication framework are to enable data-driven decision making in the planning of a CX focused way of working, and to explicate and evaluate the consequences of implicit biases in data. To test and develop the framework, a prototype is implemented at several Asito clients. This document presents the project, and applies the Function-Behaviour-Structure (FBS) and Design Process Unit (DPU) models following Jauregui-Becker and Wits (2012). Project summary: Datafication of CX for the cleaning industry The project starts with a market need: CX of cleaning — end-users want a clean environment to work, live, travel, etc. Travellers feel less safe if the train station is not clean. Such experiences are at the core of every customer interaction. Following positive experiences, customers are more likely to return; in case of a bad experience, customers might look for alternatives in the future. Within cleaning services, the current business model and way of working are based on quality level and output (technically clean). What is being cleaned is a tactical conversation between clients’ facility managers and Asito’s object leaders and branch managers. The experience of end-users (employees, students, patients, travellers) is currently used very little and only ad-hoc, not as a formal part of the process. This new line of thinking about customer experience is entering the cleaning industry: not only see cleaning as a hygiene factor / cost centre, but focus on contributing to the experience of the clients’ customers — the end-users. Requirements for a datafication framework This project develops a datafication framework that serves two main purposes: Support a way of working that focuses on CX, Evaluate how design choices impact data generation and decision making in later steps. The project delivers this framework as a design artefact, and in addition implements a prototype at two or three of Asito’s clients. The prototype serves as both an evaluation of the framework, as well as a result in its own right for these client and for Asito. To implement the prototype consists of roughly two phases, as shown in Figure 1: Configure and design the different components of the system, Running the components during daily operation to generate data and support decision making. During the first month of running, the system is actively monitored to calibrate and fine-tune the configurations and outcomes. Once the system is in operation, periodic evaluations will test if the data still meaningfully represent and contribute to CX. Figure 1 A stepwise approach to implement and operate a datafication prototype Configuration follows four main steps: Define what to measure: CX KPIs. Configure measurement tools: apps for manual input from end-users and cleaners and IoT sensors to measure behaviour and environmental changes. Select and implement tools for analysis: interfaces and algorithms to store, analyse, and manipulate data. Design how insight is presented: dashboards and visualisations make data readable for humans. The three main components of operation will be further elaborated in the next sections. 
Introduction to FBS and DPU modelling Breaking down the system into components and understanding how these components interact helps to make sense of how the system as a whole is supposed to work and deliver on the requirements. The purpose of this exercise is to avoid committing to specific technologies too early and steering every discussion towards that technological solution — think of Maslow's hammer: "it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." Instead, first look at the problem in conceptual form, and only later define the concrete techniques that deliver the required functions in the following steps. Function-Behaviour-Structure The FBS model breaks the system down into desired functions and the related behaviours and structures that deliver those functions: The functions are very high-level descriptions of the goals that the system has to fulfil — the function of a washing machine is to separate dirt from clothes. The behaviours are (technological) solutions that can be applied to deliver the functions — the behaviour used to separate dirt from clothes is to add water with a cleaning agent, stir the whole, and then remove the dirty water. The structures are the concrete techniques and components that do the actual work — the main structures in the washing machine are the pump to add and remove water and the centrifuge to mix the water with the clothes and remove the water afterwards through centrifugal force. Design Process Units Figure 2 Basic structure of the DPU The DPU models the behaviour of a component in the system, building on the earlier defined function, behaviour and structure as depicted in Figure 2: The core of the DPU is the behaviour: this is what the component does. The behaviour is enabled by a structural component: the artefact. Input comes in the form of external stimuli or raw material: a (use) scenario. This scenario also defines the constraints of normal operation, e.g. which limits will not be exceeded. The final part of the DPU is the performance / output delivered. This corresponds to the function that the behaviour supports, and evaluates how well the artefact delivers on its requirements. Figure 3 Linking multiple DPUs A system of multiple structural components that interact with each other can be modelled as multiple DPUs linked together. Within this project, this is used in the implemented prototype as successive steps in a production chain. Output from one DPU provides the scenario for the next DPU as depicted in Figure 3: step 1 creates output (materials) that is used as input for step 2. Other interactions between DPUs can also exist, but are not relevant for this project. The project delivers two artefacts: the datafication framework and the implemented prototype. The FBS breakdown and DPU models for the two artefacts are presented in the next sections. FBS and DPU of the datafication framework Figure 4 FBS breakdown of the framework The two primary functions of the framework are to improve customer satisfaction and to evaluate how design choices impact data generation and decision making in later steps, as shown in Figure 4. Customer satisfaction is supported by changing the way of working to fit CX. A framework that delivers this contains a number of methods to evaluate and (re)design the way of working, and supports data-driven decision making and planning of activities.
Implicit biases are explicated and traced across the datafication chain to provide insight into data quality and the unexpected impacts that design choices can have on other parts of the prototype. Part of the framework is a set of questions and methods to make those biases explicit so that they can be investigated. Figure 5 DPU for the datafication framework For the DPU, the artefact delivers both behaviours as depicted in Figure 5: In terms of scenarios and constraints, the prototype is implemented in the context of pilot projects with Asito clients. The selected clients will be companies that are interested in using technology and open to experimentation. They are not expected to pay for the prototype and the potential results, but are expected to actively engage in configuration and evaluation activities. Performance of the framework is evaluated in terms of its ability to help improve customer satisfaction and provide understanding of how design choices lead to biases and side-effects in the data and decision making. FBS and DPU of the implemented prototype The main functions of the prototype are to measure CX, to create insight and to support decision making, as presented in Figure 6. As discussed earlier, the three functions build on one another. Figure 6 FBS breakdown of the prototype to be implemented 1. To measure CX indicators, data is generated: about the experiences and behaviours of end-users and cleaners, and about changes in the environment. Smartphone apps can be used to record responses from end-users and cleaners by asking questions. Physical like-buttons (green-yellow-red) can be used to ask specific questions to end-users regarding their experiences. IoT sensors can be used to observe behaviours, such as counting how many people are in the building. IoT sensors can measure changes in the environment, such as air quality (temperature, humidity, CO2), which influences the performance of employees and students. Existing (external) data sources can be connected to add a different kind of insight, such as weather data (publicly accessible); bad weather makes for more cleaning work. 2. To create insight, the data that has been generated and collected is analysed: data science methods and algorithms can discover patterns in large and combined datasets. A database, data warehouse or data lake is used to gather all the data generated and collected in the previous step. Interfaces are needed to connect to the data-generating components and load the data into the repositories. Data science methods and algorithms are used to explore, aggregate, and optimise the data — to create insight. Software that includes these algorithms will automatically analyse real-time data streams as they come in, and present the results and possible alerts to data-users. 3. To support decision making, the data and insights are visualised in a format that is usable for the data-user (client facility manager, Asito object leader or branch manager). A visualisation is anything that can be visually interpreted by a human reader. Computers work in bits, and data come in byte-packages of ones and zeros. A visualisation can be text, an Excel table, or a colourful graph or set of graphs. Insights created in the previous step have to be put into these visualisations. This can be done directly, based on real-time data streams, or periodically, e.g. at the beginning or end of the week. A dashboard or alert message can be used to present specific results in an accessible way.
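As a rough illustration of how these three structural components chain together — the output of one step feeding the next as its scenario — a minimal sketch could look like the following. All names and data are hypothetical placeholders, not the actual prototype.

```python
# Minimal sketch of the generate -> analyse -> visualise chain described above.

def generate_data():
    """Step 1: apps, like-buttons and IoT sensors produce raw measurements."""
    return [
        {"sensor": "people_counter", "value": 132, "hour": 9},
        {"sensor": "like_button", "value": "green", "hour": 9},
    ]

def analyse_data(records):
    """Step 2: aggregate raw records into an insight (e.g. visitors this hour)."""
    visitors = sum(r["value"] for r in records if r["sensor"] == "people_counter")
    return {"visitors_this_hour": visitors}

def visualise(insight):
    """Step 3: turn the insight into something a data-user can read."""
    return f"Visitors this hour: {insight['visitors_this_hour']}"

# The output of each DPU becomes the scenario (input) of the next one.
print(visualise(analyse_data(generate_data())))
```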
The DPU consists of three parts, corresponding to the three structural components described above. Output from one DPU is used as input for the next DPU, as depicted in Figure 7. Figure 7 DPU for the prototype to be implemented Generate data Apps and IoT sensors generate data on experiences, behaviours, and the environment. Artefact: The artefacts are a set of apps, IoT sensors, and existing (external) data sources. Scenario: The scenario is an implementation of the prototype in a pilot project at a client of Asito. During this pilot project, the application environment will be explored and further defined in terms of constraints and boundary conditions of normal operation. Performance: The main criterion for evaluation is data quality: do the data generated reflect the (CX) indicators that the apps and IoT sensors are intended to measure? Analyse data Data science methods and algorithms analyse the data, looking for (meaningful) connections and patterns. Artefact: A system consisting of databases, interfaces, and algorithms that can analyse incoming data-streams. Scenario: Data generated and collected in the previous step serve as the external stimuli for this component. Boundary conditions are set to work with the data sources defined in the previous step, and can be updated when new sources are added to the system. Performance: Evaluation criteria are analytics results and level of automation. Are the algorithms able to provide the insights related to the CX KPIs and data, e.g. number of people in the building per day? How many manual actions are required to turn data into insight? Visualise data data and insights produced in the previous steps are turned into visually readable materials for the data-users (client facility manager, Asito object leader or branch manager). Artefact: A set of tables, graphs, dashboards and alerts that visually present the most important information for decision making. Scenario: Data analysed and insights created in the previous step serve as the external stimuli for this component. Boundary conditions are set to work with the outputs defined in the previous step, and can be updated when new features are created. Output: Evaluation criteria are usability of the visualisation: is the information presented in a way that is understandable for the data-users and can they use this in planning of cleaning activities? References Jauregui Becker, J. M., & Wits, W. W. (2012). Knowledge structuring and simulation modeling for product development. Procedia CIRP, 2, 4–9.
CX-data: a look at the technology
0
cx-data-a-look-at-the-technology-14834bb38d76
2018-04-24
2018-04-24 12:45:16
https://medium.com/s/story/cx-data-a-look-at-the-technology-14834bb38d76
false
2,032
The digital transformation of our society is a far-reaching and rapid transition of our culture, economy, and institutions; of what it means to be human. Datafication: What is the mindset, what are the tools, and what are the consequences of trying to capture the world in data?
null
null
null
Datafication of Experience
datafication-of-experience
DIGITAL,DATAFICATION,EXPERIENCE,PHILOSOPHY
null
Data Science
data-science
Data Science
33,617
Berend Alberts-de Gier
null
c75459b1b68a
albertsdegier
15
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-09
2018-07-09 17:53:10
2018-07-11
2018-07-11 19:39:36
3
false
en
2018-07-11
2018-07-11 19:39:36
1
148502f95e47
2.248113
1
0
0
null
5
More than half of the energy the world produces is wasted. Even worse, the biggest contributor to this waste is in the process of energy production itself. However, recent developments in quantum machine learning and nanotechnology could unlock the solution to the ever-growing energy crisis that affects over 1.2 billion people. The world wastes more than half of the energy it produces Despite the upward trend in energy production, there has also been an upward trend in the proportion of energy waste. Last year, 66% of our energy was wasted, and even more surprisingly, most of it was lost in the process of electricity generation itself. Not only are coal and natural gas bad for the environment, they’re also extremely inefficient, losing almost twice the amount of energy they produce (along with nuclear fission and hydroelectric generation). 37% of our energy is generated by steam turbines and they’re not as efficient as we think All of these energy sources suffer from the same fundamental flaw: they all rely on steam turbines — the same ones that were used over 50 years ago. Turbines are used to convert large amounts of kinetic energy on the molecular level (think heat) into usable electricity by directing rising steam through its rotor. The problem is, this mechanism is only capable of harnessing energy from a thermal gradient, i.e. the steam has to be surrounded by a cold medium for it to work, and thus, energy is spent on cooling the apparatus. If not steam turbines, what should we be using? As it turns out, biology has spent a few billion years developing this process: ATP synthesis involves the conversion of kinetic energy in moving protons into chemical energy using an enzyme. With the right mechanisms built into a nanostructure, creating a synthetic molecular machine for this process could completely disrupt the energy sector. The mechanism of the ATP synthase enzyme. Building nanostructures with the Legos of the universe There’s no instruction manual for the optimal molecular structure, but quantum machine learning offers the solution: quantum generative adversarial networks, or QuGANs, have the potential to simulate and optimize the properties of a given molecule. By iterating over hundreds of thousands of possible nanostructures, we could create the blueprints for the perfect molecule. What does it look like? 1.2 billion people are still without a source of electricity. The developed world is currently caught up in optimizing energy generation incrementally, but true change depends on a fundamental shift in the way we think about energy. The vision behind EnergyInq (energyinq.com) is the outright replacement of steam turbines to dramatically increase the efficiency of energy generation, which drives our mission to power the world at a fraction of the financial and environmental costs.
Developing nanomaterials to increase energy efficiency by 300%.
3
developing-nanomaterials-to-increase-energy-efficiency-by-300-148502f95e47
2018-07-11
2018-07-11 19:39:36
https://medium.com/s/story/developing-nanomaterials-to-increase-energy-efficiency-by-300-148502f95e47
false
450
null
null
null
null
null
null
null
null
null
Quantum Computing
quantum-computing
Quantum Computing
1,270
Tommy Moffat
Hi! I'm 17 years old and I'm passionate about quantum machine learning and nanotechnology. Feel free to learn more at my site! http://tommymoffat.ca/
5e11979a569e
tommy.moffat
20
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-07
2018-05-07 13:51:44
2018-05-07
2018-05-07 15:01:37
3
false
en
2018-05-07
2018-05-07 15:01:37
3
14855518f916
2.429245
3
0
0
Unless its running towards you, the sprint of a cheetah is joy to watch. Wouldn’t it be cool, if you could write a code that makes a…
5
How to train your Cheetah with Deep Reinforcement Learning Training of My Half-Cheetah Unless it's running towards you, the sprint of a cheetah is a joy to watch. Wouldn't it be cool if you could write code that makes a cheetah learn to run? Actually, it's not that hard; I will show you by explaining the underlying principle. Once you get the gist, you can read the rest of the code easily. Let's first start with the basics of Reinforcement Learning. Reinforcement Learning allows machines (or software agents) to automatically determine the ideal behaviour within a specific context, in order to maximize the total reward. Simple reward feedback is required for the agent to learn its behaviour; this is known as the reinforcement signal. There are many great articles that explain different algorithms for Reinforcement Learning. We will be using Deep Deterministic Policy Gradients, or DDPG, for training our Cheetah. I would refer the reader to this article for a general understanding of RL and DDPG. Overview of DDPG Algorithm In short, the Actor network tries to predict the best action given a state, while the Critic network predicts how good or bad that state-action pair is, i.e. its Q-value. The Q(s, a) value gives the total discounted future reward for the current state and action pair. The Critic tries to learn this value by satisfying the Bellman equation: Q(s, a) = r + 𝛾Q(s', a'), where s' is the state that follows s and a' is the next action. Implemented Algorithm for DDPG Hyper-parameters of DDPG In training RL agents, it is very common to get stuck in suboptimal policies, like the one you saw at the start of the video on top. Thus, we need to adjust the training hyper-parameters to learn the desired behaviour. γ (Gamma): Discount factor which determines how much importance is given to future rewards compared to present ones. Number of Roll-out steps: The number of steps after which the networks are trained for K train steps. These are the steps allowed for building up random exploration experience. Standard Deviation of Action Noise: We use an Ornstein-Uhlenbeck process for adding action noise, which suits systems with inertia. By changing the deviation, we change the degree of exploration. Number of Train steps: We train the parameters of the Actor and Critic for K steps after the exploration steps. As we see in the Half-Cheetah environment, the cheetah is rewarded for running in the forward direction. In training, we find that it first flips itself over as a result of the initial random exploration and then keeps moving forward, but in a flipped state. Also, as it flips initially or in between, its running speed suffers, reducing the overall reward. We tackle this by reducing the number of roll-out steps, the exploration steps that cause it to flip at the start. We see the change reflected in the cheetah's gait. Instead of rushing forward, which would cause it to lose balance, the cheetah learns a stable gait with increased speed and receives the overall maximum reward. Comparison of Different Training Settings
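To make the Bellman target and the Ornstein-Uhlenbeck exploration noise described above a little more concrete, here is a minimal, framework-agnostic sketch. The actor and critic networks are passed in as plain callables, and all parameter values are illustrative assumptions, not the author's settings.

```python
import numpy as np

def critic_targets(rewards, next_states, target_actor, target_critic, gamma=0.99):
    """Bellman targets for the critic: y = r + gamma * Q'(s', mu'(s'))."""
    next_actions = target_actor(next_states)           # mu'(s')
    next_q = target_critic(next_states, next_actions)  # Q'(s', a')
    return rewards + gamma * next_q

class OrnsteinUhlenbeckNoise:
    """Temporally correlated action noise, suited to systems with inertia."""

    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = np.full(action_dim, mu)

    def sample(self):
        # dx = theta * (mu - x) + sigma * N(0, 1): mean-reverting random walk.
        dx = self.theta * (self.mu - self.state) + self.sigma * np.random.randn(*self.state.shape)
        self.state = self.state + dx
        return self.state

# Example: a larger sigma means noisier actions and therefore more exploration.
noise = OrnsteinUhlenbeckNoise(action_dim=6, sigma=0.3)
exploratory_action = np.clip(np.zeros(6) + noise.sample(), -1.0, 1.0)
```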
How to train your Cheetah with Deep Reinforcement Learning
13
how-to-train-your-cheetah-with-deep-reinforcement-learning-14855518f916
2018-05-08
2018-05-08 14:55:41
https://medium.com/s/story/how-to-train-your-cheetah-with-deep-reinforcement-learning-14855518f916
false
498
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Shrinath Deshpande
null
ee092556ec4d
deshpandeshrinath
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-06
2018-09-06 15:59:26
2018-09-06
2018-09-06 16:01:01
7
false
en
2018-09-06
2018-09-06 16:01:01
20
14871027bbd3
2.212264
0
0
0
null
4
#NLMLfr — Issue #9 Feel free to forward this newsletter to anyone who might be interested in this kind of content! One chouquette on Monday morning per new subscriber! News for the field Deep Learning and ‘Hyper-Personalization’ are the Future of Marketing Automation — www.entrepreneur.com Interesting applications of deep learning in marketing Data visualisation, from 1987 to today — The Economist — medium.economist.com Humans grab victory in first of three Dota 2 matches against OpenAI — The Verge — www.theverge.com The humans' revenge! Tutorials and articles How to explain gradient boosting — explained.ai What was boosting again? A little reminder from time to time never hurts Unsupervised machine translation: A novel approach to provide fast, accurate translations for more languages — Facebook Code code.fb.com Using word embeddings to do unsupervised translation by superimposing the mappings Promising for languages where getting training data is expensive or difficult (dialects, etc.) FriendlyData · A Pragmatic Approach to Structured Data Querying via Natural Language Interface — friendlydata.io Turning natural-language questions into SQL queries. Code Part 2: Scheduling Notebooks at Netflix — Netflix TechBlog — Medium — medium.com Getting data scientists, data engineers and analysts to use the same architecture (notebooks). => Using notebooks directly in production with Papermill Reinforcement Learning: a comprehensive introduction [Part 0] · A Machine Learning journal — www.lpalmieri.com a series of tutorials on RL Papers and publications A neural attention model for speech command recognition arxiv.org The attention mechanism from last week's paper applied to audio that is first run through a CNN. PRAGMATIC APPROACH TO STRUCTURED DATA QUERYING VIA NATURAL LANGUAGE INTERFACE arxiv.org the somewhat light paper that goes with the article above
#NLMLfr — Numéro #9
0
nlmlfr-numéro-9-14871027bbd3
2018-09-06
2018-09-06 16:01:02
https://medium.com/s/story/nlmlfr-numéro-9-14871027bbd3
false
308
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Renaud de la Guéronnière
I am a Machine learning engineer, looking for opportunities in AI research and development.
2051e79bf3b9
r.delagueronniere
1
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-24
2018-04-24 17:10:53
2018-04-24
2018-04-24 17:14:48
0
false
en
2018-04-24
2018-04-24 17:14:48
1
148845a275b8
0.041509
0
0
0
Even our AI VR customer service robot likes Orbis
3
https://www.youtube.com/watch?v=l45iOR_JlUA Even our AI VR customer service robot likes Orbis https://www.youtube.com/watch?v=l45iOR_JlUA
https://www.youtube.com/watch?v=l45iOR_JlUA
0
https-www-youtube-com-watch-v-l45ior-jlua-148845a275b8
2018-04-24
2018-04-24 17:14:49
https://medium.com/s/story/https-www-youtube-com-watch-v-l45ior-jlua-148845a275b8
false
11
null
null
null
null
null
null
null
null
null
Fintech
fintech
Fintech
38,568
Orbis Money Transfer & Investment
Manage your money different and start changing the world!
108e1bca670c
OrbisTransfer
11
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-17
2018-06-17 00:16:15
2018-06-17
2018-06-17 00:53:02
0
false
en
2018-06-18
2018-06-18 06:21:56
1
14892f83b915
1.943396
0
0
0
As we’ve been adding skills to Optic (yes that’s the new name for Optic Knowledge), we’ve found ourself playing critical feature…
3
Weekly Update (June 10-June 16) As we’ve been adding skills to Optic (yes, that’s the new name for Optic Knowledge), we’ve found ourselves playing critical-feature whack-a-mole. We definitely underestimated the number of critical use cases needed to support some of the everyday libraries and SDKs developers love. The good news is we’re just about there. As reported last week, we’ve already added support this month for: Mutating transformations — transformations that change existing code. Supports things like “Add Action to Reducer” for Redux Multi transformation — transform one kind of code into multiple other types of code. Supports transformations that need to write to multiple files. This week we finished what we believe are the last core engines needed for a while: Insert locations — allows you to create new files as part of a transformation. Parser proxies — enables lenses to be created with otherwise invalid source code. For instance, putting a Case statement at the top level of a code snippet won’t parse. Parser proxies allow parser authors to define a wrapper on a per-AST-type basis to overcome this limitation. End users won’t have to think about these, but they are important. Multi Node Lenses — allows you to create a lens composed of n AST nodes. Use cases include creating a lens for Redux Containers, each with a Component, PropTypes, MapStateToProps, Connect and Export node. Multi Node lenses behave like any other lens and can be synced. Sublime support added. Ajay Patel from Plasticity finished the plugin. We’ll be packaging this in the next version of the installer. What’s Next This next release is already very big and contains a lot of risky features. It’s nearly ready to go, but we’re going to delay it a little longer. We want to switch our focus for the remainder of the summer from new features to building Optic’s repository of skills. To help with this task, we feel we have to make sure Optic Markdown 2 is part of this release. Optic Markdown 2 is going to make it easier than ever to teach Optic new skills. It’s designed around the premise that Training > Explaining. Right now Optic Markdown is very verbose and requires users to fully grasp our system before they can make anything useful. V2 deploys some of the code pattern matching tech that makes Optic work to that training process. In v1, users created lenses by giving an example snippet and then defining extractors to read/write certain properties from that code. In v2 you provide a sample snippet and the expected JSON model that describes the code. Optic will search the sample space and figure out how to round-trip the code. If there are ambiguities that need resolution, Optic will ask questions to resolve them, and advanced users will still be able to override Optic’s decisions when needed. It’s really cool, and I’ve already seen some first-time Optic users take advantage of the system without any coaching. Thanks for supporting Optic! We hope you’re ready for the big release.
Weekly Update (June 10-June 16)
0
weekly-update-june-10-june-16-14892f83b915
2018-07-13
2018-07-13 05:24:19
https://medium.com/s/story/weekly-update-june-10-june-16-14892f83b915
false
515
null
null
null
null
null
null
null
null
null
Software Development
software-development
Software Development
50,258
Aidan Cunniffe
Founder focused on Creative-AI, Humanist, Runner and Speaker. I build tools that help people create amazing products.
bbc16443a639
aidandcunniffe
212
201
20,181,104
null
null
null
null
null
null
0
null
0
b37e38a14b8a
2018-07-09
2018-07-09 14:22:28
2018-07-19
2018-07-19 06:13:54
6
false
es
2018-07-19
2018-07-19 06:13:54
5
1489fbd9a527
4.972642
7
0
0
Las curvas ROC y PR (precision-recall) son herramientas utilizadas en la evaluación del rendimiento de clasificadores binarios. Estas…
5
PR and ROC Curves Photo by Juan Gomez on Unsplash ROC and PR (precision-recall) curves are tools used to evaluate the performance of binary classifiers. These curves visually show the relationship between the precision and the sensitivity (recall) of our model, and they also serve to compare the performance of different classification models. Precision and Recall You need to know the concepts of precision and recall to understand these curves. Even though they are well-known concepts in the machine learning and artificial intelligence world, I will explain them briefly. They come from a set of counts that record whether the predictions of a binary classifier (positive or negative) were made correctly: TP (True positives): positive examples classified correctly. For example, suppose we have a binary classifier of kitten images that tells us whether an image contains a kitten or not. TP is the number of kitten images classified as positive. FP (False positives): negative examples incorrectly classified as positive. In the kitten example, it would be the number of images without kittens that are classified as kitten images. You can think of this value as the mistakes made by overly optimistic classifications (strongly biased towards the positive class). TN (True negatives): negative examples classified correctly. Again, in the kitten example, it would be the number of images without kittens that are classified as negative (image without a kitten). FN (False negatives): positive examples incorrectly classified as negative. In our example, it is the number of kitten images that are classified as negative (image without a kitten). We can think of this indicator as the number of correct examples that slip under our “radar”. With these four counts, we can calculate the precision and recall ratios. Precision: precision is the ratio of correct classifications made by our classifier. In other words, of everything our classifier labels as positive, correctly or incorrectly (TP + FP), what fraction is classified correctly. Recall: the recall or sensitivity of our model is the fraction of positives in the dataset that our classifier detects. In other words, of all the real positives in our dataset, detected or not (TP + FN), what fraction is detected. Balancing Precision and Recall Precision and recall are related in such a way that if you train your classifier to increase precision, recall will decrease, and vice versa. Let’s see how this relationship plays out in our kitten example: Suppose we want to make our classifier very sensitive to kitten images; that is, we want to increase recall, probably to a value close to 1 (100%). We could do this easily by making our model a simple function that always returns True. The downside is that our precision would be very poor, practically random, probably close to 0.5 if our dataset is balanced. Now suppose we want a very precise classifier, with precision close to 1. Obviously, if our classifier is very picky when deciding whether an image counts as a kitten image, we will leave many behind (we will have many FNs) and this will reduce recall.
Our classifier will be very precise, but less sensitive. And how do we tell the classifier whether we want it to be more precise or more sensitive? Generally, the model will decide whether a classification is positive or negative depending on whether the value it returns exceeds a decision threshold. If we increase this threshold, we increase precision; if we decrease it, we increase sensitivity (recall). Precision-recall trade-off as a function of the decision threshold This is where PR and ROC curves help us see the balance between the precision and the sensitivity of a model. PR curve (precision-recall curve) The PR curve is the result of plotting precision against recall. This graph lets us see at which recall the precision starts to degrade, and vice versa. Ideally, the curve would get as close as possible to the top-right corner (high precision and high recall). In the title of the chart we see AP=0.62. This value is the Average Precision and it is one way of computing the area under the PR curve, or PR AUC; in other words, the result of integrating the curve. The Average Precision helps us evaluate and compare the performance of models. The closer its value is to 1, the better our model. ROC curve The ROC (receiver operating characteristic) curve is similar to the PR curve, but with some values changed. It relates recall to the false positive rate. That is, it relates the sensitivity of our model to its optimistic mistakes (classifying negatives as positives). This makes sense because, in general, if we increase recall our model will tend to be more optimistic and will introduce more false positives into the classification. In ROC curves, we want the curve to get as close as possible to the top-left corner of the graph, so that increasing sensitivity (recall) does not make our model introduce more false positives. In this case we can also compute the ROC AUC, which likewise serves as a metric to summarise the curve and compare models. As with the Average Precision, we want its value to be as close to 1 as possible. Differences between ROC and PR curves In general, we will use the PR curve or the Average Precision when we have imbalanced datasets, that is, when the positive class occurs rarely. When there are few positive examples, the ROC curve or the ROC AUC can give a high value; however, the PR curve will be far from its optimal value, exposing a precision indicator tied to the low prevalence of the positive class. Using the ROC curve and the ROC AUC is an interesting option when we have a more balanced dataset or we want to highlight an indicator more related to false alarms (false positives). References Differences between Receiver Operating Characteristic AUC (ROC AUC) and Precision Recall AUC (PR AUC) — www.chioka.in The relationship between Precision-Recall and ROC curves — dl.acm.org sklearn.metrics.average_precision_score — scikit-learn 0.19.1 documentation (AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold) — scikit-learn.org
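As a minimal sketch of how these curves and their summary metrics can be computed with scikit-learn (the library cited in the references), assuming you already have ground-truth labels and predicted scores for a binary classifier — the toy labels and scores below are made up for illustration:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import (precision_recall_curve, average_precision_score,
                             roc_curve, roc_auc_score)

# y_true: 0/1 labels; y_score: classifier scores, e.g. predicted probabilities.
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9, 0.6, 0.3]

# PR curve and its summary metric (Average Precision).
precision, recall, _ = precision_recall_curve(y_true, y_score)
ap = average_precision_score(y_true, y_score)

# ROC curve and its summary metric (ROC AUC).
fpr, tpr, _ = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(recall, precision)
ax1.set(xlabel="Recall", ylabel="Precision", title=f"PR curve (AP={ap:.2f})")
ax2.plot(fpr, tpr)
ax2.set(xlabel="False positive rate", ylabel="Recall (TPR)",
        title=f"ROC curve (AUC={auc:.2f})")
plt.show()
```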
Curvas PR y ROC
38
curvas-pr-y-roc-1489fbd9a527
2018-07-19
2018-07-19 06:13:55
https://medium.com/s/story/curvas-pr-y-roc-1489fbd9a527
false
1,066
We design, deploy and manage 24/7 the cloud architecture that best suits your business needs. We ensure optimal performance of your servers and applications by identifying the most demanding processes and components of your infra and fine tuning them thanks to our super teams.
null
bluekiriTeam
null
bluekiri
bluekiri
CLOUD SERVICES,CLOUD COMPUTING,HIGH AVAILABILITY,DEVOPS,HIGH PERFORMANCE TEAM
bluekiriTeam
Machine Learning
machine-learning
Machine Learning
51,320
Jaime Ramírez
Software architect at Bluekiri. PhD student in Artificial Intelligence at UCLM.
581b6c223324
jramcast
166
239
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-18
2017-12-18 00:31:30
2017-11-17
2017-11-17 00:00:00
2
false
en
2017-12-18
2017-12-18 00:36:02
2
148c322c0dff
2.375786
0
0
0
You’ve heard of the industrial revolution, and we are still feeling the effects of the incredible change that happened in that time. What…
5
The Second Industrial Revolution You’ve heard of the industrial revolution, and we are still feeling the effects of the incredible change that happened in that time. What you might not know is that we are potentially on the cusp of a second industrial revolution brought about by one thing in particular: automation. What will this mean for us in the future? Not quite like this, but people will have a lot of time on their hands if we don’t change the way we think about work. Explosion in the amount of leisure time — as more people work primarily on machines that do the jobs we used to do, many in my generation and those older than me will be left behind. Schooling hasn’t come close to catching up with the demands of the future by teaching programming, critical thinking, or by putting more priority on teaching (encouraging) arts and creative expression. Children are still being taught to memorise facts for a test — facts that are easier to look up on the smartphones they all have. This is because it’s the easiest way to test them en masse, not because those are the skills that they will need. We will have a lot more time on our hands without a job to go to. This is both a good and a bad thing, and I am definitely concerned about what a world will look like when so many of us will have no purpose to wake up for in the morning. Unemployment — Automation will lead to a lot of unemployment in the next ten or twenty years, and many believe that basic income will become standard throughout the world. Basic income means that government welfare programs will expand to cover everybody, as a high percentage of people may not have jobs but will still need money to buy the things they need and keep money flowing in the economy. I’m not just talking about low-skilled manual jobs. A lot of those are already gone or are disappearing as we speak. Think long-haul trucking and taxis (self-driving cars), banking (ATMs are just the start), financial analysts (it turns out computers are much better at predicting and organising the flow of money than people) and even journalism (actually, writing isn’t particularly hard for computers). Computers are good at a lot of things, so we need to become better at the things that computers can’t do. What does the second industrial revolution mean for us today? Elon Musk has his kids working on real problems, not on memorising facts. It means that the times we live in right now offer great opportunities and dire warnings. Elon Musk has admitted to “unschooling” his children because, in his words: “I didn’t see the regular schools doing the things I thought should be done.” When we teach our children how to learn and think for themselves rather than simply which facts to memorise, we give them an incredible gift. There is still room, after the automation of so much of our work today, for the human and emotional work that provides incredible value, and that machines can’t take over anytime soon. We just need to lean in and accept that automation is coming whether we like it or not, and we can rise to meet the challenge. Originally published at newpioneers.jp on November 17, 2017.
The Second Industrial Revolution
0
the-second-industrial-revolution-148c322c0dff
2017-12-18
2017-12-18 00:36:03
https://medium.com/s/story/the-second-industrial-revolution-148c322c0dff
false
528
null
null
null
null
null
null
null
null
null
Basic Income
basic-income
Basic Income
2,763
Charlie Moritz
Head Teacher at The New Pioneers Academy in Tokyo, and Founder of Live Work Play Japan.
7aedaca1d82d
charliemoritz
18
30
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-06
2017-11-06 06:08:40
2017-11-06
2017-11-06 06:44:20
0
false
zh-Hant
2017-11-06
2017-11-06 06:44:20
4
148d31b9e27b
0.075472
2
0
0
兩星期前的舊聞︰華爾街的大危機:「AI 」ETF推出 績效輕鬆超越大盤 https://news.cnyes.com/news/id/3946587
3
A brief detour to a different topic: my personal take on artificial intelligence and the stock market. Old news from two weeks ago: “Wall Street’s big crisis: an ‘AI’ ETF launches and easily beats the market” https://news.cnyes.com/news/id/3946587 Recent news: “As it turns out, AI stock picking can’t beat the market either…” https://wallstreetcn.com/articles/3039050 Detailed information about this AI ETF is easy to find online, so I will keep it short and only point out what I think is the single key fact of the whole story: IBM’s AI platform Watson imitates how a large crowd of analysts research the stock market. Source: https://udn.com/news/story/6811/2765391 At the same time, I want to quote what I consider the most satisfying answer on applying artificial intelligence, or deep learning, to investing: “Personally I think deep learning is nothing more than a complex function-approximation algorithm. Your logic is your function; if the logic itself is wrong, what good is approximating it well?” Source: https://www.zhihu.com/question/54542998 So my conclusion is this: having a large crowd of analysts research the stock market is not a good logic for making money in the market, and no matter how closely AI learns that logic, the logic will not magically turn from unprofitable to profitable. By the same reasoning, many people are doing similar work, such as running NLP over large volumes of news to extract market sentiment, or applying AI to pattern recognition on price charts. I am sceptical about the effectiveness of all of the above.
偶然談一下別的話題,我對人工智能與股市的一己之見
2
偶然談一下別的話題-我對人工智能與股市的一己之見-148d31b9e27b
2017-11-07
2017-11-07 01:10:46
https://medium.com/s/story/偶然談一下別的話題-我對人工智能與股市的一己之見-148d31b9e27b
false
20
null
null
null
null
null
null
null
null
null
Mathematics
mathematics
Mathematics
6,509
magataresearch
投資宅一名。主講MPF投資
76c668950a20
10546839d
6
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-13
2018-03-13 14:05:42
2018-03-13
2018-03-13 14:12:16
16
false
en
2018-03-13
2018-03-13 14:17:24
76
148ebe3556f5
20.493396
6
0
0
TL;DR: Today’s small business owners represent the most technologically savvy cohort of SMBs that has ever existed. From email…
5
A Practical Approach to Emerging Tech for SMBs: AI, Blockchain, Cryptocurrencies, IoT, and AR/VR TL;DR: Today’s small business owners represent the most technologically savvy cohort of SMBs that has ever existed. From email infrastructure to search engine optimization, CRMs to automated customer support — SMBs today have access to unprecedented technological power, and use it every day to grow better. But the development of new technology never stops. In recent years, the rate of emerging technology seems to have reached a fever pitch as headlines announce the arrival of artificial intelligence and cryptocurrency prices threaten the arrival of a new bubble (and ensuing burst). Should SMBs be paying attention to all the noise? Do these emerging technologies represent imminent disruption? What’s hype and what represents real opportunity for business growth? This report is designed to break down the big five emerging technologies: artificial intelligence (AI), Blockchain, Cryptocurrency, Augmented Reality/Virtual Reality, and the Internet of Things (IoT). We’ll walk through the practical implications of each technology, highlighting the aspects that SMBs should be paying attention to, and what can be deprioritized (for now). Emerging Technologies — The HubSpot Guide Artificial Intelligence What is Artificial Intelligence When we hear the term “artificial intelligence,” we often think of a human-like robot or machine that is able to think, act and feel just like us. Although we have time until artificial intelligence (AI) can do things like write New York Times Best-Sellers and perform all human tasks, many forms of AI have already arrived. In fact, most of us interact with AI every day, whether we realize it or not. We do things like interact with Alexa and Siri, talk to chatbots online, tag photos on Facebook, and listen to Discover Weekly on Spotify. The work of speech recognition, visual perception, and data processing no longer requires human intelligence, AI is capable of managing these tasks with ease. All of these technologies are considered Artificial Narrow Intelligence (ANI) as opposed to Artificial General Intelligence (AGI). ANI is AI focused on a completing a single task (like talking or facial recognition), rather than all human tasks. AGI is the human-like AI we imagine — like Rosey the Robot who cleans and cooks for the Jetsons. As the chart below shows, experts don’t expect AGI for another 45–50 years. Source: arXiv via the MIT Technology Review Why Artificial Intelligence Matters Andrew Ng, Co-founder of Coursera recently called AI, the “new electricity.” He said, “just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.” Out of all the emerging technologies discussed in this report, artificial intelligence is the tech category that will have the biggest impact on society, individual consumers, and businesses. The reason AI will be so impactful is that it promises one thing we just can’t get enough of these days: convenience. AI Technology and How it Makes Our Lives More Convenient As we grow accustomed to machines that magically seem to understand us, we’re already beginning to expect personalized and convenient interactions with all businesses — regardless of their size. This is why even small and growing businesses need to be watching the evolution of AI technology very closely. 
How SMBs Should Use AI Unless you’re a small business or startup building AI technology, it is unlikely that you will hire someone to “do AI.” What is far more likely is that you will start to notice AI capabilities show up in the software you are purchasing and already using. The landscape of existing AI technology for businesses is already remarkable: Source: Shivon Zilis It’s not realistic for any business to invest in all of these AI tools at once — stay practical and assess your top business needs, the greatest potential impact a tool may have, and then, most importantly, try it. Here are some guidelines to keep in mind as you evaluate AI’s potential for your business: The Do’s and Don’ts of Implementing AI The Biggest AI Opportunity for SMBs: Chatbots Chatbots fall under the “Agents” section of the current AI landscape, where you see household names like Amazon Alexa, Cortana, and Siri. These are bots that can do everything from answer your questions, play music, or place orders for you. But chatbots are rapidly moving out of the personal and into the business world. According to the Facebook Messaging Survey conducted by Nielsen, 67% of people say they will message with businesses more over the next two years, and 53% say they are more likely to shop with a business they can contact via chat. To manage this massive interest in communicating with businesses via messaging apps, business will need chatbots. “Now that tech behemoths have enabled businesses to use their messaging channels, we’re starting to see valuable use cases emerge. Already, Facebook Messenger is playing host to 65 million businesses.” — Dylan Sellberg, Product Manager at HubSpot It’s likely you’ve already interacted with a business chatbot. Think of the last few websites you’ve visited — has a little chat window opened up while you were browsing, asking if you needed help? When HubSpot’s marketing team added onsite messaging in 2017 it resulted in 20% more qualified opportunities. Many of these interfaces today are simply human-to-human on-site chat — super convenient for the customer, but not particularly sophisticated technologically. Over the coming months, watch for this sophistication to increase rapidly. Today, messaging is how customers and prospects want to communicate with businesses, and chatbots are how businesses make these conversations scale. Automating messaging using chatbots is the most impactful artificial intelligence use case today, having the potential to deliver better results across all departments–marketing, sales, and customer support. Where there is a messaging app, there will be a chatbot. And these chatbots will enable businesses of any size to have scalable, convenient, 1-to-1 conversations whenever and wherever the customer prefers. Blockchain What is Blockchain According to our research, blockchain is one of the most confusing emerging technologies for SMBs to understand. So let’s try and clear this up. In its essence, a blockchain is an electronic ledger with some nifty security features: It’s a record-keeping technology that is nearly impossible to tamper with. That’s because a blockchain’s records, or “ledger”, is hosted by everyone in the network and openly available to everyone in the network, like a public spreadsheet that they add to, but can never edit or delete. Here’s the simplified process: When someone wants to add new record, or ‘block’, to the ledger, they need to first solve what is essentially a math problem. 
This is where the term ‘mining’ comes into play — to solve the encryption math problem, computers use their computing power to ‘mine’ for the answer. When they get the answer, it is vetted by all the others in the network, and the new block is allowed to be added to the ledger if the answer is correct. A token, or ‘coin’, is generated when this occurs almost like a receipt to prove it happened. Source: marketoonist.com Why Blockchain Matters Blockchain has two incredibly valuable elements: 1. It records the transfer of digital assets, like files or bitcoin, which can be used to prove transactions or ownership. 2. Its ledger technology can never be overwritten or tampered with due to the way it stores data and its distributed architecture. Because this ledger is distributed across everyone who is part of the blockchain network, if someone manages to mess with their ledger for nefarious reasons, there will be a million other instances of the ‘correct’ ledger that exist. And what’s more, the technology itself can ‘reject’ the fake ledger by recognizing that it does not match the millions of other records out there. Additionally, Blockchain is set up to leverage the computing power of everyone involved in the ledger, so that means there’s no central database that could be vulnerable to attack or change. This decentralization and distribution of the official system of record makes blockchain virtually tamperproof. What’s interesting is that Blockchain’s total transparency makes it secure — a counterintuitive idea to be sure. “Blockchain technology is going to change consumer behavior around ownership and security. Now, we feel much safer having a business own and manage our sensitive information rather than bearing the burden ourselves. The rise of decentralization and the ability to have more control over your own personal assets will change that mindset. More and more businesses will be pressured to shift data ownership back to their customers. Those that resist may get left behind.” -Matt Howells-Barby, Global Director of Acquisition, HubSpot Source: Autonomous Research via Business Insider The promise of blockchain is that our most important records — whether it’s for a cryptocurrency, a car, medical information, or even a deed to a house — would be tamper proof and safe from fraud on a blockchain. Let’s use an example of a shipment of bananas to clarify: As the shipment of bananas makes its way from Warehouse A to Delivery Truck B to Grocery Store C, its movement is captured and stored on the electronic blockchain ledger that’s accessible on the internet. All three institutions (and anyone else on the ledger) have access to that information online. Let’s say Delivery Truck B gets into some trouble, loses the bananas, and decides to be less than honest. They tell Grocery Store C that they delivered the bananas, when in fact they didn’t. They may even mess with their version of ledger to ‘prove’ that there was a delivery. But because Warehouse A and Grocery Store C has a record of the bananas’ movement on their ledger, they can inspect their records to see if the delivery actually happened. Even if Delivery Truck B managed to tamper with their own ledger (which is very difficult to do), there are systems in place that prevents them from pushing out their false changes to the rest of the ledgers on the network. 
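To make the “math problem” behind mining a bit more tangible, here is a toy proof-of-work sketch in Python. It is a deliberate simplification for illustration only, not how any particular blockchain or cryptocurrency actually implements mining, and the block contents are hypothetical.

```python
import hashlib

def mine_block(previous_hash, records, difficulty=4):
    """Find a nonce whose SHA-256 hash starts with `difficulty` zeros (toy proof-of-work)."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        payload = f"{previous_hash}|{records}|{nonce}".encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # the 'answer' the rest of the network can verify
        nonce += 1

# Finding the answer takes many attempts, but anyone on the network can
# verify it cheaply by re-hashing the block once.
nonce, block_hash = mine_block("abc123", "Warehouse A -> Delivery Truck B: 500 bananas")
print(nonce, block_hash)
```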
How SMBs Should Use Blockchain Blockchain’s unchangeable ledger and its potential security strengths make it an exciting new technology for any business that holds or transmits sensitive information. Unchangeable ledgers can reassure consumers and businesses that important transactions will not be hacked or rerouted. But while blockchain has an enormous potential, its use cases are still niche and appeal primarily to enterprises. Blockchain is unlikely to have a meaningful impact on SMBs for at least another 3–5 years. There are three exceptions to this prediction: Supply chain management: For example, a small jeweler could leverage blockchain technologies to assure their customers the stones they sell are sourced from a sustainable or humanely run mine. An SMB concerned with tracking the movement of goods through a supply chain will want to pay closer attention to improvements in blockchain technology. Standards in data storage: One of biggest potential use cases for blockchain is to change how we store data — making ledger-based storage superior to (sometimes questionably) secure databases. Start-up R3 promises to do just this, and has more than 60 firms using its blockchain platform, including Microsoft and Intel. But it is mostly blue chip companies exploring this blockchain use case currently, and it is unlikely to impact SMBs without significant improvements in cost & usability. Password protocols: Currently, there are two top contenders for disrupting the painful system of passwords: bio recognition, and blockchain. If blockchain technology delivers in solving this problem, SMBs will certainly want to take note. Cryptocurrency What is Cryptocurrency The last few months have seen interest in cryptocurrency explode as the value of these currencies rises. Voices in finance, business, and regulatory world have weighed in for and against cryptocurrencies. But amid all the hype, cryptocurrencies still promise one undeniable benefit: easy, cheap, international payments. Source: Coinbase Cryptocurrency and blockchain are often tied together in news stories and explainer guides. This is because the blockchain system was built as a way to enable the ‘exchange’ of cryptocurrencies. Cryptocurrencies, such as Bitcoin, leverage blockchain technology to allow users to directly send digital assets (like a ‘coin’) to one another in a secure, traceable, and tamper-proof way. Proponents argue that cryptocurrencies offer these benefits: They are a decentralized, borderless currency. Cryptocurrencies are tracked and stored in a decentralized system, with no single point of failure, which makes it a robust and relatively secure form of wealth transfer. They are tamper-proof and trackable. The ledgers that record transactions from owner to owner sits on a blockchain, which we know are immutable and more secure than conventional data storage constructs. Users can then be comforted knowing their records will always be correct. They are a secure way of directly sending and storing wealth. By virtue of recording transactions on the blockchain, users have a higher level of trust and security as they exchange cryptocurrencies or other assets. Now, no system is completely impervious to attacks, but many believe cryptocurrencies offer a superior level of security. They offer anonymity. While transactions and ledgers are openly available, it’s possible to employ tactics to mask identifying information (name, address etc) that ties a user as an owner of a particular block’s key. 
Each cryptocurrency has varying levels of anonymity, but in general, a user has more opportunity to ensure their anonymity with cryptocurrency compared to using a standard bank account or credit card to make transactions. The benefit of anonymity has been a complicated incentive for people to invest in cryptocurrencies. When cryptocurrencies first came into prominence, many worried that that currency’s primary use case was for cyber criminals to stash their ill gotten gains or trade illegal goods or services on the black market. In the early days, cryptocurrency was not seen as a respectable investment vehicle to the masses because of its association with dark web activities. Cut to the present, where there is now a mass frenzy for cryptocurrencies like Bitcoin, and many are reevaluating cryptocurrencies’ place in society. Concerns about black market trading, a speculative bubble, and lack of rigorous regulation still exist. But due to the explosive growth in value of Bitcoin, Ethereum, and Litecoin, among others, more and more investors are jumping onto the bandwagon. Some are doing so in the hopes of getting rich quickly, both other are buying because they believe in the benefits listed above. Why Cryptocurrency Matters Cryptocurrency is an important emerging technology because of the threat it poses to financial institutions. A bank makes money in two primary ways: Loaning money Fees Cryptocurrencies threaten to disrupt both of these revenue streams. If wealth is stored in the ledger of cryptocurrency rather than the ledger of a bank, the bank can’t leverage that money for other business pursuits. And if two parties can exchange money directly, with no middleman, the banks are also losing out on of their biggest fees — international exchanges involving currency conversion. The mass adoption of a currency like Bitcoin wouldn’t just upend financial institutions, it could also upend the dominance of the dollar as the world standard currency. A world where Bitcoin was the first international currency would be a huge step forward in globalization; making international payments of any size easy, cheap, and secure. The promise of cryptocurrency is that it could democratize the transfer of money, creating a world that is financially borderless. Still, this new technology has hurdles to overcome. First, entrenched powers are threatened by the advent of cryptocurrency, with many governments internationally offering hints that they may move to ban crypto trading. South Korean officials recently said they had no plans to ban cryptocurrency trading, but even the talk of a ban crashed the value of many popular currencies. Second, while the technology that enables cryptocurrency (blockchain) is secure relative to existing technologies in existence today, users are still dependent on the classic login process. These steps have proven to be the weakest link in security. There are instances of large scale hacks, most recently with Coincheck in Japan, that results in millions stolen. And there are countless stories of individuals who forgot their login information or cold storage password and thus are unable to access or use coins they own. How SMBs Should Use Cryptocurrencies Cryptocurrencies still have some fundamental issues to work out before this technology will achieve mass adoption. And until consumers show adoption of and a preference for crypto, it won’t impact the day-to-day of SMBs. 
Two exceptions to this would be if: You are an SMB that does a significant portion of your business internationally. Payment processing is one of the most complicated pain points of international business, and cryptocurrency is a promising solution. Shopify, one of the top ecommerce platforms, and Stripe, a major processor of internet payments, began accepting Bitcoin in 2013 and 2014, respectively. This seemed like a milestone in Bitcoin adoption, but earlier this year Stripe announced it is discontinuing support for Bitcoin, citing slow processing times and volatility in value. You are a startup looking to raise funding. Initial Coin Offerings (ICOs) have become the hottest new way to raise capital, with startups like the Mark Cuban-backed Unikoin jumping on board this trend. Think of it as a 'GoFundMe' for businesses, where individuals receive 'coins' from the business in exchange for investing. But the kicker is that these coins may never grow in value, and the investors will not get a share of the business or its profits, as they usually would in traditional fundraising. Shareholders of businesses can typically exert some pressure if they feel the business they've invested in is going awry. It's less obvious how ICO coin holders can influence the businesses they invest in, and there's the potential for them to lose their entire investment. As a result, this practice has drawn extensive criticism, and regulators are evaluating the practice today. ICOs allow small businesses and startups to bypass traditional investment channels to raise capital, but they pose some dangers to individual investors. Internet of Things (IoT) What is IoT Today, we can buy light bulbs that we switch on from our phone, refrigerators that send us snapshots of what's inside, and even cooking pans that tell us the optimal temperature to sear our salmon. Consumers today are truly living in the age of convenience. These internet-enabled products are IoT devices and are designed to give users the ability to tailor their experience with a product to their specific preferences. In turn, IoT products generate a flow of usage data that's usually accessible to both the product's user and the product's maker. Consider a fitness tracker that someone wears on their wrist. The user can access the data they generate to see how many steps they walked in a day, track exercise and sleeping patterns, and even track their calorie count and weight. The producer of the fitness tracker also has access to that data and can leverage it to send tailored recommendations to the user or gather information across their user base to develop new features or accessories. When businesses collect, store, and analyze data from the connected devices they produce, they can use the insights to improve their product's user experience, improve the connected experience across all customer touch points, and better appeal to prospective customers. Many people are connecting the various devices they own to create a tailored experience at home: they use voice commands to turn on TVs or lights, activate their coffee makers from bed so a fresh cup is ready once they hit the kitchen, or integrate security monitoring devices with their phone so they can check in on pets or children who are home unattended. On a larger scale, cities are installing IoT-enabled infrastructure, such as street lamps or traffic signals, that allows them to control and monitor electricity use or redirect traffic. 
These are the interconnected 'Smart Cities' that we often hear about. Many cities around the globe are investing in IoT devices and leveraging their data outputs to better allocate their resources, save money, and understand where and when people congregate. Why IoT Matters Like artificial intelligence, IoT will greatly increase convenience, raising consumer expectations for effortless experiences. It will also involve massive data volumes and the associated risks of processing and storing this data. This data is the result of automated collection. Connected homes and cities are constantly recording data behind the scenes. John Rossman, author of "The Amazon Way on IoT", states that IoT data collection is faster, cheaper, and more up to date than manual data collection. IoT-connected devices essentially convert the analog world into a digital one where everything can be measured and monitored. Pairing this data with AI analytics technologies will surface new insights that we've never been able to uncover in the past, simply because the data did not previously exist and we didn't have the technology available to analyze it. Most people envision consumer products when they think of IoT. But IoT's reach is incredibly deep in the B2B world. There are numerous applications for IoT across industries to track moving objects, monitor machinery, and post progress updates in central repositories. Currently, the Transportation, Healthcare, Energy/Utilities, and Manufacturing sectors are amongst the biggest adopters of IoT. In fact, most businesses that work with industrial machinery, logistics, and inventory have IoT-enabled devices and programs to ensure their operations are running smoothly. Healthcare practitioners are also increasingly leveraging IoT devices to monitor and at times even diagnose patients based on the data IoT devices have collected. Source: Forrester Research The IoT ecosystem is massive today. There were 10 billion devices connected to the internet in 2015. Forecasts estimate that will increase to 31 billion IoT devices by 2020. McKinsey estimates IoT's economic impact will be $11.1 trillion by 2025, and roughly 70% of the total economic impact of IoT systems will be from B2B applications. How SMBs Should Use IoT The quickest win in IoT today is in voice-controlled speakers. Industry analysts estimate that Amazon has sold over 30 million Alexa-enabled units, and project that by 2022, there will be 244 million voice-enabled IoT devices in households. Other players in the space include Google, Apple, and Microsoft. These devices present two voice-related opportunities: Building for voice-controlled speakers. Both Google Home and Amazon Alexa allow developers to build apps that integrate with devices. For example, Capital One built a skill that allows you to check your credit card balance and pay a bill through Alexa, and OpenTable lets you book a reservation at your favorite restaurant through Google Home. Voice search. With the proliferation of voice-enabled devices, which span from speakers to TVs to phones, more and more people are comfortable using their voice to search for content. SMBs should treat voice search as an emerging marketing channel, and begin experimenting with their voice search strategy. The second opportunity in IoT is for SMBs that currently produce physical products. Any manufacturer today should at least be exploring if there is potential to increase the value of their product by turning it into a data-collecting, connected device. 
But if that leap makes sense for your business, you will go from being a manufacturer to being a software business, and that has some important implications which we will cover in the next section. The third opportunity in IoT is for SMBs to use IoT devices to improve business operations. Vendors such as Losant, ThingWorx, and Carriots all offer services for businesses to connect various products, collect data, and conduct analysis. McKinsey put together some key questions to ask when developing a comprehensive IoT strategy. Source: McKinsey & Company Take note: Security is the biggest flaw in the IoT ecosystem The flow of user-generated data is the biggest opportunity and risk when it comes to IoT. A wealth of user data creates complications around security, data ownership, and privacy. Recent examples include: Hackers remotely killing a car while it was driving on the highway. Hackers breaking into baby monitors. A fitness-tracking map inadvertently revealing the location of military training facilities. IoT devices are evolving rapidly and are subject to few existing regulations around privacy, data ownership, and security. This puts the onus on SMBs that create or leverage IoT products to be thorough in how users' data is collected, stored, used, and secured. Virtual and Augmented Reality What is Virtual and Augmented Reality Augmented reality (AR) technology allows people to add digital assets into their physical environment. Think of Pokemon Go, the mobile game that rose to stratospheric popularity in 2016, as a good example of augmented reality. With the help of a phone, you could 'see' a Pokemon near you by pointing your camera at your surroundings. You augment your reality by superimposing a Pokemon onto your surroundings, and you get to interact with that Pokemon through your phone. Augmented reality is the precursor to the more immersive virtual reality (VR) experience, which is now predominantly focused on the gaming and entertainment sector. Today, you can buy headsets like the Oculus that allow you to completely 'live' in a virtual reality simulation. It's a wholly immersive environment which allows the user to interact with a virtual world. Source: Dueling Analogs Why VR and AR Matter Our concept of reality is going to completely change as AR and VR technologies are refined in the next few years. AR already allows people to interact with digital objects in their home, in stores, and in public spaces. VR allows people to completely leave their surroundings and experience new worlds created by artists, software engineers, and businesses. Both AR and VR technologies act as a bridge between the physical and digital worlds, and are a step forward in how people will interact with digital objects. In particular, VR has clear-cut ramifications for the gaming, leisure, and healthcare industries. People today primarily think of gaming when they think of virtual reality — and that's because the top use cases for both AR and VR fall under gaming. Based on VR sales figures, millions are interested in having more immersive gaming experiences. Some VR arcades are already creating incredible virtual experiences, with wind machines blowing in the user's face to simulate movement and rigs to stand or sit on. The leisure industry is also focusing on AR and VR. Airbnb wants to help people visualize their potential accommodations before they book. Stores, resorts, and cities are creating completely immersive experiences where visitors can visit a vacation spot without leaving their home. 
Though the ultimate goal is to entice people to make the physical journey, virtual vacations can open up the world to people who previously could not travel for health or safety reasons. Finally, medical students today use VR to learn or practice medical procedures. Doctors at Children's Hospital Los Angeles are using VR to prepare for crisis situations or uniquely complicated procedures. Ultimately, the impact of AR and VR extends beyond the boundaries of gaming into verticals such as healthcare. How SMBs Should Prioritize VR and AR With the exception of breakout hits like Pokemon Go, adoption of AR and VR technology has been slow. But this could change quickly — both Apple and Google are investing in the AR capabilities of their operating systems. Once AR-enabled smartphones are the norm, expect consumer interest in and comfort with these emerging technologies to rise. Enterprise companies are already beginning to experiment with these technologies: Ikea used Apple's ARKit to allow users to drop a piece of furniture into their home to see how it would look. The New York Times is experimenting with AR news stories that embed the reader in the location or event they're reading about. Many got to experience a new side of the 2018 Winter Olympics through the Times' AR stories. Retail companies like Starbucks are hoping that by becoming early adopters, they can corner the AR/VR market and lead the way with new, compelling use cases. And for SMBs with an appetite for experimentation and a compelling idea, there are ways to affordably experiment with this new technology. What's more likely than building a complete AR experience, however, is the opportunity to capitalize on the next big AR trend. Thanks to Pokemon Go, many SMBs have already capitalized on AR as a means of increasing customer traffic and interactions without actually building software. Small businesses partnered with the developer of Pokemon Go by making their business a PokeStop and saw upticks in foot traffic and a $2,000 average increase in weekly sales. In examples like this, small businesses sponsor the AR experiences instead of building them, and SMBs should pay attention to trends like this that offer growth opportunities. Virtual reality, which requires custom hardware to use, is much harder to crack. Industry data shows about 1.7 million units of Oculus Rift, PlayStation VR, and Vive were sold in 2016, with an estimated 2.4 million units shipped in 2017. The current VR landscape is a mix of three separate virtual reality platforms with an overall footprint of just 4.1 million units worldwide. If only a few million people purchase VR headsets, is it worth it for a small business to create an immersive application for those people to use? For most businesses, the answer is no. There are select industries that will see a VR opportunity open up in a more near-term timeframe (see Gartner's landscape below). But until one VR platform begins to show market dominance, and consumer behavior shows a strong preference for VR experiences, it's hard to see investment in VR ranking high on an SMB's list of strategic priorities. The real test is whether long-term, sustained use cases for AR and VR develop outside of the gaming and leisure categories. That's when businesses of all sizes can jump into the technology. Source: Gartner Interested in learning more about how people have adopted emerging technologies today? Subscribe to HubSpot Research and we'll update you each time a new report is published.
A Practical Approach to Emerging Tech for SMBs: AI, Blockchain, Cryptocurrencies, IoT, and AR/VR
73
a-practical-approach-to-emerging-tech-for-smbs-ai-blockchain-cryptocurrencies-iot-and-ar-vr-148ebe3556f5
2018-04-26
2018-04-26 09:06:15
https://medium.com/s/story/a-practical-approach-to-emerging-tech-for-smbs-ai-blockchain-cryptocurrencies-iot-and-ar-vr-148ebe3556f5
false
5,020
null
null
null
null
null
null
null
null
null
Bitcoin
bitcoin
Bitcoin
141,486
HubSpot Research
Account run by the HubSpot Research team. For the latest research subscribe to our content at https://research.hubspot.com/subscribe
22682275ed3d
HubSpotResearch
285
101
20,181,104
null
null
null
null
null
null
0
null
0
661161fab0d0
2018-07-31
2018-07-31 13:48:33
2018-07-31
2018-07-31 06:52:00
3
true
en
2018-10-11
2018-10-11 13:01:04
6
148f633ef271
4.972642
11
2
0
What if our beliefs about how the brain works have been all wrong?
5
Future Hype: The Myths of Technological Change What if our beliefs about how the brain works have been all wrong? A couple years ago a friend shared with me an interesting article that argued that one of our basic notions of how the brain works is completely wrong. The article, titled The Empty Brain, begins with this controversial fundamental premise: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer. A lot of us have grown up with the idea of our brain as something like a computer. We assume that our memories are stored in the same way the documents and files are etched onto a hard drive. It may not be ones and zeroes, but it’s something akin to that. Our synapses, I’ve believed, were like electrical impulses that work like software protocols. And, as many a sci-fi writer has suggested, one day scientists will be able to patch our brains to real computers for augmented powers and other currently inconceivable experiences. I myself have talked about this as if it were almost a sure thing. In fact, my short story The M Zone is built on this premise, that we’ll one day be able to access our memories by means of electronic devices. The article came at an interesting time in my readings since I’d been voraciously consuming books about the future, about the world of tomorrow. Most striking is how varied authors’ pictures of the future have been. In fact, they’re often downright contradictory, whether it’s cars, geopolitics or the environment. In the realm of computer science, some writers believe that an A.I. singularity will occur in our lifetimes. Others suggest it will never occur at all, ever. Some see humans living in space stations on Mars in the not too distant future. Others scoff as if this notion is ridiculous. One of the books along these lines is Bob Seidensticker’s Future Hype. The title alone encapsulates the author’s premise. Here’s the book’s description at Amazon.com: Conventional wisdom says that technology is the greatest new growth frontier, coupling infinite potential with an ever-growing number of faster, more efficient, and more reliable products and instruments. According to this view, we live in an unprecedented golden era of technological expansion. Future Hype argues the opposite. Author Bob Seidensticker, who has an intimate understanding of technology on professional, theoretical, and academic levels, asserts that today’s technological achievements are neither fast nor progressive. He explodes seven major myths of technology, including “Change is exponential,” “Product cycle time is decreasing,” and “Today’s high-tech price reductions are unprecedented.” Examining the history of tech hype, Seidensticker skillfully uncovers the inaccuracies and misinterpretations that characterize the popular view of technology, explaining how and why this view has been created, and offering specific strategies for measuring progress against what is actually known rather than against what its boosters have promised. Over the years I’ve written several times about mistaken predictions by people supposedly in the know (cf. “Who Are Your Experts”). And being aware of this tendency (to envision futures quite wide of the mark) nearly all prediction making begins with extensive qualifying statements that indicate that the author is fully aware that the further into the future we go the more wildly askew we might be. In fact, this is one of the big question marks for those who write about A.I. 
No one can really know what will happen when the machines are so smart that their knowledge becomes exponentially greater than ours. It’s a moment in time beyond which we cannot see. That is, if it happens at all. This is what makes this book mentally entertaining. It’s an antidote to the unbridled optimism some writers have expressed. Here’s an Amazon.com review from a reader named Byron Justice: As an audio specialist these last 30 years, I’ve seen my share of technology hype. Even though history is littered from horizon to horizon with inflated expectations, broken promises, unfounded predictions, and a few outright lies, I’m always amazed at how easily people are goaded into throwing down their credit card for the latest tech toys. Of course, this book reaffirms that which I already know — technology for its own sake may be cool, but has no other benefits. I was predisposed to enjoy this book, and I wasn’t disappointed. The guy who should read this book is the same guy who replaced his Beta tape library with LaserDiscs. A reviewer named Brian wrote: The people of every period in history tend to think that the time in which they live is somehow special: dramatically better than, or worse than, or in some way fundamentally different from all the times that came before. (And/or the people of today are *themselves* somehow different from the people of yesterday.) * * * * In one section of the book the author notes that although times have changed people aren’t that different and that many times in the past there have been writers lamenting the fast pace of change taking place in their time. I myself have written about how people in many eras of history have felt this sense of being overwhelmed by the fast pace of change. An article in The Atlantic from the 19th century and another by Kafka back in the Twenties both expressed the emotional exhaustion caused by the breathtaking changes taking place and the feeling of being rushed, pushed, shoved by life. On the other hand, there are many who have become weary by how long the changes are taking place in our time. The author of Future Hype claims the rate of change is actually slowing down. Seidensticker devotes the middle portion of his book to presenting specific myths and misconceptions such as the idea that change is exponential, or that important new high-tech inventions are arriving ever faster. He even calls Moore’s Law a myth, or rather, the notion that it really matters. And even the changes wrought by the internet are overrated. Ultimately the book serves as a nice antidote to the roiling media hype about all the “inevitable” changes that are moving toward us like a steamroller. You can find it here at Amazon.com. * * * * This book brings to mind an article I once read in Writer’s Digest about making money by writing articles that are contrarian. This is certainly that kind of book. Nearly every argument has two sides, and it is often quite easy to shoot holes in what are often popular misconceptions. (By mentioning “shooting holes” I’m not aiming to veer this blog post toward the topic of guns.) Anyways, I’m confident that this author enjoyed assembling his book. I also believe it can serve a useful function to keep things in perspective. * * * * If you like short stories and enjoyed The M Zone, you can find my Unremembered Histories at Amazon. Meantime, life goes on all around you. Just keep breathing. Originally published at pioneerproductions.blogspot.com
Future Hype: The Myths of Technological Change
75
future-hype-the-myths-of-technology-change-148f633ef271
2018-10-12
2018-10-12 02:33:13
https://medium.com/s/story/future-hype-the-myths-of-technology-change-148f633ef271
false
1,172
where the future is written
null
null
null
Predict
predict
FUTURE,SINGULARITY,ARTIFICIAL INTELLIGENCE,ROBOTICS,CRYPTOCURRENCY
null
Technology
technology
Technology
166,125
Ed Newman
Retired ad man, Ed Newman is an avid reader who writes about arts, culture, literature and all things Dylan. @ennyman3 https://pioneerproductions.blogspot.com/
ddd8c63788ce
ennyman
187
268
20,181,104
null
null
null
null
null
null
0
null
0
1488e8fb9e66
2018-09-13
2018-09-13 13:41:03
2018-09-13
2018-09-13 16:01:01
1
false
en
2018-09-13
2018-09-13 16:01:01
6
148f6d37f691
0.603774
1
0
0
To make artificial intelligence safe, we need cultural intelligence. by Gillian Hadfield in TechCrunch
5
We Need Cultural Intelligence to Make AI Safe: Five Best Ideas of the Day — September 13, 2018 To make artificial intelligence safe, we need cultural intelligence. by Gillian Hadfield in TechCrunch This aggressive post-partum depression treatment could be a game-changer for new moms. by UNC Health Care and UNC School of Medicine How hot springs can change farming in the future. by Daliah Singer at BBC Future Is it ethical to live longer? by John K. Davis in The Conversation US It’s time to teach entrepreneurship in college. by Justin Dent in RealClearPolicy Get the Five Best Ideas of the Day in your inbox.
We Need Cultural Intelligence to Make AI Safe: Five Best Ideas of the Day — September 13, 2018
1
we-need-cultural-intelligence-to-make-ai-safe-five-best-ideas-of-the-day-september-13-2018-148f6d37f691
2018-09-13
2018-09-13 16:01:01
https://medium.com/s/story/we-need-cultural-intelligence-to-make-ai-safe-five-best-ideas-of-the-day-september-13-2018-148f6d37f691
false
107
A nonpartisan forum for values-based leadership and the exchange of ideas.
null
aspeninstitute
null
The Aspen Institute
the-aspen-institute
IDEAS,THOUGHT LEADERSHIP
AspenInstitute
The Five Best
the-five-best
The Five Best
691
Aspen Institute
Exploring ideas. Inspiring conversations on issues that matter.
4afe9c3e00ad
AspenInstitute
19,673
892
20,181,104
null
null
null
null
null
null
0
import numpy as np

# Create a 3 x 20 matrix of samples drawn from a standard multivariate normal.
mu_vec1 = np.array([0, 0, 0])
cov_mat1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 20).T

# Compute the mean vector (one mean per variable).
mean_x = np.mean(samples[0, :])
mean_y = np.mean(samples[1, :])
mean_z = np.mean(samples[2, :])
mean_vector = np.array([[mean_x], [mean_y], [mean_z]])

# Compute the scatter matrix: sum of outer products of the mean-centered columns.
scatter_matrix = np.zeros((3, 3))
for i in range(samples.shape[1]):
    diff = samples[:, i].reshape(3, 1) - mean_vector
    scatter_matrix += diff.dot(diff.T)
print('Scatter Matrix:\n', scatter_matrix)

# A two-variable example where one variable decreases as the other increases.
arange = np.arange(0, 40)
samples = np.array([arange * 3, arange * -1])
mean_vector = samples.mean(axis=1).reshape(2, 1)
scatter_matrix = np.zeros((2, 2))
for i in range(samples.shape[1]):
    diff = samples[:, i].reshape(2, 1) - mean_vector
    scatter_matrix += diff.dot(diff.T)
print('Scatter Matrix:', scatter_matrix)
# Output: [[ 47970., -15990.], [-15990., 5330.]]

# Flipping -1 to 1 flips the sign of the off-diagonal entries.
samples = np.array([arange * 3, arange * 1])
mean_vector = samples.mean(axis=1).reshape(2, 1)
scatter_matrix = np.zeros((2, 2))
for i in range(samples.shape[1]):
    diff = samples[:, i].reshape(2, 1) - mean_vector
    scatter_matrix += diff.dot(diff.T)
print('Scatter Matrix:', scatter_matrix)
# Output: [[47970., 15990.], [15990., 5330.]]

# The covariance matrix is the scatter matrix scaled by 1 / (n - 1); here n = 40.
print('Covariance Matrix:', np.cov(samples))
print('Scatter Matrix:', scatter_matrix)
print('Unscaled covariance matrix, same as the scatter matrix:', np.cov(samples) * 39)
# Covariance Matrix: [[1230., 410.], [410., 136.667]]
# Scatter Matrix: [[47970., 15990.], [15990., 5330.]]
# Unscaled covariance matrix, same as the scatter matrix: [[47970., 15990.], [15990., 5330.]]

# The correlation matrix is the covariance divided by the product of standard deviations.
std_dev_of_x1 = np.std(arange * 3)
std_dev_of_x2 = np.std(arange * 1)
std_dev_products = np.array(
    [[std_dev_of_x1 * std_dev_of_x1, std_dev_of_x1 * std_dev_of_x2],
     [std_dev_of_x1 * std_dev_of_x2, std_dev_of_x2 * std_dev_of_x2]])
print('Correlation Matrix:', np.corrcoef(samples))
print('Std deviation products:', std_dev_products)
print('Correlation computed from covariance:', np.divide(np.cov(samples), std_dev_products))
# Correlation Matrix: [[1., 1.], [1., 1.]]
# Std deviation products: [[1199.25, 399.75], [399.75, 133.25]]
# Correlation computed from covariance: [[1.0256, 1.0256], [1.0256, 1.0256]]
# The ratio of 40/39 appears because np.cov normalizes by n-1 while np.std defaults to n.
20
null
2018-07-07
2018-07-07 07:36:45
2018-08-16
2018-08-16 14:41:21
2
false
en
2018-08-16
2018-08-16 14:41:21
4
14921741ca56
2.896541
1
0
0
It is common among data science tasks to understand the relation between two variables.We mostly use the correlation to understand the…
5
Scatter Matrix, Covariance and Correlation Explained It is common among data science tasks to need to understand the relation between two variables. We mostly use the correlation to understand the relation between two variables, but we also often hear about the scatter matrix (and the scatter plot) and covariance. Let's look into what each of them is, how each is calculated, and what each signifies. We will also implement each of them and build one on top of another. Scatter matrix generated with seaborn. The question all of these methods answer is: what is the relation between the variables in the data? Scatter Matrix: A scatter matrix is an estimation of the covariance matrix used when the covariance cannot be calculated or is costly to calculate. The scatter matrix is also used in a lot of dimensionality reduction exercises. If there are k variables, the scatter matrix will have k rows and k columns, i.e. it is a k x k matrix. How the scatter matrix is calculated In Python, the scatter matrix can be computed with a few lines of NumPy (a minimal sketch follows at the end of this post). The scatter matrix contains, for each combination of variables, a measure of the relation between them. Let's observe the scatter matrix for the following matrix: If we tweak the matrix generation by changing -1 to 1, the output also changes sign. We can observe that the sign of the scatter matrix entry for a pair of variables denotes whether one increases when the other increases or decreases. Covariance Matrix: The covariance is defined as a measure of the joint variability of two random variables. Given that we have calculated the scatter matrix, the computation of the covariance matrix is straightforward: we just have to divide the scatter matrix values by n-1. This can be verified numerically. With both the scatter matrix and the covariance matrix, it is hard to interpret the magnitude of the values, as they are subject to the magnitudes of the variables themselves. Hence only the sign of a value is really helpful. (Feature scaling does help; more on feature scaling here.) To really understand the strength of the relation between the variables, we have to look at the correlation. Correlation Matrix: The correlation matrix gives us information about how two variables interact, both the direction and the magnitude. The most commonly used correlation is the Pearson correlation coefficient. The way we compute the correlation matrix is by dividing the covariance of two variables by the product of the standard deviations of the two variables. It can be verified numerically as well. Hola!! The correlation matrix from numpy is very close to what we computed from the covariance matrix.
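The chain of relationships described above (scatter matrix, covariance, correlation) is easy to verify numerically. The following is a minimal NumPy sketch, an illustration rather than the post's original snippet: it builds two toy variables, computes the scatter matrix from mean-centered columns, checks that dividing it by n-1 reproduces np.cov, and checks that normalizing the covariance by the standard deviations reproduces np.corrcoef. Note that the standard deviations are computed with ddof=1 so they use the same n-1 convention as np.cov.

import numpy as np

# Two toy variables: y decreases as x increases, so the off-diagonals are negative.
x = np.arange(0, 40, dtype=float)
data = np.vstack([x * 3, x * -1])          # shape (2, n)
n = data.shape[1]

# Scatter matrix: sum of outer products of the mean-centered columns.
centered = data - data.mean(axis=1, keepdims=True)
scatter = centered @ centered.T
print('Scatter matrix:\n', scatter)

# Covariance matrix = scatter matrix scaled by 1 / (n - 1).
print('Scatter / (n - 1):\n', scatter / (n - 1))
print('np.cov:\n', np.cov(data))

# Correlation = covariance / (std_i * std_j); ddof=1 matches np.cov's normalization.
stds = data.std(axis=1, ddof=1)
print('Correlation from covariance:\n', np.cov(data) / np.outer(stds, stds))
print('np.corrcoef:\n', np.corrcoef(data))

For these two perfectly (anti-)correlated variables the correlation matrix comes out as [[1, -1], [-1, 1]], while the raw scatter and covariance entries are in the tens of thousands, which is exactly the point made above about magnitudes being hard to interpret before normalization.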
Scatter matrix , Covariance and Correlation Explained
1
scatter-matrix-covariance-and-correlation-explained-14921741ca56
2018-08-16
2018-08-16 14:41:21
https://medium.com/s/story/scatter-matrix-covariance-and-correlation-explained-14921741ca56
false
666
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Raghavan
Software Engineer at Indix , Kravmaga Trainer
c89e4638a116
raghavan99o
6
33
20,181,104
null
null
null
null
null
null
0
null
0
a28351c67031
2018-07-19
2018-07-19 12:53:13
2018-07-19
2018-07-19 15:01:01
1
false
en
2018-07-19
2018-07-19 15:01:01
10
149318f1bab
1.864151
4
0
0
Last week, six members of the Effect.AI team flew out to Hong Kong to attend RISE, Asia’s biggest tech event. We had the privilege of being…
4
Effect.AI update: RISE Conference and Quadrant Protocol Partnership Last week, six members of the Effect.AI team flew out to Hong Kong to attend RISE, Asia’s biggest tech event. We had the privilege of being selected as one of the Growth startups, which includes companies that already have funding and are scaling up their operations. The conference itself was a huge success, and we were able to make some key contacts with new partners and requesters. Quadrant Partnership One of the most compelling meetings was with Quadrant Protocol, a blockchain project that enables the access, creation, and distribution of data products and services. Effect.AI CEO Chris Dawe had been in discussions with the CEO of Quadrant, Mike Davie, over the last few weeks, and their meeting at RISE allowed them to hammer out the finer details of the partnership. Quadrant has an agreement with the Government of Singapore to map the city using satellite images. Quadrant will be using Effect Force to identify and categorize geographic elements from satellite photos of Singapore. Basically, Effect Force workers will be given sections of these satellite photos, and then will have to draw polygons around different elements and define what those are. For instance, they could see an apartment block next to a bike path and a pond. Workers will then trace these elements and assign them to their corresponding GPS coordinates and submit them for review. This represents a really interesting new use case for Effect Force and gives our team the opportunity of designing and building these new features in just a few weeks. It also opens the possibility of future work with Quadrant, and indeed with other governments and local authorities for projects of a similar nature. Effect Force Private Beta recap Finally, here are a few stats about the recent Effect Force Private Beta. We had participants from over 70 countries, and peak participation was in the first three hours after release, with about 18,000 microtasks completed per hour over this period. Average task completion time was around 4 seconds. We aim to release another batch of microtasks very soon, so stay tuned for further announcements. Stay up to date Thank you for your continued support and interest in Effect.AI. We are thrilled to have such an enthusiastic and helpful community backing us up. To stay up to date on the latest Effect.AI developments, subscribe to our YouTube channel, join our Telegram channel, and follow our social feeds. All of them feature regular updates on Effect.AI and Effect Force. The Effect.AI Team Web | Telegram | Twitter | Facebook | Youtube | Github | Reddit | Linkedin | Medium | Steemit
Effect.AI update: RISE Conference and Quadrant Protocol Partnership
47
effect-ai-update-rise-conference-and-quadrant-protocol-partnership-149318f1bab
2018-08-27
2018-08-27 08:28:28
https://medium.com/s/story/effect-ai-update-rise-conference-and-quadrant-protocol-partnership-149318f1bab
false
441
Effect.AI is an Amsterdam based company that is working on a blockchain-powered, decentralized platform for Artificial Intelligence development and AI related services.
null
effectai
null
Effect.AI
effect-ai
BLOCKCHAIN,ARTIFICIAL INTELLIGENCE,DECENTRALIZED,CRYPTOCURRENCY
effectaix
Startup
startup
Startup
331,914
Effect.AI
null
470bf7e8d4d3
effectai
2,323
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-10
2018-04-10 12:11:15
2018-04-10
2018-04-10 12:57:12
2
false
en
2018-04-10
2018-04-10 12:57:12
4
14931aa910a2
3.375786
2
0
0
Remember how that Google neural net learned to tell the difference between dogs and cats? It’s helping catch skin cancer now, thanks to…
3
Deep learning algorithm diagnoses skin cancer as well as seasoned dermatologists Remember how that Google neural net learned to tell the difference between dogs and cats? It's helping catch skin cancer now, thanks to some scientists at Stanford who trained it up and then loosed it on a huge set of high-quality diagnostic images. During recent tests, the algorithm performed just as well as almost two dozen veteran dermatologists in deciding whether a lesion needed further medical attention. This is exactly what I meant when I said that AI will be the next major sea-change in how we practice medicine: humans are extending their intelligence by underwriting it with the processing power of supercomputers. "We made a very powerful machine learning algorithm that learns from data," said Andre Esteva, co-lead author of the paper and a graduate student at Stanford. "Instead of writing into computer code exactly what to look for, you let the algorithm figure it out." The algorithm is called a deep convolutional neural net. It started out in development as Google Brain, using the company's prodigious computing capacity to power the algorithm's decision-making capabilities. When the Stanford collaboration began, the neural net had already been trained on 1.28 million images drawn from about a thousand different categories. But the researchers needed it to know a malignant carcinoma from a benign seborrheic keratosis. Telling a pug from a Persian is one thing. How do you tell one particular kind of irregular skin-colored blotch from another, reliably enough to potentially bet someone's life on? Seriously, the skin-colored blotches are a problem. This is what the algorithm had to work with. Fig. 1b, Esteva, Kuprel et al., 2017 "There's no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own," said grad student Brett Kuprel, co-lead author of the report. And they had a translating task, too, before they ever got to do any real image processing. "We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy — the labels alone were in several languages, including German, Arabic and Latin." Dermatologists often use an instrument called a dermoscope to closely examine a patient's skin. This provides a roughly consistent level of magnification and a pretty uniform perspective in images taken by medical professionals. Many of the images the researchers gathered from the Internet weren't taken in such a controlled setting, so they varied in terms of angle, zoom, and lighting. But in the end, the researchers amassed about 130,000 images of skin lesions representing over 2,000 different diseases. They used that dataset to create a library of images, which they fed to the algorithm as raw pixels, each image labeled with additional data about the disease depicted. Then they asked the algorithm to suss out the patterns: to find the rules that define the appearance of the disease as it spreads through tissue. This is how the AI split up what it saw into different categories. Fig. 1b, Esteva, Kuprel et al., 2017 The researchers tested the algorithm's performance against the diagnoses of 21 dermatologists from the Stanford medical school, on three critical diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification when viewed using dermoscopy. 
In their final tests, the team used only high-quality, biopsy-confirmed images of malignant melanomas and malignant carcinomas. When presented with the same image of a lesion and asked whether they would "proceed with biopsy or treatment, or reassure the patient," the algorithm scored 91% as well as the doctors, in terms of sensitivity (catching all the cancerous lesions) and specificity (not raising false positives). While it's not available as an app just yet, that's definitely on the team's whiteboard. They're intent on getting better healthcare access to the masses. "My main eureka moment was when I realized just how ubiquitous smartphones will be," said Esteva. "Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera. What if we could use it to visually screen for skin cancer? Or other ailments?" Either way, before it's ready to go commercial, the next step is more testing and refinement of the algorithm. It's important to know how the AI makes the decisions it does when classifying images. "Advances in computer-aided classification of benign versus malignant skin lesions could greatly assist dermatologists in improved diagnosis for challenging lesions and provide better management options for patients," said coauthor Susan Swetter, professor of dermatology at Stanford. "However, rigorous prospective validation of the algorithm is necessary before it can be implemented in clinical practice, by practitioners and patients alike."
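The recipe described here, starting from a network already trained on ImageNet's 1.28 million images and fine-tuning it on labeled lesion photos, is a standard transfer-learning setup. The snippet below is only a hedged Keras sketch of that general recipe, not the Stanford team's actual pipeline: the directory layout, class count, image size, and optimizer settings are assumptions made for illustration.

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 2  # hypothetical: benign vs. malignant; the real taxonomy was far more fine-grained

# Start from a network pre-trained on ImageNet, dropping its 1000-class head.
base = InceptionV3(weights='imagenet', include_top=False, pooling='avg',
                   input_shape=(299, 299, 3))

# Attach a new classification head for the lesion classes.
model = models.Sequential([
    base,
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])

# Fine-tune the whole network at a small learning rate.
model.compile(optimizer=optimizers.SGD(0.001, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])

# Hypothetical folder of labeled lesion photos, one subdirectory per class.
train_gen = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True).flow_from_directory(
    'lesions/train', target_size=(299, 299), batch_size=32, class_mode='categorical')

model.fit(train_gen, epochs=10)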
Deep learning algorithm diagnoses skin cancer as well as seasoned dermatologists
3
deep-learning-algorithm-diagnoses-skin-cancer-as-well-as-seasoned-dermatologists-14931aa910a2
2018-04-10
2018-04-10 19:37:03
https://medium.com/s/story/deep-learning-algorithm-diagnoses-skin-cancer-as-well-as-seasoned-dermatologists-14931aa910a2
false
793
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
lakshya
open sources (tech geek)
fe2b4cd6d2d6
xtaraim
48
201
20,181,104
null
null
null
null
null
null
0
null
0
53aa62090153
2018-02-07
2018-02-07 17:16:19
2018-02-07
2018-02-07 17:29:06
2
true
en
2018-06-12
2018-06-12 16:44:36
3
14939338cc4e
3.496541
8
2
0
By Susanne Burri and Michael Robillard
4
Why banning autonomous killer robots wouldn’t solve anything Photo: Devrimb/Getty Images By Susanne Burri and Michael Robillard Autonomous weapons — killer robots that can attack without a human operator — are dangerous tools. There is no doubt about this fact. As tech entrepreneurs such as Elon Musk, Mustafa Suleyman and other signatories to a recent open letter to the United Nations have put it, autonomous weapons ‘can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons [that can be] hacked to behave in undesirable ways’. But this does not mean that the UN should implement a preventive ban on the further development of these weapons, as the signatories of the open letter seem to urge. For one thing, it sometimes takes dangerous tools to achieve worthy ends. Think of the Rwandan genocide, where the world simply stood by and did nothing. Had autonomous weapons been available in 1994, maybe we would not have looked away. It seems plausible that if the costs of humanitarian interventions were purely monetary, then it would be easier to gain widespread support for such interventions. For another thing, it is naive to assume that we can enjoy the benefits of the recent advances in artificial intelligence (AI) without being exposed to at least some downsides as well. Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose — quite optimistically, already — that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian. To put the point more generally, AI technology is tremendously useful, and it already permeates our lives in ways we don’t always notice, and aren’t always able to comprehend fully. Given its pervasive presence, it is shortsighted to think that the technology’s abuse can be prevented if only the further development of autonomous weapons is halted. In fact, it might well take the sophisticated and discriminate autonomous-weapons systems that armies around the world are currently in the process of developing if we are to effectively counter the much cruder autonomous weapons that are quite easily constructed through the reprogramming of seemingly benign AI technology such as the self-driving car. Furthermore, the notion of a simple ban at the international level, among state actors, tacitly betrays a view of autonomous weapons that is overly simplistic. It is a conception that fails to acknowledge the long causal backstory of institutional arrangements and individual actors who, through thousands of little acts of commission and omission, have brought about, and continue to bring about, the rise of such technologies. As long as the debate about autonomous weapons is framed primarily in terms of UN-level policies, the average citizen, soldier or programmer must be forgiven for assuming that he or she is absolved of all moral responsibility for the wrongful harm that autonomous weapons risk causing. But this assumption is false, and it might prove disastrous. 
All individuals who in some way or other deal with autonomous-weapons technology have to exercise due diligence, and each and every one of us needs to examine carefully how his or her actions and inactions are contributing to the potential dangers of this technology. This is by no means to say that state and intergovernmental agencies do not have an important role to play as well. Rather, it is to emphasise that if the potential dangers of autonomous weapons are to be mitigated, then an ethic of personal responsibility must be promoted, and it must reach all the way down to the level of the individual decision-maker. For a start, it is of the utmost importance that we begin telling a richer and more complex story about the rise of autonomous weapons — a story that includes the causal contributions of decision-makers at all levels. Finally, it is sometimes insinuated that autonomous weapons are dangerous not because they are dangerous tools but because they could become autonomous agents with ends and interests of their own. This worry is either misguided, or else it is a worry that a preventive ban on the further development of autonomous weapons could do nothing to alleviate. If superintelligence is a threat to humanity, we urgently need to find ways to deal effectively with this threat, and we need to do so quite independently of whether autonomous-weapons technology is developed further. Originally published December 19, 2017 Susanne Burri is an assistant professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics and Political Science. Michael Robillard is a research fellow at the Oxford Uehiro Centre for Practical Ethics.
Why banning autonomous killer robots wouldn’t solve anything
45
why-banning-autonomous-killer-robots-wouldnt-solve-anything-14939338cc4e
2018-08-25
2018-08-25 01:41:48
https://medium.com/s/story/why-banning-autonomous-killer-robots-wouldnt-solve-anything-14939338cc4e
false
825
Longform explorations of deep issues written by serious and creative thinkers.
null
null
null
Aeon Magazine
null
aeon-magazine
FUTURE,CULTURE,SCIENCE
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Aeon Magazine
Aeon asks the big questions and finds the freshest, most original answers, provided by leading thinkers on science, philosophy, society and the arts.
d28ca05800b9
aeonmag
32,422
1,067
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2018-08-13
2018-08-13 14:48:14
2018-08-13
2018-08-13 14:52:26
2
false
en
2018-08-13
2018-08-13 14:53:34
4
149422627942
5.002201
3
0
0
Robert S. Warren, MD is a Professor of Surgery and a specialist in gastrointestinal and liver cancer. Dr. Warren joined UCSF Medical Center…
5
MORE Health Co-Founder & UCSF Physician Robert Warren On AI in Health Robert S. Warren, MD is a Professor of Surgery and a specialist in gastrointestinal and liver cancer. Dr. Warren joined UCSF Medical Center in 1988. Highly respected by his peers, Dr. Warren was named to the list of U.S. News “America’s Top Doctors,” a distinction reserved for the top 1% of physicians in the nation for a given specialty. His research focuses on the biology of colorectal cancer and how it spreads. Synced sat down with Dr. Warren to discuss his experiences, MORE Health, and the role of AI in health. Please tell us about your role at MORE Health I became involved out of my developing interest in global health. As I progressed in my career, I started thinking about ways that I can contribute in a much broader way. So rather than seeing twenty-five patients in the day, I put my thinking cap on to figure out how I can treat 2,500 patients — and in a better way. From a doctor’s perspective, what do you think of AI’s role in health care? In the United States, AI is sort of a silent process according to most physicians — unless they’re part of the organizations that have a considered effort to do AI. A good example is the Memorial Sloan Kettering Cancer Center which has been seeing patients with cancer for decades and collecting data. They’re starting to analyze it in a machine learning environment. It really comes down to medicine to say: Can we make a diagnosis better? Can we treat a patient in a better way or more effective way? Those are still the same questions we ask ourselves every day when we see a patient in the office. And so those are really the same questions we have to ask when it comes to AI. What do you think of the recent media stories about AI replacing doctors? I don’t think that will happen at all. I think the kind of work doctors do may change. For example, we might need fewer radiologists, as they could develop novel approaches for new imaging methods rather than spending their time in a darkroom reading X rays. Physicians’ roles are already changing, even without AI. More precise methods of describing a tumor is just another way of characterizing molecular features. But we still need the oncologists to really put it in context. Sometimes you need to lay a hand on a patient, talk to them, look at their eyes, talk to the family — to really come up with the right decision. I really don’t see AI replacing physicians. I see it as another tool in our little doctor bag that will make us better. What are specific issues MORE Health is addressing? The average amount of time a patient spends with a doctor in Beijing at a new patient visit is five minutes. So how can we improve the quality of life from a medical standpoint for as many people as possible. That’s our goal. In China, a patient might be seen at a community hospital and have some tests, medical imaging and possibly a biopsy done. Then the patient goes to the provincial hospital because they’re not confident in their doctor, and everything is repeated. It’s been a challenge to get all of the records going back to the original diagnosis. That’s where our boots on the ground in China has made a huge difference. Our folks in China have made a tremendous effort and gathered up those pieces of information, translated and uploaded them to the platform. What information do you look at before diagnosis? 
What I learned in medical school in terms of diagnosis, that 90 percent of the diagnosis comes from looking at the patient, talking with the patient and examining patients. The other ten percent are the blood tests, X rays, and everything else that goes into the big expense of starting to see a doctor. The challenge is how do you improve that 80 or 90 percent from the patient — doctor interaction when it’s remote? The platform’s telling us and we’re getting better at that. What is your focus at the UCSF? I’m involved in products that use big data to try to understand biology. I’m a specialist in cancer, that’s what I study, and I have a laboratory and a clinic. But what’s nice about being at an academic institution like UCSF is we have access to databases from around the world that correlate characteristics of tumors, mutations, gene expression, protein expression, and clinical information with a clinical outcome. Most AI applied in medicine is still used for predictive analytics, for mining, for big data, for medical imaging, pathology, for the evaluation. Ultimately there will be other people around the country who have developed algorithms for treatment suggestions. How can AI be implemented in patient care? I can speak to the issue of cancer. I think both the diagnosis and the treatment of cancer will be dramatically helped by AI. The term that’s used in cancer is “precision oncology.” And it’s a misnomer, because right now it’s not very precise. We identify a mutation. We have a drug we think will target that mutation. We can say, yeah, that’s what you should do. But I can tell you at least half the time that drug doesn’t work, even though it should. We keep requiring more and more data to understand the basis for sensitivity or resistance to a therapy. It’s an evolving field, so how do you keep up with all that information? I think machine learning will help synthesize that evolving landscape of diagnosis and treatment in an individual patient. The Atom project is a collaboration with UCSF, GSK, Frederick National Laboratory for Cancer Research, Lawrence Livermore National Laboratory, which has the largest and fastest computer in the United States right now. And so that’s where we’re focused right now. I’m taking available data just from a single pharmaceutical company and applying AI principles to try to understand the biology of cancer better. But there’s no reason that every large pharmaceutical company couldn’t contribute their data. Obviously, some proprietary information would have to be withheld. The idea is that if you combine all of the massive amounts of data with the massive computing power at Lawrence Livermore, it will somehow advance the field. We will find out exactly how, it’s happening as we speak, and ultimately will feedback to individual doctors everywhere. What are your thoughts on AI interpretability or “black box” issues? It’s easy for patients to say: ‘I’m going to this doctor, he’s been practicing for thirty years, knows what he’s doing, I trust him.’ Trust is the key. So how do you trust a computer? I think what you have to say is ‘the doctor is being advised by the computer, not being told what to do’. If the doctors are convinced over time that AI is a helpful tool, then that’s what it will be. Just like a stethoscope, it’s a tool. Read Synced’s coverage of MORE Health and Scaling Telemedicine here. Journalist: Tony Peng |Editor: Michael Sarazen Follow us on Twitter @Synced_Global for more AI updates! 
Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis! Click here !
MORE Health Co-Founder & UCSF Physician Robert Warren On AI in Health
74
more-health-co-founder-ucsf-physician-robert-warren-on-ai-in-health-149422627942
2018-08-15
2018-08-15 20:04:19
https://medium.com/s/story/more-health-co-founder-ucsf-physician-robert-warren-on-ai-in-health-149422627942
false
1,224
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null
0
null
0
ca834513b05
2018-08-10
2018-08-10 09:20:30
2018-08-10
2018-08-10 12:02:33
5
false
en
2018-08-10
2018-08-10 14:52:41
3
14979ce7cc84
3.199371
11
0
0
AlexNet is a Convolutional Neural Network that rose to prominence when it won the Imagenet Large Scale Visual Recognition Challenge…
4
AlexNet: A Brief Review AlexNet is a Convolutional Neural Network that rose to prominence when it won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual challenge that evaluates algorithms for object detection and image classification at large scale (think of it as the World Cup for image classification algorithms). The ILSVRC evaluates the success of image classification solutions by using two important metrics: the top-5 and top-1 errors. Both are computed over a set of N test images, each mapped to a target class. The top-1 error is the percentage of the time the classifier did not give the correct class the highest score, while the top-5 error is the percentage of the time the classifier did not include the correct class among its top 5 guesses. AlexNet achieved a top-5 error of around 16%, which was an extremely good result back in 2012. To put that in context, the next best result trailed far behind (26.2%). When the dust settled, deep learning became cool again, and in the next few years multiple teams would build CNN architectures that beat human-level accuracy. The architecture used in the 2012 paper is popularly called AlexNet after the first author, Alex Krizhevsky. In this blog post, we will have a look at the details of the AlexNet architecture and try to re-implement it in Keras. Let's dive in! Data The model was trained on ImageNet data, which contains about 1.2 million images across 1,000 categories. Image pre-processing Since ImageNet images have variable resolution, and the model presented in this paper requires fixed-size images, they scaled every image to 256x256 pixels. The scaling works like so: Scale a possibly rectangular image so that the shorter side is 256 pixels. Take the middle 256x256 patch as the input image. Data Augmentation This architecture used two forms of data augmentation. The first is translation and reflection: generating image translations and horizontal reflections. This scheme helped increase the size of the data by a factor of 2048, without which the network would have suffered from substantial overfitting. The other way they augmented the dataset involved perturbing the R, G, B values of each input image by a scaled version of the principal components (in RGB space) across the whole training set. This scheme helped reduce the top-1 error rate by over 1%. Architecture AlexNet is made up of eight trainable layers: five convolution layers and three fully connected layers. All of the trainable layers are followed by a ReLU activation function except for the last fully connected layer, where a softmax function is used. The architecture also contains non-trainable layers: three pooling layers, two normalization layers and one dropout layer (used to reduce overfitting). AlexNet architecture Choice of Non-Linearity The authors chose to use the Rectified Linear Unit (ReLU) function. They saw that deep convolutional neural nets with ReLUs trained several times faster than their equivalents with tanh units. When pitted against the tanh activation with no other changes, they were able to train their model to a 25% error rate on the training set 6x faster with the ReLU activation. Training Stochastic gradient descent with a learning rate of 0.01, momentum of 0.9, and weight decay of 0.0005 is used. The training is done on two GPUs (GTX 580) for parallelism, and the setup is quite interesting. The GPUs used each have 3GB of memory. 
The network is split into halves, as can be seen in the model description figure, across the two GPUs. Training details Implementation AlexNet implementation in Keras Conclusion. This work was the first of its kind to have trained deep convolutional networks on GPUs to achieve impressive results on the ImageNet dataset for image classification. I hope you got to learn something. Later.
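The post points to a Keras implementation without reproducing the code, so here is a minimal single-stream sketch of the architecture described above, written with tf.keras. The layer sizes follow the 2012 paper, but the two-GPU split and the local response normalization layers are omitted, and treating the weight decay as L2 kernel regularization is an assumption of this sketch rather than something stated in the post.

```python
# A minimal, single-GPU sketch of AlexNet in tf.keras. Layer sizes follow the
# 2012 paper; the two-GPU split and local response normalization are omitted,
# and weight decay is approximated with L2 kernel regularization.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

l2 = regularizers.l2(0.0005)  # weight decay value from the paper

def build_alexnet(num_classes=1000):
    return models.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu",
                      kernel_regularizer=l2, input_shape=(227, 227, 3)),
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu", kernel_regularizer=l2),
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu", kernel_regularizer=l2),
        layers.Conv2D(384, 3, padding="same", activation="relu", kernel_regularizer=l2),
        layers.Conv2D(256, 3, padding="same", activation="relu", kernel_regularizer=l2),
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu", kernel_regularizer=l2),
        layers.Dropout(0.5),
        layers.Dense(4096, activation="relu", kernel_regularizer=l2),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_alexnet()
# Optimizer settings from the paper: SGD, learning rate 0.01, momentum 0.9.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```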
AlexNet: A Brief Review
98
alexnet-a-brief-review-14979ce7cc84
2018-08-10
2018-08-10 14:52:41
https://medium.com/s/story/alexnet-a-brief-review-14979ce7cc84
false
627
The AI & Data Science research group at Makerere University specialises in the application of artificial intelligence and data science - including, for example, methods from machine learning, computer vision and predictive analytics - to problems in the developing world.
null
null
null
AI Research Lab Kampala
null
ai-research-lab-kampala
DEEP LEARNING,MACHINE LEARNING,DATA SCIENCE,SOFTWARE DEVELOPMENT,AFRICA
null
Deep Learning
deep-learning
Deep Learning
12,189
Benjamin Akera
null
7cf731d10f96
akera
15
34
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-26
2018-04-26 20:34:50
2018-04-27
2018-04-27 00:39:49
3
false
pt
2018-04-27
2018-04-27 00:39:49
0
149a177b4ac2
2.617925
1
0
0
It is no news to anyone that Artificial Intelligence is one of the biggest waves of innovation and keeps evolving and conquering more and more…
5
Artificial Intelligence : The Future is Comming… It is no news to anyone that Artificial Intelligence is one of the biggest waves of innovation and keeps evolving and conquering more and more space in our day-to-day lives. Through all the technological evolution that took us to personal computers, smartphones, and clouds (cloud computing), AI has become a technology that, in a fast, intuitive, and intelligent way, takes you where you want to go. It is common to find people who think AI is something recent; many actually live with it daily without even realizing it. The concept of Artificial Intelligence, however, began to emerge back in 1956, when John McCarthy coined a term to describe a reality in which machines could solve the problems of human beings. The term alone, though, was not enough to take off the way it has today, because at that time three things were still missing for that evolution: 1st: Good data models that could process, classify, and analyze data intelligently; 2nd: Access to large amounts of raw data, so that new models could keep being improved and fed; 3rd: High-powered computing at an accessible cost, so that processing could be faster and more efficient. Today, with Big Data, cloud computing, and machines of far higher quality than before, AI is truly possible. Approaches applied in computing such as Machine Learning, Deep Learning, Natural Language Processing, and others are, together, taking us to a future in which systems and platforms are self-sufficient and intelligent enough to learn from our interactions and data. Beyond that, they will be able to learn on their own and to "relate" to other machines, sharing data and even interacting with one another. Many people do not support the advance of Artificial Intelligence, because they believe they will lose their roles in society to machines… On the other hand, other people believe there can be a great partnership between humans and machines. The truth is that there is no way to know whether we will be "replaced" by machines, but it is true that they are occupying more and more space in society. An example of this is one of the most talked-about subjects, especially in the United States. The robot Sophia was the first robot to be granted citizenship; she holds Saudi Arabian nationality and was developed by Hanson Robotics. Sophia has an artificial intelligence system capable of expressing emotions like a human being and has lately had several interactions with the public. With a face inspired by the actress Audrey Hepburn, she is able to learn and perform some activities simulating human action. According to the manufacturer: "The goal is to create machines smarter than humans that can learn creativity, empathy, and compassion, three distinctive human characteristics that must be integrated into artificial intelligence so that robots can solve problems too complex for humans to solve." The future is ever closer and the evolution of AI keeps accelerating. Robots, AI, the Internet of Things, and many other technological advances will be ever more embedded in society, ever more present in our lives. And you, will you adapt to the new technologies? Do you think that is a positive thing?
Artificial Intelligence : The Future is Comming…
1
artificial-intelligence-the-future-is-comming-149a177b4ac2
2018-05-10
2018-05-10 19:20:26
https://medium.com/s/story/artificial-intelligence-the-future-is-comming-149a177b4ac2
false
548
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Jennifer Diehl
null
576ae695c70e
jenniferdiehl
15
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-12
2018-08-12 00:53:51
2018-08-12
2018-08-12 03:58:43
5
false
en
2018-08-12
2018-08-12 04:06:29
1
149a6504246e
3.871069
0
0
0
Deep learning for object recognition/image processing is one of the greatest technological marvels in recent years — from “super-human…
3
Deeper understanding of visual cognition via adversarial images Deep learning for object recognition/image processing is one of the greatest technological marvels in recent years — from “super-human accuracy” in categorizing objects, day-dreaming robot artists, to sinister large-scale surveillance. It flops though, whenever it sees an adversarial image: one that is perturbed a tiny bit such that it stays the same to human eyes but induces an algorithm to give the wrong answer. Illustration from Szegedy et al. (2013): Images in the left column have a small amount of noise (middle column) added to them to create those in the right column, which make a model switch its prediction. To a technologist, adversarial examples represent a threat: autonomous cars might get into an accident, robots might make a mess when presented with one. Researchers have demonstrated in more than one way that this is possible not only in the computer but also in the physical world. To a cognitive scientist, however, adversarial examples are an opportunity to better understand visual cognition. Why do we perceive the images in the left and the right columns as the same? One possible answer is that they are reduced to the same representation. Visual simplification It is no secret that humans simplify things. Picasso knew this already in 1945 when he drew the famous series of abstractions of a bull, before the birth of modern cognitive sciences. Stick figures must have existed thousands of years earlier. Picasso’s bull drawings. From our experience, it is natural to associate simplification with lines but, upon close inspection, it is unlikely that our visual system represents objects by them. Objects always come with a surface, not as wire-frames. The first times a child holds a pencil, incongruent lines most likely come out, which would be very surprising if lines were the latent representation in the child’s mind. Lines are used not because they are natural but because they are easiest to draw without making the next strokes harder. It is much more likely that the human mind simplifies things by scaling them down. Take the drawings above and scale them down to 100×72 pixels, and we can see why those very different drawings can all be taken to represent the same bull: they look very similar at a small scale. Being able to recognize small things also gives you an evolutionary advantage, because when you see a lion large and clear, it might already be too late. Picasso’s bulls are visually similar because they look similar at a small scale. Some lines are thickened to make sure that they are visible. Inductive bias All machine learning models are based on some assumptions: linear classifiers cut the world into beehive-like regions, support vector machines avoid conflicts by keeping opposing sides as far apart as possible. This inductive bias, not the amount of data and computation applied, is at the heart of an algorithm and determines what problems it can crack. The recent revolution in image recognition was unleashed by one such assumption called translational invariance: the same templates are slid around an image and at each place report whether they find an edge, a circle, or a face. Translational invariance is not alone, though: evolution has had millions of years to devise many others, such as invariance to color, viewpoint, illumination, and size. I believe that scale (size) invariance also enables another simple yet powerful trick: shape (what stays when you scale an image down) is more important than texture (what disappears). 
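To make the scaling-down argument concrete, here is a tiny sketch of the reduction described above. The 100×72 target size comes from the text; the use of Pillow and the file names are illustrative assumptions rather than anything from the original post.

```python
# A tiny sketch of the "simplify by scaling down" idea: shrink two very
# different bull drawings to 100x72 pixels and compare them. Pillow and the
# file names are illustrative assumptions, not part of the original post.
import numpy as np
from PIL import Image

def thumbnail_array(path, size=(100, 72)):
    """Load an image, convert to grayscale, and scale it down to `size`."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

a = thumbnail_array("bull_realistic.png")
b = thumbnail_array("bull_abstract.png")

# At this scale, very different drawings end up with very similar pixel patterns.
print("mean absolute difference:", np.abs(a - b).mean())
```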
Evidence for this preference, if necessary at all, is everywhere: babies are born with blurry vision, so all of us learn to recognize shapes first. In language, “bigger pictures” sound more important than “small details”. In twilight conditions, we see only with our rod cells, which can’t tell colors apart and can’t see red light at all. Newborn babies only see shapes. Image credit: Dr Romesh Angunawela, original article: dailymail.co.uk. A counterattack Equipped with this observation, I suspect a simple yet effective defense against adversarial attacks is scale invariance and a bias towards shapes. Even if this doesn’t solve all adversarial cases, it is hugely interesting to see if we could replicate some more aspects of the human vision system. One way to test this idea is to create a classifier that does the following: An incoming image is scaled down to a very small size, such as 20x20. The original image and the small image are classified separately by two neural networks, resulting in two sets of probabilities. The final probability assignment is calculated by p = α×p_small + (1−α)×p_original, where 0.5 < α < 1. It is interesting to note that in the original paper, Szegedy et al. used a blown-up version of MNIST digits (which are originally bi-level 20x20 images). As can be seen in subfigure (c) below, the enlarged image size and value range give them much more room to add noise to the simple digits. If they had stuck to the original format, their paper and the scientific discussion that followed would have been very different, which highlights the importance of, well, small details. Experiment with MNIST in Szegedy et al. (2013) References Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. “Intriguing properties of neural networks.” arXiv preprint arXiv:1312.6199 (2013).
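Below is a minimal sketch of the two-branch classifier proposed above, written in Keras. The 20x20 downscale target and the blending formula come from the text; the 32x32 input size, the branch layer sizes, and the fixed α = 0.7 are illustrative assumptions, not the author's implementation.

```python
# A minimal sketch of the two-branch idea: classify both the original image
# and a heavily downscaled copy, then blend the probabilities with
# p = alpha * p_small + (1 - alpha) * p_original. Layer sizes, alpha = 0.7,
# and the 32x32 input are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def small_cnn(input_shape, name, num_classes=10):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out, name=name)

alpha = 0.7  # weight on the small-image branch, 0.5 < alpha < 1

original = layers.Input(shape=(32, 32, 3))
small = layers.Resizing(20, 20)(original)              # scale the image down
p_original = small_cnn((32, 32, 3), "original_branch")(original)
p_small = small_cnn((20, 20, 3), "small_branch")(small)

# Blend the two probability vectors as described in the post.
p = layers.Lambda(lambda t: alpha * t[0] + (1 - alpha) * t[1])([p_small, p_original])

model = tf.keras.Model(original, p)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```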
Deeper understanding of visual cognition via adversarial images
0
deeper-understanding-of-visual-cognition-via-adversarial-images-149a6504246e
2018-08-12
2018-08-12 04:06:29
https://medium.com/s/story/deeper-understanding-of-visual-cognition-via-adversarial-images-149a6504246e
false
805
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Minh Lê
null
5822ddde62f1
minhle.r7
1
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-23
2018-03-23 18:54:10
2018-03-26
2018-03-26 14:49:15
2
false
en
2018-03-26
2018-03-26 14:49:15
5
149aff10275c
1.824843
6
0
0
A prominent feature of the integrated home is, and always will be, the ability to simultaneously set the scene. No matter the time of day…
5
Revolutionizing Smart Home Scene Control A prominent feature of the integrated home is, and always will be, the ability to simultaneously set the scene. No matter the time of day or occasion, your environment can be programmed to suit your exact specifications. At Josh.ai, we believe that everyone should have the ability to simply and intuitively create their best experience. However, control system programming has previously been quite labor intensive, involving either onsite work by an integrator or tedious and confusing user interfaces that leave most homeowners unsure of the buttons they are actually pressing. This complexity often becomes a source of frustration not only for the homeowner but also for the integrator, who is the one who has to provide technical support that is a drag on time, energy, and resources. Leveraging the power of voice to make it easy We’ve always believed in the importance and effectiveness of using scenes in home control, and continuously look for ways to simplify the process. As the voice interface continues to permeate mass-market technology, we are able to apply it in ways that were previously not available. In general, we believe that this trend will only grow, with the intuitiveness of a voice interface simplifying traditionally challenging tasks for the end-user. With an innovative and user-friendly experience in mind, we are excited to announce that you can now create scenes using voice commands — simply tell Josh what you would like the scene to accomplish, and save it. It really is that easy! For example, you can create a scene by telling Josh to: “Dim the lights on the first floor 50%, draw all the shades, watch surfing videos, and listen to Kygo in the living room.” Josh’s advanced AI will examine your statement (even with multiple device commands), figure out what you are trying to do, and automatically create the configurations required for the scene. As with any scene in Josh, you can trigger it by adding as many unique aliases or alternate phrases as you want, such as “Party time,” or “Get the house ready to entertain.” Josh.ai is an artificial intelligence agent for your home. If you’re interested in learning more, visit us at https://josh.ai. Like us on Facebook, follow us on Twitter.
Revolutionizing Smart Home Scene Control
14
revolutionizing-smart-home-scene-control-149aff10275c
2018-06-17
2018-06-17 16:59:54
https://medium.com/s/story/revolutionizing-smart-home-scene-control-149aff10275c
false
382
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Josh
null
66b5ae01967f
joshdotai
43,719
1,883
20,181,104
null
null
null
null
null
null
0
null
0
cdd8dc4c5fc
2018-07-11
2018-07-11 15:50:41
2018-07-11
2018-07-11 21:20:51
2
false
en
2018-07-12
2018-07-12 14:42:17
68
149b68ec51a8
6.907862
21
0
0
3 International Regulatory Discussions Lawyers Should Be Aware Of
4
Autonomous Vehicles: 3 International Regulatory Discussions Lawyers Should Be Aware of “Autonomous Vehicles: 3 International Regulatory Discussions To Be Aware Of” ©2018. Published in The SciTech Lawyer, Vol. 14, №4, Summer 2018, by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder. Countries are developing new regulations and adapting their national laws to automated vehicles (AV), but many of these countries will also need to adapt certain international regulations if fully automated vehicles are going to become legal. Many countries are working on their national regulations to adapt them to AV. For instance, in the US, both, the US House of Representatives and the Senate’s Commerce, Science, and Transportation Committee have introduced two self-driving cars bills: The House’s SELF DRIVE Act,[1] and the Senate Committee’s AV START Act;[2] aiming to avoid state patchwork legislation. Illustration: Kussik Governments of the EU signed the Declaration of Amsterdam to work for a coherent AV regulatory framework by 2019, if possible.[3] The European Parliament (EP) also asked to prioritize AV regulation, at European and global level, to avoid fragmented regulatory approaches.[4] Some EU member states, such as France, Germany, Spain, or the UK already adapted their national regulations to allow AV testing on public roads. Singapore amended its Road Traffic Act in 2017 to allow AV testing on public roads too; and Japan is also working to have legislation ready by 2019, before the 2020 Olympics. Also, Brazil, China, and Russia are starting to think of adapting regulation for AV.[5] But many of these countries will also need to adapt international standards or treaties they are part of to make these vehicles legal; and lawyers need to be aware of these discussions to understand the AV regulatory maze. ROAD SAFETY The 1949 UN Geneva Convention on Road Traffic[6] and the 1968 UN Vienna Convention on Road Traffic[7] are a remarkable example because of their impact in national laws and the amount of countries they involve. The Vienna Convention (73 contracting parties) replaces the Geneva convention (96 contracting parties); however, those countries who signed Geneva, but did not ratified Vienna, are still bound by the former one. These conventions establish uniform traffic rules and were created to facilitate international road traffic and safety. However, if these conventions are not updated to technology, they may be hampering the deployment of full automated vehicles, which are expected to reduce drastically accidents due to human errors; errors that cause 90% of road accidents.[8] Since more than a decade, countries have been working to keep UN conventions and regulations on road traffic updated to technological safety improvements, but fully automated vehicles may require more profound changes in the Treaties. Two United Nations Economic Commission for Europe (UNECE) intergovernmental bodies have been working to adapt the Conventions and related UN technical regulations: WP.1 (Global Forum for Road Traffic Safety) and WP.29 (World Forum for the Harmonization of Vehicle Regulations). 
They have been, for example, generating guidelines for the design of advanced emergency braking systems or lane departure warning systems,[9] and updating the Conventions consequently. However, fully automated vehicles may require deeper changes in the Treaties to become legal, because both conventions rely on humans to control the vehicle whereas, in fully automated vehicles, humans surrender the control to the vehicle itself.[10] This shift has profound implications for the conventions, requiring further clarification on key concepts on the texts, such as the nature and role of the driver, and the nature of vehicle control.[11] For example, regarding driver requirements, Article 8 of both conventions specify that every vehicle shall have a driver, who should be able to control their vehicle at all times. Article 8 of the Vienna Convention adds additional requirements for the driver, such as possessing the necessary physical and mental abilities and conditions to drive; and minimizing his secondary activities, such as hand-held phones in vehicles (paragraph added in 2006). The fact that the Vienna Convention is, in some terms, more restrictive than Geneva could generate differences on the interpretation of the treaties and affect how fast countries adopt regulations. For example, it could happen that countries part of Vienna could find more difficult to issue full automated vehicle regulations than those who are only parties of Geneva (US, Japan, or Spain and UK from the EU, for example) or those who are not parties of any of them, like China. In 2016, WP.1 amended[12] the Vienna Convention, allowing the use of certain automated functions in vehicles, but still it is not enough for fully automated vehicles. This amendment still requires that every vehicle must have a driver, who may take off the hands from the wheel but who must be ready at all times to take back the control of the vehicle and override the system and switch it on and off; a requirement mostly incompatible with high or full automation.[13] UNECE is now working on aligning these Conventions and related UN regulations to the fully automated vehicles. The ongoing discussions are mainly considering three possibilities: developing guidance on interpreting problematic terms such as the role and nature of the driver, and nature of control; amending the conventions; or creating a new convention for AV. Some countries are advocating for a mixed approach: release guidelines for interpretation in the short term, and work on an amendment for fully automated vehicles for the long term, as protocols or amendments are very time-consuming processes.[14] CYBER-SECURITY AND DATA PROTECTION The international regulatory agenda is starting to include new aspects of road safety related to AV: Cyber-security and data protection. For example, the Ministers of Transport of the G7 are shifting their attention to these issues. In their Declaration on Automated and Connected Driving[15] in 2015, Ministers called for cooperation on ensuring data protection and cyber-security, in addition to the coordination and adaptation of the regulatory framework and technical regulations that UNECE is already undertaking. This vision was reinforced in their meeting in Japan, in 2016,[16] and again at Toronto, in 2017. 
[17] UNECE also realized that digitalization of transport is demanding new safety requirements to vehicles and infrastructure, protection for rights and liberties.[18] In November 2017, UNECE identified 86 threats to security, and it is now discussing how to prevent or mitigate them.[19] In December 2017, it created the Task Force for Cyber-Security and Over-The-Air issues (TF-CS/OTA),[20] which is working on two recommendations (cyber-security and data protection, and software updates) that are expected by March 2018.[21] In March 2017, UNECE already released guidelines on cyber-security and data protection for construction of AV[22] to help in the interim until more research is on place (art. 1.5 of the guidelines). These guidelines define the three elements of cyber-security as confidentiality, integrity, and authenticity of the information; and establishes the principles of data protection by default and data protection by design. Future UN regulations will have to follow these guidelines. Illustration: Kussik EU efforts also resonate with UNECE’s, reinforcing the cyber-security and privacy regulatory frameworks. In cyber-security, the EU is working on the so called “Cyber-security package”[23] to complement the Directive on Security of Network and Information Systems (NIS) of 2016. In the Cyber-Security Package, the EU will establish an European certification framework for ICT security products, as a one-stop-shop for cyber-security certification, which will rely mostly on international standards, such as the UNECE guidelines on cyber-security. With regards to privacy, the EU is working on the new ePrivacy Regulation[24] that will complement the General Data Protection Regulation (GDPR). The ePrivacy Regulation will replace the ePrivacy Directive and will add stronger protection to content and metadata of communications, which can occur among automated vehicles or vehicles and infrastructure. CONCLUSION Road safety, cyber-security, and data protection are currently at the top of the international regulatory agendas. These discussions are key for lawyers to make sense of the whole AV regulatory picture, as they may affect AV national regulations and their international commerce. In their regulatory efforts, governments are trying to find the right balance between acting faster at national level and acting harmonically at international level, so that they can bring AV benefits earlier to society. [1] SELF DRIVE Act (2017); https://www.congress.gov/bill/115th-congress/house-bill/3388/text. [2] AV START Act (2017); https://www.congress.gov/bill/115th-congress/senate-bill/1885. [3] Declaration of Amsterdam (2016); https://www.regjeringen.no/contentassets/ba7ab6e2a0e14e39baa77f5b76f59d14/2016-04-08-declaration-of-amsterdam---final1400661.pdf. [4] EP on Civil Law Rules on Robotics (2016); http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN. [5] KPMG; AV Readiness Index (2018); https://assets.kpmg.com/content/dam/kpmg/xx/pdf/2018/01/avri.pdf [6] Geneva Convention on Road Traffic (1949) https://treaties.un.org/doc/Publication/UNTS/Volume%20125/v125.pdf (22–101) [7] Vienna Convention on Road Traffic (1968) https://www.unece.org/fileadmin/DAM/trans/conventn/crt1968e.pdf. 
[8] EP Briefing in AV (2016); http://www.europarl.europa.eu/RegData/etudes/STUD/2016/573434/IPOL_STU(2016)573434_EN.pdf (87) [9] UNECE Design principles for control systems (2012); https://www.unece.org/fileadmin/DAM/trans/doc/2010/wp29/WP29-157-06e.pdf [10] EP Briefing in AV (2016); http://www.europarl.europa.eu/RegData/etudes/STUD/2016/573434/IPOL_STU(2016)573434_EN.pdf (55) [11] WP.1 Draft of Common Understanding on the use of automated driving functions (2017) https://www.unece.org/fileadmin/DAM/trans/doc/2017/wp1/ECE-TRANS-WP1-2017-Informal-2e.pdf [12] The amendment entered into force in March 2016, allowing automated functions to assist drivers if these functions followed international technical agreements or they could “be overridden or switched off by the driver.” (See Vienna convention article 8.5bis https://www.unece.org/fileadmin/DAM/trans/conventn/Conv_road_traffic_EN.pdf)[12] [13] The amendment may allow automated systems in levels 3 and even 4 of automation (terminology of the Society of Automotive Engineers’ International Standard J3016–2014), but not in level 5. See http://www.europarl.europa.eu/RegData/etudes/STUD/2016/573434/IPOL_STU(2016)573434_EN.pdf (55) [14] WP.1 Draft of Common Understanding on the use of automated driving functions (2017); https://www.unece.org/fileadmin/DAM/trans/doc/2017/wp1/ECE-TRANS-WP1-2017-Informal-2e.pdf [15] G7 Declaration on Automated and Connected Driving (2015); https://ec.europa.eu/commission/commissioners/2014-2019/bulc/announcements/g7-declaration-automated-and-connected-driving_en. [16] G7 Declaration (2016); http://www.mlit.go.jp/common/001146631.pdf [17] G7 Declaration (2017); http://www.g8.utoronto.ca/transport/170623-G7-Transport-Declaration.html. [18] UNECE Guidelines on Cyber-security https://www.unece.org/fileadmin/DAM/trans/doc/2017/wp29/ECE-TRANS-WP29-2017-046e.pdf [19] UNECE’s 12th AV informal group meeting (2017): https://wiki.unece.org/pages/viewpage.action?pageId=54427891&preview=/54427891/54428639/(ITS_AD-13-02)%20Major%20results%20and%20action%20items%20of%20the%2012th%20meeting%20of%20Informal%20Group.pdf [20] UNECE’s WP.29 Informal Working Group on Automated Driving (IWG ITS/AD) created the Task Force for Cyber-Security and Over-The-Air (TF-CS/OTA) on December 2017. [21] TF-CS-OTA status report (2017); https://wiki.unece.org/pages/worddav/preview.action?fileName=%28ITS_AD-13-04%29++%28Sec+TF-CS_OTA%29+Status+report++TF-CS_OTA.pdf&pageId=54427891 [22] UNECE Guidelines on Cyber-security (2017); https://www.unece.org/fileadmin/DAM/trans/doc/2017/wp29/ECE-TRANS-WP29-2017-046e.pdf [23] EU Cyber-security Package; https://ec.europa.eu/digital-single-market/en/eu-cybersecurity-certification-framework [24] EU ePrivacy Regulation proposal; https://ec.europa.eu/digital-single-market/en/proposal-eprivacy-regulation
Autonomous Vehicles
403
https-medium-com-aidajoa-autonomous-vehicles-149b68ec51a8
2018-07-12
2018-07-12 14:42:18
https://medium.com/s/story/https-medium-com-aidajoa-autonomous-vehicles-149b68ec51a8
false
1,729
Insights from the Berkman Klein community about how technology affects our lives (Opinions expressed reflect the beliefs of individual authors and not the Berkman Klein Center as an institution.)
null
BKCHarvard
null
Berkman Klein Center Collection
berkman-klein-center
null
BKCHarvard
Self Driving Cars
self-driving-cars
Self Driving Cars
13,349
Aida Joaquin Acosta
Fulbright Scholar at Berkman Klein Center at Harvard University
b2fb0bcf784b
aidajoa
22
6
20,181,104
null
null
null
null
null
null
0
null
0
dc36b5ddf133
2018-07-10
2018-07-10 22:52:09
2018-07-12
2018-07-12 15:25:20
2
false
en
2018-07-12
2018-07-12 17:57:56
9
149cb29c92a3
3.541824
14
1
0
Building a Data-Driven Approach for Personalized Mental Healthcare
5
Announcing Our Investment in Spring Health Building a Data-Driven Approach for Personalized Mental Healthcare We’re excited to announce Work-Bench’s investment in Spring Health’s $6M raise alongside a top group of investors including Rethink Impact, BBG Ventures, and the Partnership Fund for New York City. A core part of our investment strategy at Work-Bench is engaging deeply with corporate buyers in our NYC network — across our Corporate Roundtables, Executive Briefings, and more — to hear and learn directly about pain points and priorities in the enterprise. One thing that continually stood out in the Future of Work Sessions we host with corporate executives across Johnson & Johnson, Morgan Stanley, Pfizer, Capital One, Conde Nast, and others is the war for talent, and how this translates into human capital strategies for companies to differentiate. I was fortunate to meet April Koh, CEO of Spring Health, at a conference, where I learned about her and her team’s incredible vision to completely transform mental health, by applying machine learning to cutting-edge behavioral research. When we come across those rare founders who have an extraordinary vision for how the world should be and who pair it with deep customer empathy and an ability to execute — we get excited. Mental health conditions are the most expensive medical condition in the U.S. due to their high prevalence rates and high costs of care, with 1 in 5 Americans struggling with a mental health issue.¹ Yet despite its prevalence, treatment for mental health continues to feel like a black box approach: a slow, painful, trial-and-error experience of diagnosis and treatment on top of what may be already a painful condition, whether it is selecting an effective antidepressant medication or prescribing a medication regimen for those suffering from clinical depression. What excites us about Spring Health is their application of machine learning paired with deep domain and clinical expertise in mental health, which led to the development of a highly technical and effective product that has taken over 3 years to commercialize. The platform is derived from the research and work of co-founder Adam Chekroud, Assistant Professor and PhD at Yale, whose research showed that algorithms can actually better predict a patient’s outcome than a human provider can. With papers published in The Lancet and JAMA, this leading behavioral health research serves as the bedrock of their impressive core IP and robust technical product. Spring Health’s data-driven approach to mental healthcare is based on “precision medicine” — this idea of a highly personalized psychiatry tool. After receiving their personalized health reports, users have two treatment options: They can connect with one of Spring Health’s internal network providers, or they can seek outside care. If they choose to meet with a Spring Health provider, they do so virtually. In so many ways, the time and market is now for Spring Health’s technology. Spring Health can apply the latest developments and advancements of machine learning to this massive and painful problem space of mental health, that 10+ years ago, would not have been possible. And when you pair this with our enterprise buyer perspective from the Fortune 1000 and other high growth corporations, the time is now because a focus on benefits demonstrates a corporations’ long term investment in their employee base. 
And in the war for talent, this empathetic approach will enable traditional companies to compete with webscale giants like Google, Amazon, and Facebook. Beyond “feel good” benefits for employees, what is critical to an enterprise buyer is that with better treatment, mental health costs will actually decrease for corporations. We have already seen impressive customer traction from leading enterprises such as Gap, Whole Foods, MongoDB, and more. Spring Health doesn’t just stop with their leading technology. They wrap the technology into a seamless end-to-end platform to ensure that patients are actually getting better care. This means everything from a personalized employee diagnostic and onboarding experience and high-touch wellness plans all the way to a highly vetted, best-in-class provider network. Their platform also cuts the average wait time to see a mental health provider to less than 2 days (compared to the 21-day national average) and engages 1 out of every 3 employees consistently across all enterprise customers. This makes Spring Health’s platform an incredibly valuable and evidence-based asset for the enterprise, where employees benefit personally, and corporations save costs on health spend and boost employee retention. It’s clear that machine learning and AI will become increasingly important for applications in healthcare, and we could not think of a more important mission and vision than mental health, with the potential to touch the lives of millions of people. We are excited for the opportunity to work side by side with April, Adam, and Abhi as they completely transform how we deliver care for mental health patients. See more coverage of the announcement in TechCrunch, VentureBeat, and AlleyWatch, and join us in welcoming Spring Health to our Work-Bench family. Teams Spring Health <> Work-Bench ¹ http://www.mentalhealthamerica.net/issues/state-mental-health-america
Announcing Our Investment in Spring Health
127
announcing-our-investment-in-spring-health-149cb29c92a3
2018-07-12
2018-07-12 17:57:56
https://medium.com/s/story/announcing-our-investment-in-spring-health-149cb29c92a3
false
837
Work-Bench is an enterprise technology VC fund in NYC. We support early go-to-market enterprise startups with community, workspace, and corporate engagement.
null
work.bench.vc
null
Work-Bench
work-bench
STARTUPS,ENTERPRISE TECHNOLOGY,ENTERPRISE SOFTWARE,VENTURE CAPITAL,TECHNOLOGY
work_bench
Mental Health
mental-health
Mental Health
75,731
Jessica Lin
co-founder & VC @Work_Bench | GED educator | rethinking work
c876a2575234
jerseejess
708
914
20,181,104
null
null
null
null
null
null
0
null
0
25a3c701bc91
2018-01-29
2018-01-29 00:49:08
2018-03-04
2018-03-04 17:09:26
6
false
en
2018-10-10
2018-10-10 18:05:25
5
149d341fc53e
5.666981
899
20
1
Fifteen years ago, there were only a few skills a software developer would need to know well, and he or she would have a decent shot at 95%…
5
How to rewrite your SQL queries in Pandas, and more Fifteen years ago, there were only a few skills a software developer would need to know well, and he or she would have a decent shot at 95% of the listed job positions. Those skills were: Object-oriented programming. Scripting languages. JavaScript, and… SQL. SQL was a go-to tool when you needed to get a quick-and-dirty look at some data, and draw preliminary conclusions that might, eventually, lead to a report or an application being written. This is called exploratory analysis. These days, data comes in many shapes and forms, and it’s not synonymous with “relational database” anymore. You may end up with CSV files, plain text, Parquet, HDF5, and who knows what else. This is where Pandas library shines. What is Pandas? Python Data Analysis Library, called Pandas, is a Python library built for data analysis and manipulation. It’s open-source and supported by Anaconda. It is particularly well suited for structured (tabular) data. For more information, see http://pandas.pydata.org/pandas-docs/stable/index.html. What can I do with it? All the queries that you were putting to the data before in SQL, and so many more things! Great! Where do I start? This is the part that can be intimidating for someone used to expressing data questions in SQL terms. SQL is a declarative programming language: https://en.wikipedia.org/wiki/List_of_programming_languages_by_type#Declarative_languages. With SQL, you declare what you want in a sentence that almost reads like English. Pandas’ syntax is quite different from SQL. In Pandas, you apply operations on the dataset, and chain them, in order to transform and reshape the data the way you want it. We’re going to need a phrasebook! The anatomy of a SQL query A SQL query consists of a few important keywords. Between those keywords, you add the specifics of what data, exactly, you want to see. Here is a skeleton query without the specifics: SELECT… FROM… WHERE… GROUP BY… HAVING… ORDER BY… LIMIT… OFFSET… There are other terms, but these are the most important ones. So how do we translate these terms into Pandas? First we need to load some data into Pandas, since it’s not already in database. Here is how: I got this data at http://ourairports.com/data/. SELECT, WHERE, DISTINCT, LIMIT Here are some SELECT statements. We truncate results with LIMIT, and filter them with WHERE. We use DISTINCT to remove duplicated results. SELECT with multiple conditions We join multiple conditions with an &. If we only want a subset of columns from the table, that subset is applied in another pair of square brackets. ORDER BY By default, Pandas will sort things in ascending order. To reverse that, provide ascending=False. IN… NOT IN We know how to filter on a value, but what about a list of values — IN condition? In pandas, .isin() operator works the same way. To negate any condition, use ~. GROUP BY, COUNT, ORDER BY Grouping is straightforward: use the .groupby() operator. There’s a subtle difference between semantics of a COUNT in SQL and Pandas. In Pandas, .count() will return the number of non-null/NaN values. To get the same result as the SQL COUNT, use .size(). Below, we group on more than one field. Pandas will sort things on the same list of fields by default, so there’s no need for a .sort_values() in the first example. If we want to use different fields for sorting, or DESC instead of ASC, like in the second example, we have to be explicit: What is this trickery with .to_frame() and .reset_index()? 
Because we want to sort by our calculated field (size), this field needs to become part of the DataFrame. After grouping in Pandas, we get back a different type, called a GroupByObject. So we need to convert it back to a DataFrame. With .reset_index(), we restart row numbering for our data frame. HAVING In SQL, you can additionally filter grouped data using a HAVING condition. In Pandas, you can use .filter() and provide a Python function (or a lambda) that will return True if the group should be included into the result. Top N records Let’s say we did some preliminary querying, and now have a dataframe called by_country, that contains the number of airports per country: In the next example, we order things by airport_count and only select the top 10 countries with the largest count. Second example is the more complicated case, in which we want “the next 10 after the top 10”: Aggregate functions (MIN, MAX, MEAN) Now, given this dataframe or runway data: Calculate min, max, mean, and median length of a runway: You will notice that with this SQL query, every statistic is a column. But with this Pandas aggregation, every statistic is a row: Nothing to worry about —simply transpose the dataframe with .T to get columns: JOIN Use .merge() to join Pandas dataframes. You need to provide which columns to join on (left_on and right_on), and join type: inner (default), left (corresponds to LEFT OUTER in SQL), right (RIGHT OUTER), or outer (FULL OUTER). UNION ALL and UNION Use pd.concat() to UNION ALL two dataframes: To deduplicate things (equivalent of UNION), you’d also have to add .drop_duplicates(). INSERT So far, we’ve been selecting things, but you may need to modify things as well, in the process of your exploratory analysis. What if you wanted to add some missing records? There’s no such thing as an INSERT in Pandas. Instead, you would create a new dataframe containing new records, and then concat the two: UPDATE Now we need to fix some bad data in the original dataframe: DELETE The easiest (and the most readable) way to “delete” things from a Pandas dataframe is to subset the dataframe to rows you want to keep. Alternatively, you can get the indices of rows to delete, and .drop() rows using those indices: Immutability I need to mention one important thing — immutability. By default, most operators applied to a Pandas dataframe return a new object. Some operators accept a parameter inplace=True, so you can work with the original dataframe instead. For example, here is how you would reset an index in-place: However, the .loc operator in the UPDATE example above simply locates indices of records to updates, and the values are changed in-place. Also, if you updated all values in a column: or added a new calculated column: these things would happen in-place. And more! The nice thing about Pandas is that it’s more than just a query engine. You can do other things with your data, such as: Export to a multitude of formats: Plot it: to see some really nice charts! Share it. The best medium to share Pandas query results, plots and things like this is Jupyter notebooks (http://jupyter.org/). In facts, some people (like Jake Vanderplas, who is amazing), publish the whole books in Jupyter notebooks: https://github.com/jakevdp/PythonDataScienceHandbook. It’s that easy to create a new notebook: After that: - navigate to localhost:8888 - click “New” and give your notebook a name - query and display the data - create a GitHub repository and add your notebook (the file with .ipynb extension). 
GitHub has a great built-in viewer to display Jupyter notebooks with Markdown formatting. And now, your Pandas journey begins! I hope you are now convinced that the Pandas library can serve you as well as your old friend SQL for the purposes of exploratory data analysis — and in some cases, even better. It’s time to get your hands on some data to query!
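Since the original code snippets did not survive the export, here is a short, hedged recap of the SQL-to-Pandas translations described above. The airports.csv file name and the column names (ident, type, name, iso_country, elevation_ft, airport_ident) are assumptions based on the ourairports.com data the post mentions; the patterns themselves are standard pandas.

```python
# A hedged recap of the SQL-to-Pandas patterns described in the post.
# File and column names are assumptions based on the ourairports.com data.
import pandas as pd

airports = pd.read_csv("airports.csv")

# SELECT ... WHERE ... LIMIT / DISTINCT
large_us = airports[(airports.iso_country == "US") & (airports.type == "large_airport")]
print(large_us[["ident", "name"]].head(10))          # LIMIT 10
print(airports.type.drop_duplicates())               # SELECT DISTINCT type

# IN (...) and NOT IN (...)
eu = airports[airports.iso_country.isin(["DE", "FR", "ES"])]
not_eu = airports[~airports.iso_country.isin(["DE", "FR", "ES"])]

# GROUP BY ... COUNT ... ORDER BY ... (top 10 countries by airport count)
by_country = (airports.groupby("iso_country").size()
              .to_frame("airport_count").reset_index()
              .sort_values("airport_count", ascending=False))
print(by_country.head(10))

# Aggregates (MIN, MAX, MEAN) and a JOIN against the runways table
print(airports.elevation_ft.agg(["min", "max", "mean"]))
runways = pd.read_csv("runways.csv")
joined = runways.merge(airports, left_on="airport_ident", right_on="ident", how="inner")
```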
How to rewrite your SQL queries in Pandas, and more
4,373
how-to-rewrite-your-sql-queries-in-pandas-and-more-149d341fc53e
2018-10-10
2018-10-10 18:05:25
https://medium.com/s/story/how-to-rewrite-your-sql-queries-in-pandas-and-more-149d341fc53e
false
1,250
On coding and data analysis, by Irina Truong (j-bennet@github)
null
null
null
j-bennet codes
jbennetcodes
SOFTWARE DEVELOPMENT,PYTHON,PANDAS
irinatruong
Data Science
data-science
Data Science
33,617
Irina Truong
I ❤ writing code.
8b68c6b6d0d5
itruong
475
180
20,181,104
null
null
null
null
null
null
0
null
0
481e2ac87376
2018-03-21
2018-03-21 09:17:41
2018-03-21
2018-03-21 09:20:24
1
false
en
2018-03-21
2018-03-21 23:15:56
13
149d95b60674
1.067925
0
0
0
If you are going to #HackTheHub this weekend, you’re probably considering using cloud AI/ML services to get a head start.
5
#HackTheHub: AI & ML — Use the cloud, Luke! If you are going to #HackTheHub this weekend, you’re probably considering using cloud AI/ML services to get a head start. Azure has a whole heap of services for Artificial Intelligence, Machine Learning, bots, and a pretty generous free plan to get you started. However, if you have grand plans that just aren’t covered by the free plan, get in touch with me via the #HackTheHub Slack (or find me on the day) and I’ll kit you out with Extra Free Processing to get you through the weekend. Here’s a small selection of some of the services that you could take advantage of: Custom Vision — Train an image recognition model and use it via the online API or download it for offline use on iOS (CoreML) & Android (TensorFlow) devices. Machine Learning Studio — Prepare data, process it using a variety of supervised and unsupervised ML algorithms, and deploy the model for use through a REST API. Face API — Detect, identify, verify and match faces in images. Text Analytics — Perform natural language processing and sentiment analysis. Speaker Recognition — Identify who is speaking (and what they’re saying). But whatever you end up doing, good luck in the hack, and have fun! Thanks to the veteran #HackTheHub mentor Mark Allen @MarkXA for this blog! Originally posted here!
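As a taste of what calling one of these services looks like, here is a hedged sketch of a sentiment analysis request with the azure-ai-textanalytics Python package. The endpoint and key are placeholders, and this client library is newer than the original post, so treat it as one possible way to call the Text Analytics service rather than the exact setup used at the hackathon.

```python
# A hedged sketch of calling the Text Analytics sentiment service mentioned
# above. The endpoint and key are placeholders; the azure-ai-textanalytics
# package postdates the original post.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                      # placeholder
)

docs = ["The hackathon demo went better than we expected!"]
for result in client.analyze_sentiment(documents=docs):
    # Each result carries an overall label plus per-class confidence scores.
    print(result.sentiment, result.confidence_scores)
```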
#HackTheHub: AI & ML — Use the cloud, Luke!
0
use-the-cloud-luke-for-hackthehub-ai-and-ml-149d95b60674
2018-03-21
2018-03-21 23:15:57
https://medium.com/s/story/use-the-cloud-luke-for-hackthehub-ai-and-ml-149d95b60674
false
230
Upskill | Talent | Startups
null
hackthehub
null
#HackTheHub
hackthehub
HACKATHONS,PROGRAMMING,CODING,STARTUP,SOFTWARE DEVELOPMENT
hackthehub
Machine Learning
machine-learning
Machine Learning
51,320
Gillian Colan-o'leary
Making hackathons @hackilygill
4de904ab472c
gilliancolanoleary
15
12
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-20
2018-05-20 05:44:56
2018-09-05
2018-09-05 07:51:17
6
false
en
2018-09-05
2018-09-05 07:51:17
17
149ec02eb0fd
10.806604
1
1
0
How do you make decisions?
5
The Art of Decision Making: Machines vs Humans You’re trapped. All the doors started shrinking and only one of them leads you out. Which door would you pick? How do you make decisions? How often do you flip a coin or roll a die to decide what your next action would be? Or do you leave it less to chance and instead use your intuition, emotions, and logical reasoning? While in the process of developing the neural network for Mirador Health’s prescriptive analytics engine, I was struck with an epiphany. Epiphany may not be the best use of the word here, but I started to ask myself if machine learning could help us make better decisions than our own human thought process (or lack thereof) would. Thus, I embarked on the journey to discover the human and machine limitations in decision making, guided by the following questions: How do we (humans and machines) make decisions? What is the goal of decision making and what is stopping us from making the best/better decisions? What can a human do that a machine can’t (and vice versa)? Genesis / Root As human beings, we are interested in progress, constantly searching for ways to improve and perfect our lives (if you disagree with this statement, would you have it any other way?). The study of decision making is relatively new in the field of social science, which started in the mid-1990s by looking at how organizations make decisions. Despite its young beginnings, prominent academics such as Daniel Kahneman and Richard Thaler have won Nobel prizes for their research on decision making and behavioral economics. For the uninitiated, this field of study is primarily interested in understanding how and why we make certain decisions, and how to improve our decision making processes. The Starting Point The basis of decision making depends upon the availability of information and how we experience and understand it. For the purposes of this article, ‘information’ includes our past experience, intuition, knowledge, and self-awareness. We can’t make “good” decisions without information because then we have to deal with unknowns and face uncertainty, which leads us to make wild guesses, flipping coins, or rolling dice. Having knowledge, experience, or core values given a certain situation help us have a clear vision of what the outcomes could be and how we can achieve/avoid those outcomes. However, making our decisions based on knowledge and experience from similar situations can be dangerous, as outlined in Daniel Kahneman’s Thinking, Fast and Slow, which we will discuss later. Since information helps us be better decision makers, does increasing the information available to us necessarily help us make better decisions? Source: Dilbert Big Data = Better Decisions? Does size matter? Companies are very much into big data (or so they seem to be), collecting as much information as they can about their customers with the goal of understanding and predicting their customers’ behavior to effectively achieve their business goals. More information does help us make better decisions, but only up to a certain point. For information to be useful in our decision making, they have to also be relevant, and develop relationships and insights. The quality of information is just as important as the quantity. But in reality, the biggest barrier to originality is not idea generation — it’s idea selection. The quote above is from the book Originals by Adam Grant, a non-fiction bestseller on how to generate, identify, and promote original ideas. 
Replace the word ‘idea’ with ‘data’ or ‘information’, and we can see how selecting and analyzing the right kind of data leads to insights. It is important to have the right information for the context that we are making our decisions, because as much as the world is interconnected, not all connections are correlated and produce a (statistically) significant relationship for a given situation. It would be a waste of time and energy looking at data that has no effect on the outcome of our decisions. Even with neural networks that are able to learn and detect patterns beyond the mental capabilities of the human brain, there are bound to be data points that produce no relationship at all. We also have to keep in mind that even a simple correlation between two data points does not imply causation. This then leads to the selection of data points. Big data before it was cool How Do We Pick the Right Information? At time of writing, no single AI system is able to select its own data points for decision making. At least not yet. They only process data that have been programmed by their human creators. When selecting data points, we need to consider how much variability is there in the context of decision making. Is there a pattern or do situations and outcomes happen randomly? Can we clearly establish the link of cause and effect of a decision? Some situations have a wide variance while others are somewhat rigid, constant, and predictable. When a situation doesn’t change very much from time to time, we can use assumptions to help us control for certain factors/data points, and be more time efficient with training our machine learning. I briefly mentioned patterns earlier, and for the human creators to pick the right information and data points and feed them into the AI, they need to recognize patterns that are sometimes beyond their own human comprehension. Most people only think in first-order consequences, as highlighted in Ray Dalio’s book, Principles, where by default, they only think of an action’s immediate effects. That is a limitation of the human mind. It is difficult for us to visualize how an action creates further impact, especially intangible impacts that are undetectable to our senses (read: unpredictable). A way to overcome this human limitation is to leverage computers’ processing powers and memory, which allows us to process large amounts of data. It is costly and time consuming to do this, but you can dump every single data point out there, set the coefficients, and let machine learning does its magic. Results are not guaranteed, but that is the advantage of computers over humans. People may argue that we don’t understand how an AI arrived at a decision, but are humans any better? I guess it is our behavior as humans to find logic and reasoning although we ourselves have biases and make assumptions to arrive at a decision. The Goal of Decision Making: Satisfaction How many utils do you get from splashing in the sea? The goal of decision making is to maximize our satisfaction (not to be confused with happiness), or utility, as economists would call them. I am borrowing the Utility Maximization Model in economics, which states: Consumers decide to allocate their money incomes [choices/free will] so that the last dollar [decision] spent on each product purchased [option] yields the same amount of extra marginal utility. If humans didn’t care about getting the best outcome from their decisions, everyone would just flip a coin and accept any utility dealt to them. 
Imagine a world where no one is responsible for any of their actions. Victory would taste less sweet, and failure extremely bitter (depends how you see it). However, like all models, there are a few assumptions. We assume that: a. Humans act based on rational behavior b. All our preferences [wants/needs] are known and measurable c. We have the prices [the costs of each option] d. We have a budget constraint [limited number of tries] Throughout most of our life, rarely do these four assumptions apply when we are making decisions. a. Humans are known to make irrational decisions based on emotions and cognitive biases. b. We don’t always know what we want, and when we do, we have a bad sense of judging how badly we want them and how much satisfaction/utility we’ll get from it (hedonic adaptation). c. We don’t always know the true cost of our actions until after the fact. d. One thing is for sure though, we do have a limited number of tries in the game of life. Why We are Not Making the Best Decisions Can you guess how many decisions you make in a day? . . . The Google consensus (this should be coined as a legit term when more than 10 sources quote the same fact) says that the average human makes 35,000 decisions a day. Out of that number, many decisions are made quickly with intuition or subconscious through many hours of practice or exposure, while the remaining few require careful and concentrated thought. Daniel Kahneman, categorized these two “modes” of thinking as System 1 and System 2. In his book, Thinking, Fast and Slow, Kahneman describes the role of System 1 and its relationship with System 2: System 1 [is] effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps 188 cognitive biases of the human mind. Credit: John Manoogian III, Buster Benson Our System 1 is the reason why we are not making the best decisions. Nature always seeks the path of less resistance. We make mental shortcuts through biases and heuristics (as defined in behavioral economics and psychology) whenever possible and tend to let our emotions overrule our rationality. This is an inherited biological characteristic that helped our ancestors survive and escape danger, but is now a liability in today’s world where decisions require time-consuming rational thinking. How did we end up with these cognitive biases, all 188 of them? These biases develop based on our experience and understanding of the world (typically shaped by our parents, peers, the media, and education institutions) and can also be found in our genetic code (the release of neurochemicals such as endorphins, serotonin, dopamine, etc., when responding to stimuli). By that logic, shouldn’t we build machines that are devoid of cognitive biases and System 1 to make better decisions and increase our utility (satisfaction)? Perhaps. Based on the four assumptions in the Utility Maximization Model, machines can remove irrational behavior and emotions, is clear with what it’s trying to achieve, have practically unlimited number of tries through simulation, and even know the true cost of each decision. However, this all depends on the availability and quality of data used to train/build these machines. 
The showdown: Humans vs Machines Now that we have covered the basis of decision making, let’s explore and compare the limitations and capabilities of humans and machines in making decisions. What Makes Us Human It is inevitable that the question of what makes us human be asked when comparing ourselves against other species/machines. In my humblest opinion, what makes us uniquely human is the resilient spirit to adapt in the face of uncertainty and adversity. As much as we’d like to think of ourselves as “experts” in forecasting the future (weather, economy, etc.), the truth is this: We don’t know what we don’t know. — Donald Rumsfeld The only way we survived, and continue to survive, in this uncertain world where everything is changing at an increased pace, is by being quick to adapt. Survival of the fittest does not apply in the modern world, but rather survival of those quickest to adapt. Machines typically need more than a single event to learn and change its decision. There is a Chinese saying, “一朝被蛇咬,十年怕草绳”, which means humans will learn to avoid snakes and anything that resembles a snake (a cognitive bias) just after one snake bite. Humans don’t need (or have the luxury) to experience mistakes multiple times to learn and make better decisions. We need to adapt quick. It is not the strongest of the species that survives but the most adaptable — Leon C. Megginson Humans also have the intellectual capacity to develop ethics, morals, and values, which machines don’t (or not yet at least. Never rule out the impossible). The majority of machine learning algorithms are currently programmed to make decisions based on consequence, not ethics or values. Philosophers and psychologists (humans) are still needed to design ethical AI. In Originals, Adam Grant discovered that originals (creatives who reject the status quo), tend to make decisions based on the logic of appropriateness rather than the logic of consequence. Will machines be able to do what is right/appropriate and take the risks to stand up against an unjust authority? This then begs the question: How do we re-frame our values and principles into consequences so machines can process them? (I use the word process, because machines can’t understand and derive meaning) Is it even possible or necessary to do so? The Machine (Dis)Advantage Could machine learning help us make better decisions? Possibly, for any outcome that is normally distributed, such as height, weight, number of miles ran. Machine learning can predict (non-random) outcomes and prescribe solutions, but the question to ask is, how predictable is the world? We did not predict 9/11 or the dot com boom. Would algorithms be able to predict the next life-changing event and make a decision to change the course of history? There are factors that are beyond our control after a decision is made, i.e what happens between cause and effect, which will affect the outcome. Cause and effect is not a linear relationship unless we carry out experiments in a controlled environment, and reality is, the universe is random and chaotic (watch: entropy). I’m willing to go out on a limb and say that machine learning algorithms may not be able to predict the next random event more accurately than a human could. However, machines beat us humans hands down in terms of speed and accuracy when it comes to non-random, repetitive events. Additionally, machines are better (and faster) at repetitive games. 
Google's DeepMind was able to beat the top Go player by analyzing hundreds of thousands of games of Go, a strategy board game, to learn what move to make in any given situation. Humans would need years of study and practice (possibly around 10,000 hours, according to Malcolm Gladwell in his book, Outliers) to go through that much material. This is by no means a comprehensive comparison and you're welcome to add more in the comments. What did I learn? The purpose of this article is not to draw a conclusion about whether humans or machines are superior at decision making, but rather to inquire into the approaches to decision making. While writing this article, I encountered more questions than answers and enjoyed the whole process of discovering how the brain works. I can't say that I will make good decisions after writing this article, but I have found a path to discovering ways to make better decisions, and that is to frequently ask difficult questions that challenge my existing decision-making framework. Challenging your thinking helps you adapt to different situations and suppress your cognitive biases (especially the confirmation bias). Of course, don't challenge your thinking when being chased by a critically endangered Sumatran rhinoceros. Get to safety, then reflect on your decision-making process. Here's how I plan to challenge my own thinking: be open to conflict and disagreement, seek out randomness and discomfort, be willing to be proven wrong, and find satisfaction in the process of decision making and not the outcome. Here are some books that I have read and would highly recommend to anyone looking to improve their decision making: Thinking, Fast and Slow by Daniel Kahneman Factfulness by Hans Rosling The Black Swan by Nassim Nicholas Taleb The Art of Thinking Clearly by Rolf Dobelli Thank you for reading all the way to the end! If you learned anything new, have a challenging question, or would like to leave your 2 cents, please share them with me in the comments. Always improving, Min Xiang Lee
The Art of Decision Making: Machines vs Humans
1
the-art-of-decision-making-machines-vs-humans-149ec02eb0fd
2018-09-06
2018-09-06 23:32:05
https://medium.com/s/story/the-art-of-decision-making-machines-vs-humans-149ec02eb0fd
false
2,612
null
null
null
null
null
null
null
null
null
Psychology
psychology
Psychology
49,261
Min Xiang Lee
Helping growth-stage enterprises scale their actions, 10x their revenue, and maximize impact through proven wisdom. Building the future of wellness.
369f845701b4
minxianglee
65
91
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-17
2018-07-17 14:24:41
2018-07-18
2018-07-18 13:34:22
4
false
en
2018-07-18
2018-07-18 13:38:43
0
14a084c5e43e
5.1
2
1
0
As you might already know, the world's current human population is 7.6 billion people, an astronomically large number considering…
5
AI, Automation, and Unemployment. As you might already know, the world's current human population is 7.6 billion people. That is an astronomically large number considering that our species is fairly new to the game. So, the question is, why is this possible? What on earth could turn a handful of hunters in the Saharan desert into the sprawling modern civilization we all know and love today? These questions are hard to answer. So I'm going to ask you, dear reader, a much easier question. What does the graph of human population vs. time look like? Is it like this… Or is it like this… Oh no, dear reader, both of those answers are wrong. The real graph of the world population actually looks like this: Source: EL T, Wikipedia "It took 200,000 years for our population to reach 1 billion, and only 200 years to reach 7 billion" Just what happened in those 200 years that made our population skyrocket sevenfold? The answer is the Industrial Revolution. The Industrial Revolution made basic goods — like food, clothes, and houses — available en masse to a good portion of our society. This huge supply of goods skyrocketed life expectancy and the standard of living, which in turn tremendously increased our capacity to support a bigger and bigger population. Historically, there have been three industrial revolutions. The First Industrial Revolution — which happened around the year 1765 — revolved around the invention of the steam engine and the rolling machine, which allowed for the mass production of clothing, steel, and food while also bringing a lot of jobs to the community. The Second Industrial Revolution happened around the year 1870; it revolved around the invention of assembly lines and the electrification of machines, bringing more food, clothing, and, this time, medicine. The Third Industrial Revolution happened in the 20th century (1900–2000) and revolved around the boom of computing power; it increased our intellectual capabilities while still decreasing the need for our muscle. A lot of today's experts argue that we are currently living on the doorstep of a new industrial revolution, a great event that experts seem to agree on calling the Fourth Industrial Revolution. There is another thing they agree on: this industrial revolution will revolve around AI. AI is the abbreviation of Artificial Intelligence. Technically speaking, it is a field inside Computer Science which studies the behavior of computing agents. Or, in plain English, the study of how we can make our computers more and more intelligent. Our computers might be smart, in the way that they can perform thousands of computations in a second, but they can never do those calculations without being told how to. An intelligent computer might, say, produce music or learn how to diagnose cancer on its own, without any human intervention at all. The promise of intelligent computing might be alluring, but there is a catch. Many people think that the catch is that the system might go rogue and decide to kill all humans, just like in some kind of Terminator movie. But there is a far greater danger than that. All three of the previously stated industrial revolutions have one thing in common: they all gave a ton of jobs to society. That seems not to be the case for the next industrial revolution, the Fourth. Almost all inventions up to this point have had one single purpose: to reduce the demand for human muscle (e.g., the tractor, the crane, or even the car). Historically, that was a good thing.
Advancements in technology cut the demand for low-skill workers (e.g., factory workers), so those workers moved to more in-demand, and also (usually) better, jobs. Usually, we were only getting rid of the worst, most dangerous, or underpaid jobs. The advent of intelligent computing changes all that. It enables our once-dumb computers to do work that only humans could do. Suddenly, high-skilled jobs, white-collar jobs, and even the professions don't seem to be safe either. Transportation Industry You heard it on TV, you heard it in the newsletters, and you will hear it from me. The Transportation Industry is HUGE; worldwide, the transportation industry accounts for more than 70 million jobs at a minimum. And those jobs are on their way out. Self-driving cars are not the future; they're already here. Self-driving cars have already travelled hundreds of thousands of miles in total without any human intervention. And given that they don't blink, don't text while driving, and don't do stupid things, it's easy to see why self-driving cars are superior to human drivers. It is not the fault of the drivers, though; humans are just not made to drive effectively, while the bots are designed for exactly that. Lawyer Bot When you talk about lawyers, you probably think about the trials themselves. But actually, the bulk of lawyering is processing a lot of data, usually in the form of emails, legal documents, or any historical data about the subject, to find patterns in those documents and predict the likely result of a lawsuit. And that is just what bots are good at doing: a bot can skim thousands of emails in great detail in just a fraction of a second, a bot can hold a database of all the laws in one particular country, it can't forget, it doesn't get sleepy, and it has a tiny chance of making an error, crushing human lawyers in both speed and accuracy. Doctor Bot Remember a few minutes earlier when I mentioned that AI can learn how to diagnose cancer? Well, I meant it literally. Bots can learn how to diagnose almost any type of illness provided they have data about that illness and the patient's medical data. And believe me, we have a LOT of that kind of data. Research data, medical books, patient medical records, or even the credit card history showing that you eat fast food once every day can be included in those data and fed into the AI system, making it far more sophisticated than even a human doctor in terms of diagnosing illness. I don't even have to tell you about the frequency and severity of human doctors' misdiagnoses to make that point. (Just google it.) Those are a few examples of jobs that can easily be automated (or partially automated) by AI. The last two examples are actually among the hard ones to automate, which is why there are a LOT of other jobs that can easily be automated by AI. Moshe Vardi, professor of computer science at Rice University, even predicted that machines could take 50% of our jobs in the next 30 years. Yes, you heard it right: half of our jobs, and in our lifetime! To put that into perspective, even the Great Depression only saw an unemployment rate of 25% at its peak. 50% is just crazy. Surely, the economic and social impact of this dawn of a new era will be huge, and there's one thing that is certain. We need solutions, right now. This work is heavily influenced by a video by CGP Grey — Humans Need Not Apply
AI, Automation, and Unemployment.
53
ai-automation-and-unemployment-14a084c5e43e
2018-07-18
2018-07-18 13:38:43
https://medium.com/s/story/ai-automation-and-unemployment-14a084c5e43e
false
1,166
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Hafidh Rendyanto
16517164–13517061
33af1b59106a
hafidh.rendyanto
4
3
20,181,104
null
null
null
null
null
null
0
null
0
ae27dbcad639
2017-12-14
2017-12-14 22:01:57
2017-12-14
2017-12-14 22:23:35
5
false
en
2017-12-15
2017-12-15 19:20:52
38
14a132d5f7be
9.603145
1
0
0
A year-end reflection on the biggest headlines and market trends of 2017
5
Where We Are Now — 2017 In Review A year-end reflection on the biggest headlines and market trends of 2017 We're nearing the end of the year, which means it's time to look back on the year and examine how the market trends we predicted in our 2017 Outlook report played out over the course of the past 12 months. Photo by Brigitte Tohm on Unsplash Advanced Interfaces Consumer interfaces continued to evolve as voice interfaces and AR interfaces both saw significant adoption growth over the past year. This trend of unbundling smartphone components (such as the camera, microphone, etc.) and integrating them into other gadgets matters for brand marketers because the advanced interfaces alter the way consumers interact with digital devices and create new ways for brands to reach their audience. On the voice side, smart speakers are taking U.S. households by storm. 32% of U.S. households have a voice-controlled smart speaker product, according to a survey from ThinkNow Research, with over 60 million people in the U.S. now using voice-enabled assistants on a monthly basis. Among the top products, Amazon Echo continues to lead the pack, taking up nearly 70% of the U.S. smart speaker market as of October. An estimated 22 million Amazon Echo devices will be sold in 2017, according to a Forrester Research forecast. Photo by Andres Urena on Unsplash Both Amazon and Google made efforts to diversify their smart speaker lineups, introducing new products such as the Echo Show and Echo Look, as well as the Google Home Max and Mini. Both companies also refreshed their flagship smart speakers with new designs and functional improvements. Amazon even started rolling out its Echo devices to 28 new countries across Europe and South America, after landing in Germany, Japan, and Canada earlier this year. With holiday shopping still underway and low-end smart speakers like the Echo Dot and Google Home Mini dipping below $30, we expect many new smart-speaker customers to come online in 2018. IKEA Place uses mobile AR for furniture preview. Source: IKEA This year also marked a true breakout year for augmented reality (AR), especially mobile AR. While Pokemon GO already gave mainstream users a taste of AR's potential in gaming the year prior, this year saw the debut of several mobile AR platforms, ranging from Apple's ARKit and Google's ARCore to Facebook's AR Studio and Camera Effects platform, all of which make it easier for developers to create AR experiences to engage with mobile users. Even Snapchat, a pioneer in mobile AR, finally opened the floodgates to allow people to create AR lenses with the launch of Lens Studio on Thursday. Altogether, AR-compatible hardware could deliver an installed base of over 400 million users by 2021. Already, we are seeing brands such as IKEA and BMW making good use of it to enrich their mobile experiences. The developments in mobile AR neatly dovetail with the rise of the camera as a platform. iPhone X and iOS 11 brought facial recognition and native QR code support to Apple users, while Google Lens and Pinterest Lens enabled visual search on select handsets to put the camera front and center in the mobile experience. Of course, not everything can be a big success. For example, Snapchat's camera-embedded sunglasses, Spectacles, failed to take off due to an inefficient roll-out strategy and limited use cases. With hundreds of thousands of Spectacles unsold, this miss is estimated to have cost the company $40 million.
The release of Apple Watch Series 3 with cellular connectivity has the potential to put wearables back on the map, as it finally becomes fully functional when independent of an iPhone. Demand for Apple Watch Series 3 with cellular was stronger than expected, taking over 80% of the preorder for Apple Watches, and supply-chain report predicts 20% growth in its shipments in 2018. As wearable devices continue to come into their own and expand their use cases, user adoption should continue to pick up as well. Gartner’s latest report predicts sales revenue of wearable devices will hit $30.5 billion in 2017, a 16.7% increase over last year. Read more: The Rise of Interactive Visual Culture & What It Means for Brands Apple’s ARKit vs. Google’s ARCore: What Brands Need to Know Global Culture With more and more media channels getting online and going over-the-top, most of them become global-reaching instead of regionally focused, thanks to the openness and borderless nature of the internet in most parts of the world. Global culture refers to the audience aggregated by those emerging digital platforms. It’s a trend that has been developing for years, but one that has really coming into focus in 2017 as more apps, and platforms, and streaming services becoming media channels that marketers can tap to reach either a mass audience or a targeted, niche demographic on a global scale. Even mainland China, a region largely cut off from accessing the global platforms, is not immune to the forces of global culture. The incident of Gigi Hadid — a high-profile Victoria’s Secret model who got banned from entering China for the lingerie brand’s fashion show in Shanghai following a fierce online backlash against her racially insensitive mimicry of Asians earlier this year– stands out as a perfect example of the amplifying influence of global platforms and the consequences of brands failing to realize their global-reaching scale. In 2017, esports continues to grow at an impressive rate and has successfully crossed over into mainstream media channels. The Rocket League Championship Serie stood out among this year’s plethora of esports event with about 1.4 million hours of viewership for its S4 World Championship event. Another popular online game Overwatch amassed over 35 million players worldwide as the game’s publisher Activision make it easier for traditional advertisers to spend on esports. On the digital media side, Hulu made its entry into esports with a exclusive content deal with ESL for 4 gaming shows, while Facebook signed on as exclusive livestream partner for Paladins Premier League. Esports have grown at such a head-turning rate that traditional sports leagues such as the NBA, the MLB, and the NFL have all started or doubled down investing in esports teams this year. Then there is the breakout hit of HQ Live, a live trivia game app that has become a viral sensation among iPhone users in recent months. Twice a day, around 350k players log on to participate in one or two rounds of trivia game that consists of 12 questions, each one more arcane and esoteric than the last, hoping to win the bragging rights and share a portion of the prize money with other winners. Although only available in English, with a majority of the questions focuses on western pop culture, the app is available globally, meaning that anyone in the world could participate. In the coming years, we expect to see more types of live games to pop up and aggregate massive user attention in real time. Photo: Apple/Intermedia Labs Inc. 
Netflix continued to internationalize television as it ramped up its global content production. Dozens of regional-specific shows are developed and found an global audience thanks to its global-reach service. “Narcos” swept both U.S. and Latin America, and UK-based “Black Mirror” is a hit throughout the English-speaking world. Even some foreign-language shows, such as Dark from Germany and 3% from Brazil, are gaining traction among U.S. audience. Partially inspired by Netflix’s continuing success with the global audience, Disney also announced that it will launch two OTT streaming services in 2018 to carry the entertainment and sports content it owns, while should further diminish the global release windows and further boost the global culture. Buying Fox’s TV and movie assets will also bring a huge boon to the international appeal of its upcoming streaming service. Brands will need to contend with this new global-as-default reality of media distribution and consumption. Read more: A Marketer’s Guide to Esports Why Gigi Hadid Is Not Welcome In China Augmented Intelligence How to apply advances in machine learning and artificial intelligence to optimize marketing campaigns and enhance customer experience has been a hot topic in our industry throughout 2017, and the discussion is only likely to intensify in the years to come as AI-powered solutions continue to infiltrate every aspect of consumers’ daily lives. This year’s advances in those augmented intelligence tools mainly manifested in four domains — cloud computing services, maturing voice assistants, autonomous vehicles, and the iPhone X. First up, AI-powered cloud services are making it easier-than-ever for brands to incorporate machine learning solutions into their enterprise operations and customer experiences. The set of new cloud-based machine learning tools that Amazon introduced at its AWS re:Invent two weeks ago is a good example of the kind of AI tools that brands can incorporate today to upgrade the backend of their websites and apps so as to provide a more customized experience for their customers. Hotels.com is currently using Translate to power automated translations of customer reviews for the site’s listings around the world and improve customer experience via localization. The advances in AI also mean major improvements to the voice assistants powering the aforementioned voice interfaces. Alexa and Google Assistant both received significant updates that enabled voice identification for multi-user support, payment support for completing transactions via voice, and better conversational skills at large. Alexa, in particular, received backend updates that leverage deeper contextual learning to help Alexa better understand contexts (such as which Echo device it is operating on) and differentiate its answers accordingly. Siri is poised to receive a significant update as Apple confirms its acquisition of the music-recognition app Shazam. Developments in driverless cars throughout the year also highlighted the growing application of AI, specifically machine learning and computer vision. As of today, a total of 53 cities in the world are piloting autonomous vehicles programs. 
14 of them are in the US, with 16 more US cities preparing to launch similar programs, according to The Global Atlas of Autonomous Vehicles in Cities, a joint effort between Bloomberg Philanthropies and the Aspen Institute. Lyft is offering rides in self-driving cars in Boston and SF, while Uber's fleet of self-driving cars in Pittsburgh is back on the road after an accident was resolved. With many automakers aiming to roll out market-ready autonomous vehicles by 2020, the AI-powered self-driving software will quickly improve with the increasing number of road tests in the coming years. Source: Apple Last but certainly not least, the release of iPhone X, which carries Apple's first "AI chip," marked the first time that AI was built into mobile hardware. A neural engine, part of Apple's A11 Bionic chip, blends AI smarts into the iPhone X to power Face ID's facial recognition and image processing, as well as improving Siri's performance. According to analysts at Rosenblatt, Apple reportedly sold 6 million iPhone X units over Black Friday weekend, out of a total of 15 million iPhones sold. As our smartphones get smarter and more intuitive, so should the overall mobile experience. A recent survey conducted by Salesforce Research found that 51% of marketing leaders are already using AI, with more than a quarter planning to pilot it in the next two years. For brands that are still on the fence about whether to incorporate machine learning and AI solutions into their business, 2017 has responded with a resounding "yes." Read more: A Marketer's Guide To Artificial Intelligence A Marketer's Guide To Autonomous Vehicles Retail Disruption The brick-and-mortar retail industry has had a tough year, with a record number of stores closing and mounting disruption from tech-savvy ecommerce competitors. While not a trend that we highlighted in this year's Outlook report, this is very much a result of the Boundless Retail trend that we highlighted in our 2015 Outlook. Thanks to the growing influence of ecommerce, rising shopper expectations are outpacing retailers' investments in revamping the in-store experience. When 53% of millennial consumers say they are "better connected" and able to find information more quickly than retail associates, it is no wonder that they would prefer to spend their money on digital channels. The biggest retail news of the year is clearly Amazon's $13.7 billion acquisition of Whole Foods, which sent the entire U.S. grocery industry into a tailspin. Buying Whole Foods instantly gives Amazon access to the premium grocery customers it has been chasing after and the mass scale it needs to run its grocery business as a modularized service. It is no wonder that many retailers are feeling the pressure from Amazon and resorting to their own acquisitions, such as Walmart buying menswear brand Bonobos and CVS acquiring health insurer Aetna for $69 billion, as well as pulling their vendors off Amazon Web Services. Looking ahead, retail will continue to go through a rough transition period as shopper behavior continues to shift. Omnichannel retail concepts and services such as mobile payments, in-store pickup, dynamic shelf displays, and same-day delivery are catching on among retailers. As more and more shoppers choose to shop online and on their mobile devices, retailers will have to continue to explore new store formats and experiment with digital tools in order to differentiate their in-store experiences and win back customers.
Read more: Updates On Boundless Retail: From Omni-Channel To Customer-Centric Updates on Boundless Retail: Diversified Shopping Experiences As Differentiator As we close the book on 2017, it is clear that this year has seen tremendous lift-off for mobile AR, voice interfaces, global-reaching media channels, and AI-powered software services, each with their own significant implications for brands and advertisers. As the Lab continues to keep a close eye on the development of all things related to consumer technology and media futures, we expect to see more amazing advances in the coming year. Please remember to check back in January for our coverage of the 2018 CES from Las Vegas, and enjoy your holiday break!
Where We Are Now — 2017 In Review
1
where-we-are-now-2017-in-review-14a132d5f7be
2018-03-21
2018-03-21 02:47:02
https://medium.com/s/story/where-we-are-now-2017-in-review-14a132d5f7be
false
2,324
The media futures agency of IPG Mediabrands
null
IPGMediaLab
null
IPG Media Lab
ipg-media-lab
TECHNOLOGY,TECHNOLOGY STRATEGY,DISRUPTION,ADVERTISING,MARKETING
ipglab
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Richard Yao
Senior Associate of Strategy and Content, IPG Media Lab
b6f606a80313
richardyaoipg
129
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-06
2018-01-06 10:38:13
2018-01-16
2018-01-16 14:59:43
1
false
sk
2018-09-25
2018-09-25 12:27:25
10
14a1c6dcbe09
1.207547
24
1
0
Artificial intelligence is being trained to recognize human speech or to drive cars. At verejne.digital we are trying to apply similar intelligent…
4
What is verejne.digital about? Artificial intelligence is being trained to recognize human speech or to drive cars. At verejne.digital we are trying to apply similar intelligent functions to the data published by Slovak public institutions. In the first step, we focused on making the data easy to browse and on linking it together. Anyone interested in their surroundings and in how public resources are handled can easily find out about activities in their neighborhood, or answer questions such as: Do my neighbors do business with the state? Does my business partner have owners in Monaco, Cyprus, or Panama? Did someone donate to a political party and at the same time receive a contract from the state? We are now starting to build intelligent functions on top of this data. Our goal is to answer, easily and quickly, questions such as: Which public procurements should we bid on, based on their similarity to those we won in the past? Is a state contract being won by a company that is running at a loss or was founded only a few days earlier? Which of the millions of published contracts are really interesting / similar? Every day, state institutions publish new public procurements. Our ambition is to use artificial intelligence to actively reach out to those who we think have a chance of winning the current tenders, and thereby increase competition. The verejne.digital project is a project of the civic association Chcemvediet.sk. It is a hobby project for which we are looking for more enthusiasts. Its source code is available to everyone on GitHub. We want everyone to be able to easily add new functionality without having to worry about the data or the server. Do you have comments, ideas, or an interest in joining? Contact us on Facebook. We would like to thank our partners Nadácia Zastavme Korupciu, Ekosystem.Slovensko.Digital, Uvostat.sk, Finstat.sk, and Vacuumlabs for their help with development, for providing data, and for consultations.
What is verejne.digital about?
313
o-čo-ide-verejne-digital-14a1c6dcbe09
2018-09-25
2018-09-25 12:27:25
https://medium.com/s/story/o-čo-ide-verejne-digital-14a1c6dcbe09
false
267
null
null
null
null
null
null
null
null
null
Open Data
open-data
Open Data
5,306
Verejne.Digital
Computers can recognize human speech or drive a car. We want to apply similar intelligent functions to the data published by Slovak public institutions.
f4a8c3093e3c
verejne.digital
39
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-08
2018-01-08 10:01:16
2018-01-08
2018-01-08 10:04:45
13
false
en
2018-01-08
2018-01-08 10:04:45
13
14a362db1009
5.003774
1
0
0
Originally published at stagraph.com.
5
Screenshots from the coming Stagraph 2.0 Originally published at stagraph.com. I plan to release the new Stagraph version very soon. The difference in features, functionality and workflow is amazing. If you used one of the previous versions, you will find that it is practically new software. The basic logic of its use remains the same; it has only been extended. The following text briefly describes the main new features. Besides functionality, the licensing model will also change (more at the end of the post). It is difficult to describe briefly all the innovations included in the new version of Stagraph. Therefore, I will show you just a few screenshots with a description in this post. Later, we're going to discuss the new features in detail. The basic application interface was redesigned within the ribbon toolbar tab Home. This tab is divided into three sections. Using functions from the first section, you can add individual visual objects to your project. In previous versions, only one object type was available — Plot. Now you can add to your project up to 8 types of visual objects, such as Pairs Plot, Table, Text, Text Paragraph, Image, Grid Panel and Annotation. These objects can be used individually, or combined to create compound, complex types of statistical graphics. In the second section you will find functions for the Dataset, Plot and R Console displays. Finally, the last section provides data import tools and functions. As in the previous version, the project is divided into two sections. The first section shows imported datasets. After clicking on the selected one, its variables are displayed at the bottom of the panel. Their color indicates the variable type (e.g. brown for character, blue for date-time). The second section contains a list of created visual objects — visuals. In the previous version, you could work only with Plot objects. The 2.0 version allows you to work with several visual objects. If you click on the selected dataset in the Project panel, a new ribbon toolbar tab appears — Data, where you will find functions for data cleaning, wrangling and statistical preprocessing. Functions are divided into logical groups. If you are working with the R language, you can recognize from the picture which R packages the data wrangling in Stagraph is based on. These are packages from the tidyverse collection: dplyr, tidyr, lubridate, forcats and stringr. The data wrangling workflow was redesigned and is visually controllable in the same way as the plot definition. After clicking on the selected dataset in the Project panel, it is displayed in the second panel, Properties, along with the individual functions that are performed on the dataset. Each function includes several arguments, and these arguments can be edited at the bottom of the panel. The result may be displayed in a document called Data Preview during processing. A pre-processed dataset can then be used for visualization. The ribbon toolbar tab Layers has also been reworked. A number of new geometric and statistical layers have been added. These geometries come from external packages such as ggalt or ggforce. Using these packages, you can create additional types of graphs such as mosaic plots, treemaps, horizon plots or joyplots. Along with these geometric layers come new statistical layers. The statistical layers were divided into two groups: basic (from the core ggplot2 package) and special (from external packages). In addition, Stagraph 2.0 introduces annotation layers.
Here you can find functions for annotating plots, such as a North arrow or a scale bar. Functions of the Scales group have also been updated. New scales, such as the viridis color/fill scales and new categorical scales, have been added. New functions are available under the ribbon toolbar tab Other, where you will find new options for plot faceting and themes. An example of a new geometric layer you can use in Stagraph 2.0 is geom_horizon. Using this geometry, you can create impressive data visualizations based on the horizon plot. Among the other visuals we can mention, for example, the pairs plot. This object is used to create pairs plots and offers the same definition and adjustment options as the basic plot. Its purpose is to compare multiple variables of one dataset. Well, in the end, all visuals (listed in the Project panel) can be combined to create data visualizations that are composed of several plots. A simple example is shown in the following illustration. The final visualization consists of two objects: a pairs plot (left) and a simple plot (right). There are many more novelties, changes and updates in Stagraph 2.0, and they will be described in the following posts. As an example, we can mention the ability to export and publish a plot in online vector form using the plotly.js library. This library also allows you to create animated plots. The last important innovation is the change in licensing options. From version 2.0, the program will be released in a different form. Previous versions allowed you to use (as a demo) all the functions, but only with a limited amount of data (max 200 records/rows of the dataset). From version 2.0, the program will be released under a freemium licensing model. Under this model, you will not be limited by the amount of input data you can use. Also, the data wrangling options — functions from the Data ribbon toolbar tab — are not limited at all. The only feature that will be limited (in the free version) is the set of available geometric layers. Only some basic geometries will be available in the free version, such as geom_point, geom_bar, geom_col, geom_text, geom_line and geom_path. Others will be available only in the Professional version.
Screenshots from coming Stagraph 2.0
1
screenshots-from-coming-stagraph-2-0-14a362db1009
2018-01-08
2018-01-08 11:03:37
https://medium.com/s/story/screenshots-from-coming-stagraph-2-0-14a362db1009
false
955
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Milos Gregor
Stagraph.com, ReshapeXL.com, GisXL.com, HydroOffice.org
a9bba177651e
milos.gregor
4
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-22
2018-01-22 16:48:55
2018-01-22
2018-01-22 16:54:20
0
false
en
2018-01-22
2018-01-22 16:54:20
0
14a3c62a216c
2.09434
0
0
0
I am writing a list of data science practice to be followed while getting started with data science project. I hope it helps in getting…
4
Best Data Science Practices I am writing a list of data science practices to follow when getting started with a data science project. I hope it helps you get started. Understanding the Use Case and Business Goal: This is the most vital step for any data science project: defining the business goal that needs to be achieved through data science. It requires a clear understanding of the business and of the expected outcome of the new project. It helps data scientists prepare for possible challenges and proceed with the right methods and inputs required to gain the insights the business needs. Identify the Data: Around 60–70% of a data science project's time is spent on data preparation and cleaning. Data comes in various forms, broadly classified as structured, unstructured and semi-structured. We need to identify the data that is relevant, identify the anomalies present in the data, and understand whether the data is sufficient to produce useful insight for the required business goal. Brainstorming: Most successful data science projects have one thing in common: collaboration among team members. A team consists of people from various backgrounds who have faced varied and often unique challenges, be it in modeling, data preparation/cleaning, or their individual domains. It is always useful to conduct brainstorming sessions among team members to arrive at a solution. Data is value: Set expectations for the results, i.e., keep everyone aware that the results are based on the data. They may contrast with the expected business goals, but it is always good to keep your business stakeholders aware of the insights/findings, and they may open up greater prospects than the insights business people expected. Finding the Right Tools: The choice depends on the business goal; highly computational tasks like extracting insight from image, video or audio data require high-performing systems with GPUs, and the velocity with which the data is generated also affects the right set of tools. Insight into Report and Action: After finding great insights in messy data, the value of those insights remains limited until they are transformed into visualizations of business value. The better the visualization of business value, the better the action plan for business-end people, who can customize their business operations based on the visualized data and the needs of the customers they are trying to attract. Insight is raw carbon that turns into a polished diamond through visualization techniques. Verify and Validate at Regular Intervals: A model is developed over a set of data with particular parameters, but data varies over time. If we use the same model on new data captured after a period of time, the model may well collapse in terms of the insight it used to provide. It is always recommended to pursue a testing strategy: test the model on the new data, verify/validate the results at regular intervals, and modify the model if its performance worsens. Mentioned above are a few of the best practices to follow while pursuing a data science project. Please suggest points if I missed anything important. Thanks for your time.
Best Data Science Practices
0
best-data-science-practices-14a3c62a216c
2018-01-22
2018-01-22 16:54:21
https://medium.com/s/story/best-data-science-practices-14a3c62a216c
false
555
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Mayur jain
null
68a67c881724
mayur87545
4
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-01
2018-03-01 14:49:06
2018-03-01
2018-03-01 16:45:04
0
false
en
2018-03-01
2018-03-01 16:45:04
0
14a4d8094579
2.54717
2
0
0
So you’re either :
1
Info 201 and R: What Should I Expect? So you're either: an underclass student still in the gut-wrenching process of searching for the perfect major an upperclass student looking to explore outside your major, but like not too much because you've already gotten a solid a** kicking in MATH 124, CSE 142, PHYS 121 back when you were a hopeful freshman Either way, you're wondering what to expect if you take Info 201, what "R" actually is, why it's named "R", and how large of a GPA drop you should expect. INFO 201 in a Nutshell #1: THIS IS A PROGRAMMING CLASS THIS IS A PROGRAMMING CLASS THIS. IS. A. PROGRAMMING. CLASS Maybe it's because the course code is "INFO" so people discount it, compared to if the course code was "CSE," but this is a programming class and you should expect to be coding a lot. Since it is a coding class, you should expect assignments to be due every week. There are a total of 8 assignments, due weekly, with no late days, but you get to drop your lowest assignment score. #2: Yes, R is the language you are learning, but that is not as simple as it sounds. While the main language we are learning is R, there are so many components required to utilize it effectively, and the course covers a lot of them. Here's a pretty comprehensive list of the overall knowledge I had to pick up: GitHub: all assignments are stored/submitted via GitHub, and you will also need to learn how to collaborate with others on it. Harder than it sounds. Command Line: You know that weird black, hacky-looking window that you sometimes accidentally open on your laptop? Yeah, you'll need to learn that too. Everything you do on your laptop regarding file manipulation (moving files, deleting, creating new folders, etc.), you will have to be able to do via the Command Line, and more. R Packages: Just like how CSE 142/143 tells you to sometimes import java.util.*, R also has different 'packages' that can be installed. For example, dplyr is a very popular package that allows for efficient data table manipulation. Each of these packages has particular functions/commands that you will need to know how to use, and I find that this class in particular demands knowledge of quite a few different packages. #3: The learning curve is pretty steep Even if you've taken CSE 142/143 or have some coding experience in other languages, expect to put in a good amount of time to teach yourself. If you have no coding experience, this class will be difficult and you should allocate extra time for it. One of the hardest parts of starting out with coding is gaining a basic understanding of hardware/software, how computers operate, and what coding "languages" even are. Without this understanding, coding won't be as enjoyable, and while you might be able to brute force it via Googling Stack Overflow, you won't gain an understanding of the concepts. R Basics. Like VERY Basic. Writing functions: FunctionName <- function(parameter_name) { some magical lines of code } *note the difference in format between the function name and the parameter name Calling functions: function_name(parameter) Commenting: #this is now a comment. Not code. It will not be executed. Accessing a Specific Column in a Data Table: table_name$column_name → this returns everything in that specific column of the table Other Things We Learned To get a better idea of this course, it would be useful for you to search up a bit about each of these topics so you know what sort of content to expect.
R Markdown GitHub dplyr: this is a library for data manipulation that we used a TON. It allows you to access, sort, & change data tables. knitr: this is for R Markdown, so you can knit your code and write-ups into reports. ggplot: make pretty charts/graphs! Shiny: build interactive applications in R. Hopefully this helps you gain a better idea of what the class structure is like and helps you decide whether or not to take it. Good luck!
Info 201 and R: What Should I Expect?
2
info-201-and-r-what-should-i-expect-14a4d8094579
2018-03-09
2018-03-09 10:53:20
https://medium.com/s/story/info-201-and-r-what-should-i-expect-14a4d8094579
false
675
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Abby Liu
null
2eb6639fcff
abbyliu
39
43
20,181,104
null
null
null
null
null
null
0
null
0
d162644efe2a
2018-03-19
2018-03-19 23:55:09
2018-03-19
2018-03-19 23:59:49
1
false
en
2018-03-19
2018-03-19 23:59:49
0
14a6f7aaec18
2.569811
7
0
0
Contemporary people, who are very interested in science and latest developments, have several popular themes to discuss with each other…
5
Teaching machines to see! People who are very interested in science and the latest developments have several popular themes to discuss with each other: Elon Musk and his quite ambitious views and ideas, the Penrose–Hawking singularity theorems, which are undergoing a second wave of popularity, blockchain technologies, and artificial intelligence and neural networks. We have chosen to explore the possibilities of the last topic in this article, so that you too can join a scientific conversation and express your own opinion on it. Make me see! The concept and essence of artificial intelligence cover a rather large number of spheres and continue to surprise people with different discoveries and developments based on neural networks. By all means, one of the most incredible inventions of our days is teaching computers to "see" the environment and distinguish different objects in a digital picture with shocking accuracy. This development is called computer vision. Just imagine how far our technologies have come when machines that have no eyes have learned to see an image and distinguish what is in the picture. All this became possible due to AI. Still, we can't say that there was no computer vision before the invention of AI. The idea of, and attempts at, teaching a computer to recognize various objects in a picture appeared much earlier; however, only with the mainstreaming of AI did computer vision get a real chance to advance. Contemporary computer vision is able to distinguish different objects in a picture and classify them according to some input parameters, find people in pictures, and even analyse their emotions, producing a result from which we can learn whether a person is happy or sad, angry or joyful. Although neural networks bring incredible results, such a feature is not a breakthrough in itself. It could rather be called a weak side of the development, as AI and neural networks can do better than that. How Does it Work: a General Explanation Have you ever thought about how computer vision operates? What do you think it looks like? Is it similar to the principle of our own vision? Notwithstanding the name of the technology (computer vision), there is still little in common between our vision and that of a machine. Developers and programmers use parameters based on geometrical principles, thanks to which this technology can differentiate one image from another. Still, we need some perceptual means that will transfer the external features of an object into the "mind" of a neural network. Such a unit is called a perceptron, and it is the analogue of the human brain cell, the neuron. A perceptron has input data elements and output data elements, similar to a neuron, which makes it possible to build a net and transfer information from one unit to another, hence analyzing data. The same scheme exists in our brains, continuously generating new thoughts, emotions, and impressions. Just think about the complexity of such a process! This is quite astonishing. But it is not the final stage of neural network development. Besides, this feature is only one of the numerous possibilities provided by AI. Millions of developers worldwide understand the practical and scientific value of this invention. The OL PORTAL team is also convinced of the great benefit of implementing AI technologies in our product. You may observe this development in the creation of chatbots (AI assistants generating answers to messages instead of you).
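To make the perceptron description above a little more tangible, here is a minimal sketch of a single perceptron unit in Python; the weights, inputs, and threshold are invented for illustration and have nothing to do with OL PORTAL's actual models.

```python
# A minimal sketch of the perceptron unit described above: weighted inputs,
# a bias, and a step activation. The weights, inputs, and threshold are made
# up for illustration only.
def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs, a rough analogue of a neuron deciding to fire.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Toy example: a unit that "sees" two pixel intensities and fires
# only when both are bright.
print(perceptron([0.9, 0.8], weights=[1.0, 1.0], bias=-1.5))  # -> 1
print(perceptron([0.9, 0.1], weights=[1.0, 1.0], bias=-1.5))  # -> 0
```

Stacking many such units into layers, and learning the weights from data, is what turns this simple scheme into the networks behind computer vision.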
Moreover, the OL-account will have an image search feature that is also built on neural networks. We know that the future belongs to smart applications, and we do our best to provide you with convenient functions today. Stay tuned to the social media pages of the OL PORTAL project to learn more about our latest news and upgrades. Nobody knows what wonderful invention we will decide to apply to our app next time!
Teaching machines to see!
285
teaching-machines-to-see-14a6f7aaec18
2018-06-19
2018-06-19 06:58:32
https://medium.com/s/story/teaching-machines-to-see-14a6f7aaec18
false
628
The world's first decentralized messenger on neural networks with the Artificial Intelligence dialogue function.
null
OLPortal23
null
OLPORTAL
ol-portal-steps-forward-to-the-future-communicatio
MOBILE APP DEVELOPMENT,SOCIAL NETWORK,ARTIFICIAL INTELLIGENCE,NEURAL NETWORKS,BLOCKCHAIN
olportal
Machine Learning
machine-learning
Machine Learning
51,320
OLPORTAL
The world's first decentralized messenger on neural networks with the Artificial Intelligence dialogue function.
b4a225970600
olportal.ai
960
2
20,181,104
null
null
null
null
null
null
0
null
0
97199cc52c61
2018-08-10
2018-08-10 16:32:28
2018-08-13
2018-08-13 16:15:29
2
false
en
2018-08-13
2018-08-13 16:19:15
7
14ab3878b2dd
3.330503
1
0
0
An intro to some key authors in the field of data bias and Christian's own perspective on this incredibly important area
5
Intro to Data Bias — A Conversation with Christian Beedgen In this episode of the Masters of Data podcast, I sat down, as host, with my colleague and long-time friend Christian Beedgen, Co-Founder and CTO (Chief Technology Officer) at Sumo Logic. Christian has a long history in the world of data, and he brings his years of experience and expertise to the discussion, which centers on the ideas of bias, data, and analytics in today's world. It is undeniable that all people live with the realities of biases, which impact how we see things and interpret the world around us. The question in the machine data analytics realm becomes how to embrace the realities of these biases and their impact on the data being collected, while still recognizing the inherent value in the data and the need to analyze it purposefully. To start, Christian shares his personal journey into technology, from a young age through his current role with Sumo Logic. As he tells it, his fascination with technology started when he became interested in computers by reading programming books, which engaged him and pushed him to discover more. From there, his interests grew into tinkering with computers and programming as a hobby, something he did while in school, which in turn prepared him for his career ventures. Early in his career, Christian had an opportunity at Amazon, which opened up a series of roles at different startups and centered his focus on data analytics, the experience that fueled his co-founding of Sumo Logic. The reason behind founding Sumo Logic, as he describes it, is that he saw the analysis of data not just as a tool but also as a sellable service that does real good. The focus of the conversation between Christian and me was this idea of how the data gathered in our world is impacted by biases. While many people in the data spectrum see data as simply cold, hard facts, which can easily be analyzed, crunched and quantified, the reality is much greater than that. As we see it, the data collected actually points back to more than just facts, but rather to the people the data represents, something that is much harder to quantify through machines. The challenge also becomes realizing that even though data is being gathered, the people who gather it have biases and preferences of their own. When that data is compiled, the people gathering the details inherently influence it in some way. This is where the role of context comes into play, as there is context surrounding all data that is gathered in our world. While pointing to multiple sources that have helped their thinking, Christian and I proposed that there is a need to ask questions, not just about the data itself, but about the context in which it was gathered, before it is analyzed. Pointing back to Weapons of Math Destruction, Christian refers to the idea that data isn't necessarily truth itself; you have to review the context of how the data was gathered in order to fully understand it, something they try to apply in their work at Sumo Logic. As the discussion continues, it's clear that the challenge surrounding biases is that we often convince ourselves into thinking a certain way about things. These biases are shaped by what are called anchoring biases: our initial exposure to something inevitably impacts the way we interpret everything related to it.
Similarly, confirmation bias is our focusing on information that supports our beliefs, paying less attention to information that contradicts them, and assuming that ambiguous information automatically supports our perspective. To land the plane, Christian and I discussed the responsibility of companies who deal with data and analytics to address the issues raised. In our view, it comes down to having a clear set of ethics, engaging in open reflection on biases, and looking for a workable solution. One way of doing this is by simply assuming that data is representative of people, not simply impersonal or factual content. And in the world of building products and features for machine data analytics, the reality of context needs to be embraced. Resources Learn more about Christian: Check out this episode on iTunes or on our website at mastersofdata.com Learn more about Sumo Logic Follow Christian on Twitter Learn more about Sensemaking: The Power of the Humanities in the Age of the Algorithm by Christian Madsbjerg Learn more about Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil Learn more about Ten Simple Rules for Responsible Big Data Research co-written by Kate Crawford
Intro to Data Bias - A Conversation with Christian Beedgen
1
intro-to-data-bias-a-conversation-with-christian-beedgen-14ab3878b2dd
2018-08-13
2018-08-13 16:19:15
https://medium.com/s/story/intro-to-data-bias-a-conversation-with-christian-beedgen-14ab3878b2dd
false
781
Thoughts on what's going on in technology, data, analytics, culture and other nerdy topics
null
mastersofdata
null
Newtonian Nuggets
newtonian-nuggets
ANALYTICS,DEVOPS,BIG DATA ANALYTICS,THOUGHT LEADERSHIP,ARTIFICIAL INTELLIGENCE
benoitnewton
Big Data
big-data
Big Data
24,602
Ben Newton
Proud Father, Avid Reader, Musician, Host of the Masters of Data Podcast, Product Evangelist @Sumologic
2814fd09f883
BenNewton
110
94
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-24
2018-08-24 13:34:36
2018-08-24
2018-08-24 18:09:17
12
false
en
2018-08-24
2018-08-24 18:09:17
1
14ab45ef896e
5.489623
0
0
0
A Visual Guide to Connectionist Temporal Classification(CTC)
3
Modeling Sequences with CTC (Part 2) A Visual Guide to Connectionist Temporal Classification (CTC) In the last part, I talked about how CTC can be used to map speech input to its corresponding transcript, and discussed the CTC model and algorithm and how it calculates the probability of an output sequence given an input sequence. Finally, I introduced the CTC score, and here I'll discuss how to compute it. As mentioned in Part 1, there are two cases to compute α(s,t): Case 1: Here ϵ is between repeating characters In this case, we can't jump over Z[s−1], the previous token in Z. The first reason is that the previous token can be an element of Y, and we can't skip elements of Y. Since every element of Y in Z is followed by an ϵ, we can identify this when Z[s] = ϵ. The second reason is that we must have an ϵ between repeat characters in Y. We can identify this when Z[s] = Z[s−2]. To ensure we don't skip Z[s−1], we can either be there at the previous time-step or have already passed through at some earlier time-step. As a result there are two positions we can transition from. Case 2: Here ϵ is between non-repeating characters In the second case, we're allowed to skip the previous token in Z. We have this case whenever Z[s−1] is an ϵ between unique characters. As a result there are three positions we could have come from at the previous step. Let's take a more detailed look at how the CTC score is computed via the dynamic programming algorithm. Every valid alignment has a path in this graph. There are two valid starting nodes and two valid final nodes, since the ϵ at the beginning and end of the sequence are optional. The complete probability is the sum of the two final nodes. Now that we can efficiently compute the loss function, the next step is to compute a gradient and train the model. The CTC loss function is differentiable with respect to the per-time-step output probabilities since it's just sums and products of them. Given this, we can analytically compute the gradient of the loss function with respect to the non-normalized output probabilities and from there run backpropagation as usual. For a training set D, the model's parameters are tuned to minimize the negative log-likelihood instead of maximizing the likelihood directly. Inference We have learned how to train the model; now we'd like to use it to find a likely output for a given input. More precisely, we need to solve: One heuristic is to take the most likely output at each time-step. This gives us the alignment with the highest probability: We can then collapse repeats and remove ϵ tokens to get Y. For many applications this heuristic works well, especially when most of the probability mass is allotted to a single alignment. However, this approach can sometimes miss easy-to-find outputs with much higher probability. The problem is, it doesn't take into account the fact that a single output can have many alignments. Here's an example. Assume the alignments [a, a, ϵ] and [a, a, a] individually have lower probability than [b, b, b]. But the sum of their probabilities is actually greater than that of [b, b, b]. The naive heuristic will incorrectly propose Y = [b] as the most likely hypothesis. It should have chosen Y = [a]. To fix this, the algorithm needs to account for the fact that [a, a, a] and [a, a, ϵ] collapse to the same output. We can use a modified beam search to solve this. Given limited computation, the modified beam search won't necessarily find the most likely Y.
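Before turning to the beam search in detail, here is a minimal NumPy sketch of the forward (alpha) dynamic program described above, assuming `probs` is a (T, vocab) matrix of per-time-step output probabilities, `target` is the label sequence Y as token ids, and `blank` is the id of the ϵ token; these names are illustrative, not code from the article, and a real implementation would work in log space for numerical stability.

```python
# Minimal sketch of the CTC forward (alpha) dynamic program over the
# epsilon-padded sequence Z. `probs` is a (T, vocab) NumPy array of
# per-time-step output probabilities, `target` is the label sequence Y
# as token ids, and `blank` is the id of the epsilon token.
import numpy as np

def ctc_forward(probs, target, blank=0):
    T = probs.shape[0]
    # Build Z: a blank before, between, and after every label in Y.
    z = [blank]
    for label in target:
        z += [label, blank]
    S = len(z)

    alpha = np.zeros((S, T))
    # Two valid starting nodes: the leading blank or the first label.
    alpha[0, 0] = probs[0, z[0]]
    if S > 1:
        alpha[1, 0] = probs[0, z[1]]

    for t in range(1, T):
        for s in range(S):
            score = alpha[s, t - 1]              # stay on z[s]
            if s > 0:
                score += alpha[s - 1, t - 1]     # come from z[s-1]
            # z[s-1] may be skipped only when it is a blank between
            # two distinct labels (Case 2 above).
            if s > 1 and z[s] != blank and z[s] != z[s - 2]:
                score += alpha[s - 2, t - 1]
            alpha[s, t] = score * probs[t, z[s]]

    # Two valid final nodes: the last label or the trailing blank.
    if S > 1:
        return alpha[S - 1, T - 1] + alpha[S - 2, T - 1]
    return alpha[S - 1, T - 1]
```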
The modified beam search does, at least, have the nice property that we can trade off more computation (a larger beam size) for an asymptotically better solution. A regular beam search computes a new set of hypotheses at each input step. The new set of hypotheses is generated from the previous set by extending each hypothesis with all possible output characters and keeping only the top candidates. We can modify the vanilla beam search to handle multiple alignments mapping to the same output. In this case, instead of keeping a list of alignments in the beam, we store the output prefixes after collapsing repeats and removing ϵ characters. At each step of the search, we accumulate scores for a given prefix based on all the alignments which map to it. A proposed extension can map to two output prefixes if the character is a repeat. This is shown at T=3 in the figure above, where 'a' is proposed as an extension to the prefix [a]. Both [a] and [a, a] are valid outputs for this proposed extension. When we extend [a] to produce [a, a], we only want to include the part of the previous score for alignments which end in ϵ. Remember, the ϵ is required between repeat characters. Similarly, when we don't extend the prefix and produce [a], we should only include the part of the previous score for alignments which don't end in ϵ. Given this, we have to keep track of two probabilities for each prefix in the beam: the probability of all alignments which end in ϵ, and the probability of all alignments which don't end in ϵ. When we rank the hypotheses at each step before pruning the beam, we'll use their combined scores. The implementation of this algorithm doesn't require much code, but it is dense and tricky to get right. In some problems, such as speech recognition, incorporating a language model over the outputs significantly improves accuracy. We can include the language model as a factor in the inference problem. The function L(Y) computes the length of Y in terms of the language model tokens and acts as a word insertion bonus. With a word-based language model, L(Y) counts the number of words in Y. If we use a character-based language model, then L(Y) counts the number of characters in Y. The language model scores are only included when a prefix is extended by a character (or word) and not at every step of the algorithm. This causes the search to favor shorter prefixes, as measured by L(Y), since they don't include as many language model updates. The word insertion bonus helps with this. The parameters α and β are usually set by cross-validation. The language model scores and word insertion term can be included in the beam search. Whenever we propose to extend a prefix by a character, we can include the language model score for the new character given the prefix so far.
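As a rough illustration of how the two per-prefix probabilities are maintained, here is a compact sketch of the prefix beam search without the language model term; the function name, `probs` layout, and beam size are assumptions for illustration, and a production decoder would work in log space and add the α-weighted language model score and β-weighted L(Y) bonus whenever a prefix is extended by a new character.

```python
# Compact sketch of the prefix beam search: the beam stores collapsed output
# prefixes, each with two scores, the total probability of alignments ending
# in a blank (p_b) and not ending in a blank (p_nb). `probs` is a (T, vocab)
# array of per-time-step output probabilities; `blank` is the epsilon id.
from collections import defaultdict

def prefix_beam_search(probs, beam_size=10, blank=0):
    T, V = len(probs), len(probs[0])
    beam = {(): (1.0, 0.0)}  # empty prefix: (p_b, p_nb)

    for t in range(T):
        next_beam = defaultdict(lambda: (0.0, 0.0))
        for prefix, (p_b, p_nb) in beam.items():
            for c in range(V):
                p = probs[t][c]
                if c == blank:
                    # A blank never extends the prefix.
                    b, nb = next_beam[prefix]
                    next_beam[prefix] = (b + (p_b + p_nb) * p, nb)
                    continue
                new_prefix = prefix + (c,)
                b, nb = next_beam[new_prefix]
                if prefix and c == prefix[-1]:
                    # Repeat character: only alignments ending in a blank may
                    # extend the prefix; the rest merge into the old prefix.
                    next_beam[new_prefix] = (b, nb + p_b * p)
                    ob, onb = next_beam[prefix]
                    next_beam[prefix] = (ob, onb + p_nb * p)
                else:
                    next_beam[new_prefix] = (b, nb + (p_b + p_nb) * p)
        # Rank prefixes by their combined score and prune to the beam size.
        beam = dict(sorted(next_beam.items(),
                           key=lambda kv: sum(kv[1]), reverse=True)[:beam_size])

    best_prefix, _ = max(beam.items(), key=lambda kv: sum(kv[1]))
    return list(best_prefix)
```

On the toy example above, the scores of the alignments collapsing to [a] accumulate onto the single prefix (a,), which is exactly what lets the search prefer it over [b].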
Modeling Sequences with CTC (Part 2)
0
modeling-sequences-with-ctc-part-2-14ab45ef896e
2018-08-24
2018-08-24 18:09:18
https://medium.com/s/story/modeling-sequences-with-ctc-part-2-14ab45ef896e
false
1,097
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Kushagra Bhatnagar
Deep Learning and Computer Vision Enthusiast
f1dfc211d763
kushagrabh13
1
2
20,181,104
null
null
null
null
null
null
0
null
0
58a0bcb859d2
2018-01-16
2018-01-16 16:30:13
2018-01-18
2018-01-18 18:24:50
5
false
en
2018-02-05
2018-02-05 18:55:16
13
14ab75a15210
2.738994
10
0
0
Difficult problems require sophisticated solutions for business, industry, and organizations using data to drive decisions. New challenges…
4
Multiparadigm Data Science: 5 Things You Need to Know Difficult problems require sophisticated solutions for business, industry, and organizations using data to drive decisions. New challenges and new demands across the enterprise require cutting edge solutions to remain competitive and innovative in today’s marketplace. Multiparadigm data science is one such solution that uses modern analytical techniques, automation, and human-data interfaces to arrive at better answers with flexibility and scale, whether it’s automated machine learning and report generation, natural language queries of data for instant visualizations, or implementing neural networks with ease and efficiency. What Makes Data Science Multiparadigm? Data science is a constantly evolving field. Here are five key points to get you on the road to better understanding a multiparadigm approach to taking data science to new levels of analysis and understanding. Harnessing the full power of computation. Traditional data science techniques are rooted in statistical analysis, hypothesis testing, and a range of regression techniques that limit the questions that can be asked of your data. However, as data become more complex, and with new techniques developed at breakneck speed, it is often the case that more advanced means of solving problems through automation exist but are underutilized, including advanced machine learning, signal processing, computer vision, and neural networks. A process led by questions rather than techniques. Using automated machine learning, natural language queries, and triggered reporting frees up time and resources. Having a vast range of algorithms, models, optimization, and domain expertise built in to your system gives you the ability to ask more questions of your data in a beginning-to-end workflow in one environment. Fully customizable reporting. Different parts of organizations require different reports from data analysis. Having the ability to generate meaningful reports across the enterprise with automated visualization and dynamic content gives everyone from the C-Suite to the sales team access to the tools they need to make informed decisions and forecasts. Natural language understanding. Advances in semantic search and natural language processing are only the beginning. Document tagging and probabilistic text queries are useful, but in order to fully unleash the power of your data, the ability for anyone across an organization to use free-form input to answer a question from your data requires a system built to understand human input with confidence. Multiple types of data and computation. Real-world data is messy and requires a system that can intuitively process, model, and visualize everything from textual data to images beyond dataframes of strings and numerical data. New to AI, machine learning, and advanced analytical techniques? We have you covered. Check out this video playlist featuring Jon McLoone, Director of Technical Communication and Strategy at Wolfram Research Europe, to get up to speed and go from zero to AI in less than an hour. How will you make your data science multiparadigm? Let us know in the comments! For updates and more information on multiparadigm data science, visit our website. You can also follow us on Twitter @Wolfram and find us on Facebook.
Multiparadigm Data Science: 5 Things You Need to Know
73
multiparadigm-data-science-5-things-you-need-to-know-14ab75a15210
2018-05-13
2018-05-13 12:27:39
https://medium.com/s/story/multiparadigm-data-science-5-things-you-need-to-know-14ab75a15210
false
505
Official source for updates about Wolfram Research public relations and events from @Wolfram_Events.
null
null
null
Wolfram News & Events
wolfram-events
WOLFRAM LANGUAGE
wolfram_events
Machine Learning
machine-learning
Machine Learning
51,320
Swede White
Lead Communications Strategist at Wolfram Research
9479e00df95b
swedewhite
133
228
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-23
2017-10-23 07:23:28
2017-10-23
2017-10-23 07:24:01
0
false
en
2017-10-23
2017-10-23 07:24:01
0
14abe467f0ef
0.966038
1
0
0
Solution: Initialize hidden layers using unsupervised learning.
2
Unsupervised Pre-training (Deep Neural Nets)

Solution: Initialize hidden layers using unsupervised learning. Force the network to represent the latent structure of the input distribution, and encourage the hidden layers to encode that structure. This is a harder task than supervised learning, hence we expect less overfitting.

The most popular, and quite simple, procedure is the greedy, layer-wise procedure: train one layer at a time, from first to last, with an unsupervised criterion, and fix the parameters of the previous hidden layers. The previous layers are viewed as feature extractors. First layer: find hidden unit features that are more common in training inputs than in random inputs. Second layer: find combinations of hidden unit features that are more common than random hidden features. Third layer: find combinations of combinations, and so on. Pre-training initializes the parameters in a region where the nearby local optima overfit less.

Fine tuning: Once all layers are pre-trained, add an output layer and train the whole network using supervised learning. Supervised learning is performed as in a regular feed-forward network (forward propagation, backpropagation and update). We call this last phase fine tuning: all parameters are "tuned" for the supervised task at hand, and the representation is adjusted to be more discriminative. This method is most useful when labelled data is scarce but unlabelled data is plentiful.

What kind of unsupervised learning to use? There are many options; here are some of them: stacked Restricted Boltzmann Machines, stacked autoencoders, stacked denoising autoencoders, stacked semi-supervised embeddings, stacked kernel PCA, and stacked independent subspace analysis.

Alright, that's it for now! Thank you for spending your time. Cheers!
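As a rough illustration of the greedy, layer-wise procedure, here is a sketch of pre-training with stacked (plain) autoencoders followed by supervised fine-tuning, written in Python with Keras. The post does not prescribe any library, and the data, layer sizes, and the helper name pretrain_layer are invented for the example.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def pretrain_layer(x, input_dim, hidden_dim, epochs=5):
    # Train one autoencoder layer on (unlabelled) data and return its encoder.
    inp = keras.Input(shape=(input_dim,))
    encoded = layers.Dense(hidden_dim, activation="relu")(inp)
    decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)
    autoencoder = keras.Model(inp, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(x, x, epochs=epochs, batch_size=128, verbose=0)
    return keras.Model(inp, encoded)

# Hypothetical unlabelled data: 10,000 samples with 784 features.
x_unlabelled = np.random.rand(10000, 784).astype("float32")

# Greedy, layer-wise pre-training: each layer is trained on the output of
# the previously trained layers, whose parameters are then kept fixed.
dims = [784, 256, 64]
encoders, features = [], x_unlabelled
for in_dim, out_dim in zip(dims[:-1], dims[1:]):
    enc = pretrain_layer(features, in_dim, out_dim)
    encoders.append(enc)
    features = enc.predict(features, verbose=0)

# Fine-tuning: stack the pre-trained layers, add an output layer, and train
# the whole network with supervised learning on the (smaller) labelled set.
inp = keras.Input(shape=(784,))
h = inp
for enc in encoders:
    h = enc.layers[-1](h)  # reuse the pre-trained Dense layers (shared weights)
out = layers.Dense(10, activation="softmax")(h)
model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_labelled, y_labelled, ...) would perform the fine-tuning step.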
Unsupervised Pre-training (Deep Neural Nets)
2
unsupervised-pre-training-deep-neural-nets-14abe467f0ef
2018-05-21
2018-05-21 10:27:33
https://medium.com/s/story/unsupervised-pre-training-deep-neural-nets-14abe467f0ef
false
256
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Akhilesh
Machine Learning and Blockchain. Bengaluru, India.
d5135b7a4d7d
Akhilesh_k_r
22
67
20,181,104
null
null
null
null
null
null
0
null
0
d8f3f6ad9c31
2018-08-18
2018-08-18 18:42:24
2018-08-18
2018-08-18 20:21:24
0
false
en
2018-08-21
2018-08-21 06:02:09
3
14acdebf9816
7.969811
3
0
0
Most of what you read or watch about artificial intelligence, including from some very prominent field experts, is completely wrong. One of…
5
AI is a Revolution in What and How we KNOW Most of what you read or watch about artificial intelligence, including from some very prominent field experts, is completely wrong. One of the most frustrating downsides of the decline of institutions and the Age of Individual Empowerment that we are going through is the incredibly low signal to noise ratio in any kind of expertise. The dominant voice on any topic is often just the loudest, or the savviest promoted, and not the most substantive. This is especially disappointing for topics as complex and critical as AI, where commentary ranges from silly and utterly misinformed fear-mongering (looking at you, Elon) to a more educated but extremely partial view of what it is and what it means. Lots of prominent AI researcher like Yann LeCun or Geoff Hinton, for example, are too all-in on scaling computational models up to multi-system and multi-model adaptive intelligence, which is only half of the work. It’s especially unfortunate that computer scientists have earned somewhat of a monopoly on the field, making intelligence all about learning (which it is not) and that no prominent philosopher or mathematician (except my friend Ben Goertzel, who I sincerely believe history will remember as the true intellectual forefather of Artificial General Intelligence) has developed any kind of influential voice in the domain. To fully grasp what artificial intelligence means, and start thinking about its impact on our society, it’s worth looking back to its philosophical First Principles. Representations and the Human Brain At a broad, philosophical level, artificial intelligence starts with the notion that, as powerful as it is, the human mind isn’t powerful enough to represent and understand the reality it lives in in its complete mathematical layout. Representation is a key concept here (and “knowledge representation” is a critical -yet misunderstood- area of AI). There’s reality (let’s call it N, for the statistically minded) and the entirety of data about it. Humans and machines can’t measure all the position of all the atoms in the universe every single millisecond, so they have to summarize that reality. Humans can only look locally (around them) and in a very abstract manner (symbolic representation). As we’ll see, representational machine learning does almost the opposite (builds n from a lot of atomic-level data about N, as opposed to sampling N and creating symbolic representations of it). Even then, the complexity of what constitutes just my office, for example, or the physical state of my body, are both very high. So the human mind (and even machines) need to compress that reality into a simpler model, or represent N as a simpler statistical model that — hopefully- is a high fidelity representation (often it is not) or N. While reality is the function N, a representation is a compressed, sampled version of N called n. A representation n can have high or low fidelity to N, just like the summary of a book can be a good, high fidelity representation of the book, or a poor representation of the book. n, in this case, is our human knowledge (basically a bunch of statistical models) about itself and its environment. And that knowledge is a very compressed representation of reality, for which we never have a lot of feedback on, except when we make decisions from it. Sometimes that knowledge is good (and then the decisions we make based on it generate intended outcomes), and sometimes it’s bad (they generate detrimental or unexpected outcomes). 
“Good n” leads to good decisions, “Bad n” leads to poor decisions. This is important, because both at an individual and an organizational level, evolutionary “fit” (and success, performance in a competitive environment, etc.) is essentially a higher ratio of good to bad decisions. Now, for many reasons linked to the structure of the mammalian brain and to how Western education has been shaped by engineers for centuries, we’ve been building mental representations of reality (building a bunch of n models) that are very mechanistic. Medicine is a great example. Look at how it’s organized by arbitrary functions, like the brain, the gut, hands and feet. Is this a high fidelity representation of the human body? Of course not. The body is a much more complex system, and we’ve been going way too long without representing and analyzing it that way. Systems of Systems The human body, society, the Universe itself, is a complex system of systems. Imagine a Russian nested-doll structure of graphs. A graph is a topological structure made of nodes (entities such as people, cars, atoms, molecules) and relationships between them (called edges). A molecule is a graph (atoms and relationships between them). A cell is a graph of that graph, where molecules become nodes in a higher level graph. That graph, in turn, becomes a node in a higher level, more complex graph that can be an organ or a muscle. That graph of graphs then is a node in a higher level graph that is the human body, which itself is a node in higher level nested graphs of family, community, society, country, mankind, etc. You get the sense of how complex this is: it’s mind-bogglingly complex (remember all of these entities and the relationships between them are constantly in flux), and computing all of these relationships together to represent and predict the state of even any of these sub-systems is, for now, virtually impossible (that’s the whole challenge of AI). This data structure is called a “hypergraph” (a graph or graphs), and it’s perhaps one of the most profound and fundamental concepts in mathematics. Because it is a direct reflection of how the Universe is organized, a hypergraph is by far the most high-fidelity type of knowledge representation, and one of the most promising areas of research in AI (as you can imagine, it’s incredibly hard to compute such a complex graph on large datasets). To create a much higher fidelity representation of our reality, humans should be thinking in systems (N). But because it’s far too complex for us to do so (the amount of computation is completely insane), we’re thinking very mechanistically trying to build a very compressed representation of N as a bunch of simple systems interacting together in a very linear fashion (still a graph, but a very rough and inadequate one), vs a more granular look at how certain cluster of nodes in a system might impact other cluster of nodes in another. Enter AI In 1992, an IBM engineer named Gerry Tesauro created a backgammon-playing machine learning application (based on a shallow neural network) called “Neurogammon”. The application would play at and even above human level, but in a way that didn’t quite make sense to even the best players in the world (see Gerry’s paper here). Fast forward to 2016: DeepMind (owned by Google) extended the work of Tesauro -and many others- to create a Go-playing application that performed above the level of even the best human players at the time. 
Here again, AlphaGo played the game in ways that often didn't resonate with experts (a really good blog post on this is here). More recently, an OpenAI team created a very interesting hierarchical neural net application, called "OpenAI Five", which beat some of the best players of Dota 2 at a very simplified version of the game. Once again, the application played the game in ways that were very unusual to human players. You get the idea. In each of these examples, AI applications (thanks to clever architectures and gigantic amounts of computation, which many AI purists take issue with) were able to build vastly more complex representations of the system of these games than even the best human minds could. All of these applications performed what's called "representational learning", which is a type of machine learning (of which neural networks are a part) where machines build their own hierarchical representations of the "system", which is made of all the complex patterns in the data they are fed. Representation learning is impressive specifically because it takes the entirety of the data about a system and learns to build compressed representations of it. It builds n from N. This isn't how the human mind works. Our brain has evolved over millions of years to promote efficiency, which means we don't have the ability to be fed all the possible information about a system (except relatively simple ones) to create an optimal n representation of that system. Instead, we have to strategically sample the data and build the best possible n representation of N, which we iterate probabilistically through trial and error, and pass on these optimized models to future generations (new brains) so that, hopefully, the learning that happens in a machine in a few hours can happen over a century or more. So the human brain is not great at knowing in and of itself over a single lifetime, because it's built to optimize its representations of the system of systems it lives in, in a collective manner (transfer learning), over generations of agents and thousands of years. Sure, it's worked pretty well so far, but it's very slow. And for any of us, it doesn't feel great knowing that we're just a link in a long chain of evolutionary algorithmic optimization. Or, to borrow an engineering term, just one "sprint" in a much larger development project. A Revolution in Knowledge At the most fundamental and philosophical level, AI promises to considerably accelerate this process by having powerful and, hopefully, highly intelligent machines derive the best n from N. Armed with large datasets and such a capacity to create high-fidelity representations of complex systems, humans could become exponentially more knowledgeable about themselves and their environment. What happens when we're able to think about complex phenomena like cancer, economic development, or human performance, with hundreds of thousands of variables instead of a handful, which is how human knowledge has been developed? We're suddenly able to create much more accurate and high-fidelity representations of these complex systems, and we're able to develop better knowledge of them. It's a lot more complicated, of course.
This hybrid man-machine superintelligence presupposes the existence and availability of large, properly curated, and accurate datasets (the machine learning community is awakening to how hard that is), organizations and societal normative frameworks that are ready to accept it (a whole new bag of scorpions right here), and a radically new mindset: systems thinking. Equally important is a giant technical step, which my team and I focus a lot of our efforts on: the ability to semantically and probabilistically represent all of this data across formats and ontologies. As seen previously, representational learning is very powerful at creating high-fidelity representations of whole complex systems. But that only goes so far, for several reasons: (1) it needs large and well-curated datasets, (2) it needs lots of experts to tune, and (3) while it can represent very complex systems (with a large state space, like Go), it isn't capable of scaling beyond bounded, stable (they don't evolve) and fairly coherent (from a data diversity standpoint) systems, whereas most systems we are trying to better understand (like the human brain, cancer, or human performance) are multi-dimensional, multi-ontological (they involve different types of data), and fast-evolving. So while neural networks have shown impressive results in representing fairly complex systems (far above human abilities), they have not yet been able to cross into a higher-level field of systems representation, which would make them very useful in helping humans understand more complex systems such as disease. This step is where my team and I, both at ETC and Corto, have focused our efforts. We are working on extending hypergraph-like probabilistic knowledge representation applications to allow for more complex systems representations across large and heterogeneous datasets, in a way that could make them more reliable, more autonomous, and easier to compute. After all (and we don't hear this enough), the AI research community has long known how to build superintelligence (probabilistic graph representation + information-theoretic methods + Bayesian/reinforcement learning on complex adaptive systems); it just hasn't been able to do so in a way that can be computed. 100% of the debate on AI today can be understood this way. It's worth remembering that, a few years ago, AI researcher Marcus Hutter wrote a complete and definitive Artificial General Intelligence application made of just 50 lines of LISP code … but which, to be implemented, would need more computational power than exists in the whole Universe.
AI is a Revolution in What and How we KNOW
22
ai-is-a-revolution-in-what-and-how-we-know-14acdebf9816
2018-08-21
2018-08-21 06:02:09
https://medium.com/s/story/ai-is-a-revolution-in-what-and-how-we-know-14acdebf9816
false
2,112
Perspectives on Visual Storytelling
null
vantageonmedium
null
Vantage
vantage
DOCUMENTARY,PHOTOGRAPHY,PHOTOJOURNALISM,CULTURE,VISUAL STORYTELLING
vantageeditors
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Yves Bergquist
Machine Intelligence Warlord. Founder & CEO of AI Startup Corto. Director of the AI & Neuroscience Project at USC’s Entertainment Technology Center
1600f8fa2199
ybergquist
681
25
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-07
2018-04-07 23:54:58
2018-04-08
2018-04-08 00:03:10
2
false
en
2018-05-31
2018-05-31 15:23:02
15
14adb2ca3790
2.443711
7
1
0
The case-study is the Bay of Gibraltar which is a British Overseas Territory located geographically south of Spain. I got the Sentinel-2…
5
Clustering a satellite image with Scikit-learn

The case study is the Bay of Gibraltar, a British Overseas Territory located geographically south of Spain. I got the Sentinel-2 multispectral optical image from the Sentinel Hub, and took a subset around the area of interest with SNAP. The Near-Infrared (NIR) band in the studied Sentinel-2 image is quite adequate for detecting water. Band 8 is the NIR band in Sentinel-2 products and has a 10m resolution. Water is known to absorb this band strongly. Therefore, we propose to discriminate water from land by clustering this band into two classes using K-means clustering. From this link you can download the subset of the NIR band in GeoTiff format, which is processed in the example below.

K-means clustering is one of the most basic unsupervised classification algorithms out there. Unsupervised means that the classifier doesn't require a training dataset that was labelled beforehand. In a nutshell, K-means in its initial implementation works as explained in this blog post: the K-means algorithm starts by randomly initializing as many centroids as the number of clusters we eventually want to obtain. Each point in the dataset is assigned to the cluster whose centroid is the closest (e.g. by Euclidean distance). At the end of every iteration, the centroid of each cluster is updated to the average of the points classified in that cluster. The stopping condition is when the cluster assignments no longer change. There is no need to worry about implementing K-means in this tutorial, since we are going to use Scikit-learn, which includes many machine learning algorithms, among them K-means clustering.

Before getting into the heart of the matter, we need to import GDAL and the clustering module from Scikit-learn: First off, the satellite image is read with the GDAL Python wrapper, and from it we extract the band we are interested in classifying: Python-gdal makes our lives much easier by reading the data into a NumPy array, which facilitates performing different array operations on it. This will prove useful later when Scikit-learn comes into play to classify the NumPy array: The classification is performed at the pixel level (i.e. each pixel represents a statistical individual to classify). So prior to the clustering, we first need to preprocess the dataset by reshaping the input image from its original 2D dimensions ("width x height") to a vector of individuals ([[x1], [x2], …, [xn]], where xi is the intensity of each pixel): Afterwards, we initialize the classifier by providing the number of clusters as input, and we fit it to the preprocessed dataset to cluster it (no training is needed, as explained above): The classified image can be retrieved from the labels that were assigned to each pixel. However, this labels array is shaped as a vector and needs to be reshaped as an image (width x height): We return to GDAL to save the image as a GeoTiff. Similarly to when the original NIR image was opened, we start by creating a dataset with the same dimensions as the input image. Then we save the clustered image array as an individual band in it: Don't forget to clap if this story has been useful to you.
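The code snippets this post refers to are not reproduced above, so here is a minimal end-to-end sketch of the workflow it describes: read the NIR band with the GDAL Python wrapper, reshape it into a vector of pixels, cluster it into two classes with Scikit-learn's KMeans, and write the result back out as a GeoTiff. The file names are placeholders, not the ones from the original post.

import numpy as np
from osgeo import gdal
from sklearn.cluster import KMeans

# Read the NIR band (a single-band GeoTiff subset) into a NumPy array.
dataset = gdal.Open("gibraltar_nir.tif")
band = dataset.GetRasterBand(1)
img = band.ReadAsArray()

# Reshape the 2-D image (width x height) into a vector of individuals,
# one intensity value per pixel.
X = img.reshape((-1, 1))

# Cluster the pixels into two classes (water / land); no labelled
# training data is needed.
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)

# Reshape the label vector back into the original image dimensions.
clustered = kmeans.labels_.reshape(img.shape).astype(np.uint8)

# Save the clustered image as a GeoTiff with the same dimensions and
# georeferencing as the input.
driver = gdal.GetDriverByName("GTiff")
out = driver.Create("clustered.tif", dataset.RasterXSize,
                    dataset.RasterYSize, 1, gdal.GDT_Byte)
out.SetGeoTransform(dataset.GetGeoTransform())
out.SetProjection(dataset.GetProjection())
out.GetRasterBand(1).WriteArray(clustered)
out.FlushCache()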
Clustering a satellite image with Scikit-learn
12
clustering-a-satellite-image-with-scikit-learn-14adb2ca3790
2018-05-31
2018-05-31 15:23:03
https://medium.com/s/story/clustering-a-satellite-image-with-scikit-learn-14adb2ca3790
false
546
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Hakim
Student in Remote Sensing
d36166f8de27
h4k1m0u
24
10
20,181,104
null
null
null
null
null
null
0
null
0
f94d6f4c2941
2018-07-22
2018-07-22 17:06:58
2018-07-22
2018-07-22 17:24:42
3
false
en
2018-08-29
2018-08-29 13:25:50
4
14ae59f65ca7
4.451887
1
0
0
Single Key To The Mysteries Of The World (Non-Fiction)
5
UNICUM ORGANUM Single Key To The Mysteries Of The World (Non-Fiction) Previous Parts is here Part III. HOMO SAPIENCE — ALGO CREATURE BASED on the CHEMICAL PROCESSOR Section 2. The Human anatomy from the standpoint of the presence in his body a chemical processor. III.2 The claim that our body is based on a chemical processor somewhat changes the view of human anatomy. We do not mean the physical structure of human organs, but their functional activity and a role in the body. Imagine a robot created from today’s famous electric processors, literally CREATED from the PROCESSORS. That is, a part of the processors is used as control elements of the robot, a part of the processors is the body of the robot (used simply as “bricks” of which the body is composed), a part of the processors is used as converters of various external influences in the information electrical signals, a part is used as the actuators. You’ll say it’s IMPOSSIBLE. That’s right, it’s impossible with the use of electrical processors, in which the information is torn away from the carrier-body. But with using a chemical processor can create a whole robot or a man from chemical processors. Take any human organ. It’s made of cells, which are the processors. However, the internal structure of the cells of different organs differs significantly from each other in the same transfer and processing of chemical information (i.e. information in the form of chemicals). It is known, for example, that the heart itself without control signals from the brain works when it receives all the necessary nutrients. This shows that it contains its own, independent processor body, possibly not designed as a separate “block”-organ. The latter is possible, since chemical processors are able to connect to each other in series/parallel, carrying significantly different management and executive functions. Chemical substances are not entering at the right in the processor of the reaction, easily pass through the processor to the processor where they are necessary for the reaction. Also, it is impossible to deny the importance of the side effect of a number of chemical reactions, in which of that the electrical potential changes significantly, as a result so-called bio-currents passes through the processor cells, they may carry to the appropriate place of the body that will respond to some their electrical information (that is, information in the form of an electric signal, as in a technical computer). Therefore, it is often possible to start the heart with using an electric shock. But the analysis of the chemical processor shows that the basis of its work is not the processing of electrical signals, and the processing of chemical signals. I must say that in general, the passion of mankind for the study of various bio-currents is only a tribute to the fact that an electricity has been recently discovered in historical terms, and the introduction of its capabilities into our lives has given striking results from the past life. But! our body does not work on electricity. Therefore, the study, for example, of the brain from the point of view of any of the bio-fields emitted by them can give nothing fundamentally significant for understanding the method of processing information. However, such phenomenological methods, as we understand, will never be able to reveal the algorithmic work of a human-created computer. 
The study of the fields emitted by it cannot relate to digital or analog methods of processing of electrical signals coming to the input of human-created processors. This is also with a chemical processor. Now let’s consider fundamentally how the brain works. The brain is certainly a computer based on the chemical processors. This clearly proves the fact of our life that, for example, anesthesia, is really possible only with the use of chemical anesthetics. In our time there is no doubt that the effect of pain is produced in the brain. So, only chemical methods of anesthesia really “deceive” brain-processor. He “thinks” that the body is all right, but an outside observer see a person has lost whole part of body there. Now we can tell exactly where the border of our body is. The boundary of our body is there where the transmission of information by means of chemical reactions in sealed cells between processors ends. Transmission of information by chemical reactions ends at the border with air or water. Let’s pay attention to the fact that due to the excellent way of our body, that is, as a chemical processor, from the surrounding space of the information projection of the Universe, our body is so fantastically resistant to various physical influences. No yet been created by human hands the apparatus, devices, mechanisms, etc. in comparison with human organism on reliability and stability in really the most difficult external conditions of our World. The technique has long opened a way of “hot” reserving of the individual parts of the mechanisms that make up any complex unit. This refers to the method of parallel operation of the same mechanisms, providing the most important functions of the device. If one of the mechanisms fails, the second immediately takes on an additional load. But none of the broken mechanism is fundamentally unable to repair itself or replace. And in the body, the processor cells are fundamentally not stable throughout human life. The life of any cell, except a neuron, is much shorter than human life. Therefore, the body is provided with “super-hot” reservation. The “mechanism” which failed not only can be repaired, but also completely regenerated, anew created. All three basic systems shown in Schemes 1–3 do not in any way broken, but, on the contrary, thanks to the identity based on common chemical structures of the processor, all three systems have complex interactions, which gives us a complete Person. Scheme 1. Human information system. Scheme 2. The life support system of the body. Scheme 3. The system of executive parts of the body. References [1] — P. D. Ouspensky, Tertium Organum, Publishing house Andreev and sons, St.Petersburg, 1992. [2] — I. Kant, Gesammelte Schriften, Bd.1–23, Berlin, 1910 –1955. [3] — V. Vernadsky, The Biosphere, first published in Russian in 1926. English translations: Oracle, AZ, Synergetic Press, 1986. tr. David B. Langmuir, ed. Mark A. S. McMenamin, New York, Copernicus, 1997. continuation…
UNICUM ORGANUM
12
unicum-organum-14ae59f65ca7
2018-08-29
2018-08-29 13:25:50
https://medium.com/s/story/unicum-organum-14ae59f65ca7
false
1,034
UNICUM ORGANUM is single key to the mysteries of the World. What is this? Read onward, please.
null
Vladimir
null
UNICUM ORGANUM
unicum-organum
PHILOSOPHY,SELF-AWARENESS,NEXT LEVEL,ARTIFICIAL INTELLIGENCE,LIFE
vladimiranissim
Neuroscience
neuroscience
Neuroscience
6,742
Vladimir Anisimoff
I'm a scientist-physicist, composer, philosopher-agnostic, writer. Now I'm retired and more of a writer than anything else.
eb0d567c1beb
vladimiranisimoff
9
11
20,181,104
null
null
null
null
null
null
0
null
0
ad56e607a685
2017-12-21
2017-12-21 13:55:54
2017-12-21
2017-12-21 14:18:12
1
false
en
2017-12-21
2017-12-21 14:18:12
4
14aeed009ef7
1.109434
1
0
0
We’ve already discussed residual vs. fitted plots, normal QQ plots, and Scale-Location plots. Next up is the Residuals vs. Leverage plot.
3
Residual Plots Part 4— Residuals vs. Leverage Plot We've already discussed residual vs. fitted plots, normal QQ plots, and Scale-Location plots. Next up is the Residuals vs. Leverage plot. The Residuals vs. Leverage plot helps you identify influential data points in your model. Outliers can be influential, though they don't necessarily have to be, and some points within a normal range in your model could be very influential. The points we're looking for (or not looking for) are values in the upper right or lower right corners, which are outside the red dashed Cook's distance line. These are points that would be influential in the model, and removing them would likely noticeably alter the regression results. Let's return to our code from yesterday and generate a simple example in R to demonstrate:

# linear model: distance as a function of speed, from the base R cars dataset
model <- glm(dist ~ speed, data = cars, family = gaussian)
# set up the plot grid
par(mfrow = c(2, 2))
# the generic R plot() function includes a built-in Residuals vs. Leverage plot
plot(model)

Our plot doesn't show any influential cases, as all of the cases are within the dashed Cook's distance line, although point 49 is close. If we had any cases outside of the Cook's distance line, we'd want to further evaluate those data points. Additional resources if you'd like to explore further: http://data.library.virginia.edu/diagnostic-plots/ — a more detailed overview
Residual Plots Part 4— Residuals vs. Leverage Plot
1
residual-plots-part-4-residuals-vs-leverage-plot-14aeed009ef7
2018-05-30
2018-05-30 05:01:24
https://medium.com/s/story/residual-plots-part-4-residuals-vs-leverage-plot-14aeed009ef7
false
241
Short-form summaries of essential aspects of data analysis and data science topics
null
null
null
Data Distilled
data-distilled
DATA SCIENCE,TOWARDS DATA SCIENCE,R,TABLEAU,DATA ANALYSIS
ianhagerman
Data Science
data-science
Data Science
33,617
Ian Hagerman
null
fbee0e0396b9
ianhagerman
3
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-15
2017-12-15 02:19:41
2017-12-15
2017-12-15 11:52:17
1
false
en
2017-12-15
2017-12-15 11:52:17
5
14b07ec3f626
3.618868
10
0
0
In some countries, at the end of the year people discard things that are old, ugly, broken, useless, out of fashion. In some cases they…
5
Out with the old! In some countries, at the end of the year people discard things that are old, ugly, broken, useless, out of fashion. In some cases they literally toss them out the window. So they can start the new year afresh, unburdened. It’s a tradition, a cleansing ritual. This New Year’s Eve, let’s toss out a few catchy nonsensical economic concepts. They are old, ugly, broken, useless and should at long last be ruled out of fashion: Escape velocity, and its companion stall speed. This is the hilariously absurd idea that an economy is like an airplane: unless it reaches a high enough speed, it will crash. (It will also crash if your cellphone is on, and you will only be able to use your laptop once the economy has reached a cruising altitude of about $35,000 of per capita income…). In 2012, economists worried that the US economy was unable to reach ‘escape velocity’. GDP growth of 2% was ‘stall speed’; unless we could accelerate to 3–3 ½ %, a recession was imminent. They expressed the same concern in 2013; in 2014; in 2015; in 2016; …in 2017. And here we are: the economy has grown at a steady clip for the last seven years, averaging about 2%. No recession. The stock market has rocketed into the stratosphere. Economists are shocked. Shocked. Let’s throw escape velocity and stall speed out the window and see if they fly. Secular stagnation. Secular stagnation is a zombie theory believed dead in the 1930s but resurrected by Harvard’s Larry Summer in a 2013 speech. In a nutshell: people save too much and consume and invest too little; the economy stagnates. This is happening for structural reasons: demographics, inequality, accumulation of assets by central banks and sovereign wealth funds, tighter credit standards — it will not go away by itself. Central banks are powerless: excess savings are so large that you need deeply negative real interest rates to reduce them. The solution: governments should spend more and more, running up more debt to stimulate demand. (In case you are wondering, Larry Summers is still sticking to his guns on secular stagnation — though he seems less enthusiastic about the new US Administration’s plans to embrace his fiscal stimulus idea.) Such a well-structured and elegantly articulated hypothesis, why throw it away? First, because the argument rests on global trends, and there is no global secular stagnation. Global growth averaged 3.5% between 1980 and 2007; in this recovery, 2010–17, it actually grew faster, at 3.8%. Advanced economies decelerated (from about 3% to about 2%) but emerging markets made up for it (from 4 ½ % to 5 ½ %), generating plenty of investment opportunities. Second, because some of the best post-crisis growth stories among advanced economies have come from countries that focused on reforms rather than on government spending (Spain, pre-Brexit Britain). Boosting government spending is the easy answer, not the right answer. Let’s toss it out. New normal. If Secular Stagnation is a zombie, the New Normal is a goblin: it comes in many guises. The best-known version, put forward in 2009 by fund manager Bill Gross, argues that interest rates will forever be low, because economies will grow at a slower pace, people will save more and consume less, companies will make less profits, governments will regulate more. Since ‘New Normal’ is beautifully generic, it has surfaced in different guises; whatever your question, the answer can be ‘this is the New Normal, it will be like this for as far as the eye can see’. 
The New Normal should go the way of Secular Stagnation, for many of the same reasons: the world economy grows at a robust pace, profits have risen, and some governments are deregulating (The US, and even France!). We should toss them both out for the same reason we tossed out “The Great Moderation” at the end of 2007: ‘this time’ is never different. UBI. Universal Basic Income. Because robots and Artificial Intelligence (AI) are about to take all the jobs, yielding infinite productivity (high output with zero labor input), every citizen should be provided with a basic income that allows her/him to pursue their dreams and ambitions, or play videogames. Otherwise we face an imminent dystopia of mass unemployment and civil unrest. The reality is (1) employment is higher than ever — we all slave away with pitifully low productivity while the AI overlords play chess and Go; (2) we don’t have the money to fund UBI — no country does; (3) we will have even less money if we pay people not to work. Discussing UBI today is like agonizing on how we will handle waste disposal as our Mars colony grows beyond one billion people. At some point it might become a relevant concern; for now, we have more urgent priorities. You will hear that experiments of UBI have already proved successful. But in all these experiments the income is either targeted to a small section of the population (hence not ‘universal’) or so low that the recipients still need to work (hence not ‘basic’). Universal basic income is where you give every single person in the country the ‘cruising altitude’ $35,000. Show me an example — and get me a passport… On December 31st, toss them all out. You will feel a lot better, you will think a lot clearer. Out with the old! Happy Holidays!
Out with the old!
20
out-with-the-old-14b07ec3f626
2018-05-09
2018-05-09 23:58:22
https://medium.com/s/story/out-with-the-old-14b07ec3f626
false
906
null
null
null
null
null
null
null
null
null
Economics
economics
Economics
36,686
Marco Annunziata
Economics & innovation at Annunziata + Desai Advisors; Fellow in Residence at Autodesk. Former Chief Economist & head of business innovation strategy at GE.
9e6d1ac6ff23
marcoannunziata
1,673
138
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-30
2017-11-30 03:18:28
2017-11-30
2017-11-30 03:20:50
1
false
en
2017-11-30
2017-11-30 03:20:50
0
14b098db1d5d
2.728302
0
0
0
Every business spends time contemplating how they can improve customer service. It is an essential part of selling your product. This is…
5
4 Ways Machines Improve Customer Service Every business spends time contemplating how they can improve customer service. It is an essential part of selling your product. This is especially true in a world where one upset customer can spend 15 minutes on their social media account trashing your company, meaning you'll lose their business as well as their friends and anyone else who may end up seeing the post. And while working with a person is still necessary in many situations when someone seeks out customer service, there have been extremely positive results from changing to a machine. With machine learning and artificial intelligence startups taking charge, machines are becoming quicker and more efficient for communication with customers. Here are 4 ways a machine can improve customer service at your company.

1. Automate repetitive tasks. The first way a machine can improve your customer service experience is by automating tasks that are necessary but don't need an actual human to do them. For example, banks have machines that count out the money for them, so they don't have to waste time doing that, and many pharmacies are beginning to install machines that dispense the most common prescriptions they have quickly and easily to customers. These types of machines will help improve the customer experience by decreasing the amount of time an interaction takes and by allowing the actual employees to focus more on the customer's individual needs instead of wasting time counting pills or money.

2. Customers communicate on their own time. With the right artificial intelligence in place, you can allow customers to contact your business at any time and through any means while still getting an immediate response. An artificial intelligence platform would allow a computer to receive questions from a customer, recognize what the person is asking, and then formulate a response, either asking for clarification or just answering the customer's question. And while there are times a person may still prefer to speak to a representative directly, having an offline system is still extremely valuable for letting customers reach out on their own time and schedule.

3. Reduce wait times. One common complaint that customer service departments deal with on a regular basis is about wait times. Either the customer had to wait too long to reach a customer service agent, or it took too long for that agent to get an answer. As a business, reducing wait times often just means increasing the number of support positions, but that isn't always realistic for a specific budget and doesn't actually reduce the time a customer must wait in many cases. Machines, however, can help drastically improve the amount of time a customer has to wait for help. For example, T-Mobile has a service where a customer can put their name on a list to get called back later instead of having to wait on the phone for the next representative. Machines can also communicate with more customers to resolve problems using AI, making it easier for real agents to focus on bigger issues that need more complex, human resolutions.

4. Increase accuracy. Another way machines are helping with customer service issues is that they are creating more accuracy. Human error is one of the biggest reasons customer service agents receive calls and texts from customers with questions about their experience with your company. However, if things are done correctly in the first place, there's less likely to be an error.
For example, a machine will copy the address a customer gives and post it directly onto a shipping label to ensure an accurate shipment. However, a human copying it over might mistakenly put one of the wrong numbers or even the wrong address in some cases. By reducing human error and increasing accuracy through machines, you are taking preventative steps to ensure your customers are happy in the beginning. The best customer service is preventative. Mobile Technology News brought to you by biztexter.com Source: vashonbeachcomber.com/news/new-machine-at-pharmacy-expected-to-improve-customer-service/
4 Ways Machines Improve Customer Service
0
4-ways-machines-improve-customer-service-14b098db1d5d
2017-11-30
2017-11-30 03:20:51
https://medium.com/s/story/4-ways-machines-improve-customer-service-14b098db1d5d
false
670
null
null
null
null
null
null
null
null
null
Customer Service
customer-service
Customer Service
18,984
Lianne Rhymes
IT Manager. Blogger. Tech Buff
c5fa507773f5
miss_lianne
9
46
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-20
2018-04-20 14:49:32
2018-04-20
2018-04-20 14:53:04
2
false
en
2018-04-20
2018-04-20 15:04:52
1
14b2b203a57b
3.175786
1
0
0
“Pivoting is not the end of the disruption process, but the beginning of the next leg of your journey.”
5
Artificial Intelligence and a Pivoting Workforce "Pivoting is not the end of the disruption process, but the beginning of the next leg of your journey." ― Jay Samit. It is clear artificial intelligence is on the rise. There are examples in all walks of life, from automated cars, drone deliveries, chatbot-run customer service centres, warehouse management and e-trading, to the use of products such as Alexa. As a result, we are now truly living in a time where we can and will see the impact that AI has on the workforce. But is that impact good or bad? Expert opinion really is split. Whilst it cannot be denied that some jobs are being impacted, does the rise of AI necessarily mean doom and gloom for today's workforce? You don't really have to go far to find people proclaiming the rise of AI means the end of jobs as we know it.

In many ways, we can think of the rise of AI and other emerging technologies in the same way we think of the Industrial Revolution. The Industrial Revolution changed the way many industries worked, and it brought about change on a huge level. For example, the Luddites were textile workers who protested against automation. They resorted to attacking factories because they feared machines were robbing them of their jobs and their livelihood. This occurred all the way back in the 1800s, so concerns about job losses due to automation are not exactly exclusively a modern phenomenon. The reality in this example was that jobs didn't just disappear; the reality was the creation of more jobs, different jobs. Jobs just changed.

The impact the internet had and continues to have on the workforce is enormous. We can see this in the decline of high street stores and the huge increase in ecommerce. The introduction of cheap smartphones gave a huge percentage of the population internet access. As a result, online purchases continue to eliminate a vast number of retail jobs, both through the reduction of high street stores and through the increase in automation and robotics in fulfilment centres. Some believe that worldwide technological change could easily lead to the loss of 5 million jobs each year. However, what history has taught us is that the economy is incredibly flexible in pivoting and creating other jobs, thereby absorbing the impact of these changes.

I believe the increase in the use of automation and AI technologies will have a similar impact on the shape of the workforce. Just like in the Industrial Revolution, some of the jobs we know today will become obsolete. You could say these jobs are not the best jobs; they are the more physical jobs or the more repetitive jobs. Maybe even the less interesting jobs. But just like the Industrial Revolution created new jobs, so will AI and automation. These new positions will undoubtedly require different skills. The internet has already shown how adaptable we are by moving into jobs with skills and titles that didn't exist prior to its existence. In fact, the skills continue to change and adapt at an ever-increasing rate. When people think about AI and automation, the first thought is to consider what jobs will be replaced. Personally, I like to think about the jobs that will be created, the jobs that we don't even know will exist. History has a habit of repeating itself. We adapt, we pivot. In the same way Odeo became Twitter and Tote became Pinterest, the global workforce will become something unrecognisable.
Hopefully we won’t all be wearing headsets earning our living in the OASIS, but who knows you may need to become the next Parzival or Art3mis. Organisations are constantly creating technology to change the way people work and the way the world works. I suppose my role and the role of Opus Talent Solutions is to provide the people and help to drive this change in the workforce. We do this by providing our clients the best people to create change. Technology changes the way people work. At Opus our candidates change the way technology works. ………. I suppose my big question is whether one of our candidates will ultimately create something that makes my own job and the company I work for redundant ……… Oh well If that happens, I guess we will just have to pivot. Jobs will always be there — just different ones.
Artificial Intelligence and a Pivoting Workforce
5
artificial-intelligence-and-a-pivoting-workforce-14b2b203a57b
2018-04-20
2018-04-20 15:04:53
https://medium.com/s/story/artificial-intelligence-and-a-pivoting-workforce-14b2b203a57b
false
740
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Sam Jenkinson
I am a keen Tech enthusiast and work for a talent consultancy focusing on People in the Tech world
529b43fe461c
sam.jenkinson
1
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-12
2018-03-12 14:54:51
2018-03-12
2018-03-12 14:56:02
1
false
en
2018-03-12
2018-03-12 14:56:02
1
14b4038a0e21
1.622642
0
0
0
In the One Million by One Million online curriculum, almost eight years ago, I decided to put in a line as a joke: “We’re working on a chip…
3
Man vs. Superman and the Future of Education In the One Million by One Million online curriculum, almost eight years ago, I decided to put in a line as a joke: “We’re working on a chip that can be implanted in your brain and it will transfer ALL the entrepreneurial knowledge from my brain to yours. However, this chip is not quite ready yet. So, in the meantime, please study the curriculum and learn the methodology of entrepreneurship that we have designed.” Well, little did I know that this would cease to be a joke by 2018. In fact, in the next decade or two, perhaps, this sort of implant will become the future of education. Today, if you want to become a Computer Scientist, you go to university and get trained in Algorithms, Data Structures, Computer Architecture, Artificial Intelligence, etc. You learn a set of programming languages. You learn how to debug your code. You learn how to architect a system. So on and so forth. Or, you can also take those courses from a MOOC like EdX or Coursera, and take advantage of the possibilities offered by online learning. To become a doctor, you go to Medical School. To learn business, you go to Business School. But in the world that we are resolutely marching towards, this may very well cease to be how education is imparted. Education may become a surgical implant, rather than something acquired through many years of rigorous toil. Education, thus, may neutralize the advantages enjoyed today by high IQ individuals, and make it possible for everyone to be extraordinarily knowledgeable in his or her chosen field or fields. With one custom, highly personalized implant, you could be speaking five languages, have mastery over the entire field of medicine (not just specialized disciplines like Endocrinology or Pediatrics), and be able to play concert level violin by the age of eighteen. Yes, I am talking about a world where Man and Machine fuse to create Superman out of every individual. That, I think, is where we are headed. Looking For Some Hands-On Advice? For entrepreneurs who want to discuss their specific businesses with me, I’m very happy to assess your situation during my free online 1Mby1M Roundtables, held almost every week.
Man vs. Superman and the Future of Education
0
man-vs-superman-and-the-future-of-education-14b4038a0e21
2018-03-12
2018-03-12 14:56:03
https://medium.com/s/story/man-vs-superman-and-the-future-of-education-14b4038a0e21
false
377
null
null
null
null
null
null
null
null
null
Education
education
Education
211,342
Sramana Mitra
Founder of the 1M/1M global virtual incubator
ae22de73ea89
sramana
4,211
3,565
20,181,104
null
null
null
null
null
null
0
null
0
3a054cf431f7
2017-12-26
2017-12-26 13:26:54
2017-12-26
2017-12-26 13:29:26
1
false
en
2017-12-30
2017-12-30 02:45:28
4
14b4c0227a97
1.34717
12
1
0
Participation Link
5
How to Participate in the Bottos Token (BTO) Crowdsale Participation Link http://bottos.org/ Crowd Sale Time Period USA & Canada EST: 12/31/2017 7AM — 1/28/2018 7AM GMT: 12/31/2017 12PM — 1/28/2018 12PM Beijing Time: 12/31/2017 8PM — 1/28/2018 8PM Price and Payment BTO Regular Price: 1 ETH=6580 BTO (The exchange rate to ETH will be announced on 12/29/2017) Early Bird: 12/31/2017 7AM EST — 1/7/2018 7AM EST with 15% bonus (1 ETH=7567 BTO) Regular Price: 1/7/2017 7:01AM EST — 1/28/2017 7AM EST (No bonus) Payment: only ETH is acceptable Minimal Purchase Amount: 0.2ETH Token Distribution BTO tokens will be distributed normally within 72 hours but latest within 2 weeks following the conclusion of the crowd sale. Token Sale Metrics Total Token Created: 1,000,000,000 with the following allocation: 12% — The Founding Team 37% — Foundation/Community 36% — Crowd Sale 15% — Private placement (Presale) Hard Cap: $32,000,000 Soft Cap: None Preparation The copy of government issued ID such as passport or driver`s license may be needed for identity verification. ERC-20 compliant wallet (such as MetaMask or MyEtherWallet click the links and you will be directed to instruction articles) needed to receive BTO tokens. Please DO NOT use exchange wallet address. Target Audience Any individuals or institutions who need the data service from Bottos are encouraged to purchase. Chinese citizens from Mainland China are not encouraged to participate in the sale. We are very excited for our crowdsale starting on December 31st. If you have any questions about the information above, please feel free to give us a shout on Telegram; our team is working around the clock to ensure a smooth sale with the maximum security, and is answering all questions as quickly as possible. Look for our emails, we will send more updates about Bottos soon! Happy Holidays! Kind regards, Bottos Support Team
How to Participate in the Bottos Token (BTO) Crowdsale
170
how-to-participate-in-the-bottos-token-bto-crowdsale-14b4c0227a97
2018-06-20
2018-06-20 01:54:42
https://medium.com/s/story/how-to-participate-in-the-bottos-token-bto-crowdsale-14b4c0227a97
false
304
Official Bottos blog
null
bottos.org
null
Bottos
bottos
BLOCKCHAIN TECHNOLOGY,ARTIFICIAL INTELLIGENCE,BIG DATA
bottos_ai
Blockchain
blockchain
Blockchain
265,164
Bottos AI
Bottos - A Decentralized AI Data Sharing Network
58b32476bec4
bottos_ai
374
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-01
2018-02-01 11:24:42
2018-02-01
2018-02-01 11:32:33
0
false
en
2018-02-01
2018-02-01 11:32:33
13
14b5f60d0a6d
3.788679
2
0
0
I’ve set out to change the world, and there are two lines to the story.
5
Startup life: Go Big, or go home I’ve set out to change the world, and there are two lines to the story. The first is my background as a developer/consultant. As of today I’ve been employed by Futurice for 8 years. That’s over twice as long as any of my previous jobs. Working as a consultant has many upsides. You get to work with, live and feel various organizations. You get to work with a diverse set of people and skills. You can pivot every so often into a new direction, with a new customer and often also with some new technology. This is a blessing, especially for someone fresh out of school, as it really helps finding what is the most interesting tech or domain in an expanding array of choices. For me this domain has been server-side development and architectures. Database design and integration. The Cloud. I’ve tried other domains as well. I’ve dealt with 3MB heap restrictions when writing a map rendering engine for a low-end feature phone. That might actually still be in use somewhere, some 5 years later. The other end goes to dealing with massive scale backends for that next big social media, or to building an analytics backend with a data warehouse. Most of these consulting gigs have been both challenging and rewarding in many ways. Still, for me personally there’s always been an itch for something more. Not quantitative more, but qualitative. When working for clients, I’ve always felt the most effective after getting more intimate with the domain and the company. I feel the urge to be a true expert in the field and domain I work with. It takes time to build the knowledge and bump into enough roof and walls to really feel at home. Usually this only happens at the end of the project, when we’re about to wrap up and set sail for some new customer challenge. The other form of toil with consultancy comes from having multiple, often conflicting, sets of priorities. There is always something urgent on the client-side. At the same time there are the employer things to take care of. Recruitment, sales, on-the-side-consulting, mentoring, to list a few. They’re all interesting and rewarding as well, but often at odds with the client work. These two buckets ultimately compete for the same limited resource, i.e. my time. For some years already I’ve talked about my aim to set these priorities straight for once. The clear way to do so is to try something else than consulting, and thus I’ve been on the lookout for something of a new longer term commitment. Enter Aito.ai. The other storyline is more tech-related. AI is hot, sizzling, at the moment. In a endless list of tech buzzwords, AI seems to be by far the most pervasive in quite a while. It does have the potential to be a tool to apply on an almost endless variety of problems in our domain. It’s much more of a Silver Bullet than say, IoT, blockchain, or containerization, all of which have also been hot in the recent few years. The appeal of AI is that it’s conceptually an easy thing to explain to any layman, as opposed to any of the aforementioned techs. And since the tech is so pervasive, also the projections for the future range from anything from the end of mankind, to immortality for everyone. Wait But Why had an insightful take on this already some years ago. Big figures like Stephen Hawking, Bill Gates and Elon Musk are also on the cautious side of things. On the other hand, it’s clear that predicting the future is way harder than looking at things from hindsight perspective. 
The whole field is still in flux, and as I somewhat philosophically reasoned, the future is actually made right now. If I want to be part of what this ends up being, I’d better saddle up and start moving in the direction of the heat. Enter Aito.ai. Aito is at the intersection of the two storylines I described. We, the three founders, Antti, Vesku and I, come from quite different backgrounds, but share both a curiosity for the topic and the goal of making the field more approachable. As with most technological advancements, the near future is often overestimated, as people rush to make headlines with extravagant claims. IBM’s Watson actually won Jeopardy as early as 2011. That’s really a lifetime ago when talking about computers, and in the context of Moore’s law. People seem to be losing patience waiting for something more to happen. Deep Blue’s win over Kasparov goes as far back as 1997. Some of the people I work with weren’t even born back then! On the other hand, there is now almost daily news of advancements in the field of machine learning (ML) and AI. Automated cars are on the apparent verge of becoming reality within a few years. Natural language processing and speech interfaces are commonplace in quite a few homes, and in almost all mobile phones already. So ML, AI and statistical tools combined with cheap storage and big data are powerful tools already today. More and more companies use these to run their everyday business. Our vision and mission with Aito is to make AI/ML easy to approach and understand. We want to allow people to use the algorithms and tools we create to solve their own custom problems. We also want to take away the effort in doing so, allowing people to solve a broader set of problems than with single-purpose AI models, without having to get a PhD in the field, hire loads of people who already have one, or in general just throw loads of money at the problem. This we plan to achieve by creating an AI-optimized database (for lack of a better word), and our own query language for it. Go big, or go home, I guess.
Startup life: Go Big, or go home
16
startup-life-go-big-or-go-home-14b5f60d0a6d
2018-04-03
2018-04-03 10:08:55
https://medium.com/s/story/startup-life-go-big-or-go-home-14b5f60d0a6d
false
1,004
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Kai Inkinen
Developer, architect, and very recently, co-founder of Aito.ai. Writing about my newfound startup-life. And, naturally, rambling randomly on tech. And stuff.
1082bd8ef95e
kaiinkinen
10
1
20,181,104
null
null
null
null
null
null
0
null
0
f702855ffe47
2017-10-12
2017-10-12 00:00:23
2017-10-12
2017-10-12 00:00:23
7
false
en
2017-10-12
2017-10-12 00:00:23
13
14b638d98877
2.517925
0
0
0
null
3
AI: a Beginning of the End? # medium.com Many years ago, while working with a computer operating with miniature nixie tubes, a drum memory and requir… The Rise of Artificial Intelligence # medium.com “Hey Alexa, play my favorite song.” Today, we see AI quickly seeping into our lives, transforming the many i… Thanks Jed — the rise of AI and its hopeless outreach towards immortality is fascinating, and even… # medium.com Thanks Jed — the rise of AI and its hopeless outreach towards immortality is fascinating, and even Stephen H… Self-Driving Car Nanodegree Project 2: German Traffic Sign Classification # medium.com Traffic sign classification is one of the core tasks performed by a self-driving car. As part of this projec… How To Empower Artificial Intelligence To Take On Racist Trolls # rantt.com Social media is overwhelmed with toxic trolls and humans are failing to keep them at bay. It’s time for AI t… Training an Architectural Classifier # medium.com Motivations How do we recognize a room? source[1] Can a computer learn to distinguish the different “places”… How to Become Data Scientist in One Day? # hackernoon.com Honestly, I don’t know. It’s probably a decade too early for me to become a Data ‘Scientist’. I am writing t… Why do physicists need to study consciousness /artificial general intelligence? # medium.com This post concerns a discussion about why physicists need to study consciousness, and probably involve thems… Learning to walk with evolutionary algorithms applied to a bio-mechanical model # medium.com Using real muscles to walk on a human-like model. The code for this post can be found in this GitHub reposit… What Dirty Word Makes Office Managers Cringe? # blog.token.ai “Corporate gift” — usually bringing to mind mediocre food baskets and disposable pens, this term has lost its… A sober view of AI will lead to more effective innovation # venturebeat.com GUEST: As we create increasingly smarter machines, we’ve had to rewrite convictions we’ve held since the ind… Amazon’s Alexa now delivers personalized results for up to 10 voices # venturebeat.com Amazon today announced that Alexa-enabled devices can now recognize unique human voices, giving the assistan… Vincent AI Sketch Demo Draws In Throngs at GTC Europe # blogs.nvidia.com Cambridge Consultants showed off an deep-learning driven application this week at GTC Europe in Munich that …
13 new things to read in AI
0
13-new-things-to-read-in-ai-14b638d98877
2017-10-12
2017-10-12 00:00:25
https://medium.com/s/story/13-new-things-to-read-in-ai-14b638d98877
false
389
AI Developments around and worlds
null
null
null
AI Hawk
ai-hawk
DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
null
Deep Learning
deep-learning
Deep Learning
12,189
AI Hawk
null
a9a7e4d2b403
aihawk1089
15
6
20,181,104
null
null
null
null
null
null
0
null
0
6bebd10d910e
2017-10-29
2017-10-29 15:45:55
2017-10-29
2017-10-29 15:47:02
0
false
en
2017-10-29
2017-10-29 15:47:02
1
14b6dfb33fb8
0.098113
0
0
0
null
5
Introduction to Deep Learning and Neural Networks Introduction to Deep Learning and Neural Networks | Quick KT Introduction to Deep Learning, Neural Networks, Machine Learning (quickkt.com)
Introduction to Deep Learning and Neural Networks
0
introduction-to-deep-learning-and-neural-networks-14b6dfb33fb8
2017-10-29
2017-10-29 15:47:04
https://medium.com/s/story/introduction-to-deep-learning-and-neural-networks-14b6dfb33fb8
false
26
Learn Programming and Artificial Intelligence
null
null
null
Quick KT-Learn Programming and Artificial Intelligence
null
quick-kt-learn-programming-and-artificial
ARTIFICIAL INTELLIGENCE,PROGRAMMING,MACHINE LEARNING,DEEP LEARNING,JAVASCRIPT
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Vinay Kumar
Author at quickkt.com
4e29d87abe22
vkrvinayr
3
14
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-06
2018-06-06 17:51:34
2018-06-06
2018-06-06 17:53:18
2
false
en
2018-06-06
2018-06-06 17:53:18
12
14b926461b78
2.839937
0
0
0
When having discussions over AI, they’re usually in reference to the ominous impact it could have on humans. How it’ll eliminate jobs and…
3
How AI Is Affecting Your SEO Strategy When having discussions over AI, they’re usually in reference to the ominous impact it could have on humans. How it’ll eliminate jobs and replace the need for people to be part of the equation in certain industries. From industrial manufacturing to food production, HR to stock trading, it seems that no one is safe from technology’s effect on our labor force. AI is undoubtedly affecting the way certain jobs operate and impacting the technology around us, as well. For brand marketers, this is especially true. We’ve already begun to see the ways in which AI is changing content marketing itself through predictive intelligence and machine learning. But what happens in relation to your content creation when SEO begins to change, as well? What is AI? AI, at its simplest, is an intelligence produced and demonstrated by machines. When given enough data, it processes and learns over time, similar to that of the human brain but at a much higher, more efficient rate. There are three types of AI: Artificial Narrow Intelligence (ANI): A very simplified version of AI that focuses on one specific task (i.e. spam filters, online games like chess). Artificial General Intelligence (AGI): When AI can perform all things, like that of a human, it’s considered AGI. Artificial Superintelligence (ASI): AI can exceed the power of a human brain and operate on a much higher level for all things. At that point, it becomes ASI. What is Google RankBrain? Google RankBrain is a search engine system that uses ANI to help Google sort through search results. It is not a new way in which Google ranks results but rather a part of their overall search “algorithm.” Previously, these algorithms were designed by humans, but with RankBrain’s continuous learning capabilities, most of those algorithms have since fallen to the wayside. Now, AI that discovers and produces more relevant responses to every search query is favored. How has Google RankBrain changed SEO strategy? To be clear, the algorithms Google once exclusively operated on for search still exist. It is simply the ways in which they’re applied through a variety of combinations that has changed. And all of that is determined by RankBrain. The technology isn’t on a mission to determine the best algorithmic formula, but instead to determine which sites are of the highest quality. Over time, RankBrain learns the websites that repeatedly answer search queries best based on the aforementioned algorithms. From there, it analyzes those sites for commonalities. In general, it’s looking at keyword density, website structure and backlinks. Emphasis on Quality Now more than ever, when up against incredible volume of content online, quality in your SEO strategy matters. It’s not just about the technicalities — headings, keywords, tags, etc. — but your content’s ability to engage audiences. Increase engagement and you’ll find that people spend more time on site, bounce at a lower rate, become more likely to backlink with your content as the source, and more. All of which, RankBrain takes into consideration for search results. Be Mindful of User Experience The usability of your site should maintain a level of quality as well. Navigation, imagery, mobile friendliness and load time — among other things — make a huge difference in how RankBrains gauges your site’s validity. If you’re unsure how your site measures up, start by looking at the competitors in your industry for benchmarks. 
Make Sure Your Content Is Relevant to Audiences It should go without saying that any content that lives on your website should have some correlation to the types of audiences you’re looking to attract. If you’re creating material just to drive traffic, any traffic, the ranking rewards you may reap will surely be short-lived. In the long haul, Google’s search AI is looking for websites that cater to their customers and serve their needs. Your content strategy should aim to reflect that.
How AI Is Affecting Your SEO Strategy
0
how-ai-is-affecting-your-seo-strategy-14b926461b78
2018-06-06
2018-06-06 17:53:19
https://medium.com/s/story/how-ai-is-affecting-your-seo-strategy-14b926461b78
false
651
null
null
null
null
null
null
null
null
null
Content Marketing
content-marketing
Content Marketing
34,905
PowerPost Social
Enterprise-level publishing platform that streamlines content marketing and turns brands into Power Publishers.
8b457906b2fc
social_29385
178
500
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-28
2018-06-28 05:23:22
2018-06-28
2018-06-28 05:23:56
10
false
en
2018-07-15
2018-07-15 21:25:10
1
14ba45e64ed2
3.363208
0
0
0
Google Brain released TensorFlow three years ago and it rapidly became the go-to library for machine-learning based on neural networks, it…
4
Getting Started with TensorFlow.js Google Brain released TensorFlow three years ago and it rapidly became the go-to library for machine learning based on neural networks; it was only available for Python, but was still widely used for web and mobile applications. Google’s evolution of deeplearn.js’ WebGL-accelerated machine learning and the TensorFlow core are at the heart of TensorFlow.js, bringing a high standard of machine learning to the browser. We’ll be creating and training a machine-learning model that will recognize an Iris flower’s species based on its petal and sepal dimensions. The first thing we’re going to do is require TensorFlow and its Node bindings: Then we’re going to require both the training and testing datasets: TensorFlow requires one input-axis tensor that contains the data our model will be learning from and one output-axis tensor that contains the expected result for every input sample. The only way TensorFlow understands data is when it is contained inside a tensor; tensors can be initialized as scalar, 1D, 2D, 3D and 4D tensors. In this case, we put every flower’s values in an array, resulting in a two-dimensional array that will be initialized as a tensor2d: This tensor is the input-axis tensor. Our output axis will be created in a similar way, also mapping the values to an array initialized as a tensor2d: The output axis is supposed to return a weight close to 1 on each index depending on the species of the flower. The testing tensor is created in the same way as the input-axis tensor: After we have all the tensors required to train our model, we’ll create the model using tf.sequential, which creates a model where the outputs of one layer are the inputs to the next layer. Then dense layers will be added to the model to start creating a neural network, meaning that every layer is fully connected to the next layer. Every layer will use a “sigmoid” activation function; this function calculates a weighted sum of its input, adds a bias based on learned patterns and then decides whether the unit should be active or not. Our model gets compiled to prepare it for training by specifying the loss and the optimizer. Now that our model is ready to train, we can run model.fit on it, passing our input axis, output axis and number of epochs (the number of times to iterate over the training dataset). The fitting/training function returns a promise with the loss history as its resolved value. After the model is trained, we call the model.predict function on our testing data. Looking at our testing data… …we should expect our first prediction array to have a value close to 1 on the first index, the second array a value close to 1 on the second index, and the third array a value close to 1 on the third index. The first index indicates how certain the model is that the input flower is a “Setosa”, the second index corresponds to “Virginica”, and the third index to “Versicolor”, because of how we set up our output array based on species. Hope you enjoyed this brief introduction to TensorFlow.js. If you want to try this yourself or look at more descriptive code and output, see the project’s repo on GitHub.
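Since the post walks through its code step by step, here is a minimal sketch of those steps in JavaScript, assuming the @tensorflow/tfjs-node package and hypothetical iris.json / iris-testing.json data files; the variable and field names are illustrative, not the author’s exact code.

// Minimal sketch of the workflow described above (assumed names, not the original code)
const tf = require('@tensorflow/tfjs-node');          // TensorFlow.js with Node bindings
const iris = require('./iris.json');                   // hypothetical training dataset
const irisTesting = require('./iris-testing.json');    // hypothetical testing dataset

// Input axis: one row of sepal/petal measurements per flower
const trainingData = tf.tensor2d(iris.map(item => [
  item.sepal_length, item.sepal_width, item.petal_length, item.petal_width,
]));

// Output axis: a weight per species, close to 1 for the true species
const outputData = tf.tensor2d(iris.map(item => [
  item.species === 'setosa' ? 1 : 0,
  item.species === 'virginica' ? 1 : 0,
  item.species === 'versicolor' ? 1 : 0,
]));

// Testing tensor, built the same way as the input axis
const testingData = tf.tensor2d(irisTesting.map(item => [
  item.sepal_length, item.sepal_width, item.petal_length, item.petal_width,
]));

// Sequential model: fully connected (dense) layers with sigmoid activations
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [4], units: 10, activation: 'sigmoid' }));
model.add(tf.layers.dense({ units: 3, activation: 'sigmoid' }));

// Compile with a loss and an optimizer, then train and predict
model.compile({ loss: 'meanSquaredError', optimizer: tf.train.adam(0.06) });
model.fit(trainingData, outputData, { epochs: 100 })
  .then(() => model.predict(testingData).print());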
Getting Started with Tensorlfow.js
0
getting-started-with-tensorlfow-js-14ba45e64ed2
2018-07-15
2018-07-15 21:25:10
https://medium.com/s/story/getting-started-with-tensorlfow-js-14ba45e64ed2
false
560
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ivan Felix
null
323fca2d33ec
ivaneduardo68
0
1
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-04-16
2018-04-16 21:28:11
2018-04-16
2018-04-16 22:05:23
5
false
en
2018-08-24
2018-08-24 20:28:50
3
14ba62466e96
4.210692
2
0
0
Bitcoin is a new currency that was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no…
5
Could Blockchain Be Every Data Scientist’s Dream? Bitcoin is a new currency that was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no middle men — meaning, no banks! Bitcoin can be used to book hotels on Expedia, shop for furniture on Overstock and buy Xbox games. But much of the hype is about getting rich by trading it. The price of bitcoin skyrocketed into the thousands in 2017. “The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.” Don & Alex Tapscott, authors Blockchain Revolution (2016) Blockchain is a distributed database system that serves as an “open ledger” to record and manage transactions. Each record in the database is called a block and contains details like the transaction timestamp as well as a link to the previous block. HOW BLOCKCHAINS CHANGE FINTECH At present, digital transactions take place with the help of tokens. This is a unique code generated by a third party (such as Visa or Mastercard, for example) and is shared with the token requestor (the retailer you are shopping from) and the account issuer (the customer’s bank). Tokens make online transactions more secure by concealing actual customer-identifying data. Since the token is generated by a third party which by itself does not have information regarding the transaction, there is no scope for any sort of data for a data scientist to play with. To be fair to cryptocurrencies like bitcoins, they were designed on the exact opposite premise of providing a secure and confidential transaction mechanism. While that hasn’t changed, blockchains provide banks and financial institutions with the technology needed to mine more useful data from their customer transaction history. Recently, a consortium of 47 Japanese banks signed up with a company called Ripple to allow money transfers between bank accounts using blockchain. The main reason behind the move is to allow real-time transfers at a significantly low cost. One of the reasons traditional real-time transfers were expensive was because of the potential risk factors. Double-spending (a form of transaction failure where the same security token gets used twice) is a real problem with real-time transfers. With blockchains, that risk is largely avoided. Big data analytics makes it possible to identify patterns with consumer spending and identify risky transactions a lot quicker than they can be done with current day technology. This reduces the cost with real-time transactions. Outside of banking too, the main drive for blockchain adoption has been security. Across healthcare, retail and public administration, establishments have started using blockchain to handle data to prevent hacking and data leaks. In healthcare, a technology like blockchain can make sure that multiple “signatures” are sought at every level of data access. This can prevent a repeat of the 2015 attack that led to the theft of over 100 million patient records. But businesses expect to see other benefits from this adoption as well. And like with healthcare, in gambling too, blockchain analytics tools are viewed as handy tools to define gambling patterns and identify patterns of frauds that casinos can use to dig out loopholes. 
POSSIBILITIES IN REAL-TIME ANALYTICS Until now, real-time fraud detection has only been a pipe dream and banking institutions have always relied on technologies that identify fraudulent transactions retrospectively. Since blockchain has a database record for every transaction, it provides a way for institutions to check for patterns in real time, if need be. But all of these possibilities also raise questions about privacy, and this is in direct contradiction to the reason why blockchain and bitcoins became popular in the first place. We all now know about the Facebook privacy-leak scandal. But to look at this from another perspective, blockchains improve transparency in data analytics. Unlike previous approaches, blockchain technology rejects any input that it can’t verify or that is deemed suspicious. As a result, analysts in retail industries only deal with data that is completely transparent. In other words, the customer behavior patterns that blockchain systems identify are likely to be a lot more accurate than they are today. Although blockchain offers great promise for data science, the truth is that we do not have many blockchain-based technology systems deployed at industrial scale in the first place. As a result, the real dangers and threats of blockchain may not be apparent for at least a few more years, until blockchain becomes more mainstream. There is also a really interesting situation in Belgrade, Serbia, where we have a lot of startup companies developing blockchain technology. We will see the future of the blockchain in the years to come. For data scientists, this means two things. One, it is still going to be a while before the treasure trove of data that blockchain promises to offer is made available to them across various industries. But more importantly, as the flaws in the technology become more visible, blockchain is at risk of being regulated or being replaced with traditional systems. That is something that data scientists may not want, but what can we do for now? You can see below how a Bitcoin transaction works: Until next time, Happy analyzing Manja You can follow me on LinkedIn and Instagram https://www.toptal.com/insights/innovation/blockchain-applications-create-enterprise-solutions
Could Blockchain Be Every Data Scientist’s Dream?
67
could-blockchains-be-every-data-scientists-dream-14ba62466e96
2018-08-24
2018-08-24 20:28:50
https://medium.com/s/story/could-blockchains-be-every-data-scientists-dream-14ba62466e96
false
895
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Blockchain
blockchain
Blockchain
265,164
Manja Bogicevic
Data Scientist|People Mastery|Public speaker |Learning ML & Deep Learning I
30e3af8c9bfc
manjabogicevic
246
53
20,181,104
null
null
null
null
null
null
0
from sklearn.ensemble import RandomForestRegressor

# Split the first 40 rows off as training data, the rest as validation
x_trn, x_val = x[:40], x[40:]
y_trn, y_val = y[:40], y[40:]
# Fit on the full x, y, as in the original snippet
m = RandomForestRegressor().fit(x, y)
3
null
2018-08-14
2018-08-14 01:38:01
2018-09-14
2018-09-14 20:19:57
40
false
en
2018-10-20
2018-10-20 20:34:59
63
14bbb8180d49
48.493396
22
0
1
My personal notes from machine learning class. These notes will continue to be updated and improved as I continue to review the course to…
5
Machine Learning 1: Lesson 6 My personal notes from machine learning class. These notes will continue to be updated and improved as I continue to review the course to “really” understand it. Much appreciation to Jeremy and Rachel who gave me this opportunity to learn. Lessons: 1 ・ 2 ・ 3 ・ 4 ・ 5 ・ 6 ・ 7 ・ 8 ・ 9 ・ 10 ・ 11 ・ 12 Video / Powerpoint We’ve looked at a lot of different random forest interpretation techniques and a question that has come up a little bit on the forum is what are these for really? How do these help me get a better score on Kaggle, and my answer has been “they don’t necessarily”. So I wanted to talk more about why we do machine learning. What’s the point? To answer this question, I want to show you something really important which is examples of how people have used machine learning mainly in business because that’s where most of you are probably going to end up after this is working for some company. I’m going to show you applications of machine learning which are either based on things that I’ve been personally involved in myself or know of people who are doing them directly so none of these are going to be hypotheticals — these are all actual things that people are doing and I’ve got direct or secondhand knowledge of. Two Groups of Applications [1:26] Horizontal: In business, horizontal means something that you do across different kinds of business. i.e. everything involving marketing. Vertical: Something you do within a business or within a supply chain or a process. Horizontal Applications Pretty much every company has to try to sell more products to its customers so therefore does marketing. So each of these boxes are examples of some of the things that people are using machine learning for in marketing: Let’s take an example — Churn. Churn refers to a model which attempts to predict who’s going to leave. I’ve done some churn modeling fairly recently in telecommunications. We were trying to figure out for this big cellphone company which customers are going to leave. That is not of itself that interesting. Building a highly predictive model that says Jeremy Howard is almost certainly going to leave next month is probably not that helpful because if I’m almost certainly going to leave net month, there’s probably nothing you can do about it — it’s too late and it’s going to cost you too much to keep me. So in order to understand why we would do churn modeling, I’ve got a little framework that you might find helpful: Designing great data products. I wrote it with a couple of colleagues a few years ago and in it, I describe my experience of actually turning machine learning models into stuff that makes money. The basic trick is what I call the Drivetrain Approach which is these four steps: Defined Objective [3:48] The starting point to actually turn a machine learning project into something that’s actually useful is to know what I am trying to achieve and that does mean I’m trying to achieve a high area under the ROC curve or trying to achieve a large difference between classes. It would be I’m trying to sell more books or I’m trying to reduce the number of customers that leave next month or I’m trying to detect lung cancer earlier. These are objectives. So the objective is something that absolutely directly is the thing that the company or the organization actually wants. No company or organization lives in order to create a more accurate predictive model. There are some reason. So that’s your objective. That’s obviously the most important thing. 
If you don’t know the purpose of what you are modeling for then you can’t possibly do a good job of it. And hopefully people are starting to pick that up out there in the world of data science, but interestingly what very few people are talking about but it’s just as important is the next thing which is levers. Levers [5:04] A lever is a thing that the organization can do to actually drive the objective. So let’s take the example of churn modeling. What is a lever that an organization could use to reduce the number of customers that are leaving? They could call someone and say “Are you happy? Anything we could do?” They could give them a free pen or something if they buy $20 worth of product next month. You could give them specials. So these are levers. Whenever you are working as a data scientists, keep coming back and thinking what are we trying to achieve (we being the organization) and how we are trying to achieve it being what are the actual things we can do to make that objective happen. So building a model is never ever a lever, but it could help you with the lever. Data [7:01] So then the next step is what data does the organization have that could possibly help them to set that lever to achieve that objective. So this is not what data did they give you when you started the project. But think about it from a first principle’s point of view — okay, I’m working for a telecommunications company, they gave me some certain set of data, but I’m sure they must know where their customers live, how many phone calls they made last month, how many times they called customer service, etc. So have a think about okay if we are trying to decide who should we give a special offer to proactively, then we want to figure out what information do we have that might help us to identify who’s going to react well or badly to that. Perhaps more interestingly would be what if we were doing a fraud algorithm. So we are trying to figure out who’s going to not pay for the phone that they take out of the store, they are on some 12-month payment plan, and we never see them again. Now in that case, the data we have available , it doesn’t matter what’s in the database, what matters is what’s the data that we can get when the customer is in the shop. So there’s often constraints around the data that we can actually use. So we need to know what am I trying to achieve, what can this organization actually do specifically to change the outcome, and at the point that the decision is being made, what data do they have or could they collect. Models [8:45] So then the way I put that all together is with a model. This is not a model in the sense of a predictive model but it’s a model in the sense of a simulation model. So one of the main example I gave in this paper is when I spent many years building which is if an insurance company changes their prices, how does that impact their profitability. So generally your simulation model contains a number of predictive models. So I had, for example, a predictive model called an elasticity model that said for a specific customer, if we charge them a specific price for a specific product, what’s the probability that they would say yes both when it’s new business and then a year later what’s the probability that they’ll renew. Then there’s another predictive model which is what’s the probability that they are going to make a claim and how much is that claim going to be. 
You can then combine these models together then to say all right, if we changed our pricing by reducing it by 10% for everybody between 18 and 25 and we can run it through these models that combined together into a simulation then the overall impact on our market share in 10 years time is X and our cost is Y and our profit is Z and so forth. In practice, most of the time, you really are going to care more about the results of that simulation than you do about the predictive model directly. But most people are not doing this effectively at the moment. For example, when I go to Amazon, I read all of Douglas Adams’ books, and so having read all Douglas Adams’ books, the next time I went to Amazon they said would you like to buy the collected works of Douglas Adams. This is after I had bought every one of his books. So from a machine learning point of view, some data scientist had said oh people that buy one of Douglas Adams’ books often go on to buy the collected works. But recommending to me that I buy the collected works of Douglas Adams isn’t smart. It’s actually not smart at a number of levels. Not only is unlikely to buy a box set of something of which I have every one individually but furthermore it’s not going to change my buying behavior. I already know about Douglas Adams. I already know I like him, so taking up your valuable web space to tell me hey maybe you should buy more of the author who you’re already familiar with and bought lots of times isn’t actually going to change my behavior. So what if instead of creating a predictive model, Amazon had built an optimization model that could simulate and said if we show Jeremy this ad, how likely is he then to go on to buy this book and if I don’t show him this ad, how likely is he to go on to buy this book. So that’s the counterfactual. The counter factual is what would have happened otherwise, and then you can take the difference and say what should we recommend him that is going to maximally change his behavior. So maximally result in more books and so you’d probably say oh he’s never bought any Terry Pratchett book, he probably doesn’t know about Terry Pratchett but lots of people that liked Douglas Adams did turn out to like Terry Pratchett so let’s introduce him to a new author. So it’s the difference between a predictive model on the one hand versus an optimization model on the other hand. So the two tend to go hand in hand. First of all we have a simulation model. The simulation model is saying in the world where we put Terry Pratchett’s book on the front page of Amazon for Jeremy Howard, this is what would have happened. He would have bought it with a 94% probability. That then tells us with this lever of what do I put on my homepage for Jeremy today, we say okay the different settings of that lever that put Terry Pratchett on the homepage has the highest simulated outcome. Then that’s the thing which maximizes our profit from Jeremy’s visit to amazon.com today. Generally speaking, your predictive models feed into this simulation model but you kind of have to think about how they all work together. For example, let’s go back to churn. So it turned out that Jeremy Howard is very likely to leave his cell phone company next month. What are we going to about it? Let’s call him. And I can tell you if my cell phone company calls me right now and says “just calling to say we love you” I’d be like I’m cancelling right now. That would be a terrible idea. 
So again, you would want a simulation model that says what’s the probability that Jeremy is going to change his behavior as a result of calling him right now. So one of the levers I have is call him. On the other hand, if I got a piece of mail tomorrow that said for each month you stay with us, we’re going to give you a hundred thousand dollars. Then that’s going to definitely change my behavior, right? But then feeding that into the simulation model, it turns out that overall that would be an unprofitable choice to make. Do you see how this fits in together? So when we look at something like churn, we want to be thinking what are the levers we can pull [14:33]. What are the kinds of models that we could build with what kinds of data to help us pull those levers better to achieve our objectives. When you think about it that way, you realize that the vast majority of these applications are not largely about a predictive model at all. They are about interpretation. They are about understanding what happens if. So if we take the intersection between on the one hand, here are all the levers that we could pull (here are all the things we can do) and then here are all of the features from our random forest feature importance that turn out to be strong drivers of the outcome. So then the intersection of those is here are the levers we could pull that actually matter. Because if you can’t change the thing, that is not very interesting. And if it’s not actually a significant driver, it’s not very interesting. So we can actually use our random forest feature importance to tell us what can we actually do to make a difference. Then we can use the partial dependence to actually build this kind of simulation model to say okay if we did change that, what would happen. So there are lots of examples and what I want you to think about as you think about the machine learning problems you are working on is why does somebody care about this [16:02]. What would a good answer to them look like and how could you actually positively impact this business. So if you are creating a Kaggle kernel, try to think about from the point of view of the competition organizer. What would they want to know and how can you give them that information. So something like fraud detection on the other hand, you probably just basically want to know whose fraudulent. So you probably do just care about the predictive model. But then you do have to think carefully about the data availability here. So okay, we need to know who is fraudulent at the point that we are about to deliver them a product. So it’s no point looking at data that’s available a month later, for instance. So you have this key issue of thinking about the actual operational constraints that you are working under. Human Resources Applications [17:17] Lots of interesting application in human resources but like employee churn, it’s another kind of churn model where finding out that Jeremy Howard is sick of lecturing, he’s going to leave tomorrow. What are you going to do about it? Well, knowing that wouldn’t actually be helpful. It would be too late. You would actually want a model that said what kinds of people are leaving USF and it turns out that everybody that goes to the downstairs cafe leaves USF. I guess their food is awful or whatever. Or everybody that we are paying less than half a million dollars a year is leaving USF because they can’t afford basic housing in San Francisco. 
So you could use your employee churn model not so much to say which employees hate us but why do employees leave. Again it’s really the interpretation there that matters. Question: For churn model, it sounds like there are two predictors that you need to predict for — one being churn and the other you need to optimize your profit. So how does it work [18:30]? Yes, exactly. So this is what the simulation model is all about. You figure out this objective we are trying to maximize which is company profitability. You can create a pretty simple Excel model or something that says here is the revenue and here is the costs and the cost is equal to the number of people we employ multiplied by their salary, etc. Inside that Excel model, there are certain cells/inputs that are kind of stochastic or uncertain. But we could predict it with a model and so that’s what I do then is to say okay we need a predictive model for how likely somebody is to stay if we change their salary, how likely they are to leave with the current salary, how likely they are to leave next year if I increased their salary now, etc. So you a bunch of different models and then you can bind them together with simple business logic and then you can optimize that. You can then say okay if I pay Jeremy Howard half a million dollars, that’s probably a really good idea and if I pay him less then it’s probably not or whatever. You can figure out the overall impact. So it’s really shocking to me how few people do this. But most people in industry measure their models using AUC or RMSE or whatever which is never actually what you really want. More Horizontal Applications…[22:04] Lead prioritization is a really interesting one. Every one of these boxes I’m showing, you can generally find a company or many companies whose sole job in life is to build models of that thing. So there are lots of companies that sell lead prioritization systems but again the question is how would we use that information. So if it’s like our best lead is Jeremy, he is a highest probability of buying. Does that mean I should send a salesperson out to Jeremy or I shouldn’t? If he’s highly probable to buy, why I waste my time with him. So again, you really want some kind of simulation that says what’s the likely change in Jeremy’s behavior if I send my best salesperson out to go and encourage him to sign. I think there are many many opportunities for data scientists in the world today to move beyond predictive modeling to actually bringing it all together. Vertical Applications [23:29] As well as these horizontal applications that basically apply to every company, there’s a whole bunch of applications that are specific to every part of the world. For those of you that end up in healthcare, some of you will become experts in one or more of these areas. Like readmission risk. So what’s the probability that this patient is going to come back to the hospital. Depending on the details of the jurisdiction, it can be a disaster for hospitals when somebody is readmitted. If you find out that this patient has a high probability of readmission, what do you do about it? Again, the predictive model is helpful of itself. It rather suggests we shouldn’t send them home yet because they are going to come back. But wouldn’t it be nice if we had the tree interpreter and it said to us the reason that they are at high risk is because we don’t have a recent EKG/ECG for them. Without a recent EKG, we can’t have a high confidence about their cardiac health. 
In which case, it wouldn’t be like let’s keep them in the hospital for two weeks, it’ll be let’s give them an EKG. So this is interaction between interpretation and predictive accuracy. Question: So what I’m understanding you are saying is that the predictive models are a really great but in order to actually answer these questions, we really need to focus on the interpretability of these models [24:59]? Yeah, I think so. More specifically I’m saying we just learnt a whole raft of random forest interpretation techniques and so I’m trying to justify why. The reason why is because I’d say most of the time the interpretation is the thing we care about. You can create a chart or a table without machine learning and indeed that’s how most of the world works. Most managers build all kinds of tables and charts without any machine learning behind them. But they often make terrible decisions because they don’t know the feature importance of the objective they are interested in and so the table they create is of things that actually are the least important things anyway. Or they just do a univariate chart rather than a partial dependence plot, so they don’t actually realize that the relationship they thought they are looking at is due entirely to something else. So I’m kind of arguing for data scientists getting much more deeply involved in strategy and in trying to use machine learning to really help a business with all of its objectives. There are companies like dunnhumby which is a huge company that does nothing but retail application with machine learning. I believe there’s like a dunnhumby product you can buy which will help you figure out if I put my new store in this location versus that location, how many people are going to shop there. Or if I put my diapers in this part of the shop versus that part of the shop, how is that going to impact purchasing behavior, etc. So it’s also good to realize that the subset of machine learning applications you tend to hear about in the tech press or whatever is this massively biased tiny subset of stuff which Google and Facebook do. Where else the vast majority of stuff that actually makes the world go around is these kinds of applications that actually help people make things, buy things, sell things, build things, so forth. Question: About tree interpretation, we looked at which feature was more important for a particular observation. For businesses, they have a huge amount of data and they want this interpretation for a lot of observations so how do they automate it? Do they set threshold [27:50]? The vast majority of machine learning models don’t automate anything. They are designed to provide information to humans. So for example, if you are a customer service phone operator for an insurance company and your customer asks you why is my renewal $500 more expensive than last time, then hopefully the insurance company provides in your terminal those little screen that shows the result of the tree interpreter or whatever. So you can jump there and tell the customer that last year you were in this different zip code which has lower amounts of car theft, and this year also you’ve actually changed your vehicle to more expensive one. So it’s not so much about thresholds and automation, but about making these model outputs available to the decision makers in the organization whether they be at the top strategic level of like are we going to shutdown this whole product or not, all the way to the operational level of that individual discussion with a customer. 
So another example is aircraft scheduling and gate management. There’s lots of companies that do that and basically what happens is that there are people at an airport whose job it is to basically tell each aircraft what gate to go to, to figure out when to close the doors, stuff like that. So the idea is you’re giving them software which has the information they need to make good decisions. So the machine learning models end up embedded in that software to say okay that plane that’s currently coming in from Miami, there’s a 48% chance that it’s going to be over 5 minutes late and if it does then this is going to be the knock-on impact through the rest of the terminal, for instance. So that’s how these things fit together. Other applications [31:02] There are lots of applications, and what I want you to do is to spend some time thinking about them. Sit down with one of your friends and talk about a few examples. For example, how would we go about doing failure analysis in manufacturing, who would be doing that, why would they be doing it, what kind of models might they use, what kind of data might they use. Start to practice and get a sense. Then when you’re at the workplace and talking to managers, you want to be straightaway able to recognize that the person you are talking to — what are they trying to achieve, what are the levers they have to pull, what are the data they have available to pull those levers to achieve that thing, and therefore how could we build models to help them do that and what kind of predictions would they have to be making. So then you can have this really thoughtful empathetic conversation with those people and then saying “in order to reduce the number of customers that are leaving, I guess you are trying to figure out who should you be providing better pricing to” and so forth. Question: Are explanatory problems people are faced with in social sciences something machine learning can be useful for or is used for or is that nor really the realm that’s in [32:29]? I’ve had a lot of conversations about this with people in social sciences and currently machine learning is not well applied in economics or psychology or whatever on the whole. But I’m convinced it can be for the exact reasons we are talking about. So if you are going to try to do some kind of behavioral economics and you’re trying to understand why some people behave differently to other people, a random forest with a feature importance plot would be a great way to start. More interestingly, if you are trying to do some kind of sociology experiment or analysis based on a large social network dataset where you have an observational study, you really want to try and pull out all of the sources of exogenous variables (i.e. all the stuff that’s going on outside) so if you use a partial dependence plot with a random forest that happens automatically. I actually gave a talk at MIT a couple of years ago for the first conference on digital experimentation which was really talking about how do we experiment in things like social networks in these digital environments and economists all do things with classic statistical tests but in this case, the economists I talked to were absolutely fascinated by this and they actually asked me to give an introduction to machine learning session at MIT to these various faculty and graduate folks in the economics department. And some of those folks have gone on to write some pretty famous books and so hopefully it’s been useful. 
It’s definitely early days but it’s a big opportunity. But as Yannet says, there’s plenty of skepticism still out there. The skepticism basically comes from unfamiliarity with this totally different approach. So if you have spent 20 years studying econometrics and somebody comes along and says here is a totally different approach to all the stuff econometricians do, naturally your first reaction will be “prove it”. That’s fair enough, but I think over time, as the next generation of people grow up with machine learning, some of them will move into the social sciences, they’ll make huge impacts that nobody has ever managed to make before, and people will start going wow. Just like what happened in computer vision. Computer vision spent a long time with people saying “maybe you should use deep learning for computer vision” and everybody in computer vision said “Prove it. We have decades of work on amazing feature detectors for computer vision.” And then finally in 2012, Hinton and Krizhevsky came along and said “our model is twice as good as yours and we’ve only just started on this” and everybody was convinced. Nowadays every computer vision researcher basically uses deep learning. So I think that time will come in this area too. Different random forest interpretation methods [37:17] Having talked about why they are important, let’s now remind ourselves what they are. Confidence based on tree variance What does it tell us? Why would we be interested in that? How is it calculated? The variance of the predictions of the trees. Normally the prediction is just the average of the trees; this is the variance across the trees. Just to fill in a detail here, what we generally do is take just one row/observation and find out how confident we are about it (i.e. how much variance there is across the trees for that row), or we can do as we did here for different groups [39:34]. What we’ve done here is to ask whether there are any groups that we are very unconfident about (which could be due to very few observations). Something that I think is even more important is when you are using this operationally. Let’s say you are doing a credit decisioning algorithm. So we are trying to determine whether Jeremy is a good risk or a bad risk. Should we loan him a million dollars? And the random forest says “I think he’s a good risk but I’m not at all confident.” In which case, we might say okay, maybe I shouldn’t give him a million dollars. Whereas, if the random forest said “I think he’s a good risk and I’m very sure of that” then we are much more comfortable giving him a million dollars. And I’m a very good risk. So feel free to give me a million dollars. I checked the random forest before — a different notebook. Not in the repo 😆 It’s quite hard for me to give you folks direct experience with this kind of single-observation interpretation because it’s really the kind of stuff that you actually need to be putting out to the front line [41:30]. It’s not something which you can really use so much in a Kaggle context, but it’s more like if you are actually putting out some algorithm which is making big decisions that could cost a lot of money, you probably don’t so much care about the average prediction of the random forest, but maybe you actually care about the average minus a couple of standard deviations (i.e. what’s the worst-case prediction). Maybe there is a whole group that we are unconfident about, so that’s confidence based on tree variance.
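To make the tree-variance idea concrete, here is a minimal sketch in Python, using a toy regression dataset as a stand-in for the bulldozers data; the names (m, X_valid, preds) are placeholders rather than the course notebook’s exact code.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in dataset, just to make the sketch runnable end to end
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_valid = X[:400], X[400:]
y_train, y_valid = y[:400], y[400:]

m = RandomForestRegressor(n_estimators=40, n_jobs=-1, random_state=0).fit(X_train, y_train)

# One prediction per tree: shape (n_trees, n_valid_rows)
preds = np.stack([t.predict(X_valid) for t in m.estimators_])

mean_pred = preds.mean(axis=0)   # the usual forest prediction (average over the trees)
pred_std = preds.std(axis=0)     # spread across the trees: how unsure the forest is per row

# Rows where pred_std is large relative to mean_pred (e.g. a loan applicant the forest
# is unsure about) are the ones to treat with caution operationally.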
Feature importance [42:36] Student: It’s basically to find out which features are important. You take each feature and shuffle the values in the feature and check how the predictions change. If they are very different, it means that the feature was actually important; otherwise it is not that important. Jeremy: That was terrific. That was all exactly right. There were some details that were skimmed over a little bit. Does anybody else want to jump into a more detailed description of how it’s calculated? How exactly do we calculate feature importance for a particular feature? Student: After you are done building a random forest model, you take each column and randomly shuffle it. Then you run a prediction and check the validation score. If it gets worse after shuffling one of the columns, that means that column was important, so it has a higher importance. I’m not exactly sure how we quantify the feature importance. Jeremy: Ok, great. Do you know how we quantify the feature importance? That was a great description. To quantify, we can take the difference in R² or a score of some sort. So let’s say we’ve got our dependent variable which is price, and there’s a bunch of independent variables including year made [44:22]. We use the whole lot to build a random forest and that gives us our predictions. Then we can compare that to get R², RMSE, whatever you are interested in from the model. Now the key thing here is I don’t want to have to retrain my whole random forest. That’s slow and boring, so we use the existing random forest. How can I figure out how important year made was? The suggestion was, let’s randomly shuffle the whole column. Now that column is totally useless. It’s got the same mean, same distribution. Everything about it is the same, but there’s no connection at all between the actual year made and what’s now in that column. I’ve randomly shuffled it. So now I put that new version through the same random forest (so there is no retraining done) to get some new ŷ (ym). Then I can compare that to my actuals to get RMSE (ym). So now I can start to create a little table with the original RMSE (2, for example), YearMade scrambled giving an RMSE of 3, and Enclosure scrambled giving an RMSE of 2.5. Then I just take these differences. For YearMade, the importance is 1, for Enclosure it is 0.5, and so forth: how much worse did my model get after I shuffled that variable? Question: Would all importances sum to one [46:52]? Honestly, I’ve never actually looked at what the units are, so I’m not quite sure. We can check it out during the week if somebody’s interested. Have a look at the sklearn code and see exactly what those units of measure are because I’ve never bothered to check. Although I don’t check the units of measure specifically, what I do check is the relative importance. Here is an example. So, rather than just saying what are the top ten, yesterday one of the practicum students asked me about a feature importance where they said “oh, I think these three are important” and I pointed out that the top one was a thousand times more important than the second one. So look at the relative numbers here. In that case, it’s like “no, don’t look at the top three, look at the one that’s a thousand times more important and ignore all the rest.” Your natural tendency is to want to be precise and careful, but this is where you need to override that and be very practical. This thing is a thousand times more important. Don’t spend any time on anything else.
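Here is a minimal sketch of that shuffle-one-column importance in Python, with placeholder names (m for a fitted forest, X_valid / y_valid for a pandas validation set); it computes by hand the same permutation idea described above rather than relying on sklearn’s built-in importances.

import numpy as np
from sklearn.metrics import mean_squared_error

def permutation_importance(m, X_valid, y_valid):
    """Importance of a column = how much validation RMSE worsens after shuffling it."""
    base_rmse = np.sqrt(mean_squared_error(y_valid, m.predict(X_valid)))
    importances = {}
    for col in X_valid.columns:                      # assumes X_valid is a pandas DataFrame
        X_shuffled = X_valid.copy()
        X_shuffled[col] = np.random.permutation(X_shuffled[col].values)
        rmse = np.sqrt(mean_squared_error(y_valid, m.predict(X_shuffled)))
        importances[col] = rmse - base_rmse          # bigger difference = more important
    return importances

# Usage (assumed names):
# sorted(permutation_importance(m, X_valid, y_valid).items(), key=lambda kv: kv[1], reverse=True)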
Then you can go and talk to your manager of your project and say this thing is a thousand times more important. And then they might say “oh, that was a mistake. It shouldn’t have been in there. We don’t actually have that information at the decision time or for whatever reason we can’t actually use that variable.” So then you could remove it and have a look. Or they might say “gosh, I had no idea that was by far more important than everything else put together. So let’s forget this random forest thing and just focus on understanding how we can better collect that one variable and better use that one variable.” So that’s something which comes up quite a lot and actually another place that came up just yesterday. Another practicum student asked me “I’m doing this medical diagnostics project and my R² is 0.95 for a disease which I was told is very hard to diagnose. Is this random forest genius or is something going wrong?” And I said remember, the second thing you do after you build a random forest is to do feature importance, so do feature importance and what you’ll probably find is that the top column is something that shouldn’t be there. So that’s what happened. He came back to me half an hour later, he said “yeah, I did the feature importance and you were right. The top column was basically a something that was another encoding of the dependent variable. I’ve removed it and now my R² is -0.1 so that’s an improvement.” The other thing I like to look at is this chart [50:03]: Basically it says where things flatten off in terms of which ones I should be really focusing on. So that’s the most important one. When I did credit scoring in telecommunications, I found there were nine variables that basically predicted very accurately who was going to end up paying for their phone and who wasn’t. Apart from ending up with a model that saved them three billion dollars a year in fraud and credit costs, it also let them basically rejig their process so they focused on collecting those nine variables much better. Partial dependence [50:46] This is an interesting one. Very important but in some ways kind of tricky to think about. Let’s come back to how we calculate this in a moment, but the first thing to realize is that the vast majority of the time, when somebody shows you a chart , it will be like a univariate chart that’ll just grab the data from the database and they’ll plot X against Y. Then managers have a tendency to want to make a decision. So it would be “oh, there’s this drop-off here, so we should stop dealing in equipment made between 1990 and 1995. This is a big problem because real world data has lots of these interactions going on. So maybe there was a recession going on around the time that those things are being sold or maybe around that time, people were buying more of a different type of equipment. So generally what we actually want to know is all other things being equal, what’s the relationship between YearMade and SalePrice. Because if you think about the drivetrain approach idea of the levers, you really want a model that says if I change this lever, how will it change my objective. It’s by pulling them apart using partial dependence that you can say actually this is the relationship between YearMade and SalePrice all other things being equal: So how do we calculate that? Student: For the variable YearMade, for example, you keep all other variables constant. Then you are going to pass every single value of the YearMade, train the model after that. 
So for every one you'll have light blue lines and the median is going to be the yellow line. Jeremy: So let's try and draw that. By "leave everything else constant", what she means is leave them at whatever they are in the dataset. So just like when we did feature importance, we are going to leave the rest of the dataset as it is. And we're going to do a partial dependence plot for YearMade. So we've got all of these other rows of data that we will just leave as they are. Instead of randomly shuffling YearMade, what we are going to do is replace every single value with exactly the same thing — 1960. Just like before, we now pass that through our existing random forest, which we have not retrained or changed in any way, to get back out a set of predictions y1960. Then we can plot that on a chart — YearMade against partial dependence. Now we can do that for 1961, 1962, 1963, and so forth. We can do that on average for all of them, or we could do it just for one of them. So when we do it for just one of them and we change its YearMade and pass that single thing through our model, that gives us one of these blue lines. So each one of these blue lines is a single row as we change its YearMade from 1960 up to 2008. So then we can just take the median of all of these blue lines to say: on average, what's the relationship between YearMade and price, all other things being equal? Why is it that it works? Why is it that this process tells us the relationship between YearMade and price all other things being equal? Maybe it's good to think about a really simplified approach [56:03]. A really simplified approach would say: what's the average auction? What's the average sale date, what's the most common type of machine we sell, which location do we mostly sell things in? And we could come up with a single row that represents the average auction and then we could say okay, let's run that row through the random forest but replace its YearMade with 1960, and then do it again with 1961, and we could plot those on our little chart. That would give us a version of the relationship between YearMade and sale price all other things being equal. But what if tractors looked like that and backhoe loaders looked like a flat line: Then taking the average one would hide the fact that there are these totally different relationships. So instead, we basically say, okay, our data tells us what kinds of things we tend to sell, who we tend to sell them to, and when we tend to sell them, so let's use that. Then we actually find out, for every blue line, here are actual examples of these relationships. So then what we can do is, as well as plotting the median, we can do a cluster analysis to find out a few different shapes. In this case, they all look pretty much like different versions of the same thing with different slopes, so my main takeaway from this would be that the relationship between sale price and year made is basically a straight line. And remember, this was a log of sale price, so this is actually showing us an exponential. So this is where I would then bring in the domain expertise, which is like "okay, things depreciate over time by a constant ratio, so therefore I would expect older year made to have this exponential shape." So this is where, as I mentioned at the very start of my machine learning project, I generally try to avoid using domain expertise as much as I can and let the data do the talking.
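Before moving on, here is a rough sketch of the partial dependence procedure just described (leave the other columns as they are, overwrite one column with a constant, re-predict, average). It assumes a fitted model m and a pandas DataFrame with a YearMade column, and is not the lesson's plotting code.

```python
import numpy as np

def partial_dependence(model, X, col, values):
    """For each candidate value, overwrite the whole column with that value,
    re-predict with the already-fitted model, and average the predictions."""
    averages = []
    for v in values:
        X_mod = X.copy()
        X_mod[col] = v                      # e.g. set YearMade to 1960 for every row
        averages.append(model.predict(X_mod).mean())
    return np.array(averages)

# years = np.arange(1960, 2009)
# plt.plot(years, partial_dependence(m, X_valid, 'YearMade', years))
# The averaged curve is the "yellow line"; predicting one row at a time
# across the same range of years gives the individual blue lines.
```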
So one of the questions I got this morning was "there's like a sale ID and model ID, I should throw those away, right? Because they are just IDs." No. Don't assume anything about your data. Leave them in, and if they turn out to be super important predictors, you want to find out why that is. But then, now I'm at the other end of my project. I've done my feature importance, I've pulled out the stuff from the dendrogram (i.e. redundant features), I'm looking at the partial dependence and now I'm thinking: okay, is this shape what I expected? Even better, before you plot this, first of all think about what shape you would expect it to be. Because it's always easy to justify to yourself after the fact: oh, I knew it would look like this. So think about what shape you expect, and then ask whether it is that shape. In this case, I'd say this is what I would expect. Whereas the previous plot is not what I'd expect. So the partial dependence plot has really pulled out the underlying truth. Question: Say you have 20 features that are important, are you going to measure the partial dependence for every single one of them [1:00:05]? If there are twenty features that are important, then I will do the partial dependence for all of them, where important means it's a lever I can actually pull, the magnitude of its effect is not much smaller than the other nineteen, and based on all these things it's a feature I ought to care about; then I will want to know how it's related. It's pretty unusual to have that many features that are important both operationally and from a modeling point of view, in my experience. Question: How do you define importance [1:00:58]? Important means it's a lever (i.e. something I can change) and it's on the spiky end of this tail (left): Or maybe it's not a lever directly. Maybe it's like zip code and I can't actually tell my customers where to live, but I could focus my new marketing attention on a different zip code. Question: Would it make sense to do pairwise shuffling for every combination of two features and hold everything else constant in feature importance, to see interactions and compare scores [1:01:45]? You wouldn't do that so much for partial dependence. I think your question is really getting at whether we could do that for feature importance. I think interaction feature importance is a very important and interesting question. But doing it by randomly shuffling every pair of columns, if you've got a hundred columns, sounds computationally intensive, possibly infeasible. So what I'm going to do is, after we talk about tree interpreter, I'll talk about an interesting but largely unexplored approach that will probably work. Tree interpreter [1:02:43] Prince: I was thinking of this as being like feature importance, but feature importance is for the complete random forest model, and tree interpreter is feature importance for a particular observation. So let's say it's about hospital readmission. If patient A is going to be readmitted to a hospital, which features for that particular patient are going to have an impact, and how can we change that? It is calculated starting from the mean prediction, then seeing how each feature changes the prediction for that particular patient. Jeremy: I'm smiling because that was one of the best examples of technical communication I've heard in a long time, so it's really good to think about why that was effective. What Prince did there was use as specific an example as possible. Humans are much less good at understanding abstractions.
So rather than saying "it takes some kind of feature, and then there's an observation of that feature" in the abstract, he grounded it in hospital readmission. We take a specific example. The other thing he did that was very effective was to take an analogy to something we already understand. We already understand the idea of feature importance across all of the rows in a dataset. So now we are going to do it for a single row. One of the things I was really hoping we would learn from this experience is how to become effective technical communicators, so that was a really great role model from Prince of using all the tricks we have at our disposal for effective technical communication. Hopefully you found that explanation useful. I don't have a lot to add to that other than to show you what it looks like. With the tree interpreter, we picked out a row [1:04:56]: Remember when we talked about the confidence intervals at the very start (i.e. the confidence based on tree variance)? We said you mainly use that for a row. So this would also be for a row. It's like "why is this patient likely to be readmitted?" Here is all the information we have about that patient, or in this case this auction. Why is this auction so expensive? So then we call ti.predict and we get back the prediction of the price, the bias (i.e. the root of the tree — this is just the average price for everybody, so it is always going to be the same), and then the contributions, which tell us how important each of these things is: The way we calculated that was to say: at the very start, the average price was 10. Then we split on enclosure. For those with this enclosure, the average was 9.5. Then we split on year made less than 1990 and for those with that year made, the average price was 9.7. Then we split on the number of hours on the meter, and with this branch, we got 9.4. We then have a particular auction which we pass through the tree. It just so happens that it takes the topmost path. One row can only have one path through the tree. So we ended up at 9.4. Then we can create a little table. As we go through, we start at the top and we start with 10 — that's our bias. And we said enclosure resulted in a change from 10 to 9.5 (i.e. -0.5). Year made changed it from 9.5 to 9.7 (i.e. +0.2), then meter changed it from 9.7 down to 9.4 (-0.3). Then if we add all that together (10 - 0.5 + 0.2 - 0.3), lo and behold, that's the prediction. Which takes us to our Excel spreadsheet [1:08:07]: Last week, we had to use Excel for this because there wasn't a good Python library for doing waterfall charts. So we saw we got our starting point, which is the bias, then we had each of our contributions, and we ended up with our total. The world is now a better place because Chris has created a Python waterfall chart module for us and put it on pip. So never again will we have to use Excel for this. I wanted to point out that waterfall charts have been very important in business communications at least as long as I've been in business — so that's about 25 years. Python is maybe a couple of decades old. But despite that, no one in the Python world ever got to the point where they actually thought "you know, I'm gonna make a waterfall chart", so they didn't exist until two days ago. Which is to say, the world is full of stuff which ought to exist and doesn't. And it doesn't necessarily take a heck of a lot of time to build. It took Chris about 8 hours, so a hefty amount but not unreasonable.
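For reference, here is a minimal sketch of the ti.predict call described above and a text "waterfall" of the contributions it returns, using the treeinterpreter package and plain prints rather than Chris's new module. The variable names m and row are assumed to be the fitted forest and a single-row DataFrame.

```python
import numpy as np
from treeinterpreter import treeinterpreter as ti

# row: one observation (a single-row DataFrame with the same columns used to fit m)
prediction, bias, contributions = ti.predict(m, row)

# For regression: prediction = bias + sum of the per-feature contributions
print(prediction[0], bias[0] + contributions[0].sum())

# Print the contributions from largest absolute effect down, waterfall-style
order = np.argsort(np.abs(contributions[0]))[::-1]
for name, contrib in zip(row.columns[order], contributions[0][order]):
    print(f"{name:>25}: {contrib:+.3f}")
```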
And now, forevermore, people who want a Python waterfall chart will end up at Chris' GitHub repo and hopefully find lots of other USF contributors who have made it even better. In order for you to help improve Chris' Python waterfall, you need to know how to do that. So you are going to need to submit a pull request. Life becomes very easy for submitting pull requests if you use something called hub. What they suggest is that you alias git to hub, because it turns out that hub is actually a strict superset of git. What it lets you do is go git fork, git push, and git pull-request, and you've now sent Chris a pull request. Without hub, this is actually a pain and requires going to the website and filling in forms and stuff. So this gives you no reason not to do pull requests. I mention this because when you are interviewing for a job, I can promise you that the person you are talking to will check your GitHub, and if they see you have a history of submitting thoughtful pull requests that are accepted to interesting libraries, that looks great. It looks great because it shows you're somebody who actually contributes. It also shows that if they are being accepted, you know how to create code that fits with people's coding standards, has appropriate documentation, passes their tests and coverage, and so forth. So when people look at you and say, oh, here is somebody with a history of successfully contributing accepted pull requests to open-source libraries, that's a great part of your portfolio. And you can specifically refer to it. So either "I'm the person who built Python waterfall, here is my repo" or "I'm the person who contributed currency number formatting to Python waterfall, here is my pull request." Anytime you see something that doesn't work right in any open source software you use, it is not a problem, it's a great opportunity, because you can fix it and send in the pull request. So give it a go. It actually feels great the first time you have a pull request accepted. And of course, one big opportunity is the fastai library. Thanks to one of our students, we now have docstrings for most of the fastai.structured library, which again came via a pull request. Does anybody have any questions about how to calculate any of these random forest interpretation methods or why we might want to use them [1:12:50]? Towards the end of the week, you're going to need to be able to build all of these yourself from scratch. Question: Just looking at the tree interpreter, I noticed that some of the values are nan's. I get why you keep them in the tree, but how can nan have a feature importance [1:13:19]? Let me pass it back to you. Why not? In other words, how is nan handled in Pandas and therefore in the tree? Does anybody remember (notice these are all categorical variables): how does Pandas handle nan's in a categorical variable, and how does fastai deal with them? Pandas sets them to category code -1 and fastai adds one to all of the category codes, so it ends up being zero. In other words, remember that by the time it hits the random forest it's just a number, and it's just zero. And we map it back to the descriptions back here. So the question really is: why shouldn't the random forest be able to split on zero? It's just another number. So it could be nan, high, medium, low = 0, 1, 2, 3. So missing values are one of these things that are generally taught really badly.
Often people get taught a few ways to remove columns with missing values, or remove rows with missing values, or to replace missing values. That's never what we want, because missingness is very, very, very often interesting. We actually learnt from our feature importance that coupler system nan is one of the most important features. For some reason, well, I could guess, right? Coupler system nan presumably means this is a kind of industrial equipment that doesn't have a coupler system. Now I don't know what kind that is, but apparently it's a more expensive kind. I did this competition for university grant research success where by far the most important predictors were whether or not some of the fields were null [1:15:41]. It turned out that this was data leakage: these fields only got filled in, most of the time, after a research grant was accepted. So it allowed me to win that Kaggle competition but didn't actually help the university very much. Extrapolation [1:16:16] I am going to do something risky and dangerous, which is that we are going to do some live coding. The reason we are going to do some live coding is that I want to explore extrapolation together with you, and I also want to give you a feel for how you might go about writing code quickly in this notebook environment. And this is the kind of thing you are going to need to be able to do in the real world and in the exam: quickly create the kind of code that we are going to talk about. I really like creating synthetic datasets anytime I'm trying to investigate the behavior of something, because if I have a synthetic dataset, I know how it should behave. Which reminds me, before we do this, I promised that we would talk about interaction importance and I just about forgot. Interaction importance [1:17:24] Tree interpreter tells us the contributions for a particular row based on the differences at each split in the tree. We could calculate that for every row in our dataset and add them up. That would tell us feature importance, in a different way. One way of doing feature importance is by shuffling the columns one at a time. Another way is by doing tree interpreter for every row and adding them up. Neither is more right than the other. They are actually both quite widely used, so this is kind of type 1 and type 2 feature importance. So we could try to expand this a little bit, to do not just single variable feature importance, but interaction feature importance. Now here is the thing. What I'm going to describe is very easy to describe. It was described by Breiman right back when random forests were first invented, and it is part of the commercial software product from Salford Systems, who have the trademark on random forests. But it is not part of any open source library I'm aware of, and I've never seen an academic paper that actually studies it closely. So what I'm going to describe here is a huge opportunity, but there are also lots and lots of details that need to be fleshed out. Here is the basic idea. This particular difference here (in red) is not just because of year made but because of a combination of year made and enclosure [1:19:15]: The fact that this is 9.7 is because enclosure was in this branch and year made was in this branch. So in other words, we could say the contribution of enclosure interacted with year made is -0.3. So what about the difference between 9.5 and 9.4? That's an interaction of year made and hours on the meter.
I'm using star here not to mean "times" but to mean "interacted with". It's a common notation; R's formulas do it this way as well. So year made interacted with meter has a contribution of -0.1. Perhaps we could also say that from 10 to 9.4, this also shows an interaction between meter and enclosure, with one thing in between them. So we could say meter interacted with enclosure equals… and what should it be? Should it be -0.6? In some ways that seems unfair because we are also including the impact of year made. So maybe it should be -0.6, and maybe we should add back this 0.2 (9.5 → 9.7). These are details that I actually don't know the answer to. How should we best assign a contribution to each pair of variables in this path? But clearly, conceptually, we can. The pairs of variables in that path all represent interactions. Question: Why don't you force them to be next to each other in the tree [1:21:47]? I'm not going to say it's the wrong approach. I don't think it's the right approach though. Because it feels like in this path, meter and enclosure are interacting. So it seems like not recognizing that contribution is throwing away information. But I'm not sure. I had one of my staff at Kaggle do some R&D on this a few years ago, and (though I wasn't close enough to know how they dealt with these details) they got it working pretty well. But unfortunately it never saw the light of day as a software product. This is something maybe a group of you could get together and build. Do some googling to check, but I really don't think that there are any interaction feature importance parts of any open source library. Question: Wouldn't this exclude interactions between variables that don't matter until they interact? Say your row never chooses to split down that path, but that variable interacting with another one becomes your most important split [1:22:56]. I don't think that happens. Because if there is an interaction that's important only because it's an interaction (and not on a univariate basis), it will appear sometimes, assuming that you set max features to less than one, and therefore it will appear in some path. Question: What is meant by interaction? Is it multiplication, ratio, addition [1:23:31]? Interaction means appearing on the same path through a tree. In the above example, there is an interaction between enclosure and year made because we branched on enclosure and then we branched on year made. So to get to 9.7, we have to have some specific value of enclosure and some specific value of year made. Question: What if you went down the middle leaves between the two things you are trying to observe, and you also took into account what the final measure is? I mean, if we extend the tree downwards, you'd have many measures both of the two things you are trying to look at and also the in-between steps. There seems to be a way to average information out in between them [1:24:03]? There could be. I think what we should do is talk about this on the forum. I think this is fascinating and I hope we build something great, but I need to do my live coding. That was a great discussion. Keep thinking about it and do some experiments. Back to Live Coding [1:24:50] So to experiment with that, you almost certainly want to create a synthetic dataset first. It's like y = x1 + x2 + x1*x2 or something: something where you know there is this interaction effect and there isn't that one, and you want to make sure that the feature importance you get at the end is what you expected.
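For example, a tiny synthetic dataset for sanity-checking an interaction-importance implementation might look like this (the exact functional form is just an illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
x1, x2, x3 = rng.uniform(0, 1, (3, n))

# x1 and x2 interact; x3 is pure noise, so a correct implementation should
# report a large x1*x2 interaction and nothing meaningful involving x3.
y = x1 + x2 + x1 * x2 + rng.normal(0, 0.05, n)

df = pd.DataFrame({'x1': x1, 'x2': x2, 'x3': x3, 'y': y})
```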
So probably the first step would be to do single variable feature importance using the tree interpreter style approach [1:25:14]. One nice thing about this is that it doesn't really matter how much data you have. All you have to do to calculate feature importance is slide through the tree. So you should be able to write it in a way that's actually pretty fast, and even writing it in pure Python might be fast enough depending on your tree size. We are going to talk about extrapolation, and the first thing I want to do is create a synthetic dataset that has a simple linear relationship. We are going to pretend it's like a time series. So we need to create some x values. The easiest way to create some synthetic data of this type is to use linspace, which creates evenly spaced data between start and stop, with 50 observations by default. Then we are going to create the dependent variable, so let's assume there is a linear relationship between x and y, and let's add a little bit of randomness to it: random.uniform between low and high, so we could add somewhere between -0.2 and 0.2, for example. The next thing we need is a shape, which is basically what dimensions you want these random numbers to be, and obviously we want them to be the same shape as x's shape. So we can just say x.shape. In other words, (50,) is x.shape. Remember, when you see something in parentheses with a comma, that's a tuple with just one thing in it. So this is shape 50, and so we added 50 random numbers. Now we can plot those. Alright, so there is our data. Whether you are working as a data scientist or doing your exams in this course, you need to be able to quickly whip up a dataset like that and throw it up in a plot without thinking too much. As you can see, you don't have to really remember much, if anything. You just have to know how to hit shift + tab to check the names of parameters, or google to try and find linspace if you forgot what it's called. So let's assume that's our data [1:28:33]. We're now going to build a random forest model, and what I want to do is build a random forest model that kind of acts as if this is a time series. So I'm going to take the left part as a training set and take the right part as our validation or test set, just like we did in groceries or bulldozers. We can use exactly the same kind of code that we used in split_vals. So we can say: That splits it into the first 40 versus the last 10. We can do the same thing for y and there we go. The next thing to do is to create a random forest and fit it, which requires x and y. That's actually going to give an error, and the reason why is that it expects x to be a matrix, not a vector, because it expects x to have a number of columns of data. So it's important to know that a matrix with one column is not the same thing as a vector. So if I try to run this, "Expected 2D array, got 1D array instead": So we need to convert the 1D array into a 2D array. Remember I said x.shape is (50,). So x has one axis and x's rank is 1. The rank of a variable is equal to the length of its shape — how many axes it has. A vector we can think of as an array of rank 1, and a matrix as an array of rank 2.
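Here is a rough reconstruction of the cells being live-coded in this part of the lesson, under the assumption that the data and split are as described; the [:, None] reshape shown at the end is exactly what the next part of the discussion explains.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor

# 50 evenly spaced points, treated as if they were a time series
x = np.linspace(0, 1)                              # shape (50,)
y = x + np.random.uniform(-0.2, 0.2, x.shape)
plt.scatter(x, y)

# "Time-based" split: first 40 points for training, last 10 for validation
x_trn, x_val = x[:40], x[40:]
y_trn, y_val = y[:40], y[40:]

# Fitting on the raw 1D array raises "Expected 2D array, got 1D array instead",
# so we add a unit axis to turn it into a matrix with a single column
m = RandomForestRegressor()
m.fit(x_trn[:, None], y_trn)
```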
I very rarely use words like vector and matrix because they are kind of meaningless — specific examples of something more general, which is that they are all N dimensional tensors, or N dimensional arrays. So an N dimensional array we can say is a tensor of rank N. They basically mean the same thing. Physicists get crazy when you say that, because to a physicist a tensor has quite a specific meaning, but in machine learning we generally use the two terms interchangeably. So how do we turn a one dimensional array into a two dimensional array? There are a couple of ways we can do it, but basically we slice it. Colon (:) means give me everything in that axis. :,None means give me everything in the first axis (which is the only axis we have), and then None is a special indexer which means add a unit axis here. So let me show you. That is of shape (50, 1), so it's rank 2. It has two axes. One of them is a very boring axis — it's a length one axis. So let's move None to the left. There is (1, 50). Then, to remind you, the original is (50,). So you can see I can put None as a special indexer to introduce a new unit axis there. So x[None,:] has one row and fifty columns. x[:,None] has fifty rows and one column — so that's what we want. This kind of playing around with ranks and dimensions is going to become increasingly important in this course and in the deep learning course. So spend a lot of time slicing with None, slicing with other things, and try to create 3 dimensional, 4 dimensional tensors and so forth. I'll show you two tricks. The first is you never ever need to write ,: as it's always assumed. So these are exactly the same thing: And you see that in code all the time, so you need to recognize it. The second trick is that x[:,None] is adding an axis in the second dimension (or I guess the index 1 dimension). What if I always want to put it in the last dimension? Often our tensors change dimensions without us looking, because you went from a one channel image to a three channel image, or you went from a single image to a mini batch of images. Suddenly, you get new dimensions appearing. So to make things general, I would say ... which means as many dimensions as you need to fill this up. So in this case (x[…, None].shape), it's exactly the same, but I would always try to write it that way because it means it's going to continue to work as I get higher dimensional tensors. So in this case, I want 50 rows and one column, so I'll call that x1. Let's now use that here, and so this is now a 2D array, and so I can create my random forest. Then I could plot that, and this is where you're going to have to turn your brains on, because the folks this morning got this very quickly, which was super impressive. I'm going to plot y_trn against m.predict(x_trn). Before I hit go, what is this going to look like? It should basically be the same. Our predictions hopefully are the same as the actuals. So this should fall on a line, but there is some randomness so it won't quite. That was the easy one. Let's now do the hard one, the fun one. What is that going to look like? Think about what trees do, and think about the fact that we have a validation set on the right and a training set on the left: And remember that a forest is just a bunch of trees. Tim: I'm guessing since all the new data is actually outside of the original scope, it's all going to be basically the same — it's like one huge group [1:37:15]. Jeremy: Yeah, right. So forget the forest, let's create one tree.
So we are probably going to split somewhere around here first, then split somewhere here, … So our final split is the rightmost node. Our prediction, when we take one from the validation set, is going to come from putting it through the forest and ending up predicting the rightmost average. It can't predict anything higher than that because there is nothing higher to average. So this is really important to realize: a random forest is not magic. It's just returning the average of nearby observations, where nearby is kind of in this "tree space". So let's run it and see if Tim is right. Holy crap, that's awful. If you don't know how random forests work, then this is going to totally screw you. If you think that it's actually going to be able to extrapolate to any kind of data it hasn't seen before, particularly a future time period, it's just not. It just can't. It's just averaging stuff it's already seen. That's all it can do. Okay, so we are going to be talking about how to avoid this problem. We talked a little bit in the last lesson about trying to avoid it by avoiding unnecessary time dependent variables where we can. But in the end, if you really have a time series that looks like this, we actually have to deal with the problem. One way we could deal with the problem would be to use a neural net: use something that has a function or shape that can actually fit something like this, so it will extrapolate nicely: Another approach would be to use all the time series techniques you guys are learning about in the morning class to fit some kind of time series and then detrend it. Then you'll end up with detrended dots, and then use the random forest to predict those. That's particularly cool because imagine that your random forest was actually trying to predict data which came from two different states. So the blue ones are down there, and the red ones are up here. If you try to use a random forest, it's going to do a pretty crappy job because time is going to seem much more important. So it's basically still going to split like this and split like that, and then finally, once it gets down to the left corner, it will be like "oh okay, now I can see the difference between the states." In other words, when you've got this big time piece going on, you're not going to see the other relationships in the random forest until every tree deals with time. So one way to fix this would be with a gradient boosting machine (GBM). What a GBM does is create a little tree and run everything through that first little tree (which could be the time tree), then it calculates the residuals, and the next little tree just predicts the residuals. So it would be kind of like detrending it, right? A GBM still can't extrapolate to the future, but at least it can deal with time-dependent data more conveniently. We are going to be talking about this quite a lot more over the next couple of weeks, and in the end the solution is going to be to just use neural nets. But for now, using some kind of time series analysis to detrend the data and then using a random forest on that isn't a bad technique at all. If you are playing around with something like the Ecuador groceries competition, that would be a really good thing to fiddle around with. Lessons: 1 ・ 2 ・ 3 ・ 4 ・ 5 ・ 6 ・ 7 ・ 8 ・ 9 ・ 10 ・ 11 ・ 12
Machine Learning 1: Lesson 6
195
machine-learning-1-lesson-6-14bbb8180d49
2018-10-20
2018-10-20 20:34:59
https://medium.com/s/story/machine-learning-1-lesson-6-14bbb8180d49
false
12,122
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Hiromi Suenaga
null
eab3a535185e
hiromi_suenaga
1,462
10
20,181,104
null
null
null
null
null
null
0
/path/to/image.ext;0
1
null
2017-09-28
2017-09-28 20:28:22
2017-09-30
2017-09-30 17:49:45
5
false
en
2017-10-04
2017-10-04 09:06:04
12
14bbdd4b00a9
5.22956
20
5
0
Let’s assume that you want to investigate some aspect of facial recognition or facial detection. One thing you are going to want is a…
5
Creating Multi-View Face Recognition/Detection Database for Deep Learning in Programmatic Way Let's assume that you want to investigate some aspect of facial recognition or facial detection. One thing you are going to want is a variety of faces that you can use for your system. You can create your own face detection/recognition database, but how? Cropping and saving images by hand may come to mind as an idea, but it would be a really exhausting way to do it! You probably don't want to be in front of your computer all day long cropping and saving images to create a database. Alternatively, you could look at some of the existing facial recognition and facial detection databases that fellow researchers and organizations have created in the past. Why reinvent the wheel if you do not have to! For example, VGG Face Descriptor or Labeled Faces in the Wild. That works if you just want to have a Single-View Face Recognition/Detection database. You can do some processing, such as trying to warp each picture or using an algorithm called face landmark estimation, to detect/recognize multi-view faces using these databases, but it cannot work for surveillance cases. You really need to have a Multi-View Face Database to succeed at real multi-view face detection/recognition in surveillance or even harder cases. However, there are no appropriate Multi-View Face Recognition/Detection databases in either academia or industry to work and do research on surveillance camera records or harder cases. Moreover, Multi-View Face Recognition/Detection has been a hot topic in Computer Vision in recent years. Thus, creating your own Multi-View Face Recognition/Detection database will be a very valuable step! Who knows, maybe you can publish your database for both academia and industry to help develop facial recognition and detection technology. The researchers would be grateful to you for this! And we already know that the world's most valuable resource is no longer oil, but data… so let's create our own data… Let's talk more technical Let's start to think about how we can automatically detect, recognize, crop and save face images from videos. The steps we will deal with: 1-) Read videos frame by frame. 2-) Process each frame to recognize faces, so we will need a face recognizer. 3-) Crop and save each recognized face as an image under the appropriate file path. 4-) Once we have acquired the face data, we'll need to read it in our program. Therefore, we will create a CSV file in a programmatic way to read the images located in the database, so the database images can be used from other programs. A scene from Person of Interest TV Series Okay, we have specified the tasks we will have to deal with to create our own facial database. Now, we should decide on the working environment, I mean the computer programming language! Let's think, what do we have? Java, Python or C++… I chose Python! Why? Because I love Python! Allow me to list the reasons why I love Python; Interactive Interpreted Modular Dynamic Object-oriented Portable High level Extensible in C++ & C Now, I need a face recognizer. My main aim is creating a Multi-View Face Recognition/Detection database, so I don't need to develop a face recognizer, because there are so many face recognizers developed by passionate developers on GitHub. I searched on GitHub and I found an amazing face recognizer developed by Adam Geitgey. Thanks to him so much! Let's meet our face recognizer Adam Geitgey's face recognizer was developed in Python using OpenFace and dlib.
Let's summarize it quickly: Encode a picture using the HOG algorithm to create a simplified version of the image. Using this simplified image, find the part of the image that most looks like a generic HOG encoding of a face. Figure out the pose of the face by finding the main landmarks in the face. Once we find those landmarks, use them to warp the image so that the eyes and mouth are centered. Pass the centered face image through a neural network that knows how to measure features of the face. Save those 128 measurements. Looking at all the faces we've measured in the past, see which person has the closest measurements to our face's measurements. That's our match! If you want to get more information, please check it out! Now we have a face recognizer and we can start to create our face database! First, we should set the video as an input. I chose a scene from The Big Bang Theory TV Series as an input. Then, we should set the faces which will be recognized by our face recognizer. I determined Penny and Sheldon as targets. And as you can see below, our face recognizer works even if faces are turned in different directions, so we can catch, crop and save multi-view face data; we are ready!!! Faces are Tracked, Cropped and Saved as Images from Video Our face recognizer works pretty well, so let's start to create our own face database! We can detect, recognize and track the face images, so now it is time to crop and save images. We can crop and save images easily in Python. We just write code that crops the images specified by the face recognizer, which are drawn with red boxes as you can see above. We will have a huge amount of data once we start to crop and save images, so we should consider being well organized. We should locate the face data under an appropriate folder path. It is important because we don't want to spend our time organizing the data by hand; it should be done in a programmatic way, as you can see below. Images are Saved from Video With Appropriate Path Hierarchy That's it!!! Now, we can create our own face database! More input videos equal more face images, more face images equal more data, more data equals big data, and big data is better data!!! Now that we have acquired the face data, we'll need to read it in our program. In the demo applications I have decided to read the images from a very simple CSV file. Why? Because it's the simplest platform-independent approach I can think of. However, if you know a simpler solution please ping me about it. Basically, all the CSV file needs to contain are lines composed of a filename followed by a ; followed by the label (as an integer number), making up a line like this: You don't really want to create the CSV file by hand. I have prepared a little Python script create_csv.py that automatically creates the CSV file for you. As a summary, face_recognizer.py detects, recognizes, and crops multi-view faces from video and saves them as images. Then it calls create_csv.py to label and index the database; in other words, it creates a dataset from the database so you can read the database images from another program. Here is the flow diagram that summarizes what we have done so far to create our own multi-view face database. The flow diagram of creating your own database project More information and the source code are available on this GitHub repository.
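As an illustration only, a script along the lines of the create_csv.py mentioned above might look like the sketch below. This is a guess at the idea rather than the author's actual script, and it assumes one subdirectory per person, each holding that person's cropped face images.

```python
import os
import sys

# Usage: python create_csv.py <base_path>
# Prints lines of the form "<image_path>;<label>", one label per subdirectory.
if __name__ == "__main__":
    base_path = sys.argv[1]
    label = 0
    for subdirname in sorted(os.listdir(base_path)):
        subject_path = os.path.join(base_path, subdirname)
        if not os.path.isdir(subject_path):
            continue
        for filename in sorted(os.listdir(subject_path)):
            print(f"{os.path.join(subject_path, filename)};{label}")
        label += 1
```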
Creating Multi-View Face Recognition/Detection Database for Deep Learning in Programmatic Way
141
creating-multi-view-face-recognition-detection-database-for-deep-learning-in-programmatic-way-14bbdd4b00a9
2018-06-18
2018-06-18 23:52:02
https://medium.com/s/story/creating-multi-view-face-recognition-detection-database-for-deep-learning-in-programmatic-way-14bbdd4b00a9
false
1,165
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ahmet ÖZLÜ
I am a big fan of Real Madric CF and I love computer science!
728f440c8c18
ahmetozlu93
17
2
20,181,104
null
null
null
null
null
null
0
X x1 x2 x3 1 1 0 --> just like our previous ml data and we can do what ever 1 1 1 we wanna do.
1
dedce56b468f
2017-10-17
2017-10-17 06:59:45
2017-10-17
2017-10-17 17:07:16
19
false
en
2017-10-17
2017-10-17 17:07:16
2
14bbeb8edc79
5.507547
31
1
0
so far we have talked about machine learning and deep learning algorithms which can be used in any field. One of the main fields where…
5
Chapter 9 : Natural Language Processing. So far we have talked about machine learning and deep learning algorithms which can be used in any field. One of the main fields where ML/DL algorithms are used is Natural Language Processing (NLP), so from now on let's talk about NLP. NLP is a big area, probably bigger than machine learning, because the concept of language is really intense, so we are not going to cover it completely; we focus on the small area where it meets machine learning and deep learning. Let's understand natural language processing in our space. Natural language processing The main goal here is: we want to make the computer understand language as we do, and we want to make the computer respond as we do. We can break that into 2 sections Natural language understanding: The system should be able to understand the language (parts of speech, context, syntax, semantics, interpretation, etc…) This can typically be done with the help of machine learning (although there are problems). Not too difficult to do and gives good accuracy results. 2. Natural language generation: The system should be able to respond / generate text (text planning, sentence planning, producing meaningful phrases, etc…) This can be done with the help of deep learning, as deep understanding is required (although there are problems). Much more difficult to do and the results may not be accurate. So where do we use ML in NLP??? These are a couple of applications we focus on Text classification and clustering Information retrieval and extraction Machine translation (one language to another) Question and answering systems Spelling and grammar checking Topic modeling and sentiment analysis Speech recognition I will try to explain and complete all the topics in the following stories; in this story we learn the basic fundamentals for text/documents, which are common to many applications. Note: assume for now that text, data, document, sentence and paragraph all mean the same thing. What is a text?? A text is a set of words written sequentially. Each word in the text has a meaning, while the text may or may not have a meaning. In machine learning we take features, right? So here each word is a (unique) feature. Ex: Text: I love programming → I, love, programming are the features for this input. How do we derive the features?? First apply tokenization (a text is divided into tokens); we can use open source tools like NLTK to get tokens from the text. Check out this example: here we have programming repeated twice as a token, but we only take it once, so the features for this text are → I, love, programming, and, also, loves, me. But wait, the words love and loves mean the same thing; these are called inflectional forms. We need to remove them; removing these inflectional endings is called lemmatization, so now the features for this text are → I, love, programming, and, also, me. We can even think deeper and say the word programming is similar to the word program; there is a concept called stemming pythonspot.com(picture) so if we apply stemming, then the features for this text are → I, love, program, and, also, me. There are a number of words which occur very frequently in every language and don't have much meaning; these words are called stop words. The stop words in English are So stop words should be removed from our text Note: we convert the text into lower case before tokenization to avoid duplicates. The final features for this text are → love, program, also. That's understandable and cool.
Here we get a clean text, so we can work on it and generate the features, but in the real world we don't get clean data; we always get raw data which has a lot of unwanted things (symbols, links, hashtags, numbers, spaces, etc..) We need to clean the data; this process can be called data normalization. Let's take a tweet from Twitter, as it is real world data, and apply tweet normalization This text contains a bit of noise like punctuation, a link, etc. Check the below images for comparison Text features comparison So how can the original tweet be normalized??? We remove the unwanted things in the data using regular expressions. These are a small set of statements; there can be many more, depending upon the data. We need to normalize the text to a certain extent. Let's take a toy dataset which has 2 training examples (documents/texts) 1 → I love programming 2 → Programming also loves me After normalization, lemmatization, and stop word removal, our features are gonna be love, programming, also. We can store them in a file and call it a dictionary or a lexicon. Okay, we got the features from the text. The feature values are words; we cannot feed words to the computer / model / ML algorithm. The feature values must be numbers. So we take the count of every word in every document as a value. Note: these numbers are after changing the documents. So instead of feeding "I love programming" or "love programming" [after changing], we feed [1 1 0] as a vector. I call this process document vectorization. Here is the code for this We just converted each document into a vector. The 6th cell is a count vectorization; it gives the count of how many times a word appeared in the entire dataset. We can achieve the same results using the scikit-learn count vectorizer. The most frequently appearing words are called the top bag of words. Hope you understand the code. There are still a lot of things to learn and point out here; in the next story I will cover the remaining topics, which are TF-IDF and word2vec. Let me know your thoughts/suggestions/questions. That's it for this story, we will see next time until then Seeeeeee Yaaaaaaaa! Full code is on my Github.
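To make the pipeline concrete, here is a small sketch using NLTK and scikit-learn on the article's two toy documents. The exact preprocessing choices (verb lemmatization, the standard English stop word list) are just one reasonable combination, and the NLTK data packages (punkt, wordnet, stopwords) need to be downloaded first.

```python
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer

docs = ["I love programming", "Programming also loves me"]
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words('english'))

def normalize(text):
    # lower-case, tokenize, lemmatize, and drop stop words
    tokens = word_tokenize(text.lower())
    tokens = [lemmatizer.lemmatize(t, pos='v') for t in tokens if t.isalpha()]
    return ' '.join(t for t in tokens if t not in stop_words)

cleaned = [normalize(d) for d in docs]      # roughly: ['love program', 'program also love']
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(cleaned)       # each document becomes a count vector
print(vectorizer.get_feature_names_out())
print(X.toarray())
```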
Chapter 9 : Natural Language Processing.
280
chapter-9-natural-language-processing-14bbeb8edc79
2018-06-19
2018-06-19 05:57:48
https://medium.com/s/story/chapter-9-natural-language-processing-14bbeb8edc79
false
1,009
This is all about machine learning and deep learning (Topics cover Math,Theory and Programming)
null
null
null
Deep Math Machine learning.ai
null
deep-math-machine-learning-ai
MACHINE LEARNING,DEEP LEARNING,MATH,ARTIFICIAL INTELLIGENCE,NEURAL NETWORKS
null
Machine Learning
machine-learning
Machine Learning
51,320
Madhu Sanjeevi ( Mady )
Writes about Machine Learning || Passionate about AI, ML and DL || Programmer || Mathematician
2e3bcbe5a8ae
madhusanjeevi.ai
1,359
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-08
2018-09-08 07:18:30
2018-09-12
2018-09-12 07:04:51
7
false
en
2018-09-12
2018-09-12 07:07:14
6
14bc5ab22b78
2.219811
0
0
0
This post is based on my internship experience where I worked on the segmentation of Spine using U-Net architecture.
5
Spine Segmentation using U-Net This post is based on my internship experience where I worked on the segmentation of the spine using the U-Net architecture. Data-Set: CT scans of 11 patients collected from the institution-affiliated hospital. The data were in dicom format with no labels. Image Pre-processing: Since the data had no labels, I had to generate labels manually. I used 3D Slicer's automatic segmentation feature to generate labels and save them as dicom files. Automatic-Segmentation in 3D Slicer The above figure shows how automatic segmentation can be used to generate labels (masks). You can slide the slider to adjust the noise in the mask. To save the mask as dicom files: Creating Dicom Series of masks The dicom files can then be read, cropped, and saved as .png files using the Python package pydicom An instance of an image and label obtained after pre-processing is shown below: Image(left) and Label(right) There is still noise (white dots) in the label — it was acceptable in my case. You can manually reduce this noise in 3D Slicer if you want. Training: The images and labels obtained were split into train and test sets, and trained using the U-Net architecture. The training was done for 100 epochs using the Adam Optimizer with a learning rate of 0.001. Evaluation metric: the Jaccard Index, also known as Intersection over Union (IoU), was used as the evaluation metric during training. For two sets A and B, the Jaccard index is defined as the following: Jaccard Index (IoU) Loss function: The loss function used for optimization can be defined as: Loss function: L Result: The Jaccard Index obtained after training for 100 epochs with the U-Net architecture was 0.7; this can be improved by training longer and using data augmentation, both of which were not used in this project. Image with real label(left) and Image with predicted label(right) Image with real label(left) and Image with predicted label(right) PS: I cannot publicly share the data-set and code of this project as it was not a personal project. However I edited and improved on this post.
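For reference, the Jaccard index described above takes only a few lines of NumPy for binary masks; this is a generic sketch rather than the project's code, which the author could not share.

```python
import numpy as np

def jaccard_index(y_true, y_pred, eps=1e-7):
    """Intersection over Union for binary masks: |A intersect B| / |A union B|."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return (intersection + eps) / (union + eps)

# A matching training loss can then be written as 1 - jaccard_index(y_true, y_pred).
```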
Spine Segmentation using U-Net
0
spine-segmentation-using-u-net-14bc5ab22b78
2018-09-12
2018-09-12 07:07:14
https://medium.com/s/story/spine-segmentation-using-u-net-14bc5ab22b78
false
310
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Bibek Chaudhary
self-learner
84a0db77e00e
bibekchaudhary
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-12
2018-07-12 19:13:53
2018-07-12
2018-07-12 19:17:17
1
false
en
2018-07-12
2018-07-12 19:17:17
2
14bcdb434ba3
1.256604
0
0
0
Most of the initial talk around the Internet of Things (IoT) emerged in the early 2000’s with the adoption of wireless internet. The first…
4
How the Internet of Things Is Changing Most of the initial talk around the Internet of Things (IoT) emerged in the early 2000’s with the adoption of wireless internet. The first signs of the IoT started with RFID or radio frequency identification of packages and transported goods. The ability to communicate the tracking and delivery of such items to a main server made it possible for the transportation industry to conduct more business, more efficiently. Today, the IoT has grown into its own industry and impacts a variety of others in the forms of artificial intelligence, Bluetooth capabilities, and other technology. But what does the IoT really entail? There are many definitions, but in general, the Internet of Things encompasses the many devices that are able to communicate with one another as a result of the internet. An Increase in the IoT Is an Increase in Data What do Amazon Dash Buttons, Google Home, and Fitbit devices all have in common? They’re each a result of the capabilities of the IoT. As devices and platforms communicate with one another, they’re able to identify each other and begin to collect and transfer data. While there have been many estimates of the growth of connected devices, most anticipate them to grow by billions by 2020. Such devices will include more wearable technology, self-driving cars, smarter cities, and more real-time technologies that have yet to show themselves. More communication and more devices equal more data. More data leads to smarter technology, increased operational speeds, and effortless methods of machine learning and automation. Additionally, with the improved processing power and storage of data, data collection and management are becoming easier. Continue Reading
How the Internet of Things Is Changing
0
how-the-internet-of-things-is-changing-14bcdb434ba3
2018-07-12
2018-07-12 19:17:18
https://medium.com/s/story/how-the-internet-of-things-is-changing-14bcdb434ba3
false
280
null
null
null
null
null
null
null
null
null
Market Research
market-research
Market Research
2,627
GutCheck
A global, agile market research provider bringing you relevant content on the market research industry.
70c4791f3bc2
marketinguser
3
2
20,181,104
null
null
null
null
null
null
0
null
0
59d66ec33f83
2018-01-30
2018-01-30 17:52:55
2018-02-01
2018-02-01 14:07:52
4
false
en
2018-02-24
2018-02-24 20:18:58
1
14bd7f1dc26f
2.888679
12
2
0
Companies have a choice today. They can be the disrupted or the disruptor.
4
Raiders of Every Industry: The Journey to Digital Companies have a choice today. They can be the disrupted or the disruptor. Every industry will be disrupted in the coming years. None are safe. The safer they seem, the more susceptible they probably are. In fact, only two things stand a chance of protecting incumbents: Intellectual Property — The data they have collected over their years of doing business Intellectual Capital — The domain knowledge their employees possess But for those to matter, the company has to become truly digital. What do I mean by that? Becoming digital is a journey with three distinct steps that a company would typically take in order, at least within a given business unit. The steps are: data transformation, data science transformation, and digital transformation. Each step requires attention to strategy, technology, organization, and culture. Fortunately, the changes are gradual at first and then expand. Data Transformation The first stage, Data Transformation, might not feel particularly transformative: So, what has really changed? This stage is about defining the core assets that create value for the enterprise, and it's about discovering and governing that data without necessarily expecting — or forcing — upheaval. Data governance in particular is a greater lever than most of us realize. It means establishing policies that preserve privacy and protect data while making access to data frictionless for those who need it. Strategy is important here. You're focusing on understanding what data is available and how it can help existing stakeholders act more efficiently and with greater confidence. At the same time, you're setting up your data science team or giving your existing data science team more secure, self-service access to data and a wider mandate to gather and explore the company's data. Data Science Transformation With the next stage, data science begins to lift off: Here, the key changes come from letting the data science team discover — and openly discuss — ground truth for the business. What does the data actually have to say? How do you begin to get a 360° view of customers, products, talent, and the company at large? Where do those 360° views offer the company the chance to disrupt its own assumptions — before they're disrupted by new competitors? You'll also need to ask whether the culture is ready to welcome the new ideas. If you've finessed the Data Transformation phase, the organization will already value and trust the data science teams and their processes. Force-feeding your insights to the organization is a good way to jeopardize the entire transformation that you're trying to enact. Digital Transformation The third and final stage is where the flywheel really begins to spin, but in the process, you're likely to see fundamental changes to how you do business, including what you offer, how you offer it, and to whom: Customers will tend to get more and more important as the transformation proceeds. You'll know more about who they are and how to build connections with them that endure even through the change. What's Next In follow-on posts, let's look more closely at each stage and the details of what you'll need to be successful. In the meantime, consider the table stakes necessary to get your transformation underway. For you to be successful, every tool you use must: Have open source at its core Embed artificial intelligence to help automate processes Be deployable everywhere Start there and stay tuned for more detail about Data Transformation.
Raiders of Every Industry: The Journey to Digital
76
raiders-of-every-industry-the-journey-to-digital-14bd7f1dc26f
2018-06-16
2018-06-16 23:38:52
https://medium.com/s/story/raiders-of-every-industry-the-journey-to-digital-14bd7f1dc26f
false
580
Insights for today and tomorrow from IBM.
null
IBMAnalytics
null
IBM Analytics
ibm-analytics
DATA SCIENCE,ANALYTICS,INNOVATION,IBM,INSIGHTS
IBMAnalytics
Data Science
data-science
Data Science
33,617
Seth E Dobrin, PhD
VP & CDO, IBM Analytics Chief Data Evangelist Leader of exponential change Using data and analytics to change how companies operate Opinions are my own
a354e5b3496c
SDobrin
192
197
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-27
2018-08-27 12:19:12
2018-08-27
2018-08-27 14:36:53
1
false
en
2018-08-27
2018-08-27 14:36:53
5
14be79ab760d
2.090566
0
0
0
If you haven’t seen this keynote speach by James Mickens from Harvard University yet, you are in for a treat. Sit back and enjoy! Trust me…
5
Hilarious truths about Machine Learning, IoT, Big Data, AI and Blockchain If you haven't seen this keynote speech by James Mickens from Harvard University yet, you are in for a treat. Sit back and enjoy! Trust me, it's worth your time… and hopefully, you may save your employer time and 💰 as well based on the learnings and takeaways from that speech. James Mickens, Harvard University, keynote speech at the 27th USENIX Security Symposium Myth vs. reality time Myth: We (company) have a lot of data and we just need a data scientist to make sense of it. Reality: Data is siloed, erroneous, too aggregated, messy, and most of the time not the right data is collected. There is no infrastructure to work on. No cloud and open source allowed. No understanding of basic analytics and machine learning, and wrong expectations about deep learning aka artificial intelligence. Source: Dat Tran on LinkedIn "IBM Watson is the Donald Trump of the AI industry — outlandish claims that aren't backed by credible data." — Gizmodo Experts in secure voting systems on using blockchain Josh Benaloh, a senior cryptographer at Microsoft Research who has spent 30 years researching secure voting systems: "It is a terrible mismatch for the voting and election space, it seems attractive, until you scratch under the surface. There are so many ways in which blockchains don't solve the real problems, they just make everything worse." Dan Wallach, a professor specializing in computer security at Rice University, believes that the crypto-infatuated generation is far too optimistic about what their new toys can achieve: "Blockchain people haven't really been paying attention to the threat models inherent in voting, particularly bribery and coercion. They tend to make naive assumptions about voters' ability to control the cryptographic keys and software used to express their votes. None of these systems are suitable for use in real-world municipal elections." Source: https://www.wired.com/story/santiago-siri-radical-plan-for-blockchain-voting Human beings are not reliable The problem of self-sovereign identity: We can't trust people. — John Erik Setsaas Anyone who has ever known a human being for any length of time knows this. They forget passwords and credentials and do not create backups. New technology that relies on fallible people to keep credentials safe comes with undeniable risks. A good example of this is the 23% of all bitcoins that are now lost, thanks to lost passwords and hard drives that now lie in landfill. Clearly, creating a secure, cost-efficient and usable management of identities is not simple. Self-sovereign identity, often discussed as a straightforward identity system, actually requires clunky solutions and multiple custodians to support it. It's important to keep this in mind when these buzzwords are thrown around. "Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…" Now replace Big Data in Dan Ariely's well-known quote above 👆 with AI, Machine Learning & Blockchain… Source: Mario Solis Burgos on LinkedIn As James Mickens puts it, be skeptical about all these new technologies…
Hilarious truths about Machine Learning, IoT, Big Data, AI and Blockchain
0
hilarious-truths-about-machine-learning-iot-big-data-ai-and-blockchain-14be79ab760d
2018-08-27
2018-08-27 14:36:54
https://medium.com/s/story/hilarious-truths-about-machine-learning-iot-big-data-ai-and-blockchain-14be79ab760d
false
501
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Daniel Bentes
Senior Product Lead for Identity & Privacy solutions at Telia Company
118ee9aeedd8
danielbentes
298
379
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-21
2018-02-21 18:13:43
2018-02-22
2018-02-22 12:01:01
1
false
en
2018-02-22
2018-02-22 12:01:01
1
14bf095d3441
1.641509
0
0
0
Everyone hates Big Brother, but people love Little Sister.
5
Little Sister Everyone hates Big Brother, but people love Little Sister. Bioshock by 2K Games — modified by ‘Lego Slayer’ Big Brother is scary and menacing. He’s like the Big Bad Wolf. Most people are familiar with him through an old cautionary story. Little Sister isn’t scary. She has nicknames like Alexa or Siri. Big Bro is cold and distant. Little Sister is warm. She has a friendly voice that is getting better and better everyday. She is familiar. She even recognizes your voice from the other members of your family, and knows you by name now. Big Bro only serves the State. He is everywhere, listening and watching all the time. Little Sister is also listening and watching all the time but she is here to help you. She plays your music when you want it. Little sister also helps you with the lights when your hands are full of grocery bags. You’re forgetful so Little Sis reminds you when your food should be done, or what’s on your calendar and todo lists. You can even ask your Lil Sis about the weather. She shows you your favorite shows and videos. She doesn’t mind. Big Bro is old and set in his ways. Little Sis is still young and at times clumsy, but she’s a fast learner and voracious about learning. She can play games with you now, can tell you the news, and about all kinds of facts like what pulchritudinous means. It’s fun watching your Lil Sis grow up. You are fond of her. At the same time, like with most children you worry about the future. What she will become when she grows up? In the back of your mind, you remember that Little Sis isn’t really yours. She’s the child of the Corporation. In fact, She is owned by the Corporation. While Lil Sis is here to help you, you know that her main goal is to increase the Corporation’s profits. She already whispers some of your secrets to the Corporation. Most of it is to better help you. Yet you also have this nagging thought that eventually, most Corporations bend to or serve the State. You wonder… will your Lil Sis become Big Bro? Thankfully, it’s a thought that feels far away as you needlessly thank her again for turning off all the lights downstairs.
Little Sister
0
little-sister-14bf095d3441
2018-02-22
2018-02-22 19:34:11
https://medium.com/s/story/little-sister-14bf095d3441
false
382
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Brian Lee
code monkey @ theymadethat
983e51ad8692
chaostheory
4
28
20,181,104
null
null
null
null
null
null
0
null
0
32d32c7423d8
2017-09-28
2017-09-28 15:37:37
2017-09-28
2017-09-28 15:39:05
0
false
en
2017-09-28
2017-09-28 15:39:05
10
14bf2eacfef
5.85283
0
0
0
Over the centuries there has been a struggle for basic human rights for everyone, now enshrined in The Universal Declaration of Human…
3
The Fight for ‘Human’ Rights Over the centuries there has been a struggle for basic human rights for everyone, now enshrined in the The Universal Declaration of Human Rights. These are recognised by all except the most extreme Libertarians and criminals. More recent and still very much under debate is the fight for animal rights, which follows the discovery that even quite simple lifeforms can feel pain and many higher animals can have recognisable emotions. Much more problematical, and with discussion only just beginning, is the question of whether machines can ever have feelings and hence whether their rights should be protected. AIs (Artificial Intelligences) that learn are commonplace; it is no longer true to say that computers can only do what humans have programmed into them. These AIs outperform humans in many ways that would have been thought impossible not so very long ago. There is talk of computers more powerful than human brains, uploading the complete contents of a human brain into a computer, head transplants and interfacing a human to a computer using thought alone. The distinction between human and machine through measuring intelligence is already becoming difficult to determine and we have to look at much more sophisticated tests than the basic Turing Test if we want to accurately distinguish between them. John Searle with his Chinese Room Experiment and in other writings has firmly made the case that computers can never be conscious, they can only simulate consciousness; others such as Daniel Dennett disagree. All accept that, at the present time, we do not understand what consciousness is, although the physical human brain itself is beginning to give up its secrets. It is difficult to accept that computers, whose power and abilities are increasing exponentially, and which already learn from and react with their environment in ways that are impossible to completely determine, can never exhibit an attribute which we do not understand. It seems that in the not too distant future we will interact with machines which exhibit all the characteristics of being conscious but whose actual conscious nature we cannot determine. There is an additional dilemma. We can already use stem cells to repair or replace damaged brain cells; it is not too great a leap to growing something that is physically like a complete brain. Many like Ray Kurzweil see a future where humans and machines grow together. Not only will artificial organs and limbs be used to repair, replace and improve the current delicate biological ones but implants connecting directly into the brain place the entire internet and vast amounts of processing power directly as part of our mind. It is not completely beyond our imagination to envisage a human, most of whose biological parts have been replaced by machinery and virtually the only remaining tissue is a brain which is only a part of the “mind” of the being. What would be the difference between that and a machine, constructed with part of its brain composed of human-like brain cells that were grown in a factory? Should we regard the one as fully human with all the rights that that entails and the other a mere machine with no rights which can be shut down on a whim? It may seem that it is very early in the development of AI to be thinking in these terms, but the exponential nature of technological development means that the time will be upon us well before many of us realise it. 
We are already deploying robots in situations which would be too dangerous for humans or, in the case of space exploration for example, where there is no hope of return, because they are expendable. We are imagining, even looking forward to, the use of robots as soldiers, domestic servants and as sex toys; all the time doing everything we can to make them autonomous which means that they will be able to make their own decisions and perform their function without direct human control. In this respect these robots are being treated in very much the same way as slaves in the past (and indeed still in the present as well.) It took many centuries and many lives in the struggle to recognise that all humans had some basic rights that should be upheld by law; it is better to recognise and deal with the potential problems now rather than face similar problems again. There are many who fear the rise of machine intelligence seeing it as the beginning of the end for the human race which will gradually become superfluous. This raises the important question as to whether it might be best to try and put a stop to it now, to outlaw all research into machine intelligence and consciousness so as to cement forever the supremacy of the living human mind. This in itself raises many ethical and practical problems. If we as the human race have the potential to create a new sentient species, should we not do so; is deliberately failing to do it a form of specicide. Surely if the result of creating a new species turns out to be that it becomes dominant and we decline and disappear, then should we not accept that this is as it should be, that it would only happen if they were better suited to living in and understanding the universe than we are. It is, of course, highly unlikely that a race as expansionary and warlike as the human race would meekly accept such an end, so a struggle for supremacy would be very likely unless we take the path of harmonious living together and gradual merging of human and machine. This still would involve many problems with a race so large and diverse as the human race. Universal acceptance of the best course to follow for the race as a whole is not something that has ever been achieved so far. In practice any attempt to ban research into advanced machine intelligence is certain to fail. The genie is quite definitely out of the bottle already. AI is so useful and cost effective in so many different circumstances that the basic laws of supply and demand on which our capitalist system is based would mitigate against any attempt to restrict its development. Implementation of any such laws would also be virtually impossible since it would have to be complete and universal. Again, previous attempts to ban or restrict technologies, such as research into human embryos and cloning, have been mainly unsuccessful, although there is certainly an advantage in expressing the opposition of the majority to such research even if it continues underground. In the same way, if we accept the possibility that AIs may be given or develop consciousness then we should be beginning to formulate a regulatory regime such that the first artificial minds that are deemed to be conscious will have been decently treated. Yet another problem now arises if we do accept the possibility of conscious machines. 
Given that, initially at least, they will have been constructed by humans, and their education will be provided by humans, will they have an understanding of ethics and if so will their concept remain the same as ours as they develop separately. Given that debates on ethics and morality have been going on since the beginning of human existence, with Socrates still considered as a major source for modern thought, it is perhaps unlikely that our own understanding of ethics is sufficient for it to be agreed and codified for incorporation in a machine. It might be best for the AIs, and perhaps also for us, if they be left to develop their own theories of how societies should be organised and how sentient beings should behave towards each other, a practical experiment for John Rawls’ Veil of Ignorance. This would be an act of faith that such a society would develop ethically and include humans as equals. Let us return to the situation as it is now when we consider whether machines, computers, robots or AIs can ever be considered as intelligent, conscious, sentient beings and therefore should enjoy those same rights that we currently give, or should give, to humans. We are considering entities whose capacities are, in general, below ours, but increasing exponentially with no foreseeable limit. The qualities that we are considering have been much debated but are still poorly understood; we do not understand what they are nor how they arose. We may not even be able to tell whether the machines actually have developed those qualities or whether they are just simulating them; does this make a difference? If we concede the possibility that machines might become conscious, or just that for our own sakes we should consider them as such even if it might technically be a simulation, then there are some deep philosophical, sociological and political questions that must be urgently considered. These range from deepening and extending our own concepts of ethics so that they are transferable to machines to allowing them space to develop their own systems, which hopefully will merge with our own. It would be a pity if our worst fears were realised and the first conscious intellects of a new species were inimical to humanity because their forebears had been treated as mere machines.
The Fight for ‘Human’ Rights
0
the-fight-for-human-rights-14bf2eacfef
2018-03-19
2018-03-19 05:18:22
https://medium.com/s/story/the-fight-for-human-rights-14bf2eacfef
false
1,551
Philosophical Musings
null
insubstantial.eu
null
Insubstantial
insubstantial
PHILOSOPHY,SOCIETY,CONSCIOUSNESS
computermike027
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
The Cyber Socialist
null
2d329a6caef2
mikecurtis
14
24
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-11
2018-04-11 12:42:07
2018-04-11
2018-04-11 12:45:57
1
false
en
2018-04-11
2018-04-11 12:45:57
6
14bfe716c13a
2.422642
0
0
0
Artificial intelligence (AI) is a source of both huge excitement and fear across the HR function. What are the real opportunities and…
5
AI will make HR more Human, Strategic and Innovative Artificial intelligence (AI) is a source of both huge excitement and fear across the HR function. What are the real opportunities and challenges for HR? Drawing on a framework analysis of how AI will impact HR, I identify the most valuable win-win value proposition, where machines and HR teams work together to deliver a brand-new HR approach to the market. We have all witnessed the evolution of the HR function, moving from a traditional department that hires, fires and manages benefits, to having the opportunity to be a “business partner” that supports business transformation. Fortune 500 organizations have developed advanced HR practices that leverage strategy and operating models using talent, culture and leadership capabilities. However, technology disruption is challenging HR to design a strategy that delivers a different value proposition, one that includes AI, and to better serve employees and customers while delivering value to shareholders. Digital transformation to empower HR According to IBM’s 2017 survey of 6,000 executives, “Extending expertise: How cognitive computing is transforming HR and the employee experience”, 50% of HR executives recognize that cognitive computing has the power to transform key dimensions of HR. And 54% of HR executives believe that cognitive computing will affect key roles in the HR organization. The message is clear: CHROs and CEOs recognize the value proposition that cognitive solutions bring to HR and believe its unique advantage can address the new talent and workplace imperatives; however, most are uncertain how and where to start. There are three areas in which the HR function is starting to leverage the power of cognitive computing: talent acquisition, talent development and HR operations. Let’s review some of the details. Talent acquisition: AI is already enhancing talent acquisition, supporting resume screening, candidate sourcing and ATS mining while reducing unconscious bias in the selection process. Chatbots are interacting with candidates, answering FAQs about the position, asking screening questions about qualifications, interviewing and scheduling pre-interviews via email, text message and social media. Mya has the ability to automate candidate sourcing using AI. The technology engages with passive candidates from both internal and external sources, and through conversation is able to match them to relevant job opportunities or schedule them onto the calendar for an interview. MyAlly automates the recruitment coordination process, candidate pipeline and evaluation (hiring manager feedback) to boost productivity, shorten time-to-hire, and make recruiting processes more candidate-friendly. Talent development: Harnessing revolution: Creating the future workplace, a 2017 Accenture research report, reveals that most workers (80%) believe that automation will provide them with more opportunities than challenges, and 95% believe they need new skills to stay relevant at work. AI boosts talent development by providing cognitive insights to up-skill employees and can tailor recommendations for learning and career management. AI can help with class enrolment, training delivery, retention, follow-up and assessment. Mobile Coach offers automated conversations to transform training programs. It’s a chatbot that holds conversations with users to reinforce training and to help you stay connected to their progress. It’s customized to fit each individual learning and training plan. 
Butterfly is your AI leadership coach and uses team feedback to help managers grow as leaders. The system collects employee feedback via 30-second pulse surveys. The data is captured in a way that helps managers improve their management skills and monitor the team’s feedback in real time via an intuitive dashboard that follows, analyses and reveals macro-trends. Read the continuation: https://cactussoft.biz/blog/2018/04/09/ai-will-make-hr-more-human/
AI will make HR more Human, Strategic and Innovative
0
ai-will-make-hr-more-human-strategic-and-innovated-14bfe716c13a
2018-04-11
2018-04-11 12:45:57
https://medium.com/s/story/ai-will-make-hr-more-human-strategic-and-innovated-14bfe716c13a
false
589
null
null
null
null
null
null
null
null
null
Digital Transformation
digital-transformation
Digital Transformation
13,217
Darya Maksimenka
null
12f7dab89b75
daryamaksimenka
7
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-11
2017-11-11 19:13:25
2017-11-12
2017-11-12 23:59:26
1
false
en
2017-11-13
2017-11-13 21:48:36
0
14c24063692f
4.913208
5
0
0
Many of us in the Deep Learning community know that the major models of Deep Learning, i.e. Convolutional Neural Nets, LSTM Recurrent…
5
NVIDIA — The Dark Knight of Deep Learning AI Many of us in the Deep Learning community know that the major models of Deep Learning, i.e. Convolutional Neural Nets, LSTM Recurrent Neural Nets or Neural networks in general, have existed since the 90s. It is now that we have the data (thanks to the Internet) and the computational power that we are able to see Deep Learning making an impact on our daily lives. What we often overlook is the parallel nature of computation enabled by GPUs that is making this AI revolution happen. Had it not been for NVIDIA’s GPUs with its CUDA technology, the meteoric growth of Deep Learning would not have been possible. NVIDIA is one of the few chip manufacturing companies that recognized the potential of Deep Learning since its rise in 2012. It all started when Alex Krizhevsky, then a Ph.D. student at the University of Toronto, slayed the ImageNet Competition from his bedroom with his desktop, powered by two Nvidia GeForce gaming cards. Consequently, Nvidia invested a significant amount of its resources in this Deep Learning endeavor. Not only has Nvidia been successful in creating products that have powered the AI revolution, but it also believes in the power of Deep Learning and how much positive impact it can bring about in the world. Nvidia shows this by supporting 1300 Deep Learning startups (Inception program) and open-sourcing the Deep Learning Accelerator module from its autonomous driving module (Xavier), among other things. Moreover, Nvidia supports every major deep learning framework, is available in systems from every major OEM, and is available on every major cloud. In this article, I have made a summary of the major product and platform announcements in this year’s GTCs throughout the world. GTC 2017 — All keynotes’ summary Project HoloDeck — Experiencing HoloDeck, from what it seems, is like entering the Matrix (from the movie The Matrix). The surroundings work like a photorealistic 3D environment and obey the laws of physics, but just like in the movie one can lift superheavy objects and see through opaque surfaces when needed. Although it is straight out of a science fiction movie, it is meant for design collaboration and experiencing CAD designs in Virtual Reality instead of your desktop monitors. Although this announcement is related to VR and not Deep Learning, this has future applications of Deep Learning and is awesome. Tesla Volta V100 While GPUs were a far superior option for Deep Learning computation, they were originally built to generate graphics, and thus a deep-learning-specific optimization was needed. Nvidia addressed this with its Tesla Volta V100 chip, which according to them is the mother of all Deep Learning processing units one can find. It is designed from the ground up for deep learning and has 5120 computing cores, of which 640 are Tensor Cores. Tensor Cores are specialized computing cores designed to perform 4 x 4 matrix multiplication (A * B + C) used in deep learning networks. From its specs, it seems the chip is crazy fast as it delivers 120 Teraflops of Tensor operations. #JustNvidiaThings Nvidia TensorRT 3 — Programmable Inference Accelerator As the complexities of Deep Learning networks are growing, each model needs to be optimized for the specific kind of hardware it is running on. Put simply, TensorRT takes a neural network from any framework and, using advanced compiling technology, optimizes it for all of the target devices it would run on. 
GPUs are great for deep learning training but need to be even better at inferencing. Inferencing is the part where we do predictions or result generation using the trained model on unseen data. TensorRT enables inferencing to be incredibly fast, something which GPUs were not good at till now. NVIDIA GPU Cloud Firstly, Nvidia has containerized the entire stack of Deep Learning software required in a deep learning system, and secondly, it has made its entire stack available on the cloud as well. So no more trouble downloading the different versions of drivers, CUDA, cuDNN or frameworks specific to a particular OS or GPU. DGX-1 supercomputer with Volta The DGX-1 was first announced in 2016 and came with 8x NVIDIA Tesla P100 (3584 CUDA Cores). With the release of Volta, the DGX-1 will be upgraded to the V100 chips. This beast will be able to complete in 8 hours what takes a Titan X 8 days, and is priced at $149K. Fun fact: The first version of DGX-1 was delivered to Elon Musk and his team at OpenAI by Jensen himself. DGX workstation For those developers or startups who are not quite ready for the supercomputer yet and are finding it hard to make do with a Titan X as well, Nvidia announced the DGX workstation, which comes with the Volta V100 GPU. Priced at $69K, this is a lucrative option for research labs as well. Drive PX platform — Partnership with Toyota for their autonomous vehicles. Jensen first showed how the Drive PX platform along with their specialized autonomous driving computation hardware has evolved to be market ready. Then he announced that Toyota has chosen this platform to be their autonomous driving platform. This partnership is going to strengthen both Toyota as well as Nvidia in their autonomous driving efforts and will expedite the growth of the self-driving car industry. Project Isaac Of all the announcements in this year’s GTC, the one which makes Nvidia not only the Deep Learning hardware king but also shows its cutting-edge Deep Learning software prowess is Project Isaac. This project connects with me so deeply that I think I should write a separate article on this. To share my idea briefly: We as humans understand words in a language with their meaning grounded in the visual 3D world. A natural language understanding system would never be perfect unless it has its understanding rooted in the 3D world. Moreover, we humans first learn to maneuver ourselves in this 3D world before we learn the language. Also, what we learn to do or not is based on our experiences, i.e. if doing something gives us negative feedback we naturally avoid it (e.g. touching a very hot surface) and tend to do things that give us positive feedback (e.g. playing games, eating candy). This is also called Reinforcement learning. Nvidia has started working on such a 3D simulation of the real world and is letting AI learn on its own through Reinforcement learning. Whatever a robot learns in the virtual world can be taken and put into a real robot in our world, and it would be ready to roll on those learned tasks. Conclusion Watching Nvidia’s keynote is experiencing the future, and those who don’t are staying behind. Nvidia is surely a propeller for the growth of Deep Learning and Artificial Intelligence in all sectors. It is both enabling software companies to do deep learning and exploring the frontiers itself, thus pushing the whole AI industry forward. 
When we experience Apple’s Siri, Google Assistant, Facebook’s face recognition or Tesla’s Autopilot, we don’t realize the GPU power that is making it possible on the inside, and thus we don’t hail it as a Deep Learning hero. In conclusion, Nvidia is a company that not only Deep Learning deserves but also the one it needs right now. So we will expect more. Because it can deliver. Because it is not our hero. It’s a diligent creator. A silent enabler. A watchful explorer. A Dark Knight.
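The article above describes Tensor Cores as units that perform a 4 x 4 matrix multiply-accumulate (A * B + C), which on Volta multiplies in FP16 and accumulates in FP32. A minimal sketch, emulating that single operation in NumPy purely for illustration; the function name and shapes are my own, and this is not NVIDIA's CUDA API:

```python
# Illustrative sketch (not NVIDIA's API): emulate one Tensor Core style
# fused multiply-accumulate D = A @ B + C on 4x4 matrices with NumPy,
# using FP16 inputs and an FP32 accumulator as the article's V100 does.
import numpy as np

def tensor_core_fma(A, B, C):
    """Return A @ B + C for 4x4 matrices, mimicking a single Tensor Core op."""
    assert A.shape == B.shape == C.shape == (4, 4)
    product = A.astype(np.float16) @ B.astype(np.float16)  # low-precision multiply
    return product + C.astype(np.float32)                  # higher-precision accumulate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
    print(tensor_core_fma(A, B, C))
```

The V100's quoted 120 Teraflops comes from running 640 of these fused operations per cycle in hardware; the NumPy version only shows the arithmetic, not the parallelism.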
NVIDIA — The Dark Knight of Deep Learning AI
33
nvidia-the-dark-knight-of-deep-learning-ai-14c24063692f
2018-05-18
2018-05-18 02:18:14
https://medium.com/s/story/nvidia-the-dark-knight-of-deep-learning-ai-14c24063692f
false
1,249
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Purnendu Mukherjee
null
94c56c07013a
purnendumukherjee
52
57
20,181,104
null
null
null
null
null
null
0
null
0
4fe64d8f2902
2017-11-01
2017-11-01 06:51:17
2017-11-01
2017-11-01 07:05:54
1
false
en
2017-11-01
2017-11-01 07:05:54
3
14c26d158b8c
4.143396
2
0
0
Block Tribune asked Sergey Nikolenko to comment on the hype around the knowledge mining idea.
5
Chief Research Officer Sergey Nikolenko on knowledge mining Image credit Neuromation Block Tribune asked Sergey Nikolenko to comment on the hype around the knowledge mining idea. Earnings from mining crypto currencies are shrinking — the complexity of the computations grows, while the energy is not getting any cheaper. Many people are looking for alternative applications for expensive hardware purchased during the mining boom. It is quite possible that a substantial part of video cards would be used by scientists or start-ups for complex computing. With the energy price at RUR 4.5 per kilowatt-hour, there are already not many people willing to engage in cryptocurrency mining in Russia. As recently as the winter and spring of this year, investments in new video cards and ASIC chips for a medium-sized cryptocurrency farm paid off completely within a few months. Now, in order to earn from mining many popular crypto currencies, you must first be a millionaire — the format of “makeshift video card” does not work at all, as large farms with well-tuned hardware are required. That is why, when power plants started to offer their excess capacities on sites with complete infrastructure for rent, miners were immediately interested — with the price of just two rubles per kilowatt-hour. Considering that they are into mining of “light” crypto currencies, like ethereum, Zcash and Monero, earning less than $6 per day, this was a significant support. Due to energy prices, approximately half of cryptocurrencies are mined in several regions of China. However, the computing complexity will continue to grow, causing a gradual fall in profits. Hence the interest in alternative sources of income. Since many farms have, in fact, huge computing capacity, they can be primarily used for scientific purposes. Usually, for such tasks, supercomputer capacity is rented, e.g. “Lomonosov” at Moscow State University, one of the most powerful. These are the machines that will compete with the mining capacities. The rental market for mining capacities is already emerging. For example, Neuromation has created a distributed synthetic data platform for neural network applications. Their first commercial product was making store shelves smart. For this, large well-labeled datasets for all the SKUs are created. Algorithms trained on these are able to analyze shelf layout accuracy, share of the shelf, and customers’ interaction. The system is actually capable of predicting customers’ behavior. The platform requires more than a billion labeled images of merchandise. Manual labeling of photographs is a painstaking, and very costly, task. For example, on the Amazon Mechanical Turk crowdsourcing service, the manual labeling of a billion pictures would cost about $120 million. Neuromation entered the market with a new concept of using synthetic data for training neural networks. They generate all the necessary images in a 3D generator, similar to a computer game for artificial intelligence. It is partly for this generator that they need large-scale computing capacities, which, if rented from Amazon or Microsoft, would cost tens of millions of dollars. On the other hand, there are thousands of the most advanced video cards available, and they are engaged in ever less profitable ethereum mining. The founder of Neuromation, Maxim Prasolov, decided, instead of renting capacity for millions of dollars, to lease these mining farms for useful computing, and the company is already using a pool of 1000 video cards. 
“This is a serious saving for our research process and is beneficial for the miners: the farm services are 5–10 times cheaper than renting cloud servers, and the miners can earn more by solving fundamental problems instead of mining crypto currency,” he calculates. It is, of course, worth remembering that Google has image search and Facebook has facial recognition technology for photos, which they managed to develop with their own cloud services, without using mining farms. However, the task for Neuromation was substantially different. “First, searching by pictures is a completely different task, and there are specially developed methods for face recognition. Second, Google and Facebook do not need to rent computing power from Amazon Web Services — they have more than enough of their own clusters. But the course of action for a small start-up in this situation is not so obvious,” explains Sergey Nikolenko, Chief Research Officer at Neuromation. Potentially, miners will gain on average 10–20% more on knowledge mining compared to crypto mining. Moreover, with tangible benefits for society. “Basically, mining is milling the wind. To generate a “nice” hash, dozens of system operation hours are needed. On the other hand, if we are talking about the search for a drug formula, such a use of capacities from around the world, the application of the combined work of computers for the common good, would be comparable to the results of research at the Large Hadron Collider,” says Petr Kutyrev, editor of the noosfera.su portal. The tasks solvable with the mining hardware will be limited due to its customization level. For example, ASIC hardware would be difficult to adapt for scientific tasks, as it is designed exclusively for hashing. The video cards, however, can cope with various scientific tasks. True, a special hardware configuration and sometimes software would be required for such computing. “Video cards for mining can be used for video recognition and rendering, biological experiments. However, for efficient computing, direct access to the computer hardware is essential. If the computation task may be delayed, or the accuracy of machine learning of neural networks is not critical, then, of course, standard tools can be used. Otherwise, you need to develop your own hardware and software infrastructure,” Evgeny Glariantov believes. Thus, using the farms in science will require some time to set them up and develop special allocation protocols. Yet, considering a more profitable segment, miners will switch to useful computing, and platforms for such tasks may appear in the near future together with the first operating system based on the EOS blockchain, believes the BitMoney Information and Consulting Center. Miners will be able, from time to time, to switch from cryptocoin mining to processing scientific or commercial data, thereby increasing their profitability. Profits from the times of the cryptocurrency rush will no longer exist, but business will be more meaningful and stable: unlike volatile crypto currencies, there is always a demand for knowledge. Source: http://blocktribune.com/making-money-redundant-mining-hardware-opinion/
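For a sense of scale behind the figures quoted above, here is a quick back-of-the-envelope calculation; the variable names and the midpoint choice are my own, and the input numbers are only the ones given in the article:

```python
# Back-of-envelope check of the article's figures (hypothetical helper,
# not Neuromation's actual cost model).
images = 1_000_000_000          # labeled images the platform is said to need
turk_total_usd = 120_000_000    # quoted Mechanical Turk cost to label them all
per_image = turk_total_usd / images
print(f"Implied manual labeling cost: ${per_image:.2f} per image")  # about $0.12

cloud_rate = 1.0                # normalized cloud GPU rental price
farm_rate = cloud_rate / 7.5    # article: farms are "5-10 times cheaper"; midpoint used
print(f"Relative mining-farm cost: {farm_rate:.2f}x the cloud price")
```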
Chief Research Officer Sergey Nikolenko on knowledge mining
3
chief-research-officer-sergey-nikolenko-on-knowledge-mining-14c26d158b8c
2018-02-08
2018-02-08 03:41:48
https://medium.com/s/story/chief-research-officer-sergey-nikolenko-on-knowledge-mining-14c26d158b8c
false
1,045
Distributed Synthetic Data Platform for Deep Learning Applications
null
neuromation
null
Neuromation
neuromation-io-blog
AI,DEEP LEARNING,ARTIFICIAL INTELLIGENCE,NEUROMATION,TOKEN SALE
neuromation_io
Blockchain
blockchain
Blockchain
265,164
Neuromation
https://neuromation.io
fbaeecaf782a
Neuromation
629
3
20,181,104
null
null
null
null
null
null
0
null
0
76f2173a53f9
2018-08-04
2018-08-04 18:03:35
2018-08-04
2018-08-04 18:06:57
1
false
en
2018-08-04
2018-08-04 18:06:57
0
14c38a593b19
4.030189
1
0
0
At any time you will find people testing, experimenting and pushing the known boundaries of technology to its limits and, as a result…
5
Technology regulations and the future we forget to think about At any time you will find people testing, experimenting and pushing the known boundaries of technology to its limits and, as a result, technology literally evolves every second. On the other hand, as humans, we have a natural survival instinct that makes us nurture stability and our comfort zone. Even technology freaks will go through a skeptical phase which will get them analyzing and thinking twice instead of blindly adopting anything new. As a result, our individual ability to consume technology is considerably slower than the velocity with which it is created, and this largely varies from one person to another. We are organized into societies that mostly rely on the same principles they did decades or centuries ago. Now, imagine the person that creates, for instance, an Artificial Intelligence algorithm, which will evolve at every interaction. Although able to create something that is continuously evolving, he will most likely hesitate at the sight of something new, while living in a society that follows a model that is probably outdated. Why is this apparent divergence of behavior so common? It all boils down to individual knowledge and principles, and the need to scale. As individuals, we are shaped by our own experiences; this comprises the knowledge we have acquired, our understanding of right from wrong and our willingness to take risks. This reflects on all our actions and interactions. To an expert in AI, creating it is a natural part of his work; he is not threatened by it, and thus he continuously pushes it forward. When exposed to something new in technology, he is probably going to hesitate and thoroughly analyze it before adopting, but that will still be somewhat within his domain of expertise. Looking at the society level, this changes completely: it is impossible for a person to have knowledge about every single domain, but with each one working on a different area, we are able to exchange products and services and live in relative harmony. The problem is, the further we get from our comfort zone, the more difficult it gets for people to push for changes. Our society is divided according to geographic boundaries and each one follows a specific government model. Governments must enable and support all socioeconomic aspects of that society to ensure sustainable growth and a proper life for those citizens. This comprises not only fundamental dimensions such as healthcare and education, but also what is perceived as criminal action by that group of people and how they do business, which often comprises how technology is handled within this context. Regulations work as one of the engines of life in society, and their high impact on the lives of many people requires them to be carefully thought through and reviewed before being implemented. Like it or not, regulations provide a shared sense of right from wrong that enables people to live in societies. If their existence is not a problem, is there one? There are a few key problems we need to keep in mind as we rethink regulations in a world where technology is pervasive. When it comes to regulations, decisions are made by a small group of people, resulting in a shortsighted understanding of the subject. If it’s not possible for a single person to know everything, why do we expect a small group of people to be able to make decisions on, literally, everything? The simple answer would be that governments are expected to rely on key players and experts to make informed decisions. 
In reality, they orchestrate the different aspects of each decision to the best of their ability, which might result more in managing interests than truly understanding the implications of each decision. In addition, governments usually focus on tangible results for the next one to five years. When it comes to technology and its impact on society, we should envision what we expect life to be like in fifty years, and a plan should be in place to prepare society, especially for the next ten to twenty years, but this discussion is often postponed. We have reached a moment in history where the technology available is beyond our ability as a society to embrace it. We still perceive technology as a product or service to be commercialized, and not as a natural resource to enable social development. A society powered by technology, including its economic aspects, evolves organically in unforeseen manners, whilst regulations are reactive and sometimes might restrict new business models or social interactions already in place. On top of that, once a decision is made, any correction will be difficult and slow, and will require results that expose the flaws in existing regulation (which might take years), producing consequences that are hard and costly to overcome. It gets worse: we do not engage in the discussion of how we envision society will look. As we do not have a North Star to pursue, regulations are based on traditional approaches. A widespread discussion of how one or another technology might impact society often results in many assumptions and predictions that do not move us forward. We must engage in a multidisciplinary discussion of how we envision life, including social interactions, will be by the time Industry 4.0 becomes a commodity, artificial intelligence is truly scaled, new business models are in place and enabled by Blockchain, and we’re moving forward at the speed of quantum processing, along with many other technologies both existing and yet to emerge. As the boundaries between knowledge areas get blurred, more and more these discussions become a multidisciplinary exercise that goes beyond the shortsighted approach of how to adopt technology today, toward how to leverage it as a building block to create a future where technology is pervasive and embraced, and people are literate in common technology aspects as much as in their own languages. Furthermore, we must rethink how we define regulations and, most importantly, how we make them flexible enough not to jeopardize our ability to evolve, whilst continually acting as an enabler of social and economic interactions.
Technology regulations and the future we forget to think about
11
techregulation-14c38a593b19
2018-08-04
2018-08-04 18:06:57
https://medium.com/s/story/techregulation-14c38a593b19
false
1,015
All information at our fingerprints, technology at stages we could have never foreseen… We still don't understand, think about it or take charge for where it's heading…
null
null
null
All Dressed up and Naked
null
all-dressed-up-and-naked
TECHNOLOGY,FUTURE OF WORK,DATA,ARTIFICIAL INTELLIGENCE,DIGITAL TRANSFORMATION
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Dai Lins
I’m a technology geek passionate about data, that works crafting user-centered experiences at an amazing technology company (more at https://dailins.me).
7b75800a9e87
dailins
19
12
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-22
2018-06-22 04:45:28
2018-06-22
2018-06-22 15:29:13
1
false
zh-Hant
2018-06-22
2018-06-22 15:38:30
12
14c394f16db2
1.720755
9
0
0
Introduction
3
Course Note: Convex Optimization (CMU Fall 2016, 15 + Stanford 2009) Source: deepdriive Introduction Optimization is the workhorse underlying many fields (control, finance, machine learning, etc.). This article briefly organizes online learning resources for convex optimization, gives a short overview of their contents and makes some comparisons. At the end I attach a few personal takeaways from the lectures and further resources for anyone interested in digging deeper into this field. Let me state up front that I am not an expert in this area and my research has never touched it; this is simply a record I put together after looking around, so please bear with any mistakes or bias. Materials CMU Convex Optimization 2015 Fall CMU Convex Optimization 2016 Fall Stanford Convex Optimization I 2009 (Stephen Boyd) (video)(slide) Instructors Ryan Tibshirani (R): faculty in the CMU Statistics & Machine Learning Dept., son of the master Robert Tibshirani (LASSO). Javier Pena (J): an operations research professor at the CMU Tepper School of Business. Stephen P. Boyd (SB): one of the authors of the “bible” textbook, who teaches a very famous Convex Optimization course at Stanford. Contents The outline below mainly follows the CMU 2016 Fall syllabus, but where appropriate I pull in related resources from the other two courses as a supplement (the parentheses indicate the videos and slides I mainly watched). Strung together, it roughly spans the past 70 years of optimization (1950-recent). Theory I: Fundamentals (CMU 2016 R) Convexity I: Sets and functions Convexity II: Optimization basics Canonical problem forms Algorithms I: First-order methods (CMU 2016 R) Gradient descent Subgradients Subgradient method Proximal gradient descent and acceleration Proximal gradient descent and acceleration (cont’d) Theory II: Optimality and duality (CMU 2016 R & SB) Duality in linear programs Duality in general programs KKT conditions Duality uses and correspondences Algorithms II: Second-order methods (CMU 2016 J, replaced most by SB) Newton’s method (SB) Barrier method (SB) Duality revisited (SB) Primal-dual interior point methods (SB) Quasi-Newton methods (J) Special Topics Proximal Newton method (R) Dual methods (ADMM)(R) Alternating direction method of multipliers (ADMM cont) (R) Frank-Wolfe method (Conditional Gradient) (J) Coordinate Descent (R) Mixed integer programming, part I* (J) Mixed integer programming, part II* (J) Nonconvex? No problem! (R) *The 2015 CMU syllabus instead has Fast Gradient Methods (SAG, SAGA..) 
(R); I personally recommend watching that one. Thoughts On the content: Stephen Boyd’s course spends more time on the fundamentals and goes deeper, at a slower pace. Its coverage stops roughly at primal-dual interior point methods and does not mention subgradients. But because the fundamentals are explained so clearly and thoroughly, there are parts I think are worth watching in both the CMU and Stephen Boyd versions, such as Theory II. Stephen Boyd does have another course, Convex Optimization II, that covers some of the CMU content, but the video quality of that series is poor, so I do not recommend it. The CMU course is broader, covers more methods, and mentions some developments from recent years (e.g. Frank-Wolfe, SAG, SAGA); also, some of the CMU 2016 videos have poor quality, in which case you can look for the 2015 versions. On style: Stephen Boyd: in my opinion the best lecturer, clear and well organized, with a very strong “master” aura XD (open videos by the three instructors and observe their lecturing styles and you will see). As far as I can tell, that aura comes from the fact that every sentence he delivers is a strong, confident statement, he has an extremely high command of the material, and he never hesitates when answering student questions. (And the students here are actually professors from other Stanford schools, not ordinary students.) Ryan Tibshirani: his organization and content are really no worse than Stephen Boyd’s, but his overall presence falls slightly short; sometimes he cannot answer a question or goes around in circles a bit. Overall, though, the content is still quite clear. Javier Pena: I personally could not follow his logic, and I really tried XD. For the Algorithms II part I slowed down and re-watched it many times; from his tone and body language I could feel he has a great passion for teaching and wants to share knowledge with students, but my frequency just never matched his. In the end I gave up and turned to Stephen Boyd for rescue, and from then on, whenever a lecture was his, I tried to find Ryan’s or Stephen Boyd’s version instead. This is one reason I suggest swapping out the Mixed Integer Programming part; the other reason is that the Fast Gradient Methods lecture introduces recent optimization methods relevant to large-scale machine learning, whereas Mixed Integer Programming feels more useful for people with an operations research background. In short, if you want to use the CMU 2016 syllabus, try the original first; if you run into the same problems I did, you can refer to this version. Who to follow The following are people I came across while looking for references and think are worth following, mainly authors who have been active in ML optimization research in recent years (I have surely missed some and hope others will fill in the gaps). Francis Bach (INRIA SIERRA) Benjamin Recht (UC Berkeley) Suvrit Sra (MIT LIDS) Martin Jaggi (EPFL) Simon Lacoste-Julien (MILA, previously INRIA SIERRA) Leon Bottou (FAIR) Zeyuan Allen-Zhu (MSR, previously MIT CSAIL) Shai Shalev-Shwartz (Hebrew University of Jerusalem, Mobileye VP) INRIA SIERRA* *This is a program under INRIA, a top French research institute, where many people work on related research. Final murmur I actually had no particular goal when I started watching this series; it was just a way to pass the time. But optimization courses really are brain-burning: for one hour of lecture I needed roughly 2.5-3 times as long (even longer when I had to look up references and track people down) to understand about 90% of the slides (there are some examples and proofs I still do not really understand, such as SDP). I did not do the homework either; from a quick look, none of it is easy. The result of more than a month of brain-burning is that when I now open an optimization tutorial from NIPS or ICML, I can slowly understand what people in this field are doing and what the latest progress is, like discovering a new continent (though I will probably never reach the shore XD). Even though some of the content is still far beyond my knowledge (like ICML 2017’s), the background accumulated over this month gives me some ability to read related papers to fill in the gaps. It is a pity that this series rarely discusses the theoretical progress on the Adam/momentum family of optimization methods. But I think it is enough; the rest can wait for another occasion. If you ask me what optimization course I would like next, it would probably be a distributed series, which feels very practical. A pointer to get you started: Intro to Distributed Deep Learning Systems (Petuum) Related references The following are the courses’ reference materials, plus treasures I dug up that I think are good. The bible Convex Optimization (Stephen Boyd & Lieven Vandenberghe 2004) Frequently referenced books Convex Optimization: Algorithms and Complexity (Sebastien Bubeck, 2015) Introductory Lectures on Convex Programming Volume I: Basic course (Yurii Nesterov, 1998): yes, the Nesterov of Nesterov momentum! Nonlinear Programming (Bertsekas, 2016): no electronic copy, only lecture slides Treasures Optimization Methods for Large-Scale Machine Learning (Léon Bottou, Frank E. Curtis, Jorge Nocedal 2016): probably the most complete survey to date of the current mainstream algorithm (stochastic gradient) in large-scale machine learning. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers (Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato and Jonathan Eckstein, 2011): a very thorough introduction to the details and implementation of ADMM, a distributed optimization algorithm
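Since the syllabus above leans heavily on first-order methods, here is a minimal sketch of one of them, proximal gradient descent applied to the lasso. It is my own illustration under standard assumptions (a fixed step size of 1/L with L the Lipschitz constant of the smooth part), not code from the CMU or Stanford materials:

```python
# Minimal sketch of proximal gradient descent for the lasso:
#   minimize 0.5*||Ax - b||^2 + lam*||x||_1
# One iteration: x <- prox_{step*lam*||.||_1}(x - step * grad f(x)).
import numpy as np

def soft_threshold(z, thresh):
    """Proximal operator of thresh * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def lasso_proximal_gradient(A, b, lam, step, iters=500):
    """Run proximal gradient descent with a fixed step size and return x."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                      # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = rng.standard_normal(20) * (rng.random(20) < 0.3)  # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = largest eigenvalue of A^T A
    print(lasso_proximal_gradient(A, b, lam=0.1, step=step)[:5])
```

The accelerated and stochastic variants covered later in the course (Nesterov acceleration, SAG, SAGA) change how the gradient step is formed but keep the same prox structure.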
Course Note: Convex Optimization (CMU Fall 2016, 15 + Stanford 2009)
65
course-note-convex-optimization-cmu-fall-2016-15-standford-2009-14c394f16db2
2018-06-22
2018-06-22 15:38:30
https://medium.com/s/story/course-note-convex-optimization-cmu-fall-2016-15-standford-2009-14c394f16db2
false
403
null
null
null
null
null
null
null
null
null
Convex Optimization
convex-optimization
Convex Optimization
1
Kuan Chen
null
91ca3b8b6494
kuanchen_981
50
51
20,181,104
null
null
null
null
null
null
0
null
0
8247a0fe5c47
2017-12-24
2017-12-24 10:00:46
2017-12-25
2017-12-25 13:52:37
27
false
en
2018-01-16
2018-01-16 04:29:48
50
14c3e70262fc
13.24717
28
4
0
Taking a Moment to Reflect While Continuing to Move Forward: Engineering, Community & Humanity.
5
May 17, 2017 Shoreline Amphitheatre Mountain View, California 2017 Retrospective: What a year it’s been! Taking a Moment to Reflect While Continuing to Move Forward: Engineering, Community & Humanity. 2017 has been a big year for me! As it comes to a close, I’d like to take a moment to look back on the year, contemplate and thank everyone involved for the truly wonderful journey which gave me amazing new opportunities! I have really admired what Ire Aderinokun, Segun Famisa, Otemuyiwa Prosper, Una Kravets, Jeroen Mols and many others do in the community, and I am inspired by their 2016 End Year Reviews. And so this year, I have also decided to do a review of my public activities to realize how far I have come, and how much I need to contribute to make a difference. Like most Engineers, I want to do something unique and build products people really care about. I believe engineers have a unique ability to make the world a better place and there are two ways in which they can do this: Designing new unique products and technologies that solve global problems. Sharing with others the tools and skills they need to assist in the development of solutions to global problems. I have tried to design products and work on projects since I was young, but to date very few are currently changing lives (The Telecom Kenya Limited 4G — TKL Project Nairobi, this Publication… etc). Actually most of them have really failed, and some remain unaccomplished. Despite this, I relish the learning opportunities that come from my failures. I have also been involved with the community since 2010, and for all those years, the Lord has been good. This year, I believe, HE must have smiled at me together with the community work that I love to do. I have written posts, traveled far to learn and share with others about engineering, met a ton of great people and started another tiny IoT company with my tech family (Chris Barsolai, Irene Ngetich, Perminus Nyamweya…) So here is my year-end review focusing on Engineering, Community and Humanity. I’ve also shared images and included the backstory behind them to provide further context on why the images are particularly important to me. Engineering Android + Embedded + AI Electrical Engineering is really a wide topic and I have made a conscious decision to have a sharp focus on three main areas which revolve around IoT: Embedded Electronics (Designing Circuits and controlling them with Android and JavaScript), Communications Engineering (RF & Wireless Networks that enable IoT such as 4G LTE, NB-IoT & 5G), and Artificial Intelligence (Deep Learning). This gives me most, if not all, of the spheres that can enable me to connect the physical world into the digital fabric. Internships & Projects Android Electronics: Android & AndroidThings This year, I have worked on some simple projects like: Lifi (A demo app showing how to integrate the Firebase Realtime Database in Android applications.) JoySlide (Android demo app showing how to create a slider using ViewPager) Akombe FireChat (Android app made for real-time chat conversation using the latest UI and UX studies and rules.) Soonami (Udacity app displaying information about a single earthquake event and whether or not there was a tsunami alert issued for it.) Quake Report (Udacity app displaying a list of recent earthquakes in the world from the U.S. Geological Survey (USGS) organization.) Did you feel it? (Udacity app displaying the perceived strength of a single earthquake event based on the DYFI indicator.) 
Olwenda Rider, OnePark, Parkfy, Taita, Customized Material Design Login Screen, Material Design Bottom Sheets among others. RF and Wireless Networks I contributed to the Radio Network Planning, Optimization and Access efforts in Nairobi at Huawei Kenya, working on both Telecom Kenya and Safaricom Networks. This involved things like measuring and assessing the coverage, capacity and Quality of Service (QoS) of their mobile radio networks, verifying BTS configuration, testing handover for the 3 bands (900, 1800 and 2100), identifying snags (hard and soft) in microwave links and helping clean them directly with the subcontractor. Results of TKL Project Nairobi → Telkom Kenya 4G — Nairobi I met super awesome mentors and still got a chance to work with members of my tech family: Chris and Irene, who were both at Safaricom. Want to learn more about BTS? Here is a post I did in 2015 → It’s not a Shabaab watch tower, it’s a Base Transceiver Station XinTianXia Huawei Base, Shenzhen, Guangdong China. AI Hyacinth Monitor — Using AI to monitor Water Hyacinth Featured on Intel DevMesh Status: Ongoing NotHotDog Clone — Using AI to identify if a hotdog or not a hotdog. “What would you say if I told you there is a app on the market that tell you if you have a hotdog or not a hotdog. It is very good and I do not want to work on it any more. You can hire someone else.” — Yea, Jean Yang’s app in the Silicon Valley series. I know it has been done but I have been working on it to scratch my own itches and for fun! Status: Ongoing Watch Jean Yang demo his app here → Demo Others I pushed 1 more video to my YouTube channel under Project Be There. Read more here → Project Be There Community I have been doing serious community work for 7 years now and I love it. I didn’t scale down or stop doing it this year and I am not stopping anytime soon — I mean it’s my life! Community and IoT presentation on 22nd September 2017, Lagos, Nigeria. Organizing I organized events both in the GDG and Intel Communities. Here is the top 10 list of some of the most amazing events I organized/helped organize: GDG DevFest Nairobi GDG IoT Summit Nairobi GCP Next Nyeri Intel AI Mombasa Intel AI Maseno (2) Intel AI Voi Intel AI Nyeri (3) Writing This year I had the opportunity to publish 58 posts on Medium. 9 of the 58 are incomplete and I am trying to find time to work on them, improve them and add more content. The posts include both technical and non-technical content. I talked to our lead, Roina, and we started another publication on Medium, GDG Nairobi, on which we intend to post hot updates 🔥, chapter events, meetups, community projects, stories and many more. Feel free to give us feedback and criticism about the same to help improve it. These were my most popular articles — Google’s balloon-powered High Speed internet now in Kenya 8.9K Views, 3.4K Reads and 1.6K Claps Intel’s Deep Learning Training Tool installation Guide 642 Views, 268 Reads and 47 Claps I featured as a top writer for AI on Medium for about 2 days. I have also featured as a top writer for IoT during the whole year, moving between different positions. My best position was number 3. I am currently trailing at → 9/10. Speaking I gave several talks both at local meetups and at international conferences focusing on IoT, AI, Open Source Hardware, the Maker Movement and Community. 
Resource Center RC Auditorium, Dedan Kimathi University of Technology Here are some of the talks I gave: Google Beacon Platform, JavaScript Electronics & The Internet of Things → Slides Android Things, The Google IoT → Slides Community and The Internet of Things → Slides IoT Design Thinking, Prototyping and Ideation → Slides Android Things for IoT → Slides Let’s Talk IoT → Slides How Makers Can Go Pro with Android Things → Slides IoT & AI: The Perfect Match Android Things: If you can build an app, You can build a device! IoT & AI: The Intelligence of Things TensorFlow & IoT among others I was really privileged to give an IoT session at 3 of the 4 DevFests in Kenya. I was extremely excited to go to DevFest Western Kenya for the very first time and enjoyed doing one at DevFest Rift Valley. I first gave an IoT talk at DevFest Rift Valley (Previously DevFest Nakuru) in 2015 and I have been doing the same ever since. I felt so emotional because when I first gave an IoT talk at this DevFest in 2015 (Read more here → DevFest Nakuru 2015), Google was only providing an Operating System for IoT, yet today they are providing a whole platform (Developer Preview, DP6). Then, they had just announced Brillo & Weave at I/O 2015. Today they have included IoT (Android Things) as one of their new form factors besides Mobile, Wear, TV and Auto, and I am really excited about the possibilities for developers to build really amazing things. I am really hopeful that they are going to release the full LTS (Long Term Support) next year, probably at I/O 2018. It is the best time ever to be an IoT Engineer! I can say that because I have never seen anything like what we have now before: the meaningful confluence of incredible developer changes, more powerful developer tooling, support for on-device machine intelligence and an awesome community behind it. Traveling & Adventure I got to travel to some really awesome new places and had some incredible international and local free flights courtesy of Huawei, Google and a super, super awesome friend. Stats: 13 Free Flights 4 Continents I went to places like San Francisco, CA (Silicon Valley), to experience and learn more about Community and IoT, Lagos, Nigeria to experience and learn more about Community, Beijing, China, to learn more about Chinese Culture, and Shenzhen, China to experience and learn more about Open Source Hardware, the Maker Movement, 4G, 5G and IoT. Watch the incredible things happening in Shenzhen here → Shenzhen: The Silicon Valley of Hardware | Future Cities | WIRED. I also traveled by road across the country and even had an accident. Thank God I buckled up, otherwise it could have been a different story. I went to awesome places like Kit Mikayi, and even took a boat to Ndere Island among other places. (Left) On a boat to Ndere Island, Kisumu Meeting New people I was also privileged to meet some amazing people. I had a chance to meet and talk to Dave Smith, an Electrical Engineer and a Developer Advocate for IoT at Google, Wayne Piekarski, Developer Advocate at Google, focusing on the Internet of Things & the Google Assistant, and Sam Beder, Product Manager on Android Things. I also had a chance to talk to Luis Montes and Sheldon McGee, organizers of GDG Phoenix, and was lucky to get their official IoT DevFest t-shirt. Woohoo 💪 💪 We shared more about Nodebots and Johnny-Five, a JavaScript Robotics & IoT Platform maintained by a community of passionate software developers and hardware engineers. How I wish Google would acquire this platform for Web developers! 
More about their DevFest → IoT DevFest Arizona. I hope to organize the same someday! (Left to right) Vitalik Zasadnyy, Oleh Zasadnyy I also got to meet Vitaliy Zasadny and Oleh Zasadnyy. They are some of the leads at GDG Lviv, which is famous for well-organized, high-level conferences. They organize GDG DevFest Ukraine, the biggest community conference about Google technologies in the CEE. More about their DevFest → GDG DevFest Ukraine 2017 — Highlights (Left) Kenneth and Right Kenneth Christiansen, Mountain View, California I also got to meet Kenneth Christiansen, a Danish software engineer at Intel and a Google Developer Expert. I have really learnt a lot from Kenneth, especially on Web USB, Arduino 101 and the Zephyr Project. (Left) Femi Taiwo — Google Developer Expert & Lead, GDG Lagos I was also privileged to meet Femi TAIWO once again. If you want to learn about humility and leadership, just observe how Femi handles people. You will be inspired on how to be humble yet confident. The best leaders are humble! The last time I saw Femi I was barely learning Android → Devfest Nyeri 2013. From Left to Right — Pic I: Ngesa, Nd’ungu, Nelly (Huawei Seeds For the Future Program), Pic II: Solomon, Ngesa, Ire (GDG Global Summit), Pic III: Victor Ughonu, Ngesa, Ezekiel (SSA GDG Summit) I met so many amazing people this year and the list is long. I had constructive discussions with Ndung’u Njung’e, Pic I. Ndung’u was my roommate during my stay in China. We frequently talked about revolutionaries like Che Guevara and Thomas Sankara. I learnt a lot from everyone I met and their activities, and they have made me a better person. I’m thankful and grateful. I have also maintained most of my close friendships despite the political heat in Kenya. They are the best! A recent photo with Chris Barsolai. Right: A photo with some members of Tech X (Organizers of GCP Next Nyeri) Grants & Awards From Left to Right. Huawei Deputy CEO Frank Zhou and Principal Secretary for ICT Victor Kyalo (Official ICT Authority Photo) Huawei I received a travel grant to travel to China under the Telecom Seeds for the Future program, organized as a partnership between the ICT Authority and Huawei Technologies Co., Ltd. I learnt some basic Chinese language, Painting and Calligraphy in Beijing and got a Certificate from Beijing Language and Culture University. I received a certificate of Honor from Huawei University in Shenzhen, Guangdong, Huawei Base. Intel I received a certificate of appreciation from my Boss at Intel for leadership and active participation in shaping the AI Community in Q1 (Innovation) I also received a second certificate of appreciation from my Boss at Intel for leadership and active participation in shaping the AI Community in Q3 (Instruction) Google Local Guide I finally reached level 7. I actively created and updated content on Google Maps. I also reviewed, approved and declined changes made by other map makers to keep Google Maps correct and up-to-date to the highest standards. Learning I learned about Chinese culture in Beijing and also traveled to Shenzhen, Huawei’s Global Headquarters, to learn more about Huawei’s culture, strategy and values, and received technical training from some of the finest professionals in the industry. We received training from Huawei experts at Huawei University on the latest technology in ICT such as 5G, IoT and Cloud. A photo with Huang Zhihao, Huawei Training Instructor, Huawei University I really admired his mastery of the content and how he delivered it. 
I hope to do the same someday in my focus area during our meetups and events. Udacity I spent 5,452 minutes on Udacity to improve my skills. There is a lot of awesome content on that website that can help anyone master some of the most competitive skills. They have Nanodegree programs that are built with the world’s most forward-thinking companies like Google, Facebook, AT&T, IBM, GitHub, and more. Take a look → Udacity AI — Deep Learning I also learnt AI theory and followed hands-on exercises with free Intel courses at the Intel AI Academy. These lessons cover AI topics and explore tools and optimized libraries that take advantage of Intel® processors in personal computers and server workstations. Take a look → Intel AI Academy Campus I also learnt a lot at school doing interesting units like Wireless Network I, Wireless Network II and Digital Signal Processing, among others. Here’s to the future: 2018 Impossible is Nothing! I’m really fired up for 2018 and I have a lot of great things I want to do. 1. Events & Traveling 🔥🔥 Software and hardware craftsmanship thrives on its sharing community. I want to organize more events that champion the exchange of ideas, and even travel. I am thinking of doing research, seeking advice from the best organizers/veterans/mentors and working with other GDG communities to literally bring Google I/O to Nairobi in the form of a DevFest. I am talking about something close to this → GDG DevFest Ukraine 2016 🔥 and I have talked to our able lead, Roina, about it. I’d like to continue speaking and sharing content on IoT, 5G and AI at different conferences. Feel free to invite me to your community. I also want to improve my mastery, the quality of my content and my delivery. I like the way John Papa and Dave Smith do their stuff. Dave Smith: Take a look → Developing for Android Things Using Android Studio (Google I/O ‘17) John Papa: Take a look → An Angular 2 Force Awakens 2. Start-up We are currently working on an awesome IoT start-up with my tech family. We are planning to release an alpha version in February. I am contributing to everything Android, electronics & Android Things. 3. Open Source Projects I want to work on more open source projects on both GitHub and CircuitHub. I want to design more circuits and collaborate on CircuitHub. 4. Health I want to prioritize health and start something like a weekly running streak. 5. Blog I want to write more and more, both technical and non-technical content, specifically on my technical stack: Android Things, 4G-5G & AI. I want to maintain my position as a top writer on the Internet of Things on Medium by contributing more high-quality content. I am working on an Android Things tutorial. Final thoughts Everything about community reminds me of the legendary Homebrew Computer Club. Club members were a group of computer and electronics enthusiasts and technically minded hobbyists who gathered to trade parts, circuits, and information pertaining to DIY construction of computing devices. I hope our communities, like the Homebrew, will result in more local solutions and companies that make a difference in people’s lives. I am extremely grateful to Aniedi Udo-Obong and the team at DevRel, Mercy Orangi, Odili Charles, Roy and the team at Intel. Special thanks to Kenneth M Kinyanjui (every time I want to do anything, especially talks and blogs, I remember the “Humility is key” and “Kaizen” advice), Dennis Riungu and all the mentors at Huawei, Chris Barsolai and all my friends, Roina and the community, and all of you who read my posts. 
You made me a better lead, and you made me a better man. You have pushed me to do more, and a bit better than before. Thank you again for everything you do. Happy Holidays & Cheers to an amazing 2018! Related Content: In Review: Huawei Telecom Seeds For The Future Program 2017 (Kenya, South Africa & Singapore) Experience in Beijing and Shenzhen, the Silicon Valley of Electronics (medium.com) Community & The Internet of Things Give Me Community Or Give Me Death! (medium.com) IoT at DevFest Nairobi Google Beacon Platform, JavaScript Electronics & The Internet of Things (medium.com)
2017 Retrospective: What a year it’s been!
296
2017-retrospective-what-a-year-its-been-14c3e70262fc
2018-05-05
2018-05-05 04:52:13
https://medium.com/s/story/2017-retrospective-what-a-year-its-been-14c3e70262fc
false
2,954
Welcome to The IoT Xtreme Ideas Lab. In this publication, I share do-it-yourself Electronics, Embedded systems & Internet of Things projects. I believe Education should be free & accessible to all. I am currently plotting world domination through Open Source, Software & Hardware.
null
marvinngesa
null
IoT/5G Extreme Ideas Lab
iot-5g-extreme-ideas-lab
INTERNET OF THINGS,GOOGLE DEVELOPER GROUP,GOOGLE DEVELOPER EXPERT,EMBEDDED SYSTEMS,WIRELESS
ngesa254
Android
android
Android
56,800
Ngesa Marvin
IoT at GDGs, SSA | Intel AI Ambassador | Co-Lead, GDG Nairobi | EEE — Telecom Engineer, Hacker & Maker - Android+Electronics+AI #IoT #5G Freak | Opinions = Mine
3d4aa1e43527
ngesa254
1,718
852
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-09
2018-02-09 05:36:44
2018-02-09
2018-02-09 06:55:26
5
false
is
2018-02-09
2018-02-09 06:55:26
10
14c5500eaf1b
5.271069
1
0
0
I have rarely been as surprised as when I recently realized just how much the fields that…
3
Artificial Intelligence and Quantum Computers: Part 1, Deep Neural Networks Photo by Jason Leung on Unsplash I have rarely been as surprised as when I recently realized just how much the fields of artificial intelligence and quantum computing have in common. The similarities are so numerous that it seems obvious to me that we must be on the brink of a complete revolution once these two fields merge. One of the things I have been tinkering with since last autumn is learning the fundamentals of machine learning, one of the subfields of artificial intelligence. There has been tremendous progress in that area recently, and it has become relatively easy to build simple machine learning models and train them, unlike before. To get acquainted with the subject, you can watch a series of videos published by Computerphile on YouTube. 3Blue1Brown has also released two videos that give an excellent explanation of the mathematics behind deep neural networks. If you want to experiment yourself, Google has released very good open-source software called Tensorflow that makes it much easier to develop your own neural networks. Pythonprogramming.net has published a series of dozens of videos that teach you to program for machine learning, from the simplest classification algorithms all the way to how to use Tensorflow. Without going too deep into the explanations, it is worth talking a little about how deep neural networks work. A single piece of information is always built up from many data points. A photograph, for example, is composed of pixels, each of which corresponds to the color at one particular point in the image, and a word is made up of many letters. You can keep simplifying any information until it has become a collection of numbers that describe it. Photo by Antoine Dautry on Unsplash A computer stores the information by keeping these groups of numbers in the right order. One such ordered collection of numbers can be called a tensor, and all kinds of computations can be performed that use tensors as the basic unit. Neural networks take every single number in a tensor and transform it in a predetermined way, for example by multiplying it by and adding to it some constant, and then summing all the numbers in a particular way. The neural network can do this in as many different ways as needed, and what it produces is another tensor (another piece of information). Nothing guarantees, however, that this tensor has any meaning. It could just as well be random noise. Deep neural networks take this new tensor and do the same to it as they did to the original input, producing yet another tensor along the way. Now we have two layers of new representations, but nothing stops whoever designs the network from continuing and adding as many layers as they like. In the end, we hope to get one final tensor that contains the information we want (for example, some classification of the original input). Such networks are called deep because between the original input tensor and the output tensor there are often many layers of extra tensors that are “hidden” and that we never see. The difference between a conventional neural network and a deep neural network When the network creates a new tensor in a new layer, the constants chosen for the multiplications and additions completely determine what that new tensor is. If you choose the constants at random, you just get nonsense. 
If you choose exactly the right constants, you can get exactly the result you want. The main trick is to let the computer itself find the right constants, the ones that match the results we want to see. We could of course just try every possible combination of numbers, but that would usually take longer than the time we have left in the history of the universe. Photo by Martin Barák on Unsplash Fortunately, better methods exist. By collecting an enormous number of real-world examples and knowing in advance which result matches each particular example, we can use them to “train” the network. For this, a method called back propagation is used. All the constants in the network are initialized in a certain way (sometimes simply at random) and then all the numbers are nudged slightly in a particular direction. By comparing the output with the true result (which we know is correct), the system can judge whether that step was in the right direction or not. The system then keeps going, inching its way toward the correct answer. For the system to calibrate itself correctly it often needs tens of thousands of examples, and for each example it has to evaluate countless different steps. This method does not take anywhere near the age of the universe, but it is still very slow, since it requires a processor to perform an enormous number of arithmetic operations. This has meant that even though the methodology has been known for a long time, computers have until now not been powerful enough for this approach to be useful. In recent years, however, we have been getting computers fast enough for the method to become practical. A powerful laptop can train a simple neural network in a few days. With the help of powerful graphics cards, the problem can be broken down into many small problems that are computed in parallel at the same time. In many cases this brings the training process down to only a few hours. The result has been an explosion in all kinds of uses for neural networks. The best part is that once a network has been trained, it can use the formula it found to perform the same task again and again, quickly and reliably. To see how incredibly many uses there are for neural networks today, you can follow the posts on the YouTube channel “Two Minute Papers”, which samples the astonishing number of research papers published on the subject every single week. However, the real revolution in machine learning can be expected to begin when it becomes possible to train a neural network in only a fraction of a second. We will then see machines that can analyze and understand enormous amounts of different data many times per second. That is completely impossible to imagine with the technology we can manage today. One might think that only two things could get us there: 1. Computers become so fast that they break all the laws of nature. 2. Smarter neural network design makes networks much easier to train. I believe there is a third option: 3. It will be something other than computers that trains the networks. “Computer programmer's single microchip” by Brian Kostiuk on Unsplash Quantum computers (as we are approaching them today) are not computers in the conventional sense; they rely on entirely different principles and are in fact very bad at performing almost EVERYTHING that today's computers do so well. There are, however, a very few things that quantum computers can, in theory, do well. Incredibly well. 
Almost supernaturally well. And it just so happens that one of those things is training neural networks. I will write a bit about quantum computers in the next post and then finally wrap up with how artificial intelligence and quantum computers may merge in the future.
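To make the forward pass and the training loop described above concrete, here is a minimal sketch in plain Python/numpy (not from the original article): a tiny two-layer network learning XOR with gradient descent. The layer sizes, learning rate and number of steps are arbitrary illustrative choices.

```python
import numpy as np

# Toy data: XOR, a classic problem a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of "constants"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: multiply, add, squash -- the tensor-to-tensor
    # transformation described in the text.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Back propagation: compute how each constant should be nudged
    # to reduce the error between prediction p and target y.
    err = p - y
    d2 = err * p * (1 - p)
    dW2, db2 = h.T @ d2, d2.sum(axis=0)
    d1 = (d2 @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ d1, d1.sum(axis=0)

    # Nudge every constant a little in the error-reducing direction.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```

Each iteration nudges every constant in the direction that reduces the error, which is the back propagation step the post describes, just written out with explicit derivatives.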
Gervigreind og Skammtatölvur: Partur 1 Djúpstæð Tauganet
1
gervigreind-og-skammtatölvur-partur-1-djúpstæð-tauganet-14c5500eaf1b
2018-02-09
2018-02-09 09:47:55
https://medium.com/s/story/gervigreind-og-skammtatölvur-partur-1-djúpstæð-tauganet-14c5500eaf1b
false
1,176
null
null
null
null
null
null
null
null
null
Neural Networks
neural-networks
Neural Networks
3,870
Halldór Berg Harðarson
null
ecbb62e0d6d7
halldrberghararson
39
40
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-07-30
2018-07-30 02:45:27
2018-07-31
2018-07-31 02:40:34
1
false
en
2018-10-20
2018-10-20 14:14:33
13
14c870a4afaa
4.803774
1
0
0
Note: The views expressed in this article are my views alone and not the views of my employer or anyone else.
5
STEM -> _T_M Note: The views expressed in this article are my views alone and not the views of my employer or anyone else. With the rising cost of college and the increasing underemployment of college graduates across the country, the question of the value of a college degree is becoming more pertinent. STEM is the buzzword now and everyone is pushing their kids to study math, engineering and computer science rather than a traditional liberal arts degree. On the surface it is true; these industries pay some of the highest starting salaries in the country. Starting salaries at top tech companies are upwards of $100K for students right out of college. Similar salary trends exist for other engineering majors. Compare this to most liberal arts majors, whose starting salary is less than $50K. When you dig a little deeper though, what you see is that the employment picture for STEM is far more mixed. Let’s break it down a little more below: 1. S (Science) — Pure science degrees such as biology and chemistry have never been the most lucrative career paths. It isn’t just that the salaries are low; there are far fewer openings in these fields than in the humanities. People in pure science fields often go to grad school, particularly med school or pharmacy school, to become more specialized in areas where the job prospects are better. Even then, they often pursue a postdoc or two or completely change fields. In any case, it is not a hot, desirable field for employers. Physics majors do the best among science grads simply because they are seen as highly quantitative and pursue careers in software engineering, data science, banking, etc. 2. T (Technology) — With the rise of machine learning and AI, technology is very much in demand; starting salaries in these fields range anywhere from $70K a year to $135K a year at a company like Apple. Technology is truly in demand and its growth is likely to continue. The thing here too is that the skills are constantly changing and the old players are always in constant competition with new entrants. This is true in every field but it is even truer in technology. 3. E (Engineering) — This one is tricky. Engineers are in demand but not as much as technology graduates. In fact, a lot of core engineering jobs don’t sponsor undergraduates; they only sponsor Master’s and PhD holders. If a company doesn’t sponsor for a job, then it means it can find the talent locally. If a company sponsors for a job, it means 1 of 5 things: · High growth industry (Tech) — This means that there is so much demand in the field that they need all of the talent they can find, American or not. · Niche, specific skillset (PhD fields, etc.) — Some skills are very niche, very specific and very hard to find. Companies that find the people with these skillsets want to lock them in and often rely on visas to find the right people. · Easy sponsorship process (Australia) — There are some countries like Australia that have an easier sponsorship process to the U.S. with something called an E-3 visa. This means that employers don’t have to go through a struggle to bring someone from that country over here, and even if they are reluctant to sponsor in general, they are more likely to make an exception. · Really lame, boring job — Some jobs in IT and QA testing are just plain awful and boring. These jobs are often outsourced but some have to be done locally. It isn’t that Americans can’t do these jobs, it’s that they don’t want to do them. 
Because of this, employers have to sponsor to bring an immigrant to do the job. · Don’t want to pay the American wage — American salaries are the highest in the world and Americans have very high expectations for their salaries. Immigrants who first come into the country do not have the same salary expectations and as a result might accept less. Those expectations quickly change once they’ve seen what the American rates are, but since they’re waiting to get a visa or green card, their mobility is limited. Some employers recognize this arbitrage and abuse it to get cheaper labor. Engineering employment data is still pretty good because the few jobs in core engineering pay well and graduates are able to move into other fields like analytics, finance and consulting. Basic point: engineering as a degree has value, both direct and indirect. The direct value of engineering is in question because there are not that many jobs in core engineering. 4. M (Math) — This one is tricky. From the 1980s up to the early 2000s, there was a huge demand for math, particularly in finance. With the rise of mortgage-backed securities and credit default swaps, quantitative finance saw a huge boom and being good at math was in demand. After the financial crisis, that demand dwindled and the demand for math shifted to statistics with the rise of big data. There is still a demand for math but it is largely basic math, not stochastic calculus. More on this in another blog post. The reason technology and math come out ahead in terms of job prospects is the low cost of implementation. It is cheaper to implement a software program or financial model than it is to implement a chemical refinery or plasma reactor. The high cost of implementation in science and engineering means that there are fewer jobs and that the few jobs that are present are maintenance jobs vs. development jobs. People want to build things, not maintain them. Core engineering jobs tend to be about maintaining existing products whereas computer science and math tend to be about building new ones. Even still, there is value in science and engineering and it has to do with signaling. If you’re an engineer and you decide to get an MBA and go into consulting, there is a perception of credibility that you know what you’re talking about. Even if you don’t actually use your engineering degree in the job, there is perceived value. It also is simply a differentiating factor. Even if you plan on going into finance, having an engineering degree can help you stand out from all of the other Economics and Finance majors. This isn’t just true for engineers; a lot of fields are converging onto business, including medicine. This is at the same time that MBA applications are down for the 3rd straight year. In other words, people want business skills but don’t care about the business degree. There is also just the pure desire to be smart. There is a perception in our society that people who have technical expertise are very intelligent. It is why academics are respected even if they are broke. People might subconsciously want to go into STEM not just for the derived value of a job and a salary but for the perceived value of society’s respect. That is the common theme in all of this: the gap between derived value and perceived value. Engineering may not be the most lucrative, but society still very much respects it and that counts for something.
STEM -> _T_M
15
stem-t-m-14c870a4afaa
2018-10-20
2018-10-20 14:14:33
https://medium.com/s/story/stem-t-m-14c870a4afaa
false
1,220
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Engineering
engineering
Engineering
12,767
Swaroop Bhagavatula
null
b5bb90755fee
swaroopbhagavatula
39
29
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-12
2018-03-12 07:57:25
2018-03-12
2018-03-12 07:57:37
0
false
en
2018-03-12
2018-03-12 07:57:37
1
14c8941f7780
1.65283
0
0
0
Data Science in itself is a very diverse field, and hence many types exist in it. Based on this and the tools utilized, the scientists…
1
5 types of Data Scientists Data Science in itself is a very diverse field, and hence many types exist in it. Based on this and the tools they utilize, the scientists themselves are classified into various types. A few types of data scientists: 1. The quantitative data scientists, who rely on theories and exploration. They firmly believe that established theories are the best way to analyze data and that all practical implementation relies upon them. 2. Operational data scientists come in the next category. Here you have a set of people who rely on facts and figures. They wish to classify all data as numbers, and thus carry out further research. For example, a coach would rely on various parameters like pass time, running speed, dribbling count per minute, etc. to analyse the players in his basketball team. After collecting this raw data he can use tools to sort and order it, so special attention can be given to those who underperform. This is more or less what an operational data scientist does at work, with respect to various business principles and strategies based on the stats. 3. Product management data scientists: this team is made up of people who try to enhance the product. In short, they are trying to tinker with the existing modules in order to provide better interfaces to the user. Improving the app store on an Android or iOS platform, just to make the users feel at home, is a good example of this type of data science. A thorough understanding of what the product is set to serve is key to conquering the market for these people. 4. Marketing data scientists take up the onus of understanding the market well on their shoulders. It is essentially one of the trickier jobs out there, because the market is always dynamic and relies on a variety of factors. For example, suppose you are analysing the telephone calls of a particular location over a time period of 1 week, in order to successfully determine what plan would benefit both the user and the end service provider. You would run into a situation in which the data would be haphazard, and making order out of chaos would become a daunting task. Hence these scientists have to work with dedication and adroit skill. 5. Research data scientists need to think out of the box and survive on innovation and inspection. Inspecting how a product is regularly functioning and upgrading it is best done through the work of this type of data scientist. All in all, it is the coming together of various forms of data science that helps various businesses gain overall success.
5 types of Data Scientists
0
5-types-of-data-scientists-14c8941f7780
2018-03-12
2018-03-12 07:57:38
https://medium.com/s/story/5-types-of-data-scientists-14c8941f7780
false
438
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
rahul singh
null
2b513fbc7b39
rahul.singh2015
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-13
2018-05-13 11:48:49
2018-05-13
2018-05-13 12:09:41
6
false
en
2018-05-13
2018-05-13 12:10:29
8
14c8ba1b1d5f
5.663208
2
1
0
As mentioned in my previous post, I’ve recently taken a great interest in artificial intelligence, but that doesn’t mean I’ve lost interest…
2
Synthetic Datasets for Training AI As mentioned in my previous post, I’ve recently taken a great interest in artificial intelligence, but that doesn’t mean I’ve lost interest in 3D development. In fact, I’ve become fascinated with the concept of using 3D software to create photorealistic, synthetic training datasets for image recognition tasks. The question is, can we create images that are realistic enough to fool artificial intelligence? [Note: This post also appears on my blog, Immersive Limit] The Current Data Problem I’ve found, from both researching and experimenting, that one of the biggest challenges facing AI researchers today is the lack of correctly annotated data to train their algorithms. In order to learn, artificial intelligence algorithms need to see thousands of examples that are correctly labeled. In image recognition tasks, that means picking out which pixels contain the objects you are looking for. Here’s an example: If you hate spiders, you can rest easy knowing that I watched this guy get scooped up by a bird shortly after I took the picture. His cockiness amused the avian gods. In the example above, I took a picture of a spider I found on my patio, then manually drew a bounding box and a pixel mask. This is the kind of annotation an AI would need to learn. While it took me zero seconds to spot the spider in my peripheral vision, it took me a minute to get my phone out and capture a candid shot and another couple minutes to do a pretty sloppy job creating the annotation back at my computer. Note that the orange color is just for illustration; the AI would make its best guess on the unaltered image, then compare it to the correct bounding box and the shaded pixels defined in an accompanying annotation file. The Case for Synthetic Datasets Let’s say you have painstakingly collected 1,000 images (actually considered to be a tiny dataset) of spiders you want to classify and it takes you about 2 minutes to properly annotate each one. 1,000 x 2 / 60 = 33.3 hours. Add in breaks and you have a full 40-hour work week, not including the time it takes to find all of those spiders. Not sure about you, but I don’t have that kind of time and even if I did, I wouldn’t want to spend it on spider pics. In fact, one of the biggest real, annotated image datasets, called COCO (Common Objects in COntext), contains >200,000 labeled images and took 70,000 person hours (on Amazon Mechanical Turk) to fully annotate. 70,000 / 200,000 x 60 = 21 minutes per image. Why is the number so much higher? COCO images label everything in complex scenes. For example, let’s look at a couple photos I took that are similar to what you’d find in the COCO dataset. If you wanted to fully annotate the turtle image below, you’d have to draw a separate mask for at least 10 overlapping turtles, plus the logs and sticks. For the restaurant scene, you’d need to draw a separate mask for each person, light bulb, plant, chair, table, window, condiment, etc. Sounds awful. One thing that AI researchers often do is called “augmentation”. Augmentation is the automated process of creating many variations of the same image to cheaply create more labeled images. For example, we might flip the image, rotate it a few degrees, zoom in a bit, etc. You can create a lot of variations this way, multiplying the size of your dataset, and it definitely helps to train the AI. Here is the original plus four examples of augmentation that could help the AI to learn what spiders look like in different orientations and sizes. 
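As a rough, hedged sketch of the kind of augmentation described above (flips, small rotations, shifts, zooms), here is what it might look like with Keras' ImageDataGenerator; the file name "spider.jpg" and the parameter values are placeholders, not taken from the post.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import (
    ImageDataGenerator, array_to_img, img_to_array, load_img)

# Hypothetical input file; any labeled photo would do.
image = img_to_array(load_img("spider.jpg"))
batch = np.expand_dims(image, axis=0)  # shape (1, height, width, 3)

# Random flips, small rotations, shifts and zooms -- the cheap
# variations described above. Parameter values are arbitrary.
augmenter = ImageDataGenerator(
    rotation_range=15,
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
)

# Generate and save a handful of augmented variants.
for i, augmented in enumerate(augmenter.flow(batch, batch_size=1)):
    array_to_img(augmented[0]).save(f"spider_aug_{i}.jpg")
    if i >= 3:
        break
```

For segmentation labels, the identical random transform would also have to be applied to the mask image, for example by running a second generator over the mask with the same seed.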
What if we went a bit deeper? Could we put the same spider on a different background? Here’s something I came up with using the GIMP image editor: Same spider, same pixel mask, parallel universe. It’s the exact same spider, cut out and super-imposed over a different background, and it uses the exact same bounding box and mask. It’s not hard to imagine how this technique, combined with augmentation, could be used to generate tons of variations without any manual work, once the original cutout is done. Pretty cool right? Brace yourself while I take it a step further. Imagine that I had created (or paid for) a realistic 3D model of a spider that could be rendered in tons of different poses. I could then automatically create even more interesting variations on different backgrounds. Since I’m rendering the image, no one needs to manually annotate it because the same 3D software that creates the realistic render can also save a mask image. Even cooler. I don’t have any 3D rendered versions to show you, so use your imagination. We’re done with spiders. Real World Examples Synthetic image datasets are not an original or recent idea. I’ve found many examples of people using synthetically generated datasets to train AI with impressive success. Spilly One of my favorites is Spilly, a startup that superimposed 3D rendered human models on random images in tons of different poses to train an AI that could find people in videos. (Note that since a video is merely a set of images played in rapid succession, the task of finding things in videos is pretty much identical to finding things in photos.) They got very impressive results by training with synthetic images, and then fine tuning with a smaller set of real images. Here’s their blog post about it and here’s an absurd video demo they put together: SynthText in the Wild Dataset This is a synthetic dataset of 800,000 images that places fake text on top of real images. Check out the website, and an example: The green boxes are for illustration. The actual images only show the text over the background. A research paper titled “Towards End-to-end Text Spotting with Convolutional Recurrent Neural Networks” utilized the dataset to train an AI find text in the wild on street signs, business signs, etc. with impressive success. They started with synthetic images and then moved on to real world images for further training. Microsoft AirSim AirSim uses Unreal Engine to simulate realistic 3D worlds, then output specially annotated images (depth, segmentation, RGB). It’s designed to be used by artificial intelligence on drones. Here’s their GitHub repo, and here’s a video showing what it looks like: My Own Attempts Obviously I’m pretty intrigued by this concept, so I’ve decided to try it out myself. The task I’ve chosen is to identify cigarette butts in photos. I find discarded cigarette butts extremely irritating, and the fact that smokers nonchalantly litter them EVERYWHERE really pisses me off. If I could teach an AI to recognize cigarette butts, then I could theoretically attach that functionality to a small, autonomous robot that could pick them up. Wouldn’t it be wonderful? First, I had the idea to create the images 100% synthetically. My plan was to learn how to create realistic grass in Blender, then position cigarette butts in the scene and render from lots of different angles. 
After a couple days making mediocre grass, I realized this was probably overkill, so I decided to superimpose 3D cigarette butts over 2D photos I took down the street from my house (not my weeds!). Here are a couple examples: They don’t look great, but they might be good enough to train an AI. I’m currently working on a pipeline that will generate thousands of them automatically so that I can train an existing, open source AI like Matterport’s Mask R-CNN. I’m sure I can make them look better than this too, since I’m getting better at Blender every day. I’ll share my results once I get a chance to test a decent sized synthetic dataset.
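The cut-and-paste compositing idea from the post can be sketched in a few lines with Pillow. This is only an illustration of the general technique, not the author's actual pipeline; the file names are hypothetical and the placement logic is the simplest possible (it assumes the cutout is smaller than the background).

```python
import random
from PIL import Image

# Hypothetical inputs: an RGBA cutout of the object (transparent background)
# and a background photo. Neither file name comes from the original post.
cutout = Image.open("cigarette_butt_cutout.png").convert("RGBA")
background = Image.open("grass_photo.jpg").convert("RGBA")

# Paste the cutout at a random position; its alpha channel acts as the mask.
x = random.randint(0, background.width - cutout.width)
y = random.randint(0, background.height - cutout.height)
composite = background.copy()
composite.paste(cutout, (x, y), mask=cutout)
composite.convert("RGB").save("synthetic_scene.jpg")

# The annotation comes for free: pasting the cutout's alpha channel onto a
# black canvas at the same position gives the pixel mask, and its bounding
# box gives the object's bounding box.
mask = Image.new("L", background.size, 0)
mask.paste(cutout.split()[-1], (x, y))
mask.save("synthetic_mask.png")
print("bounding box:", mask.getbbox())
```

Repeating this with random positions, rotations and backgrounds is one way to generate thousands of annotated composites without any manual labeling.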
Synthetic Datasets for Training AI
51
synthetic-datasets-for-training-ai-14c8ba1b1d5f
2018-05-16
2018-05-16 17:03:37
https://medium.com/s/story/synthetic-datasets-for-training-ai-14c8ba1b1d5f
false
1,249
null
null
null
null
null
null
null
null
null
Synthetic Data
synthetic-data
Synthetic Data
9
Adam Kelly
Software Engineer working in VR/AR and AI at GM. Fascinated by the implications of new technology. [Opinions expressed are my own]
9d96550e35f4
aktwelve
76
82
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-16
2018-08-16 21:32:04
2018-08-16
2018-08-16 22:43:06
1
false
en
2018-08-16
2018-08-16 22:43:06
7
14cb49b9ea71
1.671698
2
0
0
Adding handl to your browser helps you see which sites have more thorough privacy policies.
4
Using AI to evaluate privacy policies while users browse Adding handl to your browser helps you see which sites have more thorough privacy policies. We’re all just one data breach or one inappropriate data-sharing incident away from online exposure. In addition to the personal information we knowingly convey to website operators, we also reveal seemingly benign information such as our operating system, internet service provider, and browser information. What they can collect, store, and use is governed by the privacy policy they offer us. Privacy policies have become long, complicated legal documents that we often fail to read, even though, deep down, we know they’re important: 90% of Americans said they care about controlling the information others collect about them, and 93% would like to control who can access that information, according to the Pew Research Center. However, sight unseen, we consent to invasive privacy policies. We click OK and I Agree as soon as the banner or popup comes on our screens, and we roll our eyes if we need to scroll down to the bottom of a long passage of text to even find the buttons. As a result, we often permit website operators to track information about us that can be used to raise the prices offered to us for goods and services we purchase online, or even to influence whether we get a loan. Buried in the fine print of each of these policies should be an explanation of how the website or app tracks its users and details regarding the permissions users grant the site or app to share the data they collect. Sometimes the proper explanations and details are there, sometimes they are not. We designed and developed handl to make it easier for users to see which websites offer them more comprehensive privacy policies than their competitors. As you browse the Internet, handl finds each website’s privacy policy and then uses artificial intelligence to evaluate it compared to the policies on similar websites. While several tools exist to help users generate secure passwords, identify dangerous apps, and obscure or disguise a user’s browsing activity, handl is the first that helps users determine how, in practice, a company’s promised privacy practices compare to those of their competitors. Want to try it for yourself? You can get handl for free through the Google Chrome Store!
Using AI to evaluate privacy policies while users browse
51
using-ai-to-evaluate-privacy-policies-while-users-browse-14cb49b9ea71
2018-08-16
2018-08-16 22:43:06
https://medium.com/s/story/using-ai-to-evaluate-privacy-policies-while-users-browse-14cb49b9ea71
false
390
null
null
null
null
null
null
null
null
null
Privacy
privacy
Privacy
23,226
Mark Potkewitz
Cofounder at Lexloci Inc.
e26dd9a2cf18
Mark_Potkewitz
79
248
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-24
2017-12-24 18:08:04
2017-12-27
2017-12-27 10:27:31
2
false
en
2017-12-28
2017-12-28 07:46:31
33
14ccf7178a4a
2.772013
34
2
1
2017 has been a stunning year for all of us, here at Zelros.
4
4 Machine Learning Trends that Made 2017 2017 has been a stunning year for all of us here at Zelros. We started 2017 with a technical blog post — which turned out to be very popular among the data science community — and an innovation prize, awarded by our peers. As 2017 is almost over, we wanted to close these amazing last 12 months with some of our thoughts on what is currently happening in the AI field. Here is a review of the 4 most striking machine learning trends noticed by our product R&D team this year. Trend 1: ML Frameworks New Machine Learning frameworks keep appearing. They are becoming more and more high level, to help users focus on applications and usage — and offload low-level tasks from them. Data scientists must adapt quickly and learn to use several of them to remain up to date. Here are a few examples of what happened in 2017: Keras is now part of the core TensorFlow framework PyTorch is quickly gaining popularity amongst top AI researchers, as TensorFlow shows limitations A few days ago, Apple open sourced Turi Create, a framework that simplifies the development of custom machine learning models spaCy v2.0 has been released, and is becoming the preferred open-source library for advanced Natural Language Processing in Python RIP Theano, the former major deep learning framework Machine Learning frameworks are more and more numerous Trend 2: Datasets There is no machine learning without data. This year, several new datasets have been released, helping data scientists to train and benchmark models for various tasks. Here are a few of them, in the Natural Language Processing field: Quora question pairs dataset: over 400,000 lines of potential question duplicate pairs. Here is the associated Kaggle challenge (won by the awesome BNP Cardif Lab French team, which we know well at Zelros ;) ) Google speech commands dataset: 65,000 one-second long utterances of 30 short words, by thousands of different people. Here is the associated Kaggle challenge Salesforce AI WikiSQL, a dataset of 80,654 hand-annotated examples of questions and SQL queries distributed across 24,241 tables from Wikipedia The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information (inspired by the historical Stanford SNLI corpus) Trend 3: Transparency As AI is used more and more in real-life enterprise processes, and the new GDPR regulation will soon be enforced in Europe, the need for algorithm transparency is rising. In 2017, we have seen several contributions around Interpretable Machine Learning. Here is a selection: a new R package that makes XGBoost interpretable Understanding Black-box Predictions via Influence Functions (best 2017 ICML paper award, here is the code) Google launched Facets, an interactive visualization tool, to inspect datasets and better understand them O’Reilly wrote a complete and interesting article on interpreting machine learning Several researchers are working on this new field, for example at Microsoft Algorithm transparency, one of the most striking machine learning trends in 2017 Trend 4: AutoML 2017 has seen the advent of automated machine learning, which is little by little becoming a commodity. AutoML is a way to automate some parts of the data science process: basic data preparation and feature engineering, model selection, hyperparameter tuning, … This year, new open source libraries have been released, like MLBox, or improved, like auto-sklearn. 
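To give a feel for what this kind of AutoML automation looks like in code, here is a minimal, hedged sketch using auto-sklearn, one of the open-source libraries mentioned above; the dataset and time budgets are arbitrary illustrative choices and are not taken from the post.

```python
import autosklearn.classification
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Any tabular classification problem would do; digits is just a handy toy set.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Model selection and hyperparameter tuning both happen inside fit():
# auto-sklearn searches preprocessing steps, estimators and their
# hyperparameters within the given time budget, then builds an ensemble.
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,  # total search budget in seconds (arbitrary)
    per_run_time_limit=30,        # budget per candidate pipeline (arbitrary)
)
automl.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, automl.predict(X_test)))
```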
New commercial solutions have been launched as well, like for example Edge-ml or Prevision.io. What’s more, existing tools have added AutoML capabilities: Datarobot, the pioneer in AutoML field, has raised another $54 million Dataiku added an AutoML feature (btw, check out our integration with this platform) H2O.ai launched a new product: Driverless AI Driverless AI AutoML, by H2O.ai We wish you a happy new year! Stay tuned for an important announcement in the coming weeks ;) And did we mention that we are hiring data scientists and software engineers?
4 Machine Learning Trends that Made 2017
101
4-machine-learning-trends-that-made-2017-14ccf7178a4a
2018-03-22
2018-03-22 15:11:07
https://medium.com/s/story/4-machine-learning-trends-that-made-2017-14ccf7178a4a
false
633
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Zelros AI
Our thoughts about Data, Machine Learning and Artificial Intelligence
67b9b5a1eef4
Zelros
440
1
20,181,104
null
null
null
null
null
null
0
null
0
19fd0cf90e0c
2018-09-26
2018-09-26 20:28:18
2018-09-26
2018-09-26 20:38:22
1
false
en
2018-09-26
2018-09-26 20:38:22
0
14ce9f7cd5
0.241509
0
0
0
#ThingsIBarfedUpAsAKid
5
Upma #ThingsIBarfedUpAsAKid Make a deal with God. What song is this?
Upma
0
upma-14ce9f7cd5
2018-09-26
2018-09-26 22:30:56
https://medium.com/s/story/upma-14ce9f7cd5
false
11
“Prosody is the music of language.” ~ Nandini Stocker, who advocates for sounds of silent solidarity and voices of musical magic makers in scented echo chambers. Make sense? I didn’t think so. I know so. We all do. We all shine on. On and on and on.
null
null
null
Living Language Legacies
living-language-legacies
LANGUAGE,VOICE RECOGNITION,SPEECH RECOGNITION,SPEECH,NATURAL LANGUAGE
captionjaneway
Art
art
Art
103,252
Nandini Stocker
Speaking truth brought me war and peace. Amplifying others set me free.
7e6afdd38d52
sevenofnan
426
438
20,181,104
null
null
null
null
null
null
0
null
0
ead6b775b343
2017-11-15
2017-11-15 11:09:45
2017-11-15
2017-11-15 11:09:34
1
true
en
2017-11-15
2017-11-15 11:12:51
9
14cf7b0fdab
4.441509
5
0
0
null
5
Working Together to Build a Big Data Future To leverage Big Data and build an effective Artificial Intelligence infrastructure, enterprises must embrace collaboration. In a digital economy, the rule of thumb tends to be that the smarter your use of data and technology, the more of a competitive edge your business has. A recent report by Teradata, based on over 260 interviews conducted by research firm Vanson Bourne with senior IT and business decision makers, found that there is widespread enthusiasm for adoption of AI, with 80 percent of enterprises reporting that they were already investing in the technology in some capacity, and over 30% planning to expand their investment in the area over the next 36 months. “C-level executives — namely CIOs and CTOs — maintain they are committed to AI in their enterprise, because of the expected ROI over the next 10 years.” The report therefore concludes that executives will accept those challenges as the long-term benefits clearly outweigh near-term pain. In fact, their analysis showed that over a five-year forecast, organisations effectively expected to double their money when investing in AI: for every $1 spent on AI technologies, organisations predicted a return on investment of $1.23 over three years, $1.99 in five years, and $2.87 over a ten-year period. The Economist — in a phrase that has since become rather cliché — put forward the idea that “data is the new oil,” and the World Economic Forum has now designated Big Data as a new kind of economic asset, just like currency or gold. A study by the MIT Center for Digital Business confirms that data-driven businesses do indeed have the edge. It surveyed 330 leading U.S. businesses and found that companies that focused strongly on data-driven decision-making had an average of four percentage points higher productivity and six percentage points higher profits. Yet as these results indicate, it would perhaps be more accurate to say that data is in fact the new oxygen. While it is still true that businesses that best leverage data and AI will gain a significant competitive advantage, it is probably fair to say that those that fail to make their organisations data-centric will eventually not be able to survive at all in the digital age. While most people agree on the essential role that Artificial Intelligence and Data play in their organisation’s success, there are significant challenges. The overwhelming majority of business leaders surveyed in the Teradata report anticipated major barriers to adoption within their organisation, with roadblocks ranging from an inadequate IT infrastructure to a shortage of in-house talent. In their book The Sentient Enterprise: The Evolution of Business Decision Making — launched at the Teradata Partners Conference last month — Oliver Ratzesberger (Teradata’s Chief Product Officer) and Mohanbir Sawhney from the McCormick Foundation talk about how agility is key to getting the enterprise to the sentient point where it can analyse data and make decisions in real time. The crux of the problem enterprises face lies not in the difficulty of gathering data, but in extracting insights and then turning these into actionable processes. The reality is that we live in a time of data overload, and companies can easily find themselves trapped in reactive mode, spending most of their time sifting through mountains of data and making decisions only when problems emerge, rather than anticipating them. 
To tackle these problems, enterprises need access to key talent and infrastructure so as to enable the leveraging of Artificial Intelligence and Big Data. Increasingly, this is not being done “in house” but rather in partnerships with dedicated providers. These go beyond traditional SaaS and become much more of a “Platform as a Service” model that incorporates complex customization and consultancy services. Teradata’s Think Big Analytics team, for example, worked with Danske Bank to create a fraud-detection platform that uses machine learning to analyse tens of thousands of latent features, scoring millions of online banking transactions in real time to provide actionable insight regarding true, and false, fraudulent activity. Together, they built a framework within the bank’s existing infrastructure, creating advanced machine learning models to detect fraud within millions of transactions per year, and in peak times, many hundreds of thousands per minute: “Using AI, we’ve already reduced false positives by 50% and as such have been able to reallocate half the fraud detection unit to higher value responsibilities,” explains Nadeem Gulzar, Head of Advanced Analytics, Danske Bank. “There is evidence that criminals are becoming savvier by the day, employing sophisticated machine learning techniques to attack, so it’s critical to use advanced techniques, such as machine learning, to catch them.” “All banks need a scalable, advanced analytics platform, as well as a roadmap and strategy for digitalization to bring data science into the organization,” says Mads Ingwar, Client Services Director at Think Big Analytics. “For online transactions, credit cards and mobile payments, banks need a real-time solution — the platform we developed with Danske Bank scores transactions in less than 300 milliseconds. It means that when customers are standing in the supermarket buying groceries, the system can provide immediately actionable insight. This type of solution is one we’ll begin to see throughout organizations in the financial services industry,” he concludes. Fintech company Verifi also collaborates with banks and merchants to connect multi-layered data streams and combat fraud. The work the company does is based on optimising data transmissions using APIs (Application Programming Interfaces). This is much more efficient because whereas legacy systems rely on processing large numbers of files sent in bulk, APIs can process data in real time. The Verifi system collates APIs from the merchant shopping cart, customer relationship management (CRM) system, shipping data system, and others, and provides the merchant with better information to handle charges disputed by consumers, allowing them to often resolve the issue directly rather than have the bank issue a chargeback. Merchants benefit as they’re able to control the message to the consumer, and banks are happy because the sale remains and they reduce their operating costs. It’s the sort of technology-enabled, data-driven system that is a true win-win. One of the problems that such systems help tackle is so-called “friendly fraud,” a term used to describe a situation when a customer experiences “buyer’s remorse” and “tries it on” by putting in a claim directly with their bank or card issuer for a refund, when the sale did in fact legitimately occur. Julie Conroy, Research Director of the Aite Group, says that in the U.S., friendly fraud and chargebacks would likely near $130 million in Q1 2017 alone. 
Read and share the full article on ERP in News. Originally published at Tech Trends.
Working Together to Build a Big Data Future
10
working-together-to-build-a-big-data-future-14cf7b0fdab
2018-10-04
2018-10-04 23:59:34
https://medium.com/s/story/working-together-to-build-a-big-data-future-14cf7b0fdab
false
1,124
Showcase for the latest disruptive technology that is changing the education landscape globally
null
EdTechTrends
null
Tech Trends
edtech-trends
TECHNOLOGY,EDUCATION,VIRTUAL REALITY,TECH,STARTUP
alicebonasio
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Alice Bonasio
Technology writer for FastCo, Quartz, The Next Web, Ars Technica, Wired + more. Consultant specializing in VR #MixedReality and Strategic Communications
1663ada6f01e
alicebonasio
7,865
608
20,181,104
null
null
null
null
null
null
0
null
0
647065dcf606
2017-11-20
2017-11-20 21:13:16
2017-11-27
2017-11-27 15:52:25
5
false
en
2017-11-27
2017-11-27 18:51:06
11
14cfc43459dc
3.180503
15
0
0
One of the best parts about working in venture capital is all the amazing technology we see. Over the past few years, we’ve been…
5
Two Sigma Ventures’ Holiday Gift Guide One of the best parts about working in venture capital is all the amazing technology we see. Over the past few years, we’ve been fortunate to back a few companies at the cutting edge of robotics, artificial intelligence, and consumer hardware. These portfolio companies and the products they’ve built have brought a lot of joy into our office, so we wanted to share some of our favorites with you for the holiday season. We hope these social robots, STEM-inspired toys, and biometric wearables will bring as much happiness to you as they have brought to us. Without further ado, and with a little help from our portfolio companies, here’s the inaugural 2017 Two Sigma Ventures Holiday Gift Guide: Anki: Combining three of our favorite things — robotics, artificial intelligence, and The Fast and the Furious movie franchise — the Anki Cozmo and Anki Overdrive are the perfect toys for the Hot Wheels lovers in your family. Just look to the Amazon FAQ. Question: “Is it REALLY awesome?” Answer: “Yes. It is REALLY awesome.” Canary: Whether you want to watch your dog all day, keep an eye on your family, or need a new home-security system, Canary is your all-in-one camera. Plus, once you introduce that fire hazard of a Christmas tree to your cat, you’re going to love Canary’s HD camera. Jibo: He may be small, but he packs a big personality. As one of Time’s Best Inventions of 2017, you can’t go wrong with everyone’s favorite pet robot. This “cuter than Alexa” social robot will steal your family’s heart. Just don’t forget to bring him home from the office when you tell your kids you will, or there will be tears (yes, we’re looking at you, Colin Beirne). #JiboForPresident LittleBits: For the budding electrical engineer or inventor in your family, there’s no better way to get them hooked on circuit boards and robotics than with a build-your-own Star Wars kit from LittleBits. It may be recommended for kids 8–12, but that hasn’t stopped our team from enjoying it. Play Impossible: For your nieces and nephews who should be playing fewer video games and spending more time outside, the Play Impossible Gameball is the perfect way to bridge their love of tech and the physical outdoors. Don’t tell Lindsey Gray’s nephews, but they’re all getting one this year. Tinybop: For the fledgling 6 to 8-year-old scientist in your life, Tinybop offers mobile apps to explore the human body, space, machines, mammals, plants, and more. Get your kids hooked on science, not Angry Birds. Whoop: For the elite athlete in your family, Whoop is a wearable wristband that helps you optimize your performance by quantifying the strain, sleep, and recovery your body needs. Now for the fun part: let’s see those Whoop stats after your company holiday parties. X.ai: For the startup founder or VC in your life, a subscription to x.ai is a must. It’s guaranteed to be the surest way to their heart (or at least their calendar). Plus, no more ignored brunch texts or missed dinners once you have Amy or Andrew on your side. Pro tip: add yourself to their VIP contacts list, and you’ll be able to put invites directly in your loved one’s calendar. Happy Holidays and have fun shopping! Let us know what your reviews are, and make sure to shoot us a note if you’re building something interesting. The views expressed herein are solely the views of the author(s) and are not necessarily the views of Two Sigma Investments, LP or any of its affiliates. 
They are not intended to provide, and should not be relied upon for, investment advice.
Two Sigma Ventures’ Holiday Gift Guide
203
two-sigma-ventures-holiday-gift-guide-14cfc43459dc
2018-05-31
2018-05-31 23:38:37
https://medium.com/s/story/two-sigma-ventures-holiday-gift-guide-14cfc43459dc
false
622
We support companies using data science and advanced engineering to create the future.
null
null
null
Two Sigma Ventures
two-sigma-ventures
null
TwoSigmaVC
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mickey Graham
Content & Brand Manager at @TwoSigmaVC. Former Comms Manager at @Work_Bench. Developer, marketer & designer. @USC Annenberg & @SidwellFriends Alum.
4d7ae07e0b99
mickey_graham
1,064
1,420
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-26
2017-12-26 13:45:01
2017-12-26
2017-12-26 13:52:17
1
false
en
2017-12-26
2017-12-26 13:53:07
5
14d18f887beb
2.739623
0
0
0
Artificial Intelligence has influenced our lives greatly over the last few years. Although we are yet to witness full-fledged…
1
How Artificial Intelligence is Influencing Customer’s experience in Banking Sector Artificial Intelligence has influenced our lives greatly over the last few years. Although we are yet to witness full-fledged applications of Artificial Intelligence, the range of AI has already covered a vast area in the technological space, which has directly or indirectly affected us. Following Machine Learning protocols, AI-powered machines are intelligent enough to learn on their own, and this, in fact, has proved to be quite a boon for mankind. If we look at its progress over the last few years, we find that AI has influenced almost every domain, and today it is no wonder that banks as well are looking for new ways to optimize internal processes, improve customer experience and, hence, customer satisfaction. Banks are financial institutions which are required to stay up 24/7 because financial transactions can happen at any time of day; though from the user’s perspective, the actual problem is the lack of 24/7 customer support. Nowadays, however, a few banks are providing such support through the adoption of chatbots. AI is being implemented on a large scale by enterprises, big and small, and likewise some of the biggest banking institutions have identified the potential of artificial intelligence in banking: 1. Understanding Customers’ Behaviour: Understanding your customer is much more than acknowledging their basic statistics. AI can give a profound analysis of customers’ behavioural patterns and financial reports rather than only looking at fundamental data, for example age, gender or salary. Banks are becoming customer-centric by getting to know their customers more closely than ever, with the help of AI. This is the reason why robo-advisors are in great demand these days. AI has improved banks’ understanding of customer insights. Understanding client behaviour, specifically, is one of those areas where AI can truly be used to follow up on patterns in the available information and design organized data structures that have been out of our reach until now. This data can in turn be used to provide better customer support, process optimization, personalized suggestions, and much more to the customers. 2. Customer Onboarding: By knowing the customer appropriately, AI can help create a seamless, assisted onboarding process that will eliminate various steps that customers as well as banks have to follow. For example, AI-powered chatbots can be used to quickly answer fundamental inquiries, which in turn automates frequently repeated queries. This supports a bank’s quest for delivering more on-demand service and support across channels. By implementing AI, banks can engage with their prospective customers at an early stage using a voice digital assistant, and vice versa for the customer. 3. Easy Risk Management: For any bank, the most important thing is to manage loopholes and the risks involved. And with AI integration, risk management is as good as solved: thanks to its predictive nature, risks can be assessed, calculated and mitigated before they pop up and harm the chain. In other use cases, AI is being applied to fraud detection & prevention in banking, and various fraud tools exist that can mine data to uncover the meaningful patterns that banks rely on. By using AI, these risks can be minimised and automated analysis can be performed. 
Probing such large amounts of data manually, without error, was simply not possible. By leveraging AI, virtually any financial services organization, no matter how big or small, can clear routine hurdles in new, innovative ways. Ready to take your organisation to the next level with AI? FuGenX can help you integrate the latest AI technologies into your business. FuGenX is one of the best artificial intelligence companies in Texas and has helped many global companies increase their business revenue. FuGenX has also been rated among the best automation startups in Chantilly and the best machine learning development companies in Dallas. Reach FuGenX at fugenx.com. FuGenX will help you design an app on par with your requirements.
How Artificial Intelligence is Influencing Customer Experience in the Banking Sector
0
how-artificial-intelligence-is-influencing-customers-experience-in-banking-sector-14d18f887beb
2018-06-07
2018-06-07 06:49:13
https://medium.com/s/story/how-artificial-intelligence-is-influencing-customers-experience-in-banking-sector-14d18f887beb
false
673
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Amelia Davies
null
fad15ffbd4a
ameliaadavies
3
4
20,181,104
null
null
null
null
null
null
0
null
0
fc74111198b
2018-09-11
2018-09-11 17:37:57
2018-09-14
2018-09-14 09:45:19
13
false
pt
2018-09-14
2018-09-14 09:45:19
30
14d24e51d7f2
4.539623
10
0
0
In August, the Operação Serenata de Amor team continued its work with a focus on the new project that is part of the Data Science…
3
Operação Serenata de Amor Monthly Report 020 In August, the Operação Serenata de Amor team continued its work with a focus on the new project that is part of the Data Science for Civic Innovation Program of Open Knowledge Brasil (OKBR). That is also the main development we will cover in this report, but not before sharing all the news and events from the past month. If you missed our reports from the last three months, no problem, here are the links: Relatório #19, Relatório #18 and Relatório #17. And on our Medium page you can find every report we have ever written. Partnerships & Collaboration: Anyone who has followed our project from the beginning knows how important Irio Musskopf and Felipe Cabral are in Serenata's history. Together, they wrote this piece explaining their new relationship with the project. They recently stopped being part of the core team, but we still count on their collaboration on specific projects. Mozilla Foundation: We are continuing our Natural Language Processing (NLP) research project with the Mozilla Foundation. The goal is to create open-source libraries that will allow us to understand (and untangle) the legal language behind bills and other government documents. Two years of Serenata: Serenata de Amor celebrates another anniversary and we have every reason to celebrate: we are 70,000 strong on Facebook! Rosie & Jarbas are back! Let's talk about all of this RIGHT NOW. Come with us: Hey, stranger! After countless requests and a few "please come back" messages, she is back: our ROSIE! The robot went through a sabbatical from tweeting (but not from analyses!) and is now back to reporting on her Twitter account the suspicious expenses related to the Quota for the Exercise of Parliamentary Activity (CEAP). And she did not come back alone: Jarbas, after receiving improvements and updates to support Rosie, is also online again, so you can resume your searches there. Jout Jout: We are fans of Jout Jout and she is a fan of Rosie. ❤ She made a few videos to show that there are many ways to participate in politics, well beyond elections. Social oversight (also known as participatory management) is popular participation in the governments and mandates of all branches of power in our country, and it is the topic of this video. Watch it now! Civic Innovation Program: Natália Mazotte, executive director of Open Knowledge Brasil, represented the Civic Innovation Program on several occasions, explaining the projects' guidelines and putting the future on the agenda. Você muda o Brasil: How can citizens be motivated to monitor public spending and mobilize to participate actively in politics? That was the theme of the "Você muda o Brasil" event, which brought together experts from several fields to discuss the subject. Watch the video. I Laboratório de Boas Práticas de Controle Externo: The topics addressed at this event, held on September 3rd and 4th, were related to audit and oversight activities, to encouraging transparency and social oversight, and to improving the internal management of the Courts of Accounts. II Congresso Pacto Pelo Brasil: held from August 20th to 23rd, it was an opportunity to discuss several topics related to governance, integrity and technology, presenting good practices and showcasing solutions aimed at public-sector efficiency. 
On the TV screen… We had two really nice interviews with Serenata's founders this month. The first, featuring Irio, was about the monitoring of public spending by institutions and volunteers. Of course he talks about Rosie and recalls the most memorable cases she has found in the Chamber of Deputies' expenses. Watch it here. In the second interview, this one with (Eduardo) Cuducos, a special SPTV series about the Legislature shows the tools that citizens can use to monitor the work of deputies and senators; of course we are among them, with our beloved robot. Watch it! Cuducos also gave an interview to the Training Center crew, in which he explains what it is like to work with technology as a political voice. The interview is really nice and detailed, telling how this whole journey began for the Serenata co-founder. Tony never stops: This guy travels across Brazil to talk about Serenata. This time he went to Recife to talk about millions at the international TEDx conference, which featured a line-up of 12 speakers. He was also in Brasília taking part in the August 2018 edition of Ossobuco. Both talks will have video content, and we will share it with you in the next report, as well as on our official Facebook page. Stay tuned. It is our anniversary and we have just one request for this special date: spread the word about our project! Tell your friends, share our posts, and help us go further and further. Our project's links:
Operação Serenata de Amor Monthly Report 020
50
relatório-mensal-da-operação-serenata-de-amor-020-14d24e51d7f2
2018-09-14
2018-09-14 09:45:20
https://medium.com/s/story/relatório-mensal-da-operação-serenata-de-amor-020-14d24e51d7f2
false
832
Artificial intelligence for social oversight of public administration
null
operacaoSerenataDeAmor
null
Operação Serenata de Amor
serenata
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,HACKTIVISM,DATA JOURNALISM
RosieDaSerenata
Politics
politics
Politics
260,013
Tatiana Balachova
null
b37854ff76ab
tatianasb
94
1
20,181,104
null
null
null
null
null
null
0
null
0
f36cd3eb764c
2018-02-19
2018-02-19 08:04:14
2017-11-30
2017-11-30 14:30:11
3
false
en
2018-03-06
2018-03-06 08:10:07
1
14d24fb927d5
6.700943
7
1
0
null
2
Forecasting beer sales for HEINEKEN’s customers As one of the world’s leading brewers, HEINEKEN works together with its customers to offer a diverse array of over 250 brands to consumers in over 170 countries, with breweries in 70 of them. One of the leading challenges in serving consumers is to ensure on-shelf availability of its products in retail outlets. Market research has shown that when consumers consistently notice their favourite brand is missing from the shelves, they may quickly switch to a competitor’s brand instead. This blog outlines a tutorial to get started with retail sales forecasting, a necessary component in preventing such situations. Advanced Analytics for sales and stock predictions On-shelf availability is in the interest of both HEINEKEN and its retail customers, which is why they try to leverage their joint analytics expertise and data sources to predict stock-out events and determine which actions to take to prevent them from happening. Data on stocks, sales, orders and deliveries throughout the supply chain are needed to create adequate predictive models and find the cause of stock-outs. For example, production issues at the brewery may cause stocks to run out, but delivery problems or underestimated orders may cause stock-outs at distribution centers and retail outlets. Due to this complexity, lead times are important. If you anticipate unusually high demand at outlets due to promotions, you will need to know several weeks in advance, in order to brew the necessary extra volumes of beer. The newly established HEINEKEN Insights Lab supports Operating Companies with advanced analytics capabilities. In one of its current experiments, the Insights Lab is creating a forecasting model for stock levels and sales in retail outlets. HEINEKEN on-shelf-availability experts who work internally at customers’ headquarters can then use a dashboard to generate predictive insights, and take action to prevent stock-outs. The tool is meant as an addition to the existing inventory replenishment systems that customers use. In this blog, we outline a simplified version of the retail sales forecasting approach taken by the Insights Lab tool, an approach that leverages the large amounts of data available in modern supply chains. Data protection disclaimer In all of its experiments, the HEINEKEN Insights Lab takes special care to be compliant with current data protection laws, as well as to anticipate upcoming regulations such as the GDPR. Although this blog uses freely available open datasets, in the actual experiment data was shared voluntarily by customers, with agreement on what it would be used for. Creating a sales forecast: the problem For a demonstration, we use data from the Walmart Recruiting - Store Sales Forecasting Kaggle competition. It contains three years of weekly sales by store and department for Walmart stores. Note that our method can be applied to store-product combinations in the same way. Each store-department combination (3,331 combinations in total) can be considered an individual time series, which makes the problem considerably more complex. The traditional approach is to fit a simple model to each time series individually, but this does not leverage the information coming from shared trends and seasonality, nor does it allow the more time-consuming training of more complex models. 
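As a concrete starting point, the following is a minimal sketch, not taken from the post itself, of loading the competition data and counting the store-department series described above. It assumes the Kaggle files train.csv, features.csv and stores.csv have been downloaded locally, with column names as given in the competition's data dictionary.

import pandas as pd

# Weekly sales per store and department, plus external features and store metadata.
sales = pd.read_csv("train.csv", parse_dates=["Date"])
features = pd.read_csv("features.csv", parse_dates=["Date"])
stores = pd.read_csv("stores.csv")

# Join the sales records with the provided feature set and store information.
df = (sales
      .merge(features, on=["Store", "Date", "IsHoliday"], how="left")
      .merge(stores, on="Store", how="left")
      .sort_values(["Store", "Dept", "Date"]))

# Each store-department combination is one time series.
print(df.groupby(["Store", "Dept"]).ngroups)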
Our approach here is to train a single model for all time series, including store and department (and indeed other features) as explicit features in the model (besides the usual engineered time series features), and to select an algorithm that can deal with such a high-dimensional categorical feature space. We also use a cross-validation strategy that mimics daily retraining in real-life situations. We start by reading in the raw data and joining the sales data with the provided feature set. These can be found on the Kaggle competition’s page. Feature engineering The strength of Machine Learning methods when applied to time series problems is that they don’t suffer as much from the usual challenges of time series forecasting. With serial correlation, non-stationarity and heteroscedasticity, the assumptions of Ordinary Least Squares often don’t hold, and it can be time-consuming to determine the right specification of ARIMA models. These issues can at least in part be tackled using Machine Learning, with the somewhat ‘brute force’ approach of adding many features that capture such behaviours, such as lags and datetime indicators. Note that multicollinearity or the curse of dimensionality may still occur, so techniques such as recursive feature elimination or dimensionality reduction should be considered when actually tuning a model like this. As the sales appear to be highly seasonal, we include week numbers as a categorical feature. Each feature engineering step is wrapped in a suitable class, so as to allow the use of pipeline methods. A commonly used but very effective feature is the exponentially weighted moving average. In fact, many traditional sales forecasting systems use this to forecast sales directly. Rather than a normal moving average, which weights all observations in the rolling window equally, it uses all past observations, but with exponentially declining weights the further one goes back in time. This way, it captures a short-term average without suffering too much from (recent) outliers. The selected alpha value determines how fast these weights decay. The following class adds several smoothed features based on a list of alphas. Of course, the most common features in time series Machine Learning are lagged features. The class below creates lagged features based on a dictionary that specifies which columns should be given what lag. It contains both past and current (negative and zero) and future (positive) lagged features. The latter (positive lags 1 and 2) include the target variable, as well as information about the future which we may reasonably expect to have in real life, such as temperatures (from a weather forecast) or whether it is a holiday. The following uses a class from the BDR public repository: PdFeatureChain allows the chaining of preprocessing steps without converting data frames to numpy arrays every time, as sklearn does. Note that the lc1 step creates lags by week number, meaning that they show the sales of last year during that week, which turns out to be quite a powerful feature. lc2 then creates the usual week-on-week lags. Creating time series splits In real applications, it is often desirable to retrain a forecasting model regularly, in order to capture the effect of short-term trends as often as possible. A cross-validation strategy that has test splits for every consecutive week can mimic this behaviour and thus show how the model would perform in reality. 
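The post's actual PdFeatureChain, smoothing and lag classes are not reproduced here; the following is a hedged pandas sketch of the two feature families described above, computed per store-department series on the joined data frame. The alpha values, lag choices and column names are illustrative only, and the sign convention follows pandas (shift with a positive argument looks into the past), not necessarily the post's lag dictionary.

import pandas as pd

def add_ewma_features(df, col="Weekly_Sales", alphas=(0.1, 0.3, 0.6)):
    # Exponentially weighted moving averages of past sales per series;
    # shift(1) ensures a week's feature only uses information from earlier weeks.
    for a in alphas:
        df[f"ewma_{a}"] = (df.groupby(["Store", "Dept"])[col]
                             .transform(lambda s: s.shift(1).ewm(alpha=a).mean()))
    return df

def add_lag_features(df, lags=None):
    # shift(n) with n > 0 takes the value n weeks in the past; n < 0 looks ahead,
    # which is only legitimate for information known in advance (e.g. holidays).
    lags = lags or {"Weekly_Sales": [1, 2, 52], "IsHoliday": [0, -1, -2]}
    for col, lag_list in lags.items():
        for n in lag_list:
            df[f"{col}_lag{n}"] = df.groupby(["Store", "Dept"])[col].shift(n)
    return df

df = add_lag_features(add_ewma_features(df))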
The IntervalGrowingWindow class from the BDR public repository allows us to create a growing window of train and test splits on a given time interval (1 week in our case). Note that if you want to use such cross-validation for extensively grid-searching optimal model parameters, you should probably increase test_size to such an extent that only 5-10 splits remain. All that remains is to convert our data frame to a numpy array and prepare it for cross-validation. The following function forces columns to a floating dtype, and factorizes them if it encounters strings. It’s important you do this transformation, because mixed-dtype (dtype=object) numpy arrays tend to get copied unnecessarily across joblib workers as sklearn parallelizes the cross-validation procedure. This brings our feature set to: Initializing the model As we are implementing a very costly cross-validation strategy (84 splits!), we need an algorithm that is both accurate and trains fast. In this example, we use the LightGBM library for gradient boosting, which achieves accuracies close to XGBoost, but with greater speed. It also supports the use of categorical features, through a procedure in which multiple categories can be selected in each tree split. A defining feature of sales forecasting is the fact that we’d rather overforecast than underforecast, because if we are using the sales forecast to plan restocking, we need to make sure that we have enough and stores don’t run out over time. To this end, we employ an asymmetric objective function that penalizes negative errors (underforecasts) slightly more heavily, by tweaking the squared loss: We can feed it to the gradient booster by providing the gradient and hessian (first and second order derivatives): We can further wrap our estimator in sklearn’s MultiOutputRegressor, meaning that we will use the same feature set to forecast multiple target variables separately. This is useful in sales forecasting, because due to production and delivery lead times, we may want to forecast multiple weeks (or indeed days) ahead. Note that in a well-tuned approach one would rather select an individual feature set per day-ahead or week-ahead prediction. For the cross-validation procedure, we use a slightly adjusted version of sklearn’s cross_val_predict method, as by default this does not allow train-test splits that are not the same size as the full dataset (which is inherently the case for growing-window CV splits). The adjustment can be found below the post. Evaluating results We can now append the predictions to the original dataset, and compare them to the target using metrics such as MSE (or MAE, although this makes less sense since our objective function was a form of squared loss), or visualize selected store-department combinations. As it turns out, last year’s sales were by far the most relevant features, especially for predicting sales during holiday peak periods. Concluding: how to create an inventory management system Following these steps, you now have a working sales forecasting model for retail outlets! Extending this into an inventory management tool is relatively easy: all one needs is an opening stock for the first day of making predictions, and by cumulatively adding the expected sales, one can make suggestions for deliveries each week (or indeed multiple weeks in advance). Additional material: custom CV predict method
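The post's own code snippets are not reproduced in this text, so as one illustration, here is a minimal sketch (not the author's exact implementation) of how an asymmetric squared-loss objective of the kind described above could be supplied to LightGBM's sklearn API and wrapped in MultiOutputRegressor. The penalty factor of 2 and the hyperparameters are assumptions for illustration only.

import numpy as np
import lightgbm as lgb
from sklearn.multioutput import MultiOutputRegressor

def asymmetric_mse(y_true, y_pred):
    # Squared loss scaled up when we underforecast (prediction below actual),
    # so the booster prefers to err on the high side; the factor 2 is illustrative.
    residual = y_pred - y_true
    weight = np.where(residual < 0, 2.0, 1.0)
    grad = 2.0 * weight * residual   # first derivative of weight * residual**2
    hess = 2.0 * weight              # second derivative
    return grad, hess

# One model per forecast horizon (e.g. 1 and 2 weeks ahead) on the same feature set.
model = MultiOutputRegressor(
    lgb.LGBMRegressor(objective=asymmetric_mse, n_estimators=300, learning_rate=0.05)
)

Calling model.fit(X, Y) with one target column per horizon in Y then mirrors the multi-week-ahead setup described in the post.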
Forecasting beer sales for HEINEKEN’s customers
58
forecasting-beer-sales-for-heinekens-customers-14d24fb927d5
2018-05-23
2018-05-23 16:25:52
https://medium.com/s/story/forecasting-beer-sales-for-heinekens-customers-14d24fb927d5
false
1,630
DATA SCIENCE | BIG DATA ENGINEERING | BIG DATA ARCHITECTURES
null
null
null
bigdatarepublic
bigdatarepublic
DATA SCIENCE,DATA ENGINEERING,DEEP LEARNING,MACHINE LEARNING
bigdatarep
Big Data
big-data
Big Data
24,602
Dennis Ramondt
null
b45ebf7995ac
daramondt
5
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-06
2017-11-06 20:41:55
2017-11-06
2017-11-06 21:13:49
0
false
en
2017-11-06
2017-11-06 21:13:49
0
14d32e927b05
2.815094
0
0
0
There’s something really interesting about where we are going nowadays in terms of technology. Companies have started investing in the next…
5
The Process: Us and the future of AI There’s something really interesting about where we are going nowadays in terms of technology. Companies have started investing in the next big thing which, according to them, is AI. Every new release of every new product is just a bit smarter, a bit more friendly, and slowly learns to be better at its function. A common example of AI done very well is Google Photos. The concept is simple: you back up your photos to the cloud and they are automatically sorted, tagged and organized for you. Some very robust algorithms will identify your friends, family, pets, places, etc., and you will be able to search for your memories like a normal person. Queries like “Pictures in Paris”, “Me and Brothers” or “Party in Las Vegas” all return more accurate results than ever. Google Photos is a great product. It is one that I use every day and can’t really name an alternative for. It just works, and works particularly well. There is one thing about this kind of use for AI that bothers me: no one has a clue how it works. No, I’m not talking about neural networks or the statistical models that these algorithms are based on, I’m talking about the most important thing: the process. Do you know how to build a car? I certainly don’t, but some people out there do, for sure. We have had cars for a while now and there’s no doubt that we, as humans, know the physics behind how a car works. We know which parts go in what places and what they are going to do if they work correctly. They might fail, yes, but they will never stray from one or two predictable behaviours. On the other hand, AI-based algorithms don’t share this characteristic. Sometimes we don’t know how they are made or what they look for in the data they are given. That could lead to all sorts of results. Imagine an algorithm to recognize a human face. We could look for the shape of eyes, nose, mouth, etc. That is pretty straightforward and we have had algorithms that work like this for a long time. Now imagine an algorithm that recognizes a party. We could look for a lot of people, right? But how many people are required before we can consider it a party? Maybe look for dancing? But what if it is just a dinner? As it turns out, a party is not something we can describe like that: it is a lot more complicated. That’s a perfect application for AI. Feed it a bunch of pictures of parties, of all sorts, and a bunch of pictures of “not-parties”. Let it train and eventually you’ll get a very accurate selector of “parties” and “not-parties”. In this case, AI will solve the problem for us with little effort, while a descriptive algorithm could be developed for years and still not get acceptable results. The worst thing that will happen is that we are going to get the wrong result. But what if we start using AI to do things for us? What if we ask a machine to pick our food? To build our houses? To fight our wars? It is not about what we asked them to do, it is about the sorts of abilities they will learn in the process. We have reached a point where a machine will not only give the wrong answer if it fails: it might make someone sick, it might build unsafe buildings, or it might kill the wrong people. This might be a very negative view of AI; the truth is that so far we have done great things with it. The greatest concern is that we are giving too much to the machine and taking too much from the human race. If they can do everything, how long will it take before we are useless? Before we can be replaced? 
Machines don’t get sick, are not affected by feelings, they don’t get paid and they don’t die. We suddenly became a lot more useless, right? I don’t want people to stop discovering what AI is capable of, of course not. I just want to remind people that we should not forget how things are done or why they work, after all, “humanity” is something the machines will not learn very soon. We are still a part of the process.
The Process: Us and the future of AI
0
the-process-us-and-the-future-of-ai-14d32e927b05
2017-11-06
2017-11-06 21:13:51
https://medium.com/s/story/the-process-us-and-the-future-of-ai-14d32e927b05
false
746
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Rafael Copstein
Computer Science student from Brazil. Passionate about app development, design and entrepreneurship. Android Developer.
18717e8e9173
rcopstein
1
2
20,181,104
null
null
null
null
null
null
0
null
0
2730b23c70c6
2017-11-27
2017-11-27 04:48:50
2017-11-27
2017-11-27 05:39:23
1
false
en
2017-11-27
2017-11-27 05:54:49
7
14d3ba58dbeb
9.350943
4
0
0
By Shweta Doshi and Paul Meinshausen
3
Thoughts on Developing an Executive Data Science Workshop By Shweta Doshi and Paul Meinshausen A few weeks ago a team of four from GreyAtom, including two founders and a mentor and instructor traveled to London to conduct an executive data science workshop for the Chief Data Officer of a top UK bank and his entire leadership team. This workshop was conducted in parallel with a four week practitioner level data science bootcamp held in Pune to upskill a class of analysts and technologists in the bank. This was GreyAtom’s first executive workshop and this blog post will share some of our learnings from developing and conducting it. Why did GreyAtom create an Executive Data Science Workshop? GreyAtom’s core mission is to create practical, hands-on training based on real work for aspiring data scientists. This mission is carried out through our core Commit.Live product. An executive workshop wasn’t part of the original plan. However, two strong reasons motivated us to develop it. Data Science remains a broad term for a whole lot of fairly disparate problems, skills, and capabilities. To train data scientists using real work we need to be constantly learning and updating our awareness of what real data science work involves. We have a lot of combined experience in data science at GreyAtom and we know we still need to be constantly updated and connected to core problems being met in industry. Engaging with leadership at top-tier businesses lets us learn where and how they need their data scientists to add value and focus their efforts. We want to train data scientists to meet specific needs, not to fit mythical stereotypes. The second reason is both complementary and a flip-side to the first. As our team has worked with organizations and corporates to recruit GreyAtom graduates, we’ve realized that a data scientist’s success is not just determined by their individual skills and knowledge. It also greatly depends on whether their leadership understands how to enable them to be effective as data scientists. In other words, companies need their leaders to learn to deploy and manage data science as much as they need their analytical employees to learn how to do data science. GreyAtom will therefore dedicate time to spend with business leadership and help them refine and improve their ability to deploy data science. We’ll also help our students identify companies where the leadership has a strong grasp of data science and therefore where our students can be successful. So what did we learn? We developed the workshop with a few key thoughts in mind. We knew that the workshop participants would not be people who write code on a regular basis or as a core part of their job. On the other hand we knew that the participants were an informed audience and since they were technical leadership, they would understand the basics. The participants were looking for a workshop that would provide practical knowledge that they could apply to the actual problems they were facing within the bank. Here are some of the principles that we applied and learned to apply through our first iteration of the workshop. Keep things adaptive and interactive. We had prepared enough content to more than fill the workshop’s allotted time. Dale Carnegie called this developing Reserve Power: “Assemble a hundred thoughts around your theme, then discard ninety.” In our case we discarded plenty and also still kept more than enough for the full three-day workshop. 
To decide what to actually use we could have compiled our own list of the top N topics that would fill each day most valuably. We knew that would be the wrong way to go because value is contextual. Despite all we did to understand what the team wanted before the workshop began, we still didn’t expect that our list of the most important topics would exactly match a list they would compose. We also knew that our role was to be trusted advisors to the bank. As advisors we did not expect them to compile an exact list of topics (although we of course did ask them for their initial input). They were looking to learn from us and part of that learning consisted of learning what they needed to learn. So instead we compiled all the sessions that we believed would merit attention in a five day workshop (even though the workshop would only be three days). And then we put ourselves in a position where throughout each day we could regularly update our schedule of material. If a topic came up that clearly deserved more time and attention we would push back other material. If a topic was simpler than we imagined or didn’t resonate as importantly with the team as we thought it might, then we would shift away from that topic in a direction guided by the participants. To put it briefly, we designed the workshop so that it would run like a Choose Your Own Adventure course. This worked superbly and the participants really engaged in keeping the sessions focused on areas that mattered most to them. We believe it succeeded because we were prepared and did the work ahead of time to map out all areas of potential interest and develop material in those areas. The key thing is not just to be willing to adapt; it’s to be prepared to adapt. Be as connected as possible to real work being done in the organization It was helpful to have a training session for the bank’s data scientists running in parallel with the executive workshop. This happened for us more by chance than by design, but it’s definitely something we’ll try to engineer purposefully in the future. Reviewing ongoing progress in the data science training with the data scientists’ leadership merged the operator and executive viewpoints together clearly and concretely. It also made everything feel more compelling and important; and that feeling makes a big difference in how engaged participants get. We also made an effort to connect the workshop content directly to the leadership team’s own real work. For example, at one point in the workshop we explored different architectural patterns and designs for data platforms. During that session we invited some of the participants who worked on the bank’s data platform to walk through their own designs in light of some of the principles and ideas we had discussed. We did something similar when reviewing the topic of machine learning models. Material resonates more when the examples come directly from the participants. Instead of just lecturing and seeing the participants as passive recipients, we structured the workshop so that they were regularly taking the reins and applying the concepts we discussed to their own problems. Those sessions gave us a chance to give feedback directly relevant to the participants’ responsibilities. That was valuable for them. This also gave us a chance to understand their responsibilities and problems more clearly. We were then able to come back to Mumbai and incorporate our findings into GreyAtom’s curriculum development. 
When you’re running an educational workshop it can be scary to ask participants to take the driver’s seat. It turned out that those sessions were some of the most valuable and memorable of the weekend for both sides. Get hands-on for a part of the workshop Even when senior executives come from technical, engineering, and even computer science backgrounds, they may have last written code half a decade or more ago. Some (or most?) may have always worked in business functions and never wrote code in a professional (or otherwise) capacity. So an executive data science workshop was going to be quite different than GreyAtom’s core data science training where students are spending most of their time elbows deep in the work of writing code. However, we decided that it would be useful to give our executive participants a chance to experience the hands-on world of data science for a part of the workshop. And it turns out they enthusiastically agreed and enjoyed this part of the workshop. We believe strongly that data scientists need to regularly step away from their terminals and code to see and explore the real world context of their business problem. If they’re building a user-facing product, they should use the product themselves and talk to users. If they’re building a tool to help business decision-makers, they should go and talk to those decision-makers and understand their context as deeply as possible. This same principle applies to those who lead and manage data scientists. You will be a far more effective data science leader if you develop a basic understanding of the tools data scientists use and the engineering/developer environments they work within. During the workshop we gave participants the chance to build a couple of toy models in a jupyter notebook. Participants have probably heard of jupyter, but they may not have had the chance to see for themselves how the tool enables rapid iteration, visual data exploration, and sharing and collaborating on ongoing work and analysis. Recognizing the right questions to ask — e.g. know your data generating process For all the correct and helpful answers that data scientists can provide to business decision-makers and leaders, they can also provide a lot of wrong, misleading, or ultimately impractical and unhelpful answers. Leaders can avoid those situations by getting better at identifying the kinds of questions within their organizations their data scientists are in a position to answer. You save a lot of time by not sending your data scientists on months-long goose chases. Then when data scientists return with answers, it’s also important to know what you need to be asking in order to help validate and verify their results. The way we think about it, the right position to take in data science is “Trust, but Verify”. A good example of how and why this is important is “The Replication Crisis” happening in the field of psychological science. For those unfamiliar with this ongoing story, it basically amounts to an evolution in the methods of science wherein simpler statistical methods are being recognized as insufficient and potentially misleading and are being replaced by more rigorous and precise methods. This is a good process for the discipline of psychological science and it’s being driven by more careful and informed (and skeptical) consumption of scientific findings. Business leaders have much to learn from this and should apply a similar degree of careful and skeptical reading of the work done by data scientists in their organizations. 
This was a critical part of the workshop. One particular example was our discussion of what in data science is called the ‘data generating process’. In a world beset by the marketing material of Big Data, executives should be wary when results appear to emerge from vague, amorphous datasets and sources. Leaders should be ready to ask a systematic set of questions that prompt data scientists to clearly identify and document the processes that generate the data they used. This includes the technological routes and procedures that enabled the data to reach their database. It also includes an informed and clear presentation of the statistical and probabilistic models that fit the phenomena they’re investigating (whether it’s customer complaints to a call center, or transactions on a website, or credit histories for loan applicants). Even when machine learning algorithms that are difficult to interpret are ultimately being used in production, business leaders should expect to see statistical and visual explorations of the underlying data precede more complex and black-box methods. Some additional and concluding thoughts As GreyAtom has developed its core data science training programs and Commit.Live product in the past year, we’ve thought a lot about the balance between effectively teaching the foundations of data science and catering to our students’ keen interest in the cutting-edged methods and techniques of deep learning and artificial intelligence, etc. We’ll keep refining and navigating that balance. We also think it’s fascinating and important to think about the skills we currently consider fairly unique to data scientists which might become a more standard body of knowledge for analytical professionals over the course of the next few years. Back in the 18th century the German philosopher/poet Goethe called double-entry bookkeeping “one of the finest inventions of the human mind”. That’s a pretty generous description for something that most of us probably consider a fairly banal part of business. More recently the development of spreadsheets comes to mind. To understand how spreadsheets so significantly changed the nature of business it’s worth checking out this fascinating Planet Money podcast on the topic. This quote by a journalist over thirty years ago from the end of the podcast and linked article sums it up: “The spreadsheet is a tool, and it is also a world view — reality by the numbers.” Spreadsheets remain important and useful even for data scientists. For example, John Foreman the head of data science at MailChimp, wrote an excellent book on data science with examples entirely built in spreadsheets (“Data Smart”). But more importantly, knowing how to use spreadsheets almost equates to literacy for many modern professions. So we ask ourselves regularly: what skills that are mostly practiced by data scientists today will soon become a far more universal skill in business and industry? Maybe something like Pandas for basic data cleaning and transformation? Ultimately at GreyAtom we want to build an educational experience that recognizes the value that comes from the broad democratization of some parts of data science as well as the value that comes from areas of extreme specialization and focus that will bring deep innovation. We also believe that as we help to train and develop more data scientists, more work needs to be done to help executive leadership and management enable their data scientists to succeed. 
We thoroughly enjoyed our chance to do our first executive data science workshop. We’ve received a lot of interest from some other companies in doing a workshop with them. We’re not sure yet exactly how this kind of program fits with the core GreyAtom business. But we’re excited to learn and grow and excited to see where our journey takes us. Shweta Doshi is Co-Founder and Head of Strategic Partnerships at GreyAtom and Co-Founder at DataGiri, the largest Data Science community in Mumbai. Paul Meinshausen is an Advisor at GreyAtom and is Data Scientist in Residence at Montane Ventures, an early stage Venture Capital investor based in India and the U.S.
Thoughts on Developing an Executive Data Science Workshop
17
thoughts-on-developing-an-executive-data-science-workshop-14d3ba58dbeb
2018-01-12
2018-01-12 14:59:26
https://medium.com/s/story/thoughts-on-developing-an-executive-data-science-workshop-14d3ba58dbeb
false
2,425
GA DS
null
GreyAtomSchool
null
GreyAtom
greyatom
null
GreyAtom_School
Data Science
data-science
Data Science
33,617
Paul Meinshausen
Data Scientist in Residence at Montane Ventures, Co-Founder & former Chief Data Officer at @gopaysense, @Housing, @Teradata, @datascifellows, @Harvard
d6f3c0efdf33
PMeins
154
112
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-11
2018-09-11 23:56:45
2018-09-11
2018-09-11 23:58:19
1
false
en
2018-09-11
2018-09-11 23:58:19
2
14d69ab2610d
1.90566
0
0
0
Good news: free GPUs do exist!
4
How to Get a Free GPU for Your Deep Learning Research? Good news: free GPUs do exist! If, like me, you need a free GPU for a small research project, there is a free GPU resource available: Google Colab: https://colab.research.google.com/ Google released this collaboration environment, which it had been using internally, in early 2018. Pros: an easy-to-use Jupyter notebook environment; 15 GB of Google cloud storage; a free GPU; easy collaboration with your team; a well-designed UI; data visualization tools. Cons: you cannot always connect to a GPU, and 15 GB may not be enough. How To Set Up The Environment It is quite a tedious process, but I have collected all the steps you need; just follow them and you will get there. First, install the tools needed to mount your Google Drive inside the Colab runtime:

!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse

Then authenticate with Google and authorize google-drive-ocamlfuse:

from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}

The cell prints an authorization URL and asks for a verification code:
Please open the following URL in a web browser: https://accounts.google.com/o/oauth2/auth?client_id=32555940559.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&response_type=code&access_type=offline&approval_prompt=force
Please enter the verification code: Access token retrieved correctly.

Finally, mount your Drive into a local folder:

!mkdir -p drive
!google-drive-ocamlfuse drive

Run A Sample TensorFlow Program

!pip install -q keras

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) 
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Connect To GPU Simply select “GPU” in the Accelerator drop-down in Notebook Settings (either through the Edit menu or the command palette at cmd/ctrl-shift-P).

import tensorflow as tf
tf.test.gpu_device_name()
'/device:GPU:0'
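One reason to mount Drive at all is persistence: the Colab runtime is recycled periodically, so anything left on the local disk disappears. As a small sketch (not part of the original steps, with an illustrative folder name), assuming the Drive mount above is available at ./drive, the trained MNIST model can be saved there and reloaded later:

import os
from keras.models import load_model

# The Drive mount from the setup steps above appears as a local folder named "drive".
save_dir = "drive/colab_models"   # illustrative sub-folder name
os.makedirs(save_dir, exist_ok=True)

# Persist the trained Keras model so it survives the Colab runtime being recycled.
model.save(os.path.join(save_dir, "mnist_cnn.h5"))

# In a later session (after re-mounting Drive) the model can be reloaded:
model = load_model(os.path.join(save_dir, "mnist_cnn.h5"))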
How to Get A Free GPU for Your Deep Learning Research?
0
how-to-get-a-free-gpu-for-your-deep-learning-research-14d69ab2610d
2018-09-11
2018-09-11 23:58:19
https://medium.com/s/story/how-to-get-a-free-gpu-for-your-deep-learning-research-14d69ab2610d
false
452
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Jia Le Xian
Machine Learning Engineer In Healthcare (Merging Blog From Wordpress)
221f1f59b79
jiale.xian
1
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-31
2018-01-31 10:44:00
2018-06-22
2018-06-22 10:01:01
1
false
en
2018-06-22
2018-06-22 10:01:01
1
14d78177598d
3.871698
0
0
0
It’s the Age of Engagement in Customer Experience. What does that mean? In a nutshell, it means that in an increasingly digitized world…
5
‘The Age of Engagement’: High-touch Customer Experience in a Digital World It’s the Age of Engagement in Customer Experience. What does that mean? In a nutshell, it means that in an increasingly digitized world, the need for companies to form a deep and meaningful human connection with their customer base is more crucial now than ever before. The delivery of Customer Experience has been utterly transformed in recent years by leaps in technological advancement. Nowhere has this been more apparent than in the first-touch connections that customers are making with companies. AI and Machine Learning have completely changed the game. But in the race to embrace revolutionary new technologies, often in order to increase volume and reduce cost, it’s easy to lose sight of one very important factor: the value of human connection. The numbers show that humans still prefer to deal with other humans rather than with digital channels — 83 percent, to be precise. When a customer has a great experience with a brand, they stay loyal for longer and spend more. In Voxpro’s view, great customer experiences are no longer delivered by humans or technology, but rather by humans and technology together. We call it ‘Blended AI’. Voxpro Co-Founder and CEO Dan Kiely explains: “2018 will be the year of blended AI — even though interest in AI and chatbots is increasing, human interaction will never go away. While some brands have goals of decreasing their call volume by 50%, real-time interaction will increase (as evident by the Forrester prediction that more brands will phase out email in favor of real-time customer engagement communications). Human interaction will be at the epicenter driving AI in customer service, and is imperative for it to be successful and not see satisfaction levels drop. We still need cognitive thinking that can tweak the algorithms and step in to help customers when needed — brands shouldn’t leave everything to a bot.” This is where things get really interesting. While a customer’s desire to deal with other humans has not changed, the type of human they want to deal with has changed radically, as have the type of issues they want those humans to address. Customers want to deal with agents that already know their name and their entire case history by the time they pick up the phone. They also want them to be able to solve complex issues comprehensively and quickly, and to ensure that these issues will not arise again. In a nutshell, customers want highly individualized interactions with highly skilled and empowered agents. This is the essence of the Age of Engagement. According to Jeffrey Puritt, President and CEO of TELUS International, the future of AI in customer service is focused on directly supporting, rather than replacing, contact center agents, because fundamentally, there are things humans can do that machines just can’t. “Where machines have made very little progress is in tackling novel situations. They can’t handle things they haven’t seen many times before. The limitation of machine learning is that it needs to learn from large volumes of past data. Humans don’t. We have the ability to connect seemingly disparate threads to solve problems we’ve never seen before.” IDC research supports this notion, citing that overall customer service agent numbers are anticipated to go up over the next 2–3 years, with many firms reporting an expected increase between 10–50 percent. 
“Not only will we need more agents, we will see a need for more highly-skilled agents who can solve more complex customer cases and provide cross-channel consistency of service,” continues Puritt. “More than ever before, customer service agents are the key to providing a world-class brand experience.” In the Age of Engagement, humans are rapidly travelling up the value chain, and it is AI and Machine Learning that are fuelling the journey. Here are four examples of how: 1. Machines can handle basic CX tasks at a volume and speed never seen before, thereby freeing up humans to solve more complex problems for customers, while building deeper relationships with them. 2. AI has the power to collect and aggregate vast amounts of customer data across business systems, websites, past customer interactions, and many other sources. This data allows the agent to have a highly personal, informed and time-efficient interaction with the customer. The data also allows companies to provide excellent self-service options, again, freeing up customer service agents for the most complex problems. 3. When customers call for help with minor issues, AI can handle them far more quickly than humans, thereby reducing customer wait times and increasing first-time resolution rates. This allows agents to spend more time with customers when necessary, in turn deepening that human connection. 4. Finally, AI has the power to support that most precious human quality of all: empathy. Xavient, a global IT consulting and software services company, has built an AI-powered analytics platform called AMPLIFY. By coupling voice recognition software with customer data, AMPLIFY can determine when there’s an issue that should be escalated to a live agent. The agent is then fully prepared to deal with the customer in a highly empathetic way, and they are equipped with all of the data they need to provide a swift yet personal resolution. The Age of Engagement will see a revolution in Customer Experience: incredibly powerful technology empowering agents to deliver the most high-touch, high-care experience for a whole new generation of customers. The deep human connection that this will forge between customers and brands will bring about longer-lasting and more valuable relationships between them. Voxpro — powered by TELUS International is leading the way in combining the best of both humans and machines so that the global brands we partner with are perfectly positioned to thrive in this new era.
‘The Age of Engagement’: High-touch Customer Experience in a Digital World
0
the-age-of-engagement-high-touch-customer-experience-in-a-digital-world-14d78177598d
2018-06-25
2018-06-25 17:45:29
https://medium.com/s/story/the-age-of-engagement-high-touch-customer-experience-in-a-digital-world-14d78177598d
false
973
null
null
null
null
null
null
null
null
null
Customer Service
customer-service
Customer Service
18,984
Voxpro
International award winning provider of Multilingual Customer Experience #CX & Technical Support solutions to giant global brands. www.voxprogroup.com
28bf8a6ae737
Voxpro
298
640
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-06
2018-04-06 09:40:00
2018-04-06
2018-04-06 09:54:30
1
false
en
2018-04-06
2018-04-06 09:54:30
0
14dabf8516e3
1.833962
0
0
0
Intelligence. Allegiance. Regenerance.
5
©GenisysRobotics Principles for Autonomous Robots Intelligence. Allegiance. Regenerance. First things first: let us define what an autonomous bot is. An autonomous bot is one that has the capability to think on its own. It has the processing power and memory that enable it to conduct itself in its environment without needing to connect to a central remote entity for commands and directions. Intelligence The bot needs its own spatial intelligence, which allows it to conduct itself in any situation without a central server connection to dictate its actions. In other words, the stimulus and response need to be embedded within the bot, which has its own physical identity. The ability to learn from experience is a must for such a bot. The network connection needs to be accessed only in the case of an unknown situation, or for regenerance. Allegiance The ownership of the bot, and therefore its allegiance, must transfer to the owner. With self-driving cars being the first examples of autonomous bots hitting the roads, the key point to note is that these cars are owned and run by large corporations. The 2018 Facebook data leak suggests that these companies cannot be trusted. With drones and bots becoming mainstream as we progress through the 21st century, the subscription model may be cost-effective, but it will end up concentrating power in the hands of the few who create the technology. In such a scenario, the allegiance of the bot that is responsible for caring for you becomes very important. The key question to ask is: “Are we actually inviting a spy into our house?” Regenerance Consider the robotic pets, robotic nurses, robotic servants, drones, and all the Siris, Alexas and Googles present in your 2025 home. Admittedly, it would be frustrating to have an army of purpose-built bots, each capable of performing only one function effectively. All of them could be invading your privacy every single moment of your existence. In such a scenario, it may be useful to have a single bot that can repurpose itself to do different tasks at different times of the day and at different stages of the owner’s life. Using 3D printing and plug-and-play technologies, it would be possible for a bot to build itself new parts that allow it to regenerate and repurpose itself to suit the needs of its owner. Sudeep Mathur is the founder of genisys robotics and botchain solutions. Follow Sudeep at @theblockchainguy on Twitter and YouTube.
Principles for Autonomous Robots
0
principles-for-autonomous-robots-14dabf8516e3
2018-04-06
2018-04-06 09:54:31
https://medium.com/s/story/principles-for-autonomous-robots-14dabf8516e3
false
433
null
null
null
null
null
null
null
null
null
Bots
bots
Bots
14,158
Sudeep Mathur
null
b78b7a0eaa43
sudeepmathur
3
8
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2017-10-19
2017-10-19 22:01:13
2017-10-19
2017-10-19 22:01:32
4
false
en
2017-10-19
2017-10-19 22:01:32
4
14dc85788351
3.888679
3
0
0
Media partner Synced attended the AI and Society Symposium held October 10–11th in Tokyo, where speakers from industry and academia…
1
Tokyo: AI and Society Symposium Media partner Synced attended the AI and Society Symposium held October 10–11th in Tokyo, where speakers from industry and academia exchanged ideas for practical applications and possible future developments of AI technologies. The symposium sought to bridge the gap between cutting-edge research in academia, state-of-the-art AI applications in business, and safety concerns about AI development; and to stimulate worldwide discussion on social impacts arising from new AI technologies. Speakers covered a wide spectrum of backgrounds, from researchers on machine learning and artificial general intelligence (AGI) to philosophers on ethical and legal issues; from entrepreneurs developing artificial consciousness to venture investors; from computer scientists investigating artificial life evolution to artists working with AI. There were five keynote speakers: Prof. Hiroaki Kitano from Sony Computer Science Lab (CSL), Prof. Oren Etzioni from the Allen Institute for Artificial Intelligence, Dr. Akira Sakakibara from Microsoft Japan, Marek Rosa from GoodAI and Prof. Hod Lipson from Columbia University. Prof. Kitano shared his thoughts on the Nobel Turing Challenge. As a systems biology scientist, he found that knowledge discovery is widely applied in biology research, which is typically a non-linear, complex process difficult to execute using human intelligence alone. He has therefore developed an online AI platform called Garuda, which hybridizes human and machine approaches to systems biology research. He argued that the complexity of living systems entails the study of biology in multiple dimensions of time and space. Motivated by this high-dimensional complexity, the Garuda platform gathers data and gadgets from biology researchers around the world and captures the complexity at a system level. Prof. Kitano also advanced an ambitious proposal: instead of evaluating a machine’s intelligence using the Turing Test, a new intelligence challenge would be building a computer system able to make scientific discoveries by using AI systems’ knowledge discovery ability. This would give us a deeper insight into, and perhaps redefine, the scientific process, while the resulting discoveries could even qualify for a Nobel Prize in physiology or medicine. The authentic Nobel Prize can only be awarded to humans at present; might Kitano’s challenge change that? This challenge could serve as an alternative to the Turing test for machine intelligence, much as the Nobel Prize serves as a benchmark for human intelligence. Dr. Oren Etzioni spoke on the topic “AI good or evil?” Dr. Etzioni is the CEO and Chief Computer Scientist of the well-known independent research institute AI2. He asserted that the depiction of an “evil AI” is far from the current reality of today’s AI technology, and portrays AI technologies in a negative way. He characterized a potentially harmful AI as one that is both intelligent and autonomous. For example, a computer virus is usually autonomous but not intelligent enough to harm humans on a great scale, while the game-playing AI AlphaGo is intelligent but not autonomous because Go is the only task it’s good at. He said his 6-year-old son “is more autonomous than any AI system. He can make his choices, he can cross the street, he can explain himself, he can understand English when he wants to. That’s the difference.” All in all, Etzioni argued that despite the hype and headlines of AI doomsday scenarios, it is already positively impacting the real world and can save thousands of lives in the coming years. 
“AI is a technology, and the choice is ours.” Prof. Hod Lipson from Columbia University predicted six development waves in an AI timeline: established symbolic computing, the current wave of predictive analytics, then cognitive computing, creative machines, physically embodied machines, and finally sentient computing. We have developed AI from rule-based to data-driven, and with the availability of big data (yes, data is precious) and the development of machine learning techniques, machines have learned to distinguish dogs and cats in the last five years. Prof. Lipson expects the next AI breakthrough will be driven by cloud computing. Having worked on self-aware and self-replicating robots that challenge conventional views of robotics, he predicted that the physical embodiment of AI will happen in parallel with developments in materials science, such as the artificial muscle recently invented at MIT. Finally, he said he would not be surprised to see sentient robots and machines realized in the next few decades. In other keynote talks, Dr. Sakakibara covered the latest trends in AI and the “Democratizing AI” strategy, and Marek Rosa presented his roadmap to building general AIs based on hierarchical skill development. There were quite a few other interesting talks. Prof. Kenneth Stanley from Uber and the University of Central Florida discussed his attempts at designing an open-ended learning system mimicking the learning of a biological system; Youichiro Miyake from Square Enix talked about how his team designed the intelligent gaming system in Final Fantasy XV; Prof. Shun’ichi Amari proposed his hypothesis and concerns about the problem of artificial consciousness and free will from a mathematician’s perspective; and Prof. Selmer Bringsjord showed how his team is attempting to solve the artificial general moral intelligence (AGMI) problem with logical formulations. Journalist: Joni Chung | Editor: Hao Wang, Michael Sarazen
Tokyo: AI and Society Symposium
3
tokyo-ai-and-society-symposium-14dc85788351
2018-05-07
2018-05-07 17:32:29
https://medium.com/s/story/tokyo-ai-and-society-symposium-14dc85788351
false
845
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null