Dataset column summary (⌀ marks columns containing null values):

Column | Type | Values
---|---|---
audioVersionDurationSec | float64 | 0–3.27k ⌀
codeBlock | string | lengths 3–77.5k ⌀
codeBlockCount | float64 | 0–389 ⌀
collectionId | string | lengths 9–12 ⌀
createdDate | string | 741 classes
createdDatetime | string | lengths 19 ⌀
firstPublishedDate | string | 610 classes
firstPublishedDatetime | string | lengths 19 ⌀
imageCount | float64 | 0–263 ⌀
isSubscriptionLocked | bool | 2 classes
language | string | 52 classes
latestPublishedDate | string | 577 classes
latestPublishedDatetime | string | lengths 19 ⌀
linksCount | float64 | 0–1.18k ⌀
postId | string | lengths 8–12 ⌀
readingTime | float64 | 0–99.6 ⌀
recommends | float64 | 0–42.3k ⌀
responsesCreatedCount | float64 | 0–3.08k ⌀
socialRecommendsCount | float64 | 0–3 ⌀
subTitle | string | lengths 1–141 ⌀
tagsCount | float64 | 1–6 ⌀
text | string | lengths 1–145k
title | string | lengths 1–200 ⌀
totalClapCount | float64 | 0–292k ⌀
uniqueSlug | string | lengths 12–119 ⌀
updatedDate | string | 431 classes
updatedDatetime | string | lengths 19 ⌀
url | string | lengths 32–829 ⌀
vote | bool | 2 classes
wordCount | float64 | 0–25k ⌀
publicationdescription | string | lengths 1–280 ⌀
publicationdomain | string | lengths 6–35 ⌀
publicationfacebookPageName | string | lengths 2–46 ⌀
publicationfollowerCount | float64 | n/a
publicationname | string | lengths 4–139 ⌀
publicationpublicEmail | string | lengths 8–47 ⌀
publicationslug | string | lengths 3–50 ⌀
publicationtags | string | lengths 2–116 ⌀
publicationtwitterUsername | string | lengths 1–15 ⌀
tag_name | string | lengths 1–25 ⌀
slug | string | lengths 1–25 ⌀
name | string | lengths 1–25 ⌀
postCount | float64 | 0–332k ⌀
author | string | lengths 1–50 ⌀
bio | string | lengths 1–185 ⌀
userId | string | lengths 8–12 ⌀
userName | string | lengths 2–30 ⌀
usersFollowedByCount | float64 | 0–334k ⌀
usersFollowedCount | float64 | 0–85.9k ⌀
scrappedDate | float64 | 20.2M ⌀
claps | string | 163 classes
reading_time | float64 | 2–31 ⌀
link | string | 230 classes
authors | string | lengths 2–392 ⌀
timestamp | string | lengths 19–32 ⌀
tags | string | lengths 6–263 ⌀
0 | null | 0 | b27cb4e1e73f | 2017-08-29 | 2017-08-29 03:17:19 | 2017-08-29 | 2017-08-29 03:37:21 | 6 | false | en | 2017-09-06 | 2017-09-06 10:48:25 | 5 | 13ab0d4e21ec | 7.240566 | 16 | 2 | 0 | Humans learn about our visual world through experience, not labels. How can we get machines to do the same? | 5 | Unsupervised learning of a useful hierarchy of visual concepts — Part 1
This is the first of a series of articles explaining the work we’re doing at Syntropy, and tracking our progress as we make headway through some of the unsolved (or unsatisfactorily solved) problems in machine learning. These articles are split into technical (for Machine Learning professionals) and non-technical (for a more general audience). This article is non-technical, and will have a technical follow-on article.
This article originally formed the first half of Unsupervised learning of a useful hierarchy of visual concepts — Part 2. The non-technical portion of the original article has been extracted, leaving the technical portion as a follow-on.
Look at the picture above. It’s a picture of a bike — that’s obvious — but it’s a bike you’ve never seen before, so how do you know it’s a bike? I expect you’ll list the properties of bikes that we can see here — two wheels, pedals, handlebars, seat etc. That raises the question, though — how do you know those are wheels if you’ve never seen those exact wheels before? The answer here — tyres, spokes, hub — reveals that the properties that make up a bike are themselves made up of other properties. Our visual world is composed of a hierarchy of parts. At the top of the hierarchy there are abstract concepts like bike, car, dog; and if you follow it all the way down you’ll find that everything is made up of basic shapes, lines, colours and textures.
So is that the answer? We see a bike because it’s made up of the properties that define a bike? Actually it’s only one part — there’s another question that will throw a spanner in the works.
Two pedals. The same, but different.
The two pictures above are of the same pedal. Again, it seems obvious, but if you followed the hierarchy down to the bottom you’d notice that actually, the shapes that make up the first image are different to the shapes that make up the second. So how do we know they’re the same when, seemingly, they are made up of different parts?
The answer is that at each level of the visual hierarchy, your brain has learned to tolerate some amount of variance in the parts that compose a particular concept. This is called invariance. At the lower levels of the visual hierarchy, invariance allows you to recognise a rectangle or line even when it is skewed, rotated or scaled; and at the higher levels it allows you to recognise people and objects regardless of viewing angle, lighting conditions, or context.
If we want to build a computer vision system that can see like humans do, then it must have a hierarchy of visual concepts, where each is invariant to some degree of change in the parts that compose it.
Machine learning algorithms take varying approaches to finding invariance in visual data. Deep learning, currently in vogue, takes what you might call a ‘brute force’ approach — show the system thousands or millions of images, label each one (bike, car, dog), and it will eventually discover correlations between similarly labelled images. This works astonishingly well when applied to narrowly defined classification problems, but the invariance that emerges from this process is very much inferior to a human’s hierarchy of invariance.
First, while the system itself does learn invariances throughout the hierarchy, they’re not organised in any useful way, but scattered around indiscriminately within each layer. This makes them uninterpretable at any level other than the top (where we assigned the labels), and is the reason we refer to deep learning as a ‘black box’. If we could look inside and see what the system was learning then we could do far more with it than just recognise bikes and dogs. We could ask further questions about the dog (tail length, fur colour, etc), identify why the system did or didn’t correctly recognise the dog, and easily rectify any weaknesses in the system — all things humans can do naturally.
Secondly, to learn anything at all, deep learning vision systems require thousands of labelled images for each thing you want them to recognise. Compare this with humans, who can learn to recognise objects and people before we can even talk — well before we could be said to be given any labelled data. Learning without labels is called unsupervised learning, while learning from labels is called supervised learning. Much of the learning that humans do is unsupervised, while deep learning is supervised.
Fully-supervised learning is problematic because it can only find invariance by recognising statistical similarities between pictures that have the same label. These are not necessarily true invariances. For example, of 1,000 pictures labelled ‘dog’, 95% might contain whiskers, which is good, but 80% might also contain grass. Does grass define a dog? Of course not! But if the data is skewed in some way like this then the discovered invariances might not map well to the real world.
The skewed invariances problem is often tolerable for deep learning applications because the training data is usually similar to the test data. It becomes a problem though when we need a more general system that we can build upon to perform new tasks. If the hierarchy of invariance was robust, like a human’s, then it would also be reusable. This is the reason humans can learn to recognise something after seeing it only a few times — we are building upon a good mental model of the visual world. Deep learning vision systems do not have a robust mental model, leaving them useful only for performing the exact task they were trained to perform.
Our goal at Syntropy is to help computers understand the world the same way that humans do. To achieve this, we need an algorithm that can learn the same hierarchy of invariant concepts that humans build internally, without relying on labelled data. This can be phrased in machine learning terms as unsupervised learning of a hierarchy of invariant parts. The rest of this article details some initial stages of our approach towards this goal.
In a typical artificial neural network, each neuron is a feature detector. It is looking for some specific thing in the input which, if found, will cause it to activate. In the first few layers of a deep learning network, the neurons typically work together to detect edges, lines and basic shapes. One of these layers might contain dozens of combinations of neurons that are looking for vertical edges in varying positions, lengths and degrees of rotation; each different to the next, but similar enough that we can still consider them all to be vertical edges. These combinations of neurons could together be said to cover the whole manifold of possible vertical edge variations. The problem is, there’s nothing explicitly tying the combinations together. The neurons are not organised in any meaningful way, but distributed randomly throughout the layer, interspersed with other neurons that are detecting totally different things.
Our objective is to organise these feature detectors into an explicit structure, grouping them together in such a way that each group could be considered a manifold detector. That is, a group of neurons that can activate in different ways to detect all the variations of a particular visual concept. A hierarchy of manifold detectors would directly mirror the visual hierarchy of the real world, be easy to inspect and explain, and form a useful schema that can be built upon to rapidly learn new things. A successful implementation should identify the same object in various positions by recognising that even as the position changes, the set of activated manifold detectors remains constant. This will be our yardstick for measuring progress against this objective.
We trained our network on the Omniglot data set, a set of 1623 different handwritten characters from 50 different alphabets. Omniglot comes with images, labels and stroke data, but for our experiments we have used only the images.
A sample of characters from the Omniglot data set
We feed our network a single character at a time, transforming the input in subtle ways across a few input frames to simulate video input, or the way a human might see something move. Using this data, our system is able to construct a map of invariant manifold detectors, each of which represents a ‘part’ in a variety of poses and positions. Each manifold detector attempts to reconstruct the part that it identified, and the reconstructions generated by the activated manifolds can be combined to reconstruct the full input. This is demonstrated below; colours are used only as a visual aid.
Left column: Input. Middle column: Manifold reconstructions. Right column: Combined reconstruction.
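To make the simulated-video idea concrete, here is a minimal sketch of generating subtly transformed frames from one character image using small random shifts and rotations. It is an illustration only; the function and parameters are my own invention, not Syntropy's actual pipeline.

# Illustrative sketch (not Syntropy's code): simulate video input by applying
# small random affine transforms to one 105x105 Omniglot-style image.
import numpy as np
from scipy.ndimage import affine_transform

def simulate_frames(image, n_frames=4, max_shift=1.5, max_rot_deg=5.0):
    h, w = image.shape
    centre = np.array([h / 2.0, w / 2.0])
    for _ in range(n_frames):
        theta = np.deg2rad(np.random.uniform(-max_rot_deg, max_rot_deg))
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        shift = np.random.uniform(-max_shift, max_shift, size=2)
        # affine_transform maps each output coordinate back into the input:
        # input_coord = rot @ output_coord + offset
        offset = centre - rot @ centre + shift
        yield affine_transform(image, rot, offset=offset, order=1)

frames = list(simulate_frames(np.random.rand(105, 105)))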
The following diagram shows a history of the last 15 reconstructions that some of the manifold detectors had generated. Note that in general, each is reconstructing a range of variations of a particular type of part. This demonstrates that the manifold detector has an invariance to some degree of pose and positional change.
Last 15 reconstructions performed by each manifold.
Finally, as stated earlier, a successful implementation should identify an object in various positions by recognising that even as the position changes, the set of activated manifold detectors remains the same. The following diagram illustrates that our system has this capability.
Left: Transforming input. Right: Reconstruction.
We go into more depth on the inspirations, related work, and technical implementation details in the follow-on article: Unsupervised Learning of a Useful Hierarchy of Visual Concepts — Part 2. The follow-on article is more technical and is aimed at machine learning professionals. The general reader can continue on to the next non-technical article: How do humans recognise objects from different angles? An explanation of one-shot learning.
If you’re interested in following along as we expand on these ideas then please subscribe, or follow us on Twitter. If you have feedback after reading this, please comment, or reach out via email (info at syntropy dot xyz) or Twitter. Finally, if you’re interested in our work, please get in touch — we are always looking to expand our team.
| Unsupervised learning of a useful hierarchy of visual concepts — Part 1 | 87 | unsupervised-learning-of-a-useful-hierarchy-of-visual-concepts-part-1-13ab0d4e21ec | 2018-06-15 | 2018-06-15 05:46:53 | https://medium.com/s/story/unsupervised-learning-of-a-useful-hierarchy-of-visual-concepts-part-1-13ab0d4e21ec | false | 1,667 | Syntropy on Machine Intelligence | null | null | null | Syntropy | syntropy-ai | ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,COMPUTER VISION | syntropyai | Machine Learning | machine-learning | Machine Learning | 51,320 | Angus Russell | Working on and writing about unsupervised machine learning at Syntropy AI. | 7a7de2e8eb07 | angus.russell89 | 164 | 16 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-12-19 | 2017-12-19 15:12:10 | 2018-01-13 | 2018-01-13 13:30:35 | 0 | false | en | 2018-01-13 | 2018-01-13 13:30:35 | 14 | 13ac2fb5e915 | 4.271698 | 2 | 0 | 0 | In the late 1960s and early 70s, the first computer-aided design (CAD) software packages began to appear. Initially, they were mostly used… | 1 | Every Business Today Needs To Prepare For An AI-Driven World. Here’s How:
In the late 1960s and early 70s, the first computer-aided design (CAD) software packages began to appear. Initially, they were mostly used for high-end engineering tasks, but as they got cheaper and simpler to use, they became a basic tool to automate the work of engineers and architects.
According to a certain logic, with so much of the heavy work being shifted to machines, a lot of engineers and architects must have been put out of work, but in fact just the opposite happened. There are far more of them today than 20 years ago, and employment in the sector is projected to grow another 7% by 2024.
Still, while the dystopian visions of robots taking our jobs are almost certainly overblown, Josh Sutton, Global Head, Data & Artificial Intelligence at Publicis.Sapient, sees significant disruption ahead. Unlike the fairly narrow effect of CAD software, AI will transform every industry and not every organization will be able to make the shift. The time to prepare is now.
Shifting Value To Different Tasks
One of the most important distinctions Sutton makes is between jobs and tasks. Just as CAD software replaced the drudgery of drafting, which allowed architects to spend more time with clients and coming up with creative solutions to their needs, automation from AI is shifting work to more of what humans excel at.
For example, in the financial industry, many of what were once considered core functions, such as trading, portfolio allocation and research, have been automated to a large extent. These were once considered high-level tasks that paid well, but computers do them much better and more cheaply.
However, the resources that are saved by automating those tasks are being shifted to ones that humans excel at, like long-term forecasting. “Humans are much better at that sort of thing,” Sutton says. He also points out that automating those basic functions frees up a lot of time and has opened up a new market in “mass affluent” wealth management.
Finally, humans need to keep an eye on the machines, which for all of their massive computational prowess, still lack basic common sense. Earlier this year, when Dow Jones erroneously reported that Google was buying Apple for $9 billion — a report no thinking person would take seriously — the algorithms bought it and moved markets until humans stepped in.
Human-Machine Collaboration
Another aspect of the AI-driven world that’s emerging is the opportunity for machine learning to extend the capabilities of humans. For example, when a freestyle chess tournament that included both humans and machines was organized, the winner was neither a chess master nor a supercomputer, but two amateurs running three simple programs in parallel.
In a similar way, Google, IBM’s Watson division and many others are using machine learning to partner with humans to achieve results that neither could achieve alone. One study cited by a White House report during the Obama Administration found that while machines had a 7.5% error rate in reading radiology images and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.
There is also evidence that machine learning can vastly improve research. Back in 2005, when The Cancer Genome Atlas first began sequencing thousands of tumors, no one knew what to expect. But using artificial intelligence researchers have been able to identify specific patterns in that huge mountain of data that humans would have never been able to identify alone.
Sutton points out that we will never run out of problems to solve, especially when it comes to health, so increasing efficiency does not reduce the work for humans as much as it increases their potential to make a positive impact.
Making New Jobs Possible
A third aspect of the AI-driven world is that it is making it possible to do work that people couldn’t do without help from machines. Much like earlier machines extended our physical capabilities and allowed us to tunnel through mountains and build enormous skyscrapers, today’s cognitive systems are enabling us to extend our minds.
Sutton points to the work of his own agency as an example. In a campaign for Dove covering sporting events, algorithms scoured thousands of articles and highlighted coverage that focused on the appearance of female athletes rather than their performance. It sent a powerful message about the double standard that women are subjected to.
Sutton estimates that it would have taken a staff of hundreds of people reading articles every day to manage the campaign in real time, which wouldn’t have been feasible. However, with the help of sophisticated algorithms his firm designed, the same work was able to be done with just a few staffers.
Increasing efficiency through automation doesn’t necessarily mean jobs disappear. In fact, over the past eight years, as automation has increased, unemployment in the US has fallen from 10% to 4.2%, a rate associated with full employment. In manufacturing, where you would expect machines to replace humans at the fastest rate, there is actually a significant labor shortage.
The Lump Of Labor Fallacy
The fear that robots will take our jobs is rooted in what economists call the lump of labor fallacy, the false notion that there is a fixed amount of work to do in an economy. Value rarely, if ever, disappears; it just moves to a new place. Automation, by shifting jobs, increases our effectiveness and creates the capacity to do new work, which increases our capacity for prosperity.
However, while machines will not replace humans, it’s become fairly clear that they can disrupt businesses. For example, one thing we are seeing is a shift from cognitive skills to social skills, in which machines take over rote tasks and value shifts to human-centered activity. So it is imperative that every enterprise adapt to a new mode of value creation.
“The first step is understanding how leveraging cognitive capabilities will create changes in your industry,” Sutton says, “and that will help you understand the data and technologies you need to move forward. Then you have to look at how that can not only improve present operations, but open up new opportunities that will become feasible in an AI driven world.”
Today, an architect needs to be far more than a draftsman, a waiter needs to do more than place orders and a travel agent needs to do more than book flights. Automation has commoditized those tasks, but opened up possibilities to do far more. We need to focus less on where value is shifting from and more on where value is shifting to.
An earlier version of this article first appeared in Inc.com
| Every Business Today Needs To Prepare For An AI-Driven World. Here’s How: | 4 | every-business-today-needs-to-prepare-for-an-ai-driven-world-heres-how-13ac2fb5e915 | 2018-06-07 | 2018-06-07 20:44:33 | https://medium.com/s/story/every-business-today-needs-to-prepare-for-an-ai-driven-world-heres-how-13ac2fb5e915 | false | 1,132 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Greg Satell | Author of Mapping Innovation, Speaker, Innovation Adviser, @HBR and @Inc Contributor, Publisher- www.DigitalTonto.com - Learn more at www.GregSatell.com. | 22f5b0d6a0b9 | digitaltonto | 10,459 | 917 | 20,181,104 | null | null | null | null | null | null |
0 | from fastai.structured import add_datepart  # helper from the (old) fastai library

add_datepart(weather, "Date", drop=False)      # expands "Date" into Year, Month, Day, etc.
add_datepart(googletrend, "Date", drop=False)
...
import torchtext
from torchtext import vocab, data
from torchtext.datasets import language_modeling
| 2 | null | 2018-08-10 | 2018-08-10 08:10:32 | 2018-08-20 | 2018-08-20 07:23:11 | 3 | false | en | 2018-08-20 | 2018-08-20 07:23:11 | 2 | 13acf3deb815 | 1.961321 | 0 | 0 | 0 | Note: More mediocre notes but I expect them to get better on the next iteration. | 2 | FAST.AI Lesson 4 Notes
Note: More mediocre notes but I expect them to get better on the next iteration.
Many awesome links on learning rate, Stochastic gradient descent. Differential learning rate. Transfer learning.
Structured learning.
Dropout: randomly pick X number of cells (e.g., 1% of all cells) and delete them; output doesn’t change much; avoids overfit (helps with generalization); each mini batch throws away different cells; forces algo to continue finding suitable representation. Start experimenting with different dropout rate at different layers when overfitting happens. (train vs. validation loss)
Linear layer: matrix multiply (e.g., 1024 -> 512 activations).
Also try dropping only on last (linear) layer. (Typically .25, 0.5)
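A minimal PyTorch sketch of these two notes (my own example, not the course notebook): a 1024-to-512 linear layer followed by dropout on the final block.

# Sketch: dropout zeroes a random subset of activations on every mini-batch.
import torch.nn as nn

head = nn.Sequential(
    nn.Linear(1024, 512),  # linear layer: matrix multiply 1024 -> 512
    nn.ReLU(),
    nn.Dropout(p=0.5),     # drop half the activations during training only
    nn.Linear(512, 10),
)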
Why loss rather than accuracy? Because we can see it in both train/validation. And it’s the actual item that’s being optimized.
Python for Data Analysis is a good resource.
The notebook lesson3-rossman.ipynb uses the 3rd place winner’s approach. Different types of column data: categorical, continuous. Year/Month/Day: treated like categorical.
Different embedding dimensions for different variables. Take the cardinality of the variable, divide by 2, but cap at 50. For example, the cardinality of days of the week is 7.
Embedding: Distributed representation that allows NN to learn interesting relationships. Good for any categorical variables. Might not be useful for high cardinality variables.
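A quick sketch of that cardinality rule in code (column names and cardinalities here are hypothetical):

# Embedding size = half the cardinality, capped at 50.
cat_cardinalities = {"Store": 1115, "DayOfWeek": 7, "Month": 12}
emb_szs = {col: min(50, (card + 1) // 2)
           for col, card in cat_cardinalities.items()}
# {'Store': 50, 'DayOfWeek': 4, 'Month': 6}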
Creating time-series columns (not a traditional time-series technique) and letting the NN learn from them:
DL allows for simpler model with less feature engineering while retaining state-of-the-art results.
See blog post by Rachel about How (and why) to create a good validation set.
Language modeling
Given a few words of a sentence, predict the next word.
Notebook: lang_model-arxiv.ipynb
Much less mature than computer vision. But the big ideas from CV are starting to spread into NLP.
This class focuses on word-level prediction (as opposed to Karpathy’s character-level prediction). Also focuses on text classification rather than generation.
PyTorch’s NLP library (torchtext):
Spacy’s tokenizer is one of the best around. Replace each word with an integer that uniquely identifies it in the vocabulary.
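Roughly, with the torchtext API imported above (a sketch; exact arguments vary by version):

# Define a field that tokenizes with spacy, then map tokens to integer ids.
TEXT = data.Field(lower=True, tokenize='spacy')
# After TEXT.build_vocab(train_dataset):
#   TEXT.vocab.stoi['the']  -> the integer id assigned to a word
#   TEXT.vocab.itos[2]      -> the word stored at id 2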
Ok so I got a bit tired of typing and started using my iPad. Here’s the remainder of the notes:
| FAST.AI Lesson 4 Notes | 0 | fast-ai-lesson-4-notes-13acf3deb815 | 2018-08-20 | 2018-08-20 07:23:11 | https://medium.com/s/story/fast-ai-lesson-4-notes-13acf3deb815 | false | 374 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | G.W.E.N. 794 | Random stuff about artificial intelligence | f235b35b4e53 | gwen.seven94 | 1 | 4 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-12 | 2017-12-12 02:40:10 | 2017-12-12 | 2017-12-12 02:44:18 | 5 | false | vi | 2017-12-12 | 2017-12-12 02:44:18 | 13 | 13ad87b51b73 | 9.357862 | 0 | 0 | 0 | Data Scientist is the sexiest job of the 21st century, according to Harvard Business Review. With a skillset that runs deep and spans many fields… | 2 | What is a Data Scientist? How do you become one?
Data Scientist is the sexiest job of the 21st century, according to Harvard Business Review. With a skillset that runs deep and spans many fields, Data Scientists are also said to be “as rare as unicorns”.
Read this ITviec interview with Nguyen Hoan, Data Scientist at Xomad, to find out:
What is a Data Scientist? What exactly do they do?
What qualities and skills are needed?
What should you study to become a Data Scientist?
See Data Scientist jobs at ITviec
Bio: Nguyen Hoan earned a Bachelor’s degree in Software Engineering from the University of Science, Ho Chi Minh City. He then completed a Master’s in Data Mining at the University of Trento, Italy. In 2013, Hoan returned to Vietnam and began working at Sentifi as a Data Scientist.
He currently lives in France and works remotely, also as a Data Scientist, for Xomad, a company headquartered in LA, in the US.
In your view, what does a Data Scientist do?
A Data Scientist is someone who creates value from data, with two main responsibilities:
Collecting and processing data to find valuable insights.
For example, based on information gathered from posts/comments/statuses on social media, a Data Scientist might discover that as Valentine’s Day approaches, brand ABC gets mentioned noticeably more often.
That is a valuable insight the Marketing department can use for advertising campaigns during the Valentine season.
Explaining and presenting those insights to stakeholders, so that insights turn into action.
For example, once you have found a valuable insight in the data, you need to produce a report/presentation or a visualization to show and explain to stakeholders: 1) What the insight is and what it means. 2) How exactly it can be applied to benefit the business/product/users.
However, Data Scientist is a very new profession, so its definition is still rather vague and ambiguous (even worldwide). Depending on the company, the job description, the required skillset, and even the job title can therefore differ somewhat.
Nguyen Hoan (far right) and colleagues
What is the difference between a Data Analyst and a Data Scientist?
It’s true that the two roles have fairly similar responsibilities. At some companies a Data Scientist may in effect be a Data Analyst, and the role can even blur into Machine Learning Engineer or Data Engineer.
Personally, I think Data Scientists split into two main types, which I’ll call branch A (Analysis) and branch B (Building):
Branch A (Analysis) Data Scientists are thinkers. Their main task is to analyse data with statistical methods to find valuable insights.
Branch A Data Scientists can also be called Data Analysts.
See also: What is a Data Analyst
Data Analyst jobs in Ho Chi Minh City
Data Analyst jobs in Hanoi
Branch B (Building) Data Scientists are usually stronger at software engineering. They take care of processing/storing data and write the code/algorithms behind the company’s data products.
If you need a narrow, specific definition of the Data Scientist profession, the branch B job description is the more accurate one.
I belong to branch B myself, so everything I share here revolves around that branch.
The skillset of a branch B Data Scientist
What is the biggest difference between branches A and B?
As mentioned above, branch B Data Scientists are stronger at software engineering, so their main responsibility is building data products for the company.
A data product is a software product too, but one built on top of data.
For example, Amazon’s recommendation feature is a data product. It is built on the foundation of data Amazon has accumulated over the years.
(Which items this user has bought, what characteristics they have, similar items, items often bought together, items bought by other users with similar behaviour, and so on.)
A data product can be a standalone product, or one part of a larger product.
For example, the recommendation feature is a data product inside the larger product that is the Amazon.com website.
A data product has many components, but at its core there is always a model (a data model) developed with machine learning.
Can you explain the data model in more detail?
Let me talk about machine learning first!
Roughly speaking, imagine the “machine” is a black box. You want to use this black box to tell pictures of dogs from pictures of cats. Then:
You have to gather lots and lots of pictures of dogs and pictures of cats.
You let the black box read these pictures.
Then you teach the black box which features in a picture indicate a dog, and which other features indicate a cat.
Finally, you show it two new pictures. Based on what it has learned, the black box identifies which one is the dog and which one is the cat.
The whole process is called machine learning, and the black box itself is a data model.
Machine learning is a field of artificial intelligence in which computer algorithms learn by themselves from the data they are fed, without being explicitly programmed.
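For a concrete taste of that black box, here is a toy scikit-learn sketch (an illustration of mine, with made-up features, not something from the interview):

# A tiny "black box": learn labeled examples, then predict labels for new ones.
from sklearn.ensemble import RandomForestClassifier

# Each row is an invented feature vector, e.g. [ear_pointiness, muzzle_length].
X_train = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.9], [0.2, 0.8]]
y_train = ["cat", "cat", "dog", "dog"]

model = RandomForestClassifier(n_estimators=10).fit(X_train, y_train)
print(model.predict([[0.85, 0.25], [0.15, 0.95]]))  # ['cat' 'dog']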
What does a Data Scientist’s workflow look like?
An illustration of a Data Scientist’s workflow
Step 1 — Input:
A Data Scientist’s workflow starts with a need/task.
For example, Google’s search-by-image feature: give the machine a picture, and it returns similar pictures.
The need can originate from:
The business team, which gathers user feedback and requests feature ABC.
Or the Data Scientists themselves who, while working with the data and studying the product/company and the kind/amount of data available, come up with the idea of building feature XYZ.
Step 2 — Planning:
Once the need/task is identified, the Data Scientist meets with the business team and other stakeholders to consider:
Is this feature feasible at all?
What kind of data will be needed? Where can it be found? How much is enough? How do we bring the data in? Etc.
How many resources (people, time, etc.) are required?
Where will the feature sit in the company’s final product, and how will it help users?
Etc.
Step 3 — Collecting and cleaning data:
To teach the machine to tell dogs from cats, say, you have to give it as many pictures to learn from as possible. So you go and gather data.
Freshly gathered data is messy and full of junk, so you have to clean it. And if there still isn’t enough data, you have to find more.
For example:
Images you don’t need are removed. Images you do need but that are blurry are sharpened. Raw (unlabelled) images are labelled.
You can also look for additional open-source data sets that come already labelled.
After that, the data has to be normalised.
For example, the gathered images come in many different sizes, so they all have to be brought to the same size or format, depending on the model you choose.
Step 4 — Choosing a solution:
If the problem already has known solutions
Then you select/combine solutions (e.g., choose algorithm ABC or XYZ), run experiments, check which experiment performs best and why, then decide which solution to develop further, etc.
If the problem has no known solution
Then you need to do research: find out whether anyone has tackled this problem before, what their solutions were, whether they are feasible, and which one might be better, etc.
Then pick one method, or a series of methods, to experiment with as above.
Step 5 — Machine learning:
Once a solution is chosen, you need to give the machine time to learn.
Depending on what the model is, which tools you use, what systems the company already has, and so on, you run the model through the program and then tune it to control the model’s output performance.
When training a model, imagine you have a control panel covered in knobs. You tweak one knob a little; if the result gets slightly better, you keep it, then try another knob.
You keep going like that until you get the best result.
For example, there are many cues for telling a dog from a cat.
It’s up to you to tune which signals the machine focuses on more (the muzzle, or muzzle-like regions, fur colour, etc.), and it will prioritise those signals to recognise more accurately.
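The control-panel metaphor maps onto what practitioners call hyperparameter tuning; here is a hedged toy sketch in scikit-learn (not from the interview):

# Try several "knob" settings and keep whichever combination scores best.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

search = GridSearchCV(
    RandomForestClassifier(),
    param_grid={"n_estimators": [10, 50, 100], "max_depth": [3, 5, None]},
    cv=3,
)
# search.fit(X_train, y_train)  # then search.best_params_ holds the winning knobs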
Step 6 — Output:
The output of a Data Scientist’s work is a model, as introduced above. Usually, the model is then embedded in a larger product.
For example, the purchase-recommendation model on the Amazon website.
Sometimes, when the model is a new solution/discovery, your company’s Data Science department will write a paper or organise a scientific conference to publish the research results.
However, only a few big companies such as Facebook and Google have departments dedicated to Data Science research.
And in practice, new discoveries that can be applied in the real world are rare. Very often you build a good, accurate model that runs too slowly or consumes too many resources, and so it never makes it into production.
Data Scientist as defined by Datacamp
Data Scientist jobs in Ho Chi Minh City
Data Scientist jobs in Hanoi
What qualities do you need to become a Data Scientist?
1. Patience
This quality is extremely important, because Data Scientists spend the bulk of their time collecting data and cleaning it.
For example, say you want to build a model that predicts house prices.
You will have to collect housing data from many different sources.
Each source stores its data in its own structure, so you have to convert them all to a common structure.
Then you clean the data by removing unsuitable records, such as:
Missing data: a listing with a room count but no floor area.
Junk data: a 10 m² place priced at 200 billion VND.
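A small pandas sketch of those two checks (the column names and threshold are invented for illustration):

# Remove listings with missing floor area or implausible prices.
import pandas as pd

df = pd.DataFrame({
    "rooms": [3, 2, 1],
    "area_m2": [80.0, None, 10.0],
    "price_billion_vnd": [3.2, 2.1, 200.0],
})

clean = df.dropna(subset=["area_m2"])  # drop missing data
clean = clean[clean["price_billion_vnd"] / clean["area_m2"] < 1.0]  # drop junk data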
2. Good communication
A Data Scientist’s job involves a great deal of communication, specifically:
Communicating with the business team.
To better understand the product and the requirements, and from there find valuable insights.
Communicating with the engineering team.
To get your model deployed into the system, or to ask them to organise/structure the data for you to use.
Presenting/explaining insights so that stakeholders understand them.
So that, from there, ways can be found to put them to practical use.
See also: How to communicate well (like a closed loop)
3. A love of exploring and trying new things
The Data Scientist profession is still young and draws heavily on interdisciplinary knowledge, and each of those disciplines keeps producing new advances and technologies.
So you need to enjoy exploring and trying new things in order to keep your knowledge constantly up to date.
Keep reading the article on what a Data Scientist is to understand the skills, and the training path, needed to become a great Data Scientist.
Read the Harvard Business Review article: Data Scientist, the sexiest job of the 21st century
Do you want to become a Data Scientist? Or would you like to share your experience of the profession with everyone? Join the discussion in the comments below the article!
And check out Data Scientist jobs at ITviec right away!
| What is a Data Scientist? How do you become one? | 0 | data-scientist-là-gì-cách-trở-thành-data-scientist-13ad87b51b73 | 2017-12-12 | 2017-12-12 02:44:19 | https://medium.com/s/story/data-scientist-là-gì-cách-trở-thành-data-scientist-13ad87b51b73 | false | 2,259 | null | null | null | null | null | null | null | null | null | Data Scientist | data-scientist | Data Scientist | 488 | ITviec | null | 3be988d31223 | itviec | 0 | 1 | 20,181,104 | null | null | null | null | null | null
0 | null | 0 | null | 2017-11-04 | 2017-11-04 14:36:00 | 2017-11-04 | 2017-11-04 14:41:44 | 0 | false | en | 2017-11-04 | 2017-11-04 15:31:40 | 2 | 13af1a097e5f | 1.162264 | 0 | 0 | 0 | ‘… the best a man can get’ is a bold claim and one that I have sleep walked into believing for many years. Only recently as part of a… | 5 | Innovating innovation
‘… the best a man can get’ is a bold claim and one that I have sleep walked into believing for many years. Only recently as part of a half-hearted attempt to save money, I bought a pack of budget, home brand razors from my local supermarket. Priced 40% less, the budget version did a better job and stayed sharp longer*.
When global brands focus on loud brand marketing combined with driving a huge retail distribution footprint, does innovation take a back seat? Does innovation struggle to get new products onto the shelves? Is it stifled because it’s too slow, too costly and too risky (according to Nielsen, 3 out of 4 FMCG new product launches fail within a year)? Does having the best products out there become less important when market share is huge?
As consumer voices become louder, reviews more powerful, innovation has to continue at pace.
I have recently joined a business called Black Swan, a predictive analytics and AI business, which, amongst other things, is helping global FMCG brands to evolve their approach to innovation in order to inform accelerated strategic decisions around product development. It uses immense data sets to expand the reach beyond that of traditional focus groups. It can synthesise data from millions of people to increase objectivity and reduce the bias that researchers’ questions can bring. Data science and clever algorithms identify themes and nuances from global social platforms that might not historically have been picked up, and then use virtual audiences to test the concepts.
‘Dark analytics’ as described by a recent Deloitte tech trends report is one to watch for the future of innovation and while it won’t necessarily replace existing research techniques, it can work with it to accelerate the process and take away some of the risk.
*in my humble opinion
| Innovating innovation | 0 | innovating-innovation-13af1a097e5f | 2018-02-07 | 2018-02-07 20:18:20 | https://medium.com/s/story/innovating-innovation-13af1a097e5f | false | 308 | null | null | null | null | null | null | null | null | null | Marketing | marketing | Marketing | 170,910 | will silverwood | null | 704c49f0acee | willsilverwood | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 9d10fae4c94c | 2017-12-20 | 2017-12-20 09:23:08 | 2018-01-03 | 2018-01-03 17:10:37 | 3 | false | en | 2018-01-03 | 2018-01-03 17:10:37 | 0 | 13afd7ee7d0b | 4.708491 | 1 | 0 | 0 | Where are you from? | 4 | #MeetTheTeam — Giorgio Galvan, Data Analyst
Where are you from?
I was born in Vicenza, Italy, which is around 70km west of Venice. It was there that I studied economics at Ca’ Foscari University of Venice before I moved to London in March 2016. I actually moved here for a finance internship at carwow, although after 2 weeks I was “stolen” by the data team, and two months later they took me on full time as a data analyst. I’ve been here ever since!
Why carwow?
I applied to carwow originally because it seemed like an incredibly fun environment to work in. It’s full of people my age which always makes it easier to settle in. As a startup it’s really fast paced and I knew I’d be thrown into projects on day one, which is a great learning environment. You’re also not given loads of unnecessary training either, so I get to learn a lot of things independently. I ended up deciding to stay because the working environment and culture meant that I was always learning something new and stimulating myself intellectually on a daily basis.
Describe yourself in 3 words:
Curious. Ambitious. Zen.
Why did you want to join as a Data Analyst?
I jumped at the opportunity to work in the data team because we’re constantly being presented with new problems to solve and I’ve always enjoyed that. At carwow, you have all the tools you could possibly need to solve these complex data problems and you’re just left to work it out independently. It’s very rewarding! It’s really important to the business and so you know that you’re impacting it on a daily basis. It also means I get to interact with all areas of the business as the data we work with can inform decisions across the entire company. I really enjoy being surrounded by the smart people in the data team and constantly learning from them.
What are the 3 things you would bring with you to a deserted island?
Boat, music player (I can’t live without it), and a fishing kit.
What is your strongest personal quality?
Problem solving — whenever I am confronted with a problem I work out an effective plan of action without panicking.
What are the 3 things you can’t go without?
Music
Books
NBA/Basketball
What is a skill you’d like to learn, and why?
I am currently (trying) to learn Japanese, so I’d definitely choose that. I’ve always been curious about the culture and I have plans to go on holiday again there soon!
What are you most proud of?
I taught myself how to code and now I have the skills to be able to teach other people to code too! 2 years ago I didn’t have a clue how it worked and now I’m very confident with it. I use it at work all the time and with help from Adam I learnt on the job and in my spare time. It’s a great skill to learn!
What are you currently working on?
I recently built a geographical model that assigns bid modifiers for our PPC campaigns. This means that we can better target consumers based on their location. Now I’m working on a similar model, but instead of location, I’m creating a model for time of day and day of week for the PPC campaigns. I’m also busy replicating dashboards and reporting analyses for Germany.
What is the greatest obstacle you’ve overcome at carwow so far?
To be quite honest, every day presents difficult obstacles and challenges for me to overcome. That’s one of the great things about my role, so there hasn’t been anything that particularly stands out from the rest. Especially since I’m surrounded by a great team who all know what they’re doing. If there’s a particularly complex problem, you can work together to figure it out and that can be extremely rewarding. It also makes for a very close team dynamic because we’re always bouncing off each other’s ideas and figuring out problems together.
Which carwow value resonates most with your team?
Impact — the data team affects the bottom line on a daily basis. Especially because we work so closely with the digital marketing team to develop an effective and efficient platform for driving users to our site. We also help the business to make sense of all the data coming through our website. We can highlight how the changes we make impact customer behaviour, where we can improve things, and just generally give visibility on how the business is doing.
What is the best part of working at carwow?
The fact that the people I work with challenge me to be better every day. Working here is very intellectually stimulating so I’m always learning and developing my skills which is very important to me. But also the work life balance is great. This is fundamental to me and I think it genuinely encourages me to come to work and enjoy what I do because I get enough time to enjoy myself as well. I’m also very excited about the general business direction of carwow, it’s set to grow so much and I have the opportunity to be there for the ride and steer it in my own small way through the work I do. Everyone at carwow shares this vision and I think that makes for a great working culture — a strong community!
What does it take to be a successful data analyst?
To succeed as a data analyst here you have to be curious and always relish the opportunity to learn more. Never be satisfied with what you build: refine it, make it better, ensure your work is as impactful as possible. You also need strong analytical skills, being able to understand the data as a whole and then reduce it into consumable information so that the business can use it to make a decision. You could be the best data analyst in the world, but if you can’t present that information in a clear and concise way to the rest of the business, then you’re going to struggle.
Be flexible and be patient; sometimes things don’t work out how you expect. So you definitely need to be able to take on the unexpected and be comfortable changing your path.
Top 3 favourite books?
Zero to One by Peter Thiel, Norwegian Wood by Murakami, and The Hard Things About Hard Things by Ben Horowitz.
What’s your favourite car?
Lamborghini Murcielago
Any final words?
I thought this was going to be weird and difficult…
| #MeetTheTeam — Giorgio Galvan, Data Analyst | 1 | meettheteam-giorgio-galvan-data-analyst-13afd7ee7d0b | 2018-04-20 | 2018-04-20 18:13:17 | https://medium.com/s/story/meettheteam-giorgio-galvan-data-analyst-13afd7ee7d0b | false | 1,102 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | William McCaughey | Graduate Intern @ carwow | 8c57fa9a6217 | will.mccaughey | 0 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-14 | 2018-05-14 20:50:00 | 2018-05-25 | 2018-05-25 00:49:57 | 1 | false | en | 2018-05-25 | 2018-05-25 00:49:57 | 3 | 13b27ae5d248 | 3.279245 | 3 | 0 | 0 | I love money. I don’t know if it is because I am Jewish or for another reason, but I do love it. “Logically”, I went on to study the field… | 5 |
Why you should stop looking for financial advice online?
I love money. I don’t know if it is because I am Jewish or for another reason, but I do love it. “Logically”, I went on to study the field and majored in Corporate and International Finance. Sounds fancy, right?!
However, I noticed that this was not the case for a lot of people. Many friends, acquaintances and internet strangers seem to be lost when it comes to handling money. So they Google their questions. Oh boy, this is ugly.
The wisdom of crowds
Quora is a “platform to ask questions and connect with people who contribute unique insights and quality answers”. The idea is pretty great: you have a question? Somebody out there has an answer (and by somebody I mean anybody; even I have an account). Quora makes the connection.
Sweet, finally a solution to your troubles! Except that most answers are terrible. Even the most “up-voted” ones (especially them).
The value is in the question
I believe in the power of questions. Questions are the starting point of the quest for knowledge — and where most of the value is created. Unfortunately, a lot of the questions asked are either very naive (I am 25 yo and want to retire by 30 and withdraw $50k/year, what should I do) or posted by bots (How do I start a business to sell it in a few months for millions). I don’t want to be the Grinch, but only 0.15% to 0.8% of the world population is a millionaire. Browsing Quora will not make you that special. You might get some good tips etc., but nothing that is not already widely known or common sense.
Deeply personal
Another challenge is the nature of personal finances. There is nothing more personal than personal finances. OK, some things are more personal, but you get the point. Therefore, there is no “ground truth”: it all depends on your lifestyle, expectations, time horizon, risk tolerance etc.
That said, there are some things that should disqualify any advice.
The hateful debt
Did I say “debt”? Oh I’m sorry, I didn’t mean to be that rude. Except that, nope, debt is not a bad word. I checked...
Why would I think it is? Just from reading all those people preaching “debt free-ness”, meaning they advise not to take on any debt and to pay off what you have right away if you get any extra cash inflows.
So I hear the philosophical argument for being debt-free. But it is a philosophical one. Not a financial one. Debt is a fantastic lever to manage cash flows and create wealth. But as with every tool, you can use it for good or for evil. Don’t be that guy. Choose wisely!
What stock(s) should I buy?
If somebody answers that question right away, stop reading. Stock selection is (must be) the last step of a coherent process.
To invest, you first need to ask yourself how you think the market works. If you think it is efficient, then there is no point in trying to beat it, and a passive investment strategy is all you need.
If you think it is not, then you can start asking more questions: why is it not, and how can you take advantage of it? I told you questions were valuable! Your answers will form some sort of corpus which will be the basis of your strategy(ies). Stocks need to be selected in light of them. Study stocks only when wearing your strategic glasses.
I can’t do that
Trust me, if I can do it, you can! But the good news is that you won’t have to (in a few years). Once again, you can thank artificial intelligence for that. Advances in sensors (measure more things), blockchain (make data more accessible), and machine learning and AI (make use of the data) will first improve our understanding of things, produce better forecasts (reduce uncertainty) and then lead to an intelligence explosion.
Okay, let me say it differently. I believe that a superintelligent agent will (tend to) make the markets efficient. Why? Because nobody will want to be on the opposite side of (super) intelligence! For example, when facing a (rather complex) calculation, you don’t try to compete with the calculator. You take whatever it outputs and consider it the correct answer. Well, same here. The AI will make the market, and trying to beat it will be irrational.
The finance industry in 50 years will look completely different. I am not even sure we will still call it that. It might be the subject of another post!
In the meantime, you will still have to deal with your money, and I will have to deal with the vultures (follow those 5 tips to get rich fast — and other BS…) trying to take advantage of that.
| Why you should stop looking for financial advice online? | 53 | why-you-should-stop-looking-for-financial-advice-online-13b27ae5d248 | 2018-05-26 | 2018-05-26 08:08:19 | https://medium.com/s/story/why-you-should-stop-looking-for-financial-advice-online-13b27ae5d248 | false | 816 | null | null | null | null | null | null | null | null | null | Finance | finance | Finance | 59,137 | Thomas Taieb | Financial analyst in M&A, love everything about technology, start-ups and business. | 27750f78841 | Enooooooormous | 457 | 119 | 20,181,104 | null | null | null | null | null | null |
0 | > CCES16_pca <- prcomp(CCES16_sub[,-(1:3)]) ## performs PCA on data
> CCES16_pca$rotation[,1:2]
PC1 PC2
isis_stay_out 0.006532851 -0.1877095638
isis_hum_aid -0.133312164 0.2700242152
isis_arms -0.025175755 0.3663478247
isis_nofly -0.008665488 0.3647198335
isis_airstrikes 0.005785819 0.4076031669
gun_background_checks -0.076457752 0.0242503785
gun_no_public_registry 0.123844327 0.0272378861
gun_ar_ban -0.186209188 -0.0100165559
gun_concealed 0.195130406 -0.0028054992
imm_legal_status -0.190176588 0.0715418669
imm_border_patrol 0.189488733 0.1492354751
imm_dream -0.191172826 0.1197417816
imm_fine_businesses 0.106929951 0.1530306786
imm_syria_ban 0.203850961 0.0140942991
imm_more_visas -0.073282411 0.0496037023
imm_deport 0.197898642 0.0278690862
imm_muslim_ban 0.139182292 -0.0273335401
imm_none -0.001797488 -0.0363688264
abort_always -0.195514597 -0.0781297299
abort_rape_incest 0.142650550 0.0605024871
abort_20wk_ban 0.167085753 0.0631677275
abort_insurance 0.217539438 0.0431472519
abort_hyde 0.216396945 0.0508478793
abort_total_ban 0.044733238 -0.0216862117
env_co2_reg -0.227977412 -0.0062988802
env_35mpg -0.178015377 0.0262815214
env_renewable -0.211617779 0.0295352934
env_clean_acts -0.230549981 -0.0071572615
crime_no_mand_min -0.127007732 0.0007970827
crime_body_camera -0.053029933 0.0140239449
crime_more_police 0.101616084 0.0999435583
crime_longer_sentences 0.080147299 0.0564094700
gay_marriage -0.187292524 -0.0287997800
cong_garland -0.237629283 -0.0003028243
cong_tpp -0.098299596 0.0185739825
cong_usa_freedom -0.015027702 0.0348062545
cong_taa -0.083279783 0.0756188820
cong_education_reform -0.006577012 -0.0220838347
cong_highway_transp -0.066774433 0.0641419762
cong_iran_sanctions 0.061452667 0.0952179931
cong_medicare_reform -0.082131998 0.0751534156
cong_repeal_ACA 0.244556839 -0.0170201176
cong_min_wage -0.200319255 0.0079648727
military_oil 0.070407132 0.0885752720
military_terrorist 0.084181607 0.2901872566
military_intervene -0.047762336 0.2927693240
military_democracy 0.003800022 0.0706793022
military_protect_ally 0.018120978 0.2670319807
military_help_UN -0.154924594 0.2202543684
military_none -0.007456204 -0.1494680694
> tapply(vote_pca$PC1, INDEX = vote_pca$PresVote, FUN = median)
Trump (R) Clinton (D) Johnson (L) Stein (G)
1.7362782 -1.6009931 0.1634565 -1.3869899
> tapply(vote_pca$PC2, INDEX = vote_pca$PresVote, FUN = median)
Trump (R) Clinton (D) Johnson (L) Stein (G)
0.03686338 0.10200141 -0.36040349 -0.76926122
| 2 | 1b7ce3881496 | 2018-04-24 | 2018-04-24 00:17:53 | 2018-04-30 | 2018-04-30 13:04:47 | 7 | false | en | 2018-04-30 | 2018-04-30 13:24:11 | 4 | 13b2ad596f23 | 13.529245 | 4 | 0 | 0 | Quick recap: In Part 1 of this two-part adventure into the world of dimension reduction, we found a way to quantify political polarization… | 3 | Polarization, Quantified: Dimension Reduction on the Issues (Part 2)
Quick recap: In Part 1 of this two-part adventure into the world of dimension reduction, we found a way to quantify political polarization among voters using multidimensional scaling (MDS). You should definitely read that post for a more detailed look at what we’re doing, but the gist of it is that we can take voters’ issue stances from the 2016 Cooperative Congressional Elections Study and find a low-dimensional representation of their position on the issues as a whole using MDS.
The hope was that this would provide insights about how groups of ideologically like-minded voters might group together. As it turned out, when we performed MDS our first scaled dimension ended up being a pretty good representation of the traditional left-right political spectrum, with Clinton and Stein voters having extremely negative values on this scale and Trump voters having extremely positive ones.
We were still left with two problems, though. First, it wasn’t at all clear what the second dimension, which we might also expect to be important, meant. Second, we couldn’t really know for sure that the first dimension was actually measuring ideology, rather than something related to it. It seemed likely, sure, but other than the plot we didn’t really have clear evidence for it.
Here, in part 2, we’re going to address both of these problems using another method, principal component analysis (PCA). Like MDS, PCA is typically used as a dimension reduction method, particularly in the case where a large number of predictor variables are correlated: common applications (at least in the past) have included genetics and facial recognition.
Eigenfaces, composite faces created from principal component analysis on a set of photos of human faces. (AT&T Laboratories Cambridge)
Like MDS, PCA isn’t a terribly complicated concept — in fact, the math is actually easier. Essentially the idea is to rotate the data in the high-dimensional space in such a way that minimizes the covariance between the variables while continuing to capture all of the variation in the data. If the variables in the design matrix X are mean-centered, then the covariance matrix is proportional to X′X, and minimizing the covariance between variables turns out to be equivalent to diagonalizing X′X, which is done by computing its eigenvectors and eigenvalues.
These eigenvectors are the principal components of the data that give the method its name. The first principal component (PC1) is the direction which captures the most variation in the data. The second principal component (PC2) is the direction which captures the next-most variation in the data, subject to the constraint that this direction be orthogonal to the first principal component. The third principal component is the direction which captures the next-most variation in the data, subject to the constraint that this direction be orthogonal to both the first and second principal components, and so on. This geometric constraint of orthogonality, conveniently enough, turns out to be equivalent to the statistical requirement that all the principal components be uncorrelated with each other.
It’s exactly because of this that PCA is useful for identifying patterns among highly correlated data. PCA effectively breaks down the data into uncorrelated components and orders them from most explanatory power to least explanatory power. And since PCA successively maximizes whatever variation remains in the data, by doing this we can check out the first few principal components and ideally draw conclusions about what each one might substantively mean, similar to how in MDS we identified the first dimension as potentially being some measure of partisanship or ideology.
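To make that concrete, here is a minimal numpy sketch of PCA via eigendecomposition of the covariance matrix, run on a toy stand-in matrix (illustrative only; for real work you would reach for a library such as scikit-learn):
import numpy as np
def pca(X, n_components=2):
    # Mean-center so that X'X is proportional to the covariance matrix
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    # Eigenvectors of the covariance matrix are the principal components
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]  # most variance first
    components = eigvecs[:, order[:n_components]]
    # Project the centered data onto the components to get each row's scores
    return Xc @ components, components
# Toy stand-in: rows are respondents, columns are binary issue stances
X = np.random.randint(0, 2, size=(1000, 30)).astype(float)
scores, components = pca(X)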
Each row is an issue position — the first, for example, is whether you think we shouldn’t get involved in the war against the terrorist group ISIS — with the corresponding numbers representing the effect of agreeing with that issue position on your score on the two principal components. For example, if you favor repealing the Affordable Care Act (cong_repeal_ACA), this has the effect of increasing your PC1 score by 0.245 and decreasing your PC2 score by 0.017.
It’s hard to say exactly what that means — certainly it’s more hand-wavey than explaining a regression coefficient. But we can look at the principal component loadings taken together to hazard a guess at what the principal components themselves might mean. To qualitatively analyze PC1, we note that:
The largest positive loadings are, in order: ACA repeal, allowing employers to not cover abortions in insurance plans, prohibiting federal funds from paying for abortions, banning Syrian refugees from entering the country, deporting undocumented immigrants, increasing border patrols, a ban on abortions after 20 weeks, a total abortion ban except in cases of rape, incest, or danger to the mother’s life, and a ban on Muslims entering the country.
The ten largest negative loadings are, in order: having the Senate give Merrick Garland a hearing for his Supreme Court nomination, stricter enforcement of the Clean Air and Clean Water Acts, regulation of carbon dioxide emissions, renewable energy quotas, raising the federal minimum wage, legalizing all abortions, passing the DREAM Act, granting legal status to undocumented immigrants currently in the country, keeping same-sex marriage legal throughout the country, and reinstating the assault weapons ban.
It doesn’t take particularly close attention to American politics to notice that the ten largest positive loadings were all on very strongly conservative positions typically favored by Republicans, while the ten largest negative loadings were on strongly progressive or liberal positions favored by Democrats. So voters with very positive scores on PC1 are likely to be Republican voters, while voters with very negative scores on PC1 are likely to be Democratic voters, and a scatterplot of their PC1 and PC2 scores shows pretty good separation between the two, just like in the MDS plot:
Believe it or not, this is a different plot from the first one.
It’s hard to distinguish this plot from the plot we saw in part 1, although if you squint you might notice that there appears to be more mixing in the middle when plotting principal component scores compared to when plotting the MDS output.
So why does this help us at all with the two problems we had with MDS? To address the problem of what the second dimension meant, let’s take a look at the second principal component:
It loads most strongly positively on support for: airstrikes against ISIS, arming Syrian anti-ISIS fighters, enforcing a no-fly zone in Iraq and Syria, military intervention for pretty much any reason, fining businesses that employ undocumented immigrants, and increased border patrols.
It loads most strongly negatively on support for: staying out of the conflict with ISIS, legalized abortion in all cases, legalized same-sex marriage, a ban on Muslims entering the country, repeal of No Child Left Behind, repeal of the ACA, reinstatement of the assault weapons ban, stricter enforcement of the Clean Air and Clean Water Acts, and regulations on carbon emissions.
It might not be immediately obvious — certainly it wasn’t as immediately obvious to me as “PC1 = partisanship” was — but after a little bit of thinking you might hit upon the idea that PC2 is a pretty damn good representation of a voter’s position on a sort of libertarian-authoritarian axis, controlling for partisan ideology as measured by PC1. Little-l libertarians, whether from the right, center, or left, generally tend to oppose military intervention and restrictions on personal freedoms. Right-wing and centrist libertarians, who make up the majority of self-identified libertarians in the United States, also usually oppose economic and environmental regulations. And we see all of these patterns in PC2, just with the sign flipped — as it stands it looks like the largest positive loadings are on stances that libertarians would oppose and the largest negative loadings are on stances that libertarians would favor.
This is consistent with what we observe in voting behavior. One way to see this is simply by inspecting the median voter for each candidate. We can compute the median PC1 and PC2 scores for Trump voters, Clinton voters, Gary Johnson voters, and Stein voters:
For PC1, as we saw before, the median Clinton voter and the median Stein voter have quite negative scores, while the median Trump voter has a pretty positive score and the median Johnson voter is just a little above zero. This corroborates, with actual voting behavior, our suspicion that PC1 represented partisanship or ideology. But PC2 is also revealing here: while the Trump and Clinton median voters both had barely positive scores on this axis, Johnson and Stein voters are substantially more negative.
(I suspect that Johnson voters being generally more positive on PC2 — i.e. less libertarian — than Stein voters is somewhat of a 2016-specific anomaly. The share of votes going to third-party candidates in 2016 was 5.7%, the highest since Ross Perot’s Reform Party bid in 1996; this reflected dissatisfaction with both major-party candidates, so the Johnson voters in this sample in particular likely also include more Democrats and Republicans than usual. Consequently, you get a sample of Johnson voters that looks considerably different from a more typical year like 2012.)
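Computing those medians is a one-liner once the scores sit in a dataframe; here is a sketch on simulated stand-in data (the column names are illustrative, not the actual CCES variable names):
import numpy as np
import pandas as pd
rng = np.random.default_rng(0)
# Toy stand-in for respondents' principal component scores and 2016 vote
df = pd.DataFrame({
    'vote': rng.choice(['Trump', 'Clinton', 'Johnson', 'Stein'], size=1000),
    'pc1': rng.normal(size=1000),
    'pc2': rng.normal(size=1000),
})
print(df.groupby('vote')[['pc1', 'pc2']].median())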
Scatterplots, for the most part, confirm this:
It may not seem so clear that Johnson voters and Stein voters are particularly libertarian on the PC2 axis, but proportionally speaking, they’re much more so than Trump or Clinton voters. 62% of Johnson voters and 82% of Stein voters had negative scores on PC2; for comparison, that figure was only 48% for Trump voters and 46% for Clinton voters.
We’re almost there, now. I’ve hopefully convinced you that the first principal component of the issue stances is measuring something like ideology or partisanship on the usual left-right or Democrat-Republican axis, and the second principal component is measuring something like authoritarian vs. libertarian tendencies once controlling for the first principal component. So what does this tell us about the MDS dimensions? Are they the same as the principal components?
It’s not guaranteed. PCA is equivalent to MDS in the specific case where you use Euclidean distance as the metric for MDS. That isn’t the case here — if you recall, we used binary distance, not Euclidean distance, to perform MDS. But check out what happens when we plot MDS dimension 1 vs. PC1 and MDS dimension 2 vs. PC2:
Points colored by presidential vote.
There’s a near-perfect linear relationship between MDS dimension 1 and PC1; the fit is a bit noisier but still clearly linear for MDS dimension 2 and PC2. In fact, the correlation coefficient for the first scatterplot is 0.996, about as close to perfect as you could ask for; the correlation coefficient on the second is 0.961, which is also really, really high. This seems to be such a perfect relationship — particularly the first one — that it seems impossible to me that MDS dimension 1 and PC1 aren’t measuring pretty much the same thing. In other words, it’s possible that they’re not the same, but they’re way too close for coincidence.
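If you want to check a correlation like that yourself, it is one line of numpy; a sketch on simulated stand-ins for the two score vectors:
import numpy as np
rng = np.random.default_rng(0)
mds_dim1 = rng.normal(size=1000)  # stand-in for MDS dimension 1
pc1 = 0.9 * mds_dim1 + rng.normal(scale=0.05, size=1000)  # stand-in for PC1 scores
print('correlation: %.3f' % np.corrcoef(mds_dim1, pc1)[0, 1])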
So by now I’ve hopefully convinced you that MDS on the issue questions yields dimensions that roughly represent the traditional left-right political axis and a libertarian tendency axis. What’s the point of all this?
Well, for one thing, it lets us draw pictures that I, at least, find cool to look at. (I mean, aren’t these pretty?) But it also gives us a quantitative way to express how well voters have sorted into one camp or the other — and, possibly, to see if that sorting has changed over time. We can play this same MDS game with the 2012 CCES: compute a binary distance matrix for responses to the issue questions, perform MDS, and plot the two dimensions. The first dimension, at least, still seems to correspond very well to partisanship or ideology.
However, the correspondence of the second dimension to libertarian tendency isn’t so clear in 2012.
Comparing the ideological positions of the parties’ bases is tricky: while many of the issue questions are the same in the 2012 and 2016 surveys, many others differ or appear in only one year. The problem is somewhat alleviated by plotting z-scores as opposed to raw MDS scores, as in the following two plots.
Median voters for each candidate indicated by the dark colored lines.
From these plots we can say, for example, that the median Clinton voter in 2016 was, in relative terms, closer to the 2016 ideological center than the median Obama voter in 2012 was to the 2012 ideological center. But we can’t say that the median Clinton voter was less liberal than the median Obama voter, because the political center — zero on the x-axis in both plots, by construction — might be different in the two years, and at any rate we’re measuring ideology using different questions in different years.
It’s still revealing to examine the shapes of the candidates’ respective distributions, though. In 2012, Obama and Romney voters followed ideological distributions that were essentially mirror images of one another: peaks at a little more than one standard deviation away from the mean, with long tails extending toward and past the center.
In 2016 the candidates’ distributions retained this basic shape. But they’re both shifted over to the right slightly, and Trump’s distribution is noticeably flatter than Romney’s. This is consistent with the relative heterodoxy of Trump’s platform compared to the typical Republican presidential candidate’s, a heterodoxy which led a pair of academics writing in The Washington Post to call him “a textbook example of an ideological moderate”. (Personally, I think this illustrates the silliness of a Lawful Stupid adherence to the letter and not the spirit of a definition, but that’s a discussion for another day and another blog.)
Overall, given that the (standardized) ideological distance between the median Clinton voter and the median Trump voter is actually a little bit smaller than the (standardized) ideological distance between the median Obama voter and the median Romney voter, it’s hard to make the argument that the electorate is more ideologically polarized than it was in 2012. (I’m distinguishing here between ideological polarization — how far apart the two parties’ bases are on the issues — and emotional polarization, which is how much the two parties hate each other.)
What does appear to be the case is that voters didn’t “sort” as well in 2016 as they did in 2012. Suppose we define the “right” candidate for you as being the candidate that was most popular among voters who had the same ideological score as you, and the degree to which voters sort incorrectly as being the rate at which they choose the wrong candidate. Then the rate at which voters sort badly is approximately the purple area where the two candidates’ ideological distributions intersect. In 2016, for example, that area represents the probability that a person votes for Clinton despite Trump being more popular with voters who are as conservative as they are, plus the probability that a person votes for Trump despite Clinton being more popular with voters who are as liberal as they are. By this metric:
In 2016, 7% of Trump voters were liberal enough that you’d expect them to support Clinton, and 7% of Clinton voters were conservative enough that you’d expect them to support Trump.
In 2012, 5% of Romney voters were liberal enough that you’d expect them to support Obama, while 6% of Obama voters were conservative enough that you’d expect them to support Romney.
That is, in 2012 voters were a little better at choosing the ideologically “right” candidate for them. And that’s again consistent with the broad antipathy toward both candidates in the 2016 election, which you might expect to lead to somewhat more crossover voting than usual.
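For those who want to replicate this, here is one rough way to estimate that overlap area numerically, sketched on simulated score distributions rather than the actual CCES data:
import numpy as np
def overlap_area(scores_a, scores_b, bins=50):
    # Integrate min(density_a, density_b) over a shared grid; this
    # approximates the purple intersection area of the two curves
    lo = min(scores_a.min(), scores_b.min())
    hi = max(scores_a.max(), scores_b.max())
    grid = np.linspace(lo, hi, bins + 1)
    dens_a, _ = np.histogram(scores_a, bins=grid, density=True)
    dens_b, _ = np.histogram(scores_b, bins=grid, density=True)
    return np.sum(np.minimum(dens_a, dens_b)) * (grid[1] - grid[0])
rng = np.random.default_rng(1)
clinton_like = rng.normal(-1.2, 1.0, size=2500)  # toy ideology scores
trump_like = rng.normal(1.2, 1.0, size=2500)
print('estimated mis-sorting area: %.3f' % overlap_area(clinton_like, trump_like))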
I don’t want to take too strong of a stance on this very last part: the differences are small enough that sampling error is a plausible culprit for them. That is, I could easily draw a different random sample of 5,000 respondents from the survey and possibly find that voters were better at sorting in the 2012 election. It’s not likely — 5,000 is quite a lot — but it’s possible.
At any rate, it shouldn’t distract from the larger goal here, which was to explore a way to quantify issue-based political sorting. New and interesting as I hope this was to you, though, it’s not quite as new within political science. Keith Poole and Howard Rosenthal created the DW-NOMINATE scaling method, which essentially measures partisanship using what we’ve done here, only using members of Congress instead of CCES respondents and roll call votes instead of responses to survey questions. If anything, it does an even better job of separating Democrats and Republicans than the issues on the CCES: not surprising, really, given the large number of roll call votes and comparatively small number of members of Congress. Here, for example, are the DW-NOMINATE scores for the 2015–2017 Senate:
DW-NOMINATE scores for the 114th Congress computed by Poole and Rosenthal (2017).
In fact, the first dimension alone perfectly separates the two parties (draw a vertical line straight up through zero and you’ll have only blue and green on the left and red on the right). Moreover, if you’re familiar with some of these senators, the relative positions of these names should also make sense. Ted Cruz (R-TX) is super far out to the right. Elizabeth Warren (D-MA) is super far out to the left. Centrists like Joe Manchin (D-WV) and Susan Collins (R-ME), whose states have voted for the other party in every one of the last five presidential elections, are close to zero.
Poole and Rosenthal claim that their DW-NOMINATE dimensions respectively represent an “economic/redistributive” axis and a “social/racial” axis. That may have been true back when the American Political Science Review was warning the country’s political leaders about the dangers of low partisanship — when you could find support for segregation in the Democratic Party and support for universal healthcare in the Republican Party — but today it’s probably more accurate to simply characterize the first dimension as partisanship, the way we did for the CCES responses.
Actually, looking back on it now, I’m kinda surprised it worked as well as it did. Vote choice is a complex process; doctorates have been given out for writing about it. So the fact that you can do a pretty good job of separating voters along party lines based on issue questions alone is pretty cool. And it could suggest that voters, for the most part, tend to vote the same way as voters who are as liberal or conservative as they are. Or, more cynically, it could also mean that voters are partisans first and take cues from their party.
Either way, though, voters seem to find the choice easy to make, and if nothing else, it should make the APSR happy.
| Polarization, Quantified: Dimension Reduction on the Issues (Part 2) | 6 | polarization-quantified-dimension-reduction-on-the-issues-part-2-13b2ad596f23 | 2018-05-01 | 2018-05-01 11:27:32 | https://medium.com/s/story/polarization-quantified-dimension-reduction-on-the-issues-part-2-13b2ad596f23 | false | 3,307 | Masters Students in Applied Statistics and Data Science at New York University | null | null | null | NYU Applied Statistics | nyu-a3sr-data-science-team | DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DATA ANALYSIS,STATISTICS | null | 2016 Election | 2016-election | 2016 Election | 54,660 | Mac Tan | Grad student, Quora Top Writer, and data scientist/junkie. In no particular order. | 769f4a9cc7df | mactan | 8 | 7 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-01-03 | 2018-01-03 03:06:59 | 2018-01-03 | 2018-01-03 03:08:08 | 0 | false | en | 2018-01-03 | 2018-01-03 03:08:08 | 0 | 13b2baec6cca | 2.871698 | 7 | 1 | 0 | Artificial Intelligence (AI) is one of the most important and misunderstood sciences of today. Much of this misunderstanding is caused by a… | 1 | Cybernetics explains what AI is — and what it isn’t
Artificial Intelligence (AI) is one of the most important and misunderstood sciences of today. Much of this misunderstanding is caused by a failure to recognize its immediate predecessor — Cybernetics. Both AI and Cybernetics are based on binary logic, and both rely on the same principle for the results they produce: intent. The logical part is universal, the intent is culture-specific.
A bit of history. In the 1940s, a team of scientists headed by Norbert Wiener developed the first self-regulating and self-correcting systems. Wiener called it Cybernetics, from the Greek for steersman. The US military used the technology to develop the first guided missiles, and in the 1950s the technology became standard equipment on airliners: the automatic pilot.
Wiener and his team developed the required technology using digital (binary) rather than analog circuits. Most computers at the time were based on analog systems. Like most other computer scientists of the time, Wiener realized that digital systems were more precise and easier to program. By the 1960s nearly all computer design had moved to binary systems.
Boolean logic
One reason Wiener turned to digital systems was Boolean algebra. In the 19th century, English mathematician George Boole developed an algebra of classes based on binary choices — yes and no. Boolean logic could be perfectly implemented in digital circuitry: an open binary gate would mean off, a closed binary gate would mean on. The idea was first proposed in the 1930s by Claude Shannon, the spiritual father of the Information Age. (Shannon also realized that a binary number or string of numbers could be used to symbolize anything, from letters and symbols to sound and images.)
A textbook example of Boolean logic is this: If the symbol x represents a generic class of all “white objects” and the symbol y represents a generic class of all “round objects,” the symbol xy represents the class of objects that are both white and round. In binary computing, Boolean logic is simply a sequence of yes-or-no/true-or-false choices. There is no “maybe” — unless “maybe” is conditional on another probability or likelihood. “IF I get a raise, THEN I will buy a new car.”
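To make the white-and-round example concrete, here is the same class logic in a few lines of Python (purely illustrative):
# Boole's algebra of classes, expressed with Python sets
white = {'golf ball', 'snowball', 'paper'}
round_objects = {'golf ball', 'snowball', 'marble'}
print(white & round_objects)  # xy: objects that are both white and round
# A conditional decision is still a binary yes/no in the end
got_raise = True
will_buy_new_car = got_raise  # IF I get a raise, THEN I will buy a new car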
IBM’s AI system named Deep Blue used Boolean logic to beat world chess champion Garry Kasparov. “IF white opens with D4, THEN I will counter with C5.” The Chinese game of Go has many more possible moves than chess, but Google’s AlphaGo AI program defeated world champion Ke Jie. A computer beating the world Go champion is impressive, but it’s not magic. AI can simply weigh and process more options faster than humans can.
Subjective logic
The rule of thumb in AI: bigger is better. Bigger in this context means more data. “Whoever has the most data wins,” the saying goes. And China, with its 1.3 billion people, is generating massive amounts of data, from the very mundane, like consumer preferences, to the highly personal and sensitive, like medical records and social attitudes. The power that comes from having valuable data on nearly a quarter of the world’s population (combined with being the world’s largest manufacturer) can only be imagined at this point.
Consumer preferences, self-driving cars and logistics are still part of what we may call AI 1.0. Things become more complex when AI is applied to social, environmental and ethical domains where values, judgment and intent come into play. The intent of a chess computer is straightforward — winning. AI applied to domains that impact the lives of people involves issues that go beyond winning and losing. But the fundamental — binary/Boolean — principle remains the same.
Regardless of how AI develops, it will be based on binary/Boolean logic, (a point we may have to revisit if scientists succeed in merging analog and digital computing or when we develop quantum computing)*. For now, Boolean logic is just that, a binary choice between multiple options. IBM’s Deep Blue made some moves the programmers did not anticipate, but it still acted within the binary/Boolean logic of the system. Unless instructed to ignore the difference between fiction and non-fiction, AI will not make up non-existing facts, just like an autopilot will not fly an aircraft to a non-existing airport.
AI can be applied to every imaginable domain of human concern, and it holds up a mirror. Whatever domain we choose — economics, environment, education — it asks us to define our intention, just like an autopilot asks us to pick a destination.
| Cybernetics explains what AI is — and what it isn’t | 125 | cybernetics-explains-what-ai-is-and-what-it-isnt-13b2baec6cca | 2018-06-16 | 2018-06-16 10:16:38 | https://medium.com/s/story/cybernetics-explains-what-ai-is-and-what-it-isnt-13b2baec6cca | false | 761 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Jan Krikke, China, AI and Quantum Physics | Author of Quantum Physics and Artificial Intelligence in the 21st Century: Lessons Learned from China (available 9/2018) | ef02f344d80e | jankrikkeChina | 27 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | #point to a Google storage project bucket, will use your project id
bucket = 'gs://' + datalab_project_id() + '-coast'
#make bucket if it doesn't already exist
!gsutil mb $bucket
import google.datalab.bigquery as bq
# Create the dataset
bq.Dataset('coast').create()
#create the schema (map) for the dataset
schema = [
{'name':'image_url', 'type': 'STRING'},
{'name':'label', 'type': 'STRING'},
]
# Create the table
train_table = bq.Table('coast.train').create(schema=schema, overwrite=True)
#load the training table
train_table.load('gs://cloud-datalab/sampledata/coast/train.csv', mode='overwrite', source_format='csv')
#create the eval table
eval_table = bq.Table('coast.eval').create(schema=schema, overwrite=True)
#load the testing (evaluation) table
eval_table.load('gs://cloud-datalab/sampledata/coast/eval.csv', mode='overwrite', source_format='csv')
!gsutil cat gs://cloud-datalab/sampledata/coast/dict_explanation.csv
output: (label code and description)
class,name
1,"Exposed walls and other structures made of concrete, wood, or metal"
2A ,Scarps and steep slopes in clay
2B ,Wave-cut clay platforms
3A ,Fine-grained sand beaches
3B ,Scarps and steep slopes in sand
4,Coarse-grained sand beaches
5,Mixed sand and gravel (shell) beaches
6A ,Gravel (shell) beaches
6B ,Exposed riprap structures
7,Exposed tidal flats
8A ,"Sheltered solid man-made structures, such as bulkheads and docks"
8B ,Sheltered riprap structures
8C ,Sheltered scarps
9,Sheltered tidal flats
10A ,Salt- and brackish-water marshes
10B ,Fresh-water marshes (herbaceous vegetation)
10C ,Fresh-water swamps (woody vegetation)
1OD,Mangroves
#create the query: %%bq query --name <query_name>, followed by a SQL-like statement
%%bq query --name coast_train
SELECT image_url, label FROM coast.train
#execute the query
coast_train.execute().result()
#import ml libraries and functions
from google.datalab.ml import *
#set labels (names) for the datasets with the tables for training, eval
ds_train = BigQueryDataSet(table='coast.train')
ds_eval = BigQueryDataSet(table='coast.eval')
#sample of training and eval data, for simple example - 1000
df_train = ds_train.sample(1000)
df_eval = ds_eval.sample(1000)
#plot a bar chart for the training values
df_train.label.value_counts().plot(kind='bar');
#plot a bar chart for the eval (test) values
df_eval.label.value_counts().plot(kind='bar');
| 22 | e52cf94d98af | 2018-08-29 | 2018-08-29 22:41:27 | 2018-08-30 | 2018-08-30 23:46:04 | 4 | true | en | 2018-09-21 | 2018-09-21 16:17:53 | 2 | 13b2ffb26e67 | 3.484906 | 5 | 0 | 0 | Part 1: Sample data, simple usage | 5 | Using Google Datalab and BigQuery for Image Classification comparison
Part 1: Sample data, simple usage
Google Datalab and BigQuery are useful for image classification projects. We will start with a simple project here. First things first — Google Datalab is used to build Machine Learning (ML) models and runs on a Google Cloud virtual machine. BigQuery is a cloud-based big data analytics web service for processing very large read-only data sets using SQL-like syntax. Basically, what previously might have been done on a PC or networked computer using dedicated resources and an installed database can now be accessed through any computer with an internet connection. All the heavy lifting and processing is done in the cloud, achieving the same result in a more efficient manner.
Before you begin, ensure that:
You signed on to a Google Cloud account.
Google Compute Engine VM is created and active.
Machine Learning and Dataflow APIs are enabled.
You have an active project, Datalab and an active notebook.
If you are not sure how to do any of the above, there are several good articles that show you how — here’s one: https://towardsdatascience.com/running-jupyter-notebook-in-google-cloud-platform-in-15-min-61e16da34d52.
Acknowledgement: The images and the code example are from Google Datalab samples, with my explanations added. The images are from low-altitude aerial photography of Texas shorelines, and the purpose of the program is to predict the type of coastline shown in each image (tidal flats, man-made structures, etc.), these being the main categories in this library. Link: gs://cloud-datalab/sampledata/coast. See https://storage.googleapis.com/tamucc_coastline/GooglePermissionForImages_20170119.pdf for details.
These are the steps we will take:
Define a BigQuery dataset — define a name, and create a schema (structure definition with field names and types)
Create tables for training and testing / evaluation
Import data from existing BigQuery tables (training, evaluation) that contain image files, to the dataset’s tables
Run the job to create the BigQuery dataset
Execute the dataset to populate tables with existing Google data
Review the data by creating histogram plots for training and testing (evaluation) data
Start a new notebook file and input:
Load the data from CSV files to Bigquery table.
Type the following for the label description:
BigQuery — create the query; notice it is similar to SQL
Results from executing the BigQuery query: note the fast execution and processing time
Sample the data to around 1000 instances for visualization. Our data is very simple, so we simply draw histograms of the labels and compare training and evaluation data.
Bar chart showing type of image at bottom (x-axis) and # of image files on left (y-axis) for TRAINING
Bar chart showing type of image at bottom (x-axis) and # of image files on left (y-axis) for TESTING / EVAL DATA
Notice that the data is similar for both, as this is a small sample and a simple evaluation case. Most of the image files are of type ‘10A’ — Salt- and brackish-water marshes. This can be expanded to create a model and do more intensive classification for better predictions.
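As an aside, you could also compute these label distributions directly in BigQuery instead of sampling into pandas; here is a sketch following the same %%bq pattern used earlier (the query name is arbitrary):
#count images per label directly in BigQuery
%%bq query --name label_counts
SELECT label, COUNT(*) AS total
FROM coast.train
GROUP BY label
ORDER BY total DESC
#execute the query
label_counts.execute().result()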
| Using Google Datalab and BigQuery for Image Classification comparison | 11 | using-google-datalab-and-bigquery-for-image-classification-comparison-13b2ffb26e67 | 2018-09-21 | 2018-09-21 16:17:53 | https://medium.com/s/story/using-google-datalab-and-bigquery-for-image-classification-comparison-13b2ffb26e67 | false | 738 | A collection of technical articles published or curated by Google Cloud Platform Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google. | null | googlecloud | null | Google Cloud Platform - Community | null | google-cloud | GOOGLE CLOUD PLATFORM,DEVELOPERS,CLOUD COMPUTING,DEVOPS,TECHNOLOGY | gcpcloud | Google Cloud Platform | google-cloud-platform | Google Cloud Platform | 4,042 | Hari Santanam | I am interested in AI, Machine Learning and its value to business and to people. I occasionally write about life in general as well. | 944f4840db46 | hari.santanam | 96 | 58 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-01 | 2018-06-01 19:21:53 | 2018-06-10 | 2018-06-10 10:18:07 | 0 | false | en | 2018-06-10 | 2018-06-10 10:18:07 | 8 | 13b30cb5f22a | 8.807547 | 64 | 0 | 1 | What is Zebi? | 3 | Real World Blockchain
What is Zebi?
I saw unusually intense and persistent FUD about Zebi in the last few weeks. A few called it a scam coin! Come on, wake up, do some research!! Some talked about promises being broken; not true. We kept the promise of “listing on an exchange in Apr”. Yes, I understand the listing on bigger exchanges is taking longer than usual, but it is not a sign of poor performance by Zebi. There are good reasons; see more in the section below on Exchange Listing. Some said “we did not hype enough”; we are not hype, we are not virtual, we are not vaporware, so what are we then? You will know once you read a bit more. Why did the price of Zebi Coin go down? Probably because some short-term minded participants sold; it’s their loss! Many of them are likely making a small profit, but they are losing out on the opportunity of a lifetime.
Why? Because I really believe the battle for the “most useful Blockchain of the world” is not over, and I don’t think the world is sold yet (listen to this amazing song). This is just the beginning of the Blockchain era; Bitcoin/Ethereum are like the first inventions that may not persist, just like Netscape of the early internet age. There is no Google of Blockchain yet, and Zebi is going to be a huge contender. If you look at the extent of innovation so far, only Bitcoin was a breakthrough innovation, although its ego-less creator carefully put together the latest innovations in Distributed Systems, Cryptography, currency creation, decentralized trust, etc. Others, including Ethereum, were only incremental improvements, forks off of Bitcoin. Interesting tidbit: guess the area of my Computer Science master’s thesis in 1994–95; it was in distributed systems!
Bit of history of Zebi. Back in late 2015, after seeing lack of persistent mainstream adoption of popular Blockchains like Bitcoin/Ethereum in the day to day use cases, not counting the use case of sending virtual money from one account to another, Zebi started building its own Blockchain to solve problems of Big Data. At the same time, we did not want to become another crypto-popular Blockchain platform, with not much adoption in non-crypto, but real world, as they have huge risk of never being adopted into real world! When I say adoption, I mean day to day usage in a production system of Governments, Enterprises, or Individuals, not just Memorandums of Understanding (MOUs) or Proof of Concepts (POCs). Maybe, it is just a coincidence, that I was speaking on the panel of a Blockchain event in Mumbai on Friday, and the panel was titled “Will 2018 be the year blockchain moves from PoC to mainstream?” Zebi is clearly the leader in that emphasis to do things in the mainstream vs. just POCs.
Real World Blockchain
I am excited to report that Zebi continues to make broad impact in the real world. Earlier this year, several Indian and global media articles covered the real world use of Zebi Chain in an Indian state government land registry. As announced on 29 Apr, 2018, hotels in Visakhapatnam Urban area were using Zebi AI Chain, huge achievement already! Within the last few days, Visakhapatnam Rural District Police organized a press meet and officially launched Zebi AI Chain powered software for brand new regions that includes many additional areas like Araku, Anakapalli, Narsipatnam and Chodavaram near Visakhapatnam. This allows us to get to more than 400 hotels on Zebi AI Chain by the end of June 2018, much quicker than our internal projections!
So that summarizes Zebi Chain on Land Registry and Zebi AI Chain for the Hospitality Industry, each a multi-billion dollar industry. What else? Two deals are in advanced stages, a few are in early stages, and one is being implemented as a pilot. Five more markets (Dubai, Thailand, Malaysia, Bangladesh, and Sri Lanka) have been added to the list of Zebi pursuits, all based on new incoming calls. We are not announcing them yet. Many of our competitors and their followers seem content with POCs or even with high-level MOUs, making a big deal out of them. That’s not Zebi. We are the real world Blockchain; we will announce when the Zebi Blockchain Platform is being used at another customer or in another industry, which we expect to happen on a regular basis, given the growing pipeline!
Long Term Focus
It is still the same vision and same team, laser focused on building a long-term sustaining company. We know how to build products and deliver business value. Do people still remember that we are probably the only ICO that locked up all founder/team/partner allocations for 18 months and more? The only tokens in circulation are those sold during the crowd sale, including token sale costs. All the Zebi Coin participants, large and small, that I directly know are holding for the long run!
Short term price movements are not really in anyone’s control; they are typically driven by market sentiment and individual emotions. Zebi is committed to constantly increasing the utility of the token by growing the customer base and innovating on the Blockchain Platform. Like I said earlier, we intentionally started off as a private blockchain to ensure real world adoption, and the plan is to become a public blockchain by the end of 2019. After a few years, many of the hyped-up projects will vanish, but Real World Blockchains like Zebi will remain forever.
However, most individual ICO participants are concerned about only one thing: that their coin’s price should go up every day. Not going to happen. Remember that money always shifts from the impatient to the balanced people. Only the long-term focused ICO participants will survive. If you are focused on the long term, why would you bother about short term price movements?
Exchange Listing
How many people know that exchanges have been going through a rough period in the last several months? Regulators are tightening controls. Hackers continue to create havoc. Because of the proliferation of junk coins in 2016–2017, most good exchanges are taking a big pause in 2018 and have started spending much more time evaluating coins. Some exchanges took a few months to first delist bad coins. Some exchanges are going through organizational changes, with CEOs or other key executives moving out/in. All of this adds to delays and results in impossible-to-predict timelines. At the same time, we continue to make steady progress with OKEx and other Top 20 exchanges, and I expect to list in one by the end of June. This is the date I gave way back on 29 May in a Tweet, and that projection is still valid because of the continued progress!
Conclusion
As an ICO participant, you should look at the team, the product and market potential, and most importantly if the project has real world usage, ignoring the hype as much your emotions may allow you to. For example, what is the fair market value of Blockchain Platforms like Zebi’s that have penetration in two industries, each with multi billion dollar market size? We continue to focus on delivering long term value to our customers, partners and token holders. Thank you for being with us on a fantastic journey!
Weekly Q&A
Q1 Zebi won the award for Deep Tech startup at the StartAP Awards, what is the significance of these awards and this award in particular for Zebi?
We got this even though we declined sponsorship, which I always do to avoid conflicts of interest. It reaffirms the fact that we are the #1 Blockchain Platform of India. By the way, do folks still remember that Zebi was a Top 5 performing ICO of 2018, as per an ICO_Analytics tweet?
Q2 How are the executions of Zebi AI chain in the Hotel industry and Zebi chain for land registry coming along?
We are talking to multiple state governments in India about more Zebi Chain deployments across various industry verticals; one is in the final stage and will be announced soon. At the same time, we are seeing faster than expected growth on Zebi AI Chain, targeting 400+ hotels by the end of June!
Q3 Are there any new upcoming customers/partnerships to be expected for Zebi?
Yes of course, several in the growing pipeline, read above, we will only announce when we close the deal and they become our real customers.
Q4 How are Zebi’s roadmap developments coming along?
Excellent so far! We will improve and revise the roadmap by the end of September. e.g. The Zebi Public Blockchain by the end of 2019 is not yet part of the roadmap, it will be incorporated into published roadmap by Sep 2018.
Q5 How have the budgets from the 10 million raised in ICO been allocated, is there any specific budget for listings reserved?
$1 million has been allocated for listing on top 20 exchanges by volume. Yes, we are willing to spend money on worthy exchanges with no or few fake volumes, while the focus obviously remains on business growth and platform innovation.
Q6 Given the small ICO cap, does Zebi have new ways of financing available if the project would require so?
Remember, we already have a working Blockchain Platform, paying customers, and a healthy pipeline. We don’t need any additional money, as we expect to be profitable by the end of 2020. Ask yourself: how many Crypto Projects can say this? If not, will they survive in a few years?
Q7 Is listing ZCO on big/medium exchanges still a priority for Zebi, and if so, could you explain to us the incentive for Zebi to see ZCO listed?
We do care for the community and token holders. We want to continue to open up Zebi to new regions. The focus is to list on Top 20 exchanges by volume, expect to list by the end of June.
Q8 Could you explain to us the operational details and the most likely time frame for the ZCO airdrop promised to the 3000 participating community members?
The 0.2–0.4 ETH worth of ZCO reward airdrops will be deposited within 4 weeks after listing on a big exchange. Remember this is a large airdrop, so we plan to stagger it.
When we first requested the community to follow the Zebi referral link and deposit 1 ETH into their own OKEx account, we did not explicitly mention any reward for the first few days! Even then, several hundred deposits happened, so I expect most of the 0.4 ETH worth ZCO receivers to hold long term and not sell soon.
Q9 What is the strategic response to rebuild the crypto market confidence in Zebi after its recent price fall?
I made the mistake of not communicating often enough. From now on, Zebi will send updates at least once every 2 weeks.
Q10 How do you plan to attract and retain investors in ZCO in the future?
By focusing on increased utility for Zebi Coin, by increasing business, innovating on the Blockchain Platform, and by opening up Zebi potential to new geographies across the world.
Q11 Can you explain us a bit more about the technological framework behind Zebi Chain? — Will there be an active Github in the future?
Not yet. Some time in 2019, if not sooner, we will open source certain modules for transparency reasons. For now, protecting against plagiarism is more critical as we are growing rapidly.
Q12 How is Zebi’s development team organised? — Are all team members in house or does Zebi also work with non-hired external developers?
As of now, all are in house full time developers, located in Vizag and Hyderabad. Only mentors, advisors and some architects are part time.
Q13 How will ZCO’s token economy lead to an increase in value and demand for the ZCO token in the future? — How will holders be rewarded in the future (e.g. staking, buyback)?
See the response to Q10 above.
Also, the Zebi Public Blockchain will totally transform the global Blockchain space.
In addition, see excerpt from end of page 23 of white paper : “From time to time, Zebi may use some of its profits to buy back limited quantities of ZCO’s from holders”
Q14 How will Zebipay play a role for Zebi in the future? — Will INCX be able to play a role in this?
Early stage R&D. INCX will surely play a key role.
Q15 What do you feel is the Indian government’s stance on blockchain and crypto technology?
The Indian government loves Blockchain technology. They banned certain interactions of Banks with crypto-currencies, but that ban is being contested in Indian courts. Zebi Coin is a pure utility token, acceptable only on the Zebi Platform, so it wouldn’t really come under such a ban.
Also, the Indian finance ministry announced the intent to charge 18% GST on all Crypto trades.
Q16 Is there any marketing plan for Asia, especially China, South Korea and Japan? — Have you considered the language barrier (low English proficiency) for Asian investors?
We have White paper translated in multiple languages, including Chinese, Korean, Japanese, Russian and German. I am speaking at an event in Korea in July. We are evaluating smart and reasonable cost options to increase our presence globally. You will see more details in the coming weeks.
Q17 What are Vinod Raghavan’s upcoming Marketing targets for Zebi?
We will publish a global event calendar on our website that Zebi is going to attend. We will also broadcast these event details on all our social channels.
Q18 How is Zebi planning to manage its social media presence in the future? -what platforms will be actively managed and who will be responsible?
The in house team manages Telegram, Twitter, Facebook, LinkedIn, and Youtube.
I realize that we are a bit lacking on Reddit and Bitcointalk. We are internally evaluating how to boost our presence on these channels, as these are tech intensive and we will likely not make any code open source this year. It may not make sense to open up these two channels until we start open sourcing relevant modules.
| Real World Blockchain | 1,360 | real-world-blockchain-13b30cb5f22a | 2018-06-20 | 2018-06-20 13:51:35 | https://medium.com/s/story/real-world-blockchain-13b30cb5f22a | false | 2,334 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | Babu Munagala | Founder & CEO, Zebi | eb0be31dbb01 | BabuMunagala | 167 | 10 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-22 | 2018-07-22 20:25:44 | 2018-07-22 | 2018-07-22 21:11:22 | 8 | false | en | 2018-07-26 | 2018-07-26 22:58:54 | 2 | 13b45d1c3dc3 | 5.401258 | 6 | 0 | 0 | A $100 microscope that uses machine learning to perform microscopic urinalysis to the same level of accuracy as a $100k system. | 5 | The AutoScope: Automated Microscopic Urinalysis
How machine learning enables a $100 microscope to perform almost as well as a $100k system
Urinalysis tests help doctors diagnose and rule-out thousands of different diseases. They are a powerful way to non-invasively glimpse into what is happening inside the body. More than 200 million urinalysis tests are ordered each year in the USA and 46% of them require microscopic analysis, which involves counting the types of particles in urine.
Today, doctors need to send out microscopic urinalysis tests to a medical lab. It typically takes a few days for the lab to receive the sample, run the test, and report the results, after which the doctor interprets it and communicates the diagnosis to the patient.
We set out to shorten the time from doctor’s visit to diagnosis. We wanted to make microscopic urinalysis available to anyone, anywhere, at any time. For my research at MIT, I developed a $100 microscope that performs automated microscopic urinalysis to a similar level of accuracy as a $100,000 gold-standard system*. We were able to do this by using machine learning to analyze information that humans cannot easily understand.
This system can be utilized directly at the doctor’s office so that physicians can conduct microscopic urinalysis (and not just dipstick urinalysis) on the spot. Doing so (1) reduces the time to a diagnosis, and (2) allows physicians to bill insurance for the urinalysis test, thereby increasing their revenue. A similar system could be developed to do automated blood counts.
Here are the 3 most interesting parts of my work:
1. We use suboptimal, non-uniform lighting yet can still detect where particles are located
Traditional microscope manufacturers spend a lot of time and money making a flat, uniform lighting field†. This is because uneven illumination can obscure critical information and makes it challenging to correctly interpret an image. Our microscope has incredibly uneven lighting, but we make up for this by using machine learning to correctly identify particle locations.
Our lighting system. Our microscope utilizes four small LEDs on each corner of the imaging field. These LEDs produce an uneven illumination field.
Below is an image of hundreds of red blood cells (i.e. the tiny specks) taken with our microscope. We built a convolutional neural network (CNN) that can compensate for the illumination variation and illumination artifacts across the image to correctly find where the particles are located.
An image of red blood cells (RBCs) using our microscope (1x magnification). RBCs look like tiny specks dotting the image. Note the uneven illumination field, multiple artifacts, and glares. Also, notice how different RBCs look under the various lighting conditions (compare the circled ones, for example). Our CNN is able to learn what the lighting conditions are at different parts of the image.
Watch the animation below to see the CNN in action. We fed the CNN lots of training images so it could learn the variation in illumination across the image. At the end of each training cycle, we gave the CNN the same image (that it hadn’t been trained on) and saw how well the particle segmentation was performed. By the end, it correctly classified 99.6% of the pixels.
Particle segmentation in action. We developed a CNN that correctly identifies where particles are located despite the different lighting glares and shadows across the image. For each pixel, the CNN classifies whether the pixel is part of a particle or not part of a particle. We trained the CNN with hundreds of images. By the end, it could correctly identify 99.6% of pixels.
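To give a feel for what such a network can look like, here is a minimal, hypothetical sketch of a fully convolutional per-pixel classifier in Keras; the depth and layer sizes are illustrative assumptions, not the actual AutoScope architecture:
import tensorflow as tf
from tensorflow.keras import layers, models
def build_pixel_classifier():
    # Fully convolutional: every pixel gets a particle / not-particle
    # probability, and the surrounding context lets the network learn
    # (and compensate for) the uneven illumination
    inputs = layers.Input(shape=(None, None, 3))
    x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
model = build_pixel_classifier()
# model.fit(images, masks, ...) with masks labeled particle=1 / background=0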
2. We do not use any magnification yet can still accurately classify particles
Traditional microscopes utilize both high magnification and high resolution lenses to generate beautiful, detailed images of small particles. Generally, the higher the magnification and resolution, the more expensive the system.
The resolution of our microscope is ~6–8.5um, significantly worse than the resolution of traditional microscopes used for urine/blood analysis. To put our system’s resolution in perspective, a red blood cell (RBC) is 6um in diameter, and the features that distinguish an RBC are even smaller, clearly below the resolution of our system. Based on these resolutions, we should not be able to distinguish between a RBC and a dirt speck of the same size. Yet, once again, we utilize machine learning to get around this limitation and accurately classify particles.
Medical Lab Images vs. Our Images. (Left) Medical laboratories have microscopes with high resolution and magnification that enable beautiful, detailed pictures of particle to be taken. These images are easily interpreted by humans. (Right) Our microscope has 1x magnification and much lower resolution. Cells are nothing more than tiny specks on the image. Even the most well-trained expert would have difficulty telling apart the various specks.
To classify each particle in the image, we built another neural network. Watch the animation below to see the neural network identify each particle as either a red blood cell (RBC), white blood cell (WBC), nanoparticle (same size as WBC), or other (other particles, contamination, light glares, etc.).
Particle classification in action. We developed a CNN that can accurately classify each particle in the image. Since we illuminated particles from multiple oblique angles, we obtained multiple projections of each particle with unique diffraction patterns. The CNN is able to make sense of these patterns that we, as humans, cannot easily interpret.
Even the most well-trained expert would have trouble classifying the tiny particle specks in our microscope’s image. Computers, on the other hand, are able to take advantage of information that we, as humans, cannot. For example, the reason our system is able to correctly identify red blood cells even though they are smaller than the microscope’s resolution limit has to do with our lighting system. Our system’s 4 LED lights illuminate the particles from multiple oblique angles. Each light source generates a specific projection of the particle and thus produces different diffraction patterns. These patterns depend on the particle’s diameter, thickness, and optical properties. As humans, we cannot easily make sense of these different projections, but computers can. Even though our microscope has lower resolution, we make up for this by obtaining additional information about each particle through the multiple lighting sources.
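Purely for illustration, here is a hypothetical sketch of a small Keras classifier over fixed-size crops centered on each detected particle; the real model and its inputs may differ:
from tensorflow.keras import layers, models
NUM_CLASSES = 4  # RBC, WBC, nanoparticle, other
def build_particle_classifier(crop=32):
    # Each crop contains the particle together with the diffraction
    # patterns produced by the four LED angles, which carry the
    # class-discriminating signal
    inputs = layers.Input(shape=(crop, crop, 3))
    x = layers.Conv2D(32, 3, activation='relu')(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation='relu')(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation='relu')(x)
    outputs = layers.Dense(NUM_CLASSES, activation='softmax')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model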
3. We built a system that is 3 orders of magnitude cheaper but has the same performance as a traditional urinalysis system.
Commercial systems that do automated urinalysis cost approximately $100,000. Our sensor is 3 orders of magnitude cheaper and achieves the same performance as the commercial system.
Bill of materials for our microscope. The cost of parts for the microscope comes out to less than $100.
We performed a head-to-head comparison of our microscope to that of a commercial system found in a medical lab for 8 different synthetic urine samples. Our results correlated well (r2=0.98) with those of the medical laboratory.
Correlation between our particle counts and those of a medical laboratory. We performed a head-to-head comparison of our microscope to that of a medical laboratory. We tested 8 different synthetic urine samples with different particle concentrations. Our results correlated very well r²=0.98 with those of the laboratory.
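Once you have paired counts, the r² itself takes two lines; a sketch with made-up counts (not the actual study data):
import numpy as np
from scipy import stats
autoscope_counts = np.array([12, 45, 80, 150, 210, 320, 410, 500], dtype=float)
lab_counts = np.array([10, 48, 75, 160, 205, 330, 400, 515], dtype=float)
slope, intercept, r, p_value, std_err = stats.linregress(autoscope_counts, lab_counts)
print('r^2 = %.3f' % r**2)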
Conclusions
We developed an automated, low-cost urinalysis system that can be used at the point-of-care.
Our $100 system performs as well as a commercial $100,000 piece of equipment. This huge reduction is made possible by relaxing the design requirements of a typical microscope, including: (1) using non-uniform illumination, (2) using low-resolution, low-magnification optics. Instead, our microscope collects additional information on the particles by illuminating them from 4 different angles, providing different perspectives on each particle. We then use machine learning to parse through the information, and identify the unique features that classify each particle.
Machine learning is enabling a shift in how we think about imaging/diagnostics. This work is just one example. Up until now, the goal for equipment manufacturers had been to build top-end instruments that generate high-resolutions images that can be interpreted by humans. Machines, however, can make sense of information that humans cannot. When machines do the interpretation, we can relax the design specifications of the equipment. When machines do the interpretation, we can make diagnostic systems smaller, faster, and cheaper.
Thanks for reading and let me know what you think! What other sensing systems can we rethink by optimizing for machine analysis instead of human analysis?
Footnotes
* Some medical labs do microscopic urinalysis manually using a regular microscope and a well-trained technician. Other labs utilize a semi-automated machine. This machine is called the iQ-200 and costs around $100k-$150k.
† A Quantitative Measure of Field Illumination. J Biomol Tech. 2015 Jul; 26(2): 37–44. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4365985/
A full technical description of this project can be found at: https://github.com/SidneyPrimas/AutoScope/blob/master/MIT_Master_Thesis.pdf
| The Autoscope: Automated Microscopic Urinalysis | 81 | automated-microscopic-urinalysis-13b45d1c3dc3 | 2018-07-26 | 2018-07-26 22:58:54 | https://medium.com/s/story/automated-microscopic-urinalysis-13b45d1c3dc3 | false | 1,131 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Sidney Primas | At MIT working on machine learning in healthcare. Previously designed wearable tech at Jawbone. Proud Duke engineer. | b9068918c69c | sidneyprimas | 108 | 109 | 20,181,104 | null | null | null | null | null | null |
0 | import tensorflow as tf
# TensorRT integration module that ships with TensorFlow 1.7 and later
import tensorflow.contrib.tensorrt as trt

# Reserve a fraction of GPU memory for TensorFlow; the value must be
# between 0 and 1, and TensorRT engines can use the remainder
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.67)
# Build a TensorRT-optimized graph from a frozen TensorFlow graph
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=output_node_name,
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode=precision,
    minimum_segment_size=3)
# Same conversion, requesting FP16 (half) precision
trt_graph = trt.create_inference_graph(
    getNetwork(network_file_name),
    outputs,
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode="FP16")
# Same conversion, requesting INT8 precision; this produces a calibration
# graph that must first be run on representative input data
trt_graph = trt.create_inference_graph(
    getNetwork(network_file_name),
    outputs,
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode="INT8")
# After calibration, convert the calibration graph into the final
# INT8 inference graph
trt_graph = trt.calib_graph_to_infer_graph(calibGraph)
| 5 | dca47aab201b | 2018-04-17 | 2018-04-17 16:21:30 | 2018-04-18 | 2018-04-18 17:23:09 | 8 | false | en | 2018-04-18 | 2018-04-18 17:23:09 | 4 | 13b49f3db3fa | 8.254088 | 72 | 3 | 0 | Posted by: | 1 | Speed up TensorFlow Inference on GPUs with TensorRT
Posted by:
Siddharth Sharma — Technical Product Marketing Manager, NVidia
Sami Kama — Deep Learning Developer Technologist, NVidia
Julie Bernauer — Pursuit Engineering Solution Architect, NVidia
Laurence Moroney — Developer Advocate, Google
Overview
TensorFlow remains the most popular deep learning framework today, with tens of thousands of users worldwide. NVIDIA® TensorRT™ is a deep learning platform that optimizes neural network models and speeds up inference across GPU-accelerated platforms running in the datacenter, in embedded devices, and in automotive devices. We are excited about the integration of TensorFlow with TensorRT, which seems a natural fit, particularly as NVIDIA provides platforms well-suited to accelerate TensorFlow. This integration gives TensorFlow users extremely high inference performance plus a near transparent workflow when using TensorRT.
Figure 1. TensorRT optimizes trained neural network models to produce deployment-ready runtime inference engines.
TensorRT performs several important transformations and optimizations to the neural network graph (Fig 2). First, layers with unused output are eliminated to avoid unnecessary computation. Next, where possible convolution, bias, and ReLU layers are fused to form a single layer. Another transformation is horizontal layer fusion, or layer aggregation, along with the required division of aggregated layers to their respective output. Horizontal layer fusion improves performance by combining layers that take the same source tensor and apply the same operations with similar parameters. Note that these graph optimizations do not change the underlying computation in the graph: instead, they look to restructure the graph to perform the operations much faster and more efficiently.
Figure 2 (a): An example convolutional neural network with multiple convolutional and activation layers. (b) TensorRT’s vertical and horizontal layer fusion and layer elimination optimizations simplify the GoogLeNet Inception module graph, reducing computation and memory overhead.
If you were already using TensorRT with TensorFlow models, you knew that applying TensorRT optimizations used to require exporting the trained TensorFlow graph. You also needed to manually import certain unsupported TensorFlow layers, and then run the complete graph in TensorRT. You should not need to do that for most cases any more. In the new workflow, you use a simple API to apply powerful FP16 and INT8 optimizations using TensorRT from within TensorFlow. Existing TensorFlow programs require only a couple of new lines of code to apply these optimizations.
TensorRT sped up TensorFlow inference by 8x for low latency runs of the ResNet-50 benchmark. These performance improvements cost only a few lines of additional code and work with the TensorFlow 1.7 release and later. In this article we will describe the new workflow and APIs to help you get started with it.
Applying TensorRT optimizations to TensorFlow graphs
Adding TensorRT to the TensorFlow inference workflow involves an additional step, shown in Figure 3. In this step (highlighted in green), TensorRT builds an optimized inference graph from a frozen TensorFlow graph.
Figure 3: Workflow Diagram when using TensorRT within TensorFlow
To accomplish this, TensorRT takes the frozen TensorFlow graph and parses it to select sub-graphs that it can optimize. It then applies optimizations to the subgraphs and replaces them with TensorRT nodes in the original TensorFlow graph leaving the remaining graph unchanged. During inference, TensorFlow executes the complete graph calling TensorRT to run the TensorRT optimized nodes. With this approach, developers can continue to use the flexible TensorFlow feature set with the optimizations of TensorRT.
Let’s look at an example of a graph with three segments, A, B, and C. TensorRT optimizes Segment B, then replaces it with a single node. During inference, TensorFlow executes A, calls TensorRT to execute B, and then TensorFlow executes C. From a user’s perspective, you continue to work in TensorFlow as before.
TensorRT optimizes the largest sub-graphs possible in the TensorFlow graph. The more compute in the subgraph, the greater the benefit obtained from TensorRT. You want most of the graph optimized and replaced with the fewest number of TensorRT nodes for best performance. Based on the operations in your graph, it’s possible that the final graph might have more than one TensorRT node. With the TensorFlow API, you can specify the minimum number of nodes in a sub-graph for it to be converted to a TensorRT node. Any sub-graph with fewer than the specified number of nodes will not be converted to a TensorRT engine even if it is compatible with TensorRT. This can be useful for models containing small compatible sub-graphs separated by incompatible nodes, which would otherwise lead to tiny TensorRT engines.
Let’s look at how to implement the workflow in more detail.
Using New TensorFlow APIs
The new TensorFlow API enables straightforward implementation of TensorRT optimizations with a couple of lines of new code. First, specify the fraction of available GPU memory that TensorFlow is allowed to use, the remaining memory being available for TensorRT engines. This can be done with the new per_process_gpu_memory_fraction parameter of the GPUOptions function. This parameter needs to be set the first time the TensorFlow-TensorRT process starts. For example, setting per_process_gpu_memory_fraction to 0.67 allocates 67% of GPU memory for TensorFlow and the remaining third for TensorRT engines.
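As a minimal sketch of this step (the 0.67 split comes from the example above; the session setup around it is an assumption, not part of the article's listing):

import tensorflow as tf

# Give TensorFlow 67% of GPU memory; the remaining third stays free for TensorRT engines
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.67)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))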
The next step is letting TensorRT analyze the TensorFlow graph, apply optimizations, and replace subgraphs with TensorRT nodes. You apply TensorRT optimizations to the frozen graph with the new create_inference_graph function. This function uses a frozen TensorFlow graph as input, then returns an optimized graph with TensorRT nodes, as shown in the following code snippet:
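The snippet below is reproduced from this article's code listing (the import path is the TensorFlow 1.7 contrib location; frozen_graph_def, output_node_name, batch_size, workspace_size, and precision are placeholders supplied by the caller):

import tensorflow.contrib.tensorrt as trt

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,      # frozen TensorFlow graph
    outputs=output_node_name,              # names of the output nodes
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode=precision,
    minimum_segment_size=3)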
Let’s look at the function’s parameters:
input_graph_def: frozen TensorFlow graph
outputs: list of strings with names of output nodes e.g.[“resnet_v1_50/predictions/Reshape_1”]
max_batch_size: integer, size of input batch e.g. 16
max_workspace_size_bytes: integer, maximum GPU memory size available for TensorRT
precision_mode: string, allowed values “FP32”, “FP16” or “INT8”
minimum_segment_size: integer (default = 3), control min number of nodes in a sub-graph for TensorRT engine to be created
The per_process_gpu_memory_fraction and max_workspace_size_bytes parameters should be used together to split the GPU memory available between TensorFlow and TensorRT to get the best overall application performance. To maximize inference performance, you might want to give TensorRT slightly more memory than what it needs, giving TensorFlow the rest. For example, on a 12GB GPU, setting the per_process_gpu_memory_fraction parameter to (12 - 4) / 12 = 0.67 and the max_workspace_size_bytes parameter to 4000000000 allocates ~4GB for the TensorRT engines. Again, finding the optimal memory split is application dependent and might require some iteration.
Using TensorBoard to Visualize Optimized Graphs
TensorBoard lets us visualize the changes to the ResNet-50 node graph once TensorRT optimizations are applied. Figure 4 shows that TensorRT optimizes almost the complete graph, replacing it with a single node titled “my_trt_op0” (highlighted in red). Depending on the layers and operations in your model, TensorRT nodes replace portions of your model due to optimizations. The box titled “conv1” isn’t actually a convolution layer; it’s really a transpose operation from NHWC to NCHW.
Figure 4. (a) ResNet-50 graph in TensorBoard (b) ResNet-50 after TensorRT optimizations have been applied and the sub-graph replaced with a TensorRT node.
Using Tensor Cores on Volta GPUs
Using half-precision (also called FP16) arithmetic reduces memory usage of the neural network compared with FP32 or FP64. FP16 enables deployment of larger networks while taking less time than FP32 or FP64. NVIDIA’s Volta architecture incorporates hardware matrix math accelerators known as Tensor Cores. Tensor Cores provide a 4x4x4 matrix processing array which performs the operation D = A * B + C, where A, B, C and D are 4×4 matrices. Figure 5 shows how this works. The matrix multiply inputs A and B are FP16 matrices, while the accumulation matrices C and D may be FP16 or FP32 matrices.
Fig. 5: Matrix processing operations on Tensor Cores
TensorRT automatically uses hardware Tensor Cores for inference when they are detected and FP16 math is in use. On the NVIDIA Tesla V100, Tensor Cores offer peak performance about an order of magnitude faster than double-precision (FP64), and throughput up to 4 times higher than single-precision (FP32). Just use "FP16" as the value for the precision_mode parameter in the create_inference_graph function to enable half precision, as shown below. getNetwork() is a helper function that reads the frozen network from the protobuf file and returns a tf.GraphDef() of the network.
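The call, reproduced from this article's code listing:

trt_graph = trt.create_inference_graph(
    getNetwork(network_file_name),  # helper returning the frozen tf.GraphDef
    outputs,
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode="FP16")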
Figure 6 shows ResNet-50 performing 8 times faster under 7 ms latency with the TensorFlow-TensorRT integration using NVIDIA Volta Tensor Cores versus running TensorFlow only on the same hardware.
Fig. 6: ResNet-50 inference throughput performance
Inference using INT8 precision
Performing inference using INT8 precision further improves computation speed and places lower requirements on bandwidth. The reduced dynamic range makes it challenging to represent weights and activations of neural networks.
TensorRT provides capabilities to take models trained in single (FP32) and half (FP16) precision and convert them for deployment with INT8 quantizations while minimizing accuracy loss. Converting models for deployment with INT8 requires calibrating the trained FP32 model before applying the TensorRT optimizations described earlier. The workflow changes to incorporate a calibration step prior to creating the TensorRT optimized inference graph, as shown in Figure 7:
Figure 7. Workflow incorporating INT8 inference
First use the create_inference_graph function with the precision_mode parameter set to “INT8” to calibrate the model. The output of this function is a frozen TensorFlow graph ready for calibration.
Now run the calibration graph with calibration data. TensorRT uses the distribution of node data to quantize the weights for the nodes. It’s imperative that you use calibration data closely reflecting the distribution of the problem dataset in production. We suggest checking for error accumulation during inference when first using models calibrated with INT8. The minimum_segment_size parameter can help tune the optimized graph to minimize quantization errors: by changing the minimum number of nodes allowed in the optimized INT8 engines, you can adjust the final optimized graph to fine-tune result accuracy.
After executing the graph on calibration data, apply TensorRT optimizations to the calibration graph with the calib_graph_to_infer_graph function. This function also replaces the TensorFlow subgraph with a TensorRT node optimized for INT8. The output of the function is a frozen TensorFlow graph that can be used for inference as usual.
All it takes are these two commands to enable INT8 precision inference with your TensorFlow model.
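The two calls, reproduced from this article's code listing (the original listing assigns both results to trt_graph; the intermediate is named calibGraph here to make the data flow explicit):

# Step 1: build a calibration graph with INT8 precision
calibGraph = trt.create_inference_graph(
    getNetwork(network_file_name),
    outputs,
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode="INT8")

# ... execute calibGraph on representative calibration data ...

# Step 2: convert the calibrated graph into an INT8 inference graph
trt_graph = trt.calib_graph_to_infer_graph(calibGraph)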
If you want to check out the examples shown here, check out code required to run these examples at https://developer.download.nvidia.com/devblogs/tftrt_sample.tar.xz
Availability
We expect that integrating TensorRT with TensorFlow will yield the highest performance possible when using NVIDIA GPUs while maintaining the ease and flexibility of TensorFlow. NVIDIA continues to work closely with the Google TensorFlow team to further enhance these integration capabilities. Developers will automatically benefit from updates as TensorRT supports more networks, without needing to change existing code.
Find instructions on how to get started today at: https://www.tensorflow.org/install/install_linux
In the near future, we expect the standard pip install process to work as well. Stay tuned!
We believe you’ll see substantial benefits to integrating TensorRT with TensorFlow when using GPUs. You can find more information on TensorFlow at https://www.tensorflow.org/.
Additional information on TensorRT can be found on NVIDIA’s TensorRT page at https://developer.nvidia.com/tensorrt.
| Speed up TensorFlow Inference on GPUs with TensorRT | 301 | speed-up-tensorflow-inference-on-gpus-with-tensorrt-13b49f3db3fa | 2018-06-03 | 2018-06-03 08:53:53 | https://medium.com/s/story/speed-up-tensorflow-inference-on-gpus-with-tensorrt-13b49f3db3fa | false | 1,887 | TensorFlow is a fast, flexible, and scalable open-source machine learning library for research and production. | null | null | null | TensorFlow | tensorflow | TENSORFLOW,MACHINE LEARNING,DEEP LEARNING | tensorflow | Machine Learning | machine-learning | Machine Learning | 51,320 | Laurence Moroney | null | 95ff9cd3ac09 | lmoroney_google | 140 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | ebd2c0d9b484 | 2018-04-12 | 2018-04-12 21:42:43 | 2018-04-16 | 2018-04-16 17:15:21 | 1 | false | en | 2018-04-16 | 2018-04-16 17:15:21 | 29 | 13b612bb1a76 | 5.098113 | 4 | 2 | 0 | New government proposals would use artificial intelligence to vet immigrants. The danger for misuse is great. | 5 | What Lurks Behind All That Immigrant Data?
New government proposals would use artificial intelligence to vet immigrants. The danger for misuse is great.
Erica Posey, Research and Program Associate, Brennan Center for Justice & Rachel Levinson-Waldman, Senior Counsel, Brennan Center for Justice
APRIL 4, 2018 | 10:15 AM
The United States has a long history of using cutting-edge technology to collect and analyze data on immigrants. Unfortunately, we have an equally long history of misusing that data to justify nativist and exclusionary policies.
More than 100 years after the initial use of electric tabulating machines to track immigrant populations, the methods and technologies used to collect immigrant data have changed dramatically. The danger of misuse, however, remains. As we evaluate new government proposals to deploy artificial intelligence for “extreme vetting” programs, we would do well to keep history in mind and take care to ensure that the technology is effective and the analysis it generates is accurate before we allow it to inform immigration policy.
Some of the earliest statistical analysis tools to quantify immigrant populations arose in conjunction with late 19th-century census efforts. Starting in 1870, under the direction of Francis Walker, the Census Office in the Bureau of Statistics began taking steps to introduce electric tabulating machines, with a simultaneous push to collect more information about immigrants. Early machines, invented by future IBM-founder Herman Hollerith, could cross-tabulate data much more efficiently than hand-counting, enabling the Census Office to collect sophisticated information on first- and second-generation immigrants by country of birth and foreign parentage.
Walker used the data tabulated with Hollerith’s machines in 1890 to bolster the theory of “race suicide” — the idea that native-born Americans would have fewer children as more immigrants entered the workforce — and to promote severe immigration restrictions on those he referred to as “beaten men from beaten races[,] representing the worst failures in the struggle for existence.” Other statisticians would use the data to suggest that propensity to crime could be determined by race. Economists and statisticians in the early 20th century exploited this census data to provide scientific cover for race-based immigration restriction.
A little over a century later, the Department of Homeland Security has embarked on even more ambitious projects cataloguing immigrant data. DHS is developing technology not only to vastly increase collection of information, but also to perform automated analysis and make decisions on whom to admit into our country.
Reports suggest that DHS may be employing tools that ostensibly analyze tone and sentiment to try to identify national security threats from social media posts. The benefits of social media analysis for threat detection are questionable at best. Even human analysts can misunderstand plain English when cultural differences come in to play; automated tools can dramatically magnify such misunderstandings.
Researchers have found, for instance, that common natural language processing tools often fail to recognize African-American vernacular English as English. Such tools also raise significant privacy concerns. As experts have noted, broad social media monitoring would almost certainly involve collecting vast quantities of information on Americans, including on their professional networks, political and religious values, and intimate relationships. Automated analyses of languages other than English pose even greater challenges. Most commercially available natural language processing tools can only reliably process text in certain “high-resource languages” — languages that have a large body of resources available to train tech solutions for language interpretation — like English, Spanish, and Chinese (as opposed to, for instance, Punjabi or Swahili).
Applying tools trained on high-resource languages to analyze text in other languages will certainly divorce words from meaning. Consider the Palestinian man who posted “good morning” to his Facebook account. When Facebook’s automated translation tool mistranslated his caption into “attack them,” he was arrested by Israeli authorities and questioned for hours.
Social media information, along with biometric data and other details collected from a wide variety of government and commercial databases, may ultimately get fed into DHS’s Automated Targeting System — an overarching system for evaluating international travelers to generate an “automated threat rating,” supposedly correlated to the likelihood that a person is engaged in illegal activity. Aspects of the system are exempt from ordinary disclosure requirements — individuals can’t see their rating, know which data are used to create it, or challenge the assessment — and it’s applied to every person who crosses the border, regardless of citizenship status. We do know that the ATS system pulls information from state and federal law enforcement databases as well as airline-travel database entries (called Passenger Name Records, or PNR).
One civil liberties advocate who managed to get access to some of the data DHS retained on him found that it identified the book he was reading on one trip across the border. Other PNR data collected from travel database entries can reveal even more sensitive information, such as special meal requests, which can indicate an individual’s religion, and even the type of bed requested in a hotel, which could speak to sensitive relationship details. Any broader efforts to collect additional biographic or social media information may feed into the system as well.
At present, ATS is used to flag individuals for further human scrutiny when they travel into or out of the country or when their visas expire. But recent news reports suggest that automated tools may soon be used to make final decisions about people’s lives.
An Immigration and Customs Enforcement call for software companies to bid on the creation of an automated vetting system — to scrutinize people abroad seeking U.S. visas as well as foreigners already in the country — came to light in August 2017. In keeping with the goals outlined in President Trump’s Muslim ban, the request for proposals called for software capable of evaluating an individual’s propensity to commit crime or terrorism or to “contribute to the national interest.” The algorithm is meant to make automated predictions regarding who should get in and who should get to stay in the country, by evaluating open source data of dubious quality using a hidden formula insulated from public review.
For all of our advances in data science and technology, there is still no way for any algorithm to accurately predict an individual’s terrorism risk or future criminality. Instead, these tools are likely to rely on “proxies,” ultimately making value-based judgements using criteria unrelated to terrorism or criminality.
We’ve seen examples of this in policing contexts. Social media monitoring software companies marketed their products to law enforcement by touting their ability to monitor protesters online by tracking people using terms like “#blacklivesmatter,” “#ImUnarmed,” and “#PoliceBrutality.” An investigation by the ACLU of Massachusetts into the Boston Police Department’s use of Geofeedia software found that the tool was used to monitor the entire Boston Muslim community by tracking common Arabic words and treating them as suspicious.
In the context of an administration that has repeatedly announced intentions to restrict immigration based on religion, race, and need for public assistance, it is not hard to imagine what proxies might be used for propensity to commit crime or for one’s ability to contribute to the national welfare. In the context of America’s history of cloaking nativist and racist policies in pseudo-scientific language, we need to be vigilant in ensuring that the use of new technology doesn’t subvert our highest values.
Erica Posey is a research and program associate for the Liberty and National Security Program at the Brennan Center for Justice.
Rachel Levinson-Waldman is senior counsel to the Liberty and National Security Program at the Brennan Center for Justice.
This piece is part of a series exploring the impacts of artificial intelligence on civil liberties. The views expressed here do not necessarily reflect the views or positions of the ACLU.
| What lurks behind all that immigrant data? | 59 | what-lurks-behind-all-that-immigrant-data-13b612bb1a76 | 2018-05-31 | 2018-05-31 10:15:49 | https://medium.com/s/story/what-lurks-behind-all-that-immigrant-data-13b612bb1a76 | false | 1,298 | For nearly 100 years, America's guardian of liberty | null | aclu.nationwide | null | ACLU | aclu | CIVIL LIBERTIES,ACLU,SPEAK FREELY,GOVERNMENT,CIVIL RIGHTS | null | Privacy | privacy | Privacy | 23,226 | Erica Posey | Research and Program Associate, Liberty and National Security Program at Brennan Center for Justice | 3fc1c1d4d7c5 | ericaposey | 2 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | d8980b8d96ef | 2017-12-22 | 2017-12-22 07:16:06 | 2017-12-22 | 2017-12-22 07:43:06 | 3 | false | en | 2018-09-25 | 2018-09-25 13:47:29 | 3 | 13b765ea5ae9 | 3.614151 | 1 | 0 | 0 | AI could lead us to one of the biggest social and economic revolutions the world has ever seen. Cloud + AI = the next big wave. | 4 | The Confluence of Cloud and AI Is Infusing a Digital Disruption Avalanche
AI could lead us to one of the biggest social and economic revolutions the world has ever seen. Cloud + AI = the next big wave.
Editorial Note: This write-up is by TotalCloud.io CEO Pradeep Kumar.
Every time a big technology shift occurs, a new wave of digital disruption ensues. Back in 2006, cloud startled the IT and tech world with its Infrastructure-as-a-Service (IaaS) model. Today, it is the catalyst for several innovations and digital disruption across verticals. A similar kind of wave is shaking up the tech world again — and this time, it has to do with artificial intelligence (AI). It will not be long until the combination of cloud and AI will create another big technology shift, and an avalanche of digital disruption will ensue.
Cloud: Abstracting Every Path It Crosses
Cloud has emerged as an unprecedented winner. Today, it is the lynchpin to several technology advancements happening across the globe. Its ability to provide provisioning capacity and several IT infrastructure services at the click of a button has abstracted a majority of the capabilities that a traditional on-premise datacenter company would offer — so much so that the job role of a sysadmin has become obsolete.
Unlike traditional IT setups, technology innovators are now leveraging the cloud to support new ways of doing business. Third-party Software-as-a-Service (SaaS) companies and solution integrators are managing a diverse group of cloud and non-cloud vendors while offering all the IT services that an early-age Internet company used to provide. Concurrently, information architects and process designers are implementing collaborative business procedures that will allow for increased process automation.
On the other hand, automation is slowly eliminating the need for human intervention and helping accelerate repetitive but necessary tasks that humans find mundane. Today, automation is one of the favorite players in many companies. And with several automated integrations in the cloud, companies are now able to focus more on business logic rather than the repetitive elements of ops management. This powerful capability will slowly abstract the operations part of development, and a day will come when the role of a cloud engineer will be obsolete, too — like sysadmins.
Artificial Intelligence: The New Lynchpin for Digital Disruption
AI tops Gartner’s Strategic Technology Trends for 2018. Even as we speak, just as a technology, a new AI-based solution might be released in the market somewhere in some corner of the world. Such is the trend that AI has set recently, and slowly, the perception of product development is aligning with it. Thanks to big data, computation, natural language processing, sentiment analysis, pattern recognition, image recognition, and advances in algorithms for machine learning, AI is impacting all of the industry verticals.
For instance, customer service is one vertical seeing a significant impact: 73% of survey respondents agreed that they use AI to increase customer satisfaction. Another vertical relying heavily on AI is the financial sector, where robots are used to predict market data, forecast stock trends, and manage finances. To an extent, “robo-advisors” are the trend now. Then there’s the coveted transportation industry, where AI is used for making self-driving cars. It’s expected that these cars will come to market in 2018 from big players like Google, Tesla, and Uber.
Takeaway: AI could lead us to one of the biggest social and economic revolutions the world has ever seen.
Cloud + AI = The Next Big Wave
While AI and the cloud are making quick strides towards automation and building intelligence in their paths, a day will soon arrive where both these technologies will cross paths and create a new wave of digital disruption.
We’re slowly moving in this direction. For instance, IoT devices with built-in intelligence can connect to the cloud and play with data accordingly. That’s one aspect of integrating AI into everyday products. On the other hand, cloud service providers and cloud management service providers are slowly building intelligence into their solutions to automate operations tasks.
For instance, we at TotalCloud.io, are on a mission to create a cloud platform that’s self-sustaining and can orchestrate cloud resources based on its learnings and manage on its own with minimal human intervention, just like a virtual cloud engineer. We believe that cloud, in any model (public, private, or hybrid), will soon learn the nuances of managing on its own right from DR, deployment, and provisioning to orchestrating.
Ultimately, cloud and AI together will abstract major layers of product development across verticals. It will soon reach a point where a product manager can just talk to Amazon Alexa to build a fully functional mobile app, without even a tap or a single mouse click. And such advancements could be a precursor to another wave of digital disruption!
| The Confluence of Cloud and AI Is Infusing a Digital Disruption Avalanche | 3 | the-confluence-of-cloud-and-ai-is-infusing-a-digital-disruption-avalanche-13b765ea5ae9 | 2018-09-25 | 2018-09-25 13:47:29 | https://medium.com/s/story/the-confluence-of-cloud-and-ai-is-infusing-a-digital-disruption-avalanche-13b765ea5ae9 | false | 812 | TotalCloud is an interactive & immersive visual platform for real-time cloud management & monitoring. The platform provides rich topological view of cloud inventory superimposed with additional layer of contextual insights & operational capabilities. Visit the website here. | null | totalcloudio | null | TotalCloudio | totalcloudio | CLOUD VISIBILITY,AWS MANAGEMENT,AWS TOPOLOGY,AWS MONITORING,AWS SECURITY | totalcloudio | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Totalcloud.io | First-of-its-kind interactive and immersive visual platform for real-time AWS cloud management and monitoring. | b7af146b781f | Totalcloudio | 110 | 319 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | e4cbc12977e4 | 2017-05-24 | 2017-05-24 00:03:56 | 2018-01-07 | 2018-01-07 23:12:23 | 17 | false | en | 2018-01-07 | 2018-01-07 23:12:23 | 3 | 13b81e787773 | 4.777358 | 3 | 0 | 0 | Charts are important because we use them for important things. They’re how we run businesses, manage finances, discuss societal issues, and… | 4 | Lying with charts is pretty easy
Charts are important because we use them for important things. They’re how we run businesses, manage finances, discuss societal issues, and debate politics.
But it’s so easy to lie with charts. And I can prove it.
A few years ago Variety did a poll to find the most popular Batman: Christian Bale, Adam West, Michael Keaton, Ben Affleck, etc. As you’d expect, Christian Bale won, but I was surprised by how badly Ben Affleck did.
Source: http://variety.com/2017/film/news/best-batman-actor-poll-1202614916/
Now I don’t particularly like Ben Affleck’s Batman, personally I’ll always be a fan of Adam West.
But Ben Affleck is already one of the saddest people on earth, the last thing he needs is more bad news.
So let’s cheer him up by making a pie chart that makes him look more popular.
First we use Excel to turn that table into a normal pie chart with all the default Excel settings. Ben is very clearly in 4th position.
Next let’s make it 3-D.
Now that we have a 3-D perspective, let’s move Ben Affleck down to the bottom so that his segment will be closer to us and will therefore appear larger.
Next we just tweak the perspective to make Ben’s pie segment look as big as we want.
I feel like the chart on the right is pushing the perspective just a bit too far. Let’s not be too ambitious, I think Ben would be happy if he was the second most popular Batman.
I tried to find a picture of Ben Affleck happy and this was the best I could do.
His mouth is smiling but his eyes are still sad.
Now you might be thinking that this example is a bit contrived. It’s obviously ridiculous isn’t it? Surely no-one would ever try this in real life. Would they?
Well, take a look at this pie chart that Apple used in their 2008 keynote:
Source: https://www.theguardian.com
Notice how the 3-D perspective makes the 19.5% “Apple” share of the market appear much larger than the 21.2% “Other” section. They made their market share appear larger using the exact same technique I just used to make Ben Affleck’s Batman appear more popular.
For my next trick, I’m going to make global warming disappear
NASA has published the average global temperatures for the past 130 years so I’ll just use Excel to put them into a line chart.
As you can see, global temperatures were pretty stable until the 1930s when they started rising steadily. Now watch what happens when I change the minimum value on the Y-Axis scale from 55 to 0.
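The same trick takes one line in any plotting library. Here's a minimal matplotlib sketch (not from the article, which uses Excel; assume years and temps hold the NASA series, in °F, loaded elsewhere):

import matplotlib.pyplot as plt

# years, temps = the NASA global mean temperature series, loaded elsewhere
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True)
ax1.plot(years, temps)
ax1.set_ylim(55, 60)   # tight y-axis: the warming trend is obvious
ax2.plot(years, temps)
ax2.set_ylim(0, 60)    # y-axis from zero: the same trend flattens away
plt.show()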
Global warming disappears! But surely no-one would do this in real life would they? Well…
Manipulating charts isn’t just for climate skeptics either. We can go the other way too, we could make global warming appear even more extreme. We just tighten the Y-Axis and add a trendline:
Why is this so easy?
The problem is that although charts look objective and mathematical and scientific, they’re actually none of those things. Charts take something that’s abstract and turn it into something that’s visual. They take numbers and turn them into an idea, or a feeling or an opinion.
Every chart gives you a perspective on the underlying data, and like any perspective, you never get the full picture.
When we create charts, we often choose a perspective based on what we want to see. So if we’re not careful, we can literally hard code our biases into the chart.
Does this mean charts are bad?
No of course not. Charts are amazing tools, but like any tool, they can be used well or misused. All of us create or consume charts because they’re awesome. If anything I think the world needs more charts, but they have to be more good charts.
When we create charts, we simply need to be aware of our own biases. When we see charts made by someone else, we need to be aware of the biases that may be subtly encoded into them.
I actually don’t think everyone out there making charts is intentionally lying to us. The marketing team at Apple made a misleading pie chart because they’re biased to think Apple is a great company. So what? Of course they’re biased! I’d be more surprised if Apple’s marketing team didn’t think Apple was a great company.
I still use charts all the time, I love them. I think they’re useful and valuable. I even made charts to help me take care of my kids when they were babies.
Even right now I’m exploring new ways in which radar charts could be used to help teams adopt agile practices.
We need to recognise that charts are a powerful tool for interpreting data but they only show one perspective of that data. And that perspective may say more about the biases of the author, than the data itself.
p.s.
This has nothing to do with the rest of my blog post. I just think it’s a great nerd joke and more people need to see it.
Source: https://popperfont.net/2011/09/27/percentage-of-chart-which-resembles-pac-man-a-classic/
| Lying with charts is pretty easy | 72 | lying-with-charts-is-pretty-easy-13b81e787773 | 2018-03-30 | 2018-03-30 16:01:15 | https://medium.com/s/story/lying-with-charts-is-pretty-easy-13b81e787773 | false | 842 | Web and mobile app developer in Perth, Australia | null | null | null | Chris Nielsen | chris-nielsen | null | chrisnielsen123 | Data Visualization | data-visualization | Data Visualization | 11,755 | Chris Nielsen | Web and mobile app developer in Perth, Australia | c7803c8df966 | ChrisNielsen123 | 54 | 123 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 634d4b270054 | 2018-04-21 | 2018-04-21 10:07:14 | 2018-04-21 | 2018-04-21 10:11:13 | 1 | false | en | 2018-06-05 | 2018-06-05 08:43:56 | 3 | 13b8b27b832a | 0.996226 | 1 | 0 | 0 | What do you know about smart city? A smart city is basically a city that aspires achieving a future city by utilizing communication… | 5 | The Role Of Drones For Smart Cities
What do you know about smart cities? A smart city is basically a city that aspires to become the city of the future by utilizing communication technology solutions and trends. The main objective is to enhance residents’ lives by providing efficient infrastructure and services at nominal rates.
UAVs will play a major role in the functioning of smart cities, such as monitoring traffic, key infrastructure, and development work on a regular basis.
UAVs are easy to deploy, can perform difficult tasks, and can easily cover areas beyond human reach. Some examples where UAVs will be helpful are:
- Traffic Management
- Crowd Management
- Smart Transportation
- Natural Disaster Control and Monitoring
Source: https://bit.ly/2GPff92
About DEEPAERO
DEEP AERO is a global leader in drone technology innovation. At DEEP AERO, we are building an autonomous drone economy powered by AI & Blockchain.
DEEP AERO’s DRONE-UTM is an AI-driven, autonomous, self-governing, intelligent drone/unmanned aircraft system (UAS) traffic management (UTM) platform on the Blockchain.
DEEP AERO’s DRONE-MP is a decentralized marketplace. It will be one stop shop for all products and services for drones.
These platforms will be the foundation of the drone economy and will be powered by the DEEP AERO (DRONE) token.
| The Role Of Drones For Smart Cities | 1 | the-role-of-drones-for-smart-cities-13b8b27b832a | 2018-06-05 | 2018-06-05 08:43:57 | https://medium.com/s/story/the-role-of-drones-for-smart-cities-13b8b27b832a | false | 211 | AI Driven Drone Economy on the Blockchain | null | DeepAeroDrones | null | DEEPAERODRONES | null | deepaerodrones | DEEPAERO,AI,BLOCKCHAIN,DRONE,ICO | DeepAeroDrones | Deepaero | deepaeros | Deepaero | 0 | DEEP AERO DRONES | null | dcef5da6c7fa | deepaerodrones | 277 | 0 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-21 | 2018-08-21 03:32:26 | 2018-08-21 | 2018-08-21 03:32:36 | 0 | false | en | 2018-08-21 | 2018-08-21 03:32:36 | 1 | 13b9a9111810 | 2.950943 | 0 | 0 | 0 | [PDF] Download Data Science for Business: What you need to know about data mining and data-analytic thinking READ ONLINE
Link… | 1 | Read Online Called to Account: Financial Frauds That Shaped the Accounting Profession By Paul M Clikeman (ebook online) #book
[PDF] Download Data Science for Business: What you need to know about data mining and data-analytic thinking READ ONLINE
Link https://bestreadkindle.icu/?q=Data+Science+for+Business%3A+What+you+need+to+know+about+data+mining+and+data-analytic+thinking
Read or download Data Science for Business: What you need to know about data mining and data-analytic thinking, by Foster Provost, in PDF and EPUB formats.
| Read Online Called to Account: Financial Frauds That Shaped the Accounting Profession By Paul M… | 0 | read-online-called-to-account-financial-frauds-that-shaped-the-accounting-profession-by-paul-m-13b9a9111810 | 2018-08-21 | 2018-08-21 03:32:37 | https://medium.com/s/story/read-online-called-to-account-financial-frauds-that-shaped-the-accounting-profession-by-paul-m-13b9a9111810 | false | 782 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Hedy Pogue | null | a7d5de40eab4 | licbot | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-11 | 2018-04-11 02:21:31 | 2018-04-11 | 2018-04-11 03:43:54 | 2 | false | en | 2018-04-11 | 2018-04-11 03:43:54 | 2 | 13bd6d196512 | 6.436164 | 1 | 0 | 0 | Authors: Dan DiIorio, Henry Dunphy, Alec Kohlhoff, Justin Kreiselman | 3 | Design for Tension
Authors: Dan DiIorio, Henry Dunphy, Alec Kohlhoff, Justin Kreiselman
The goal of this project was to create a chatbot that tackles a tense subject. We developed a chatbot using FlowXO to educate college students about the implications of artificial intelligence. We wanted our chatbot to be able to discuss a wide variety of topics regarding AI. These topics include driverless cars, singularity, replacing jobs, and malicious use of AI. Altogether, the bot is able to help the user think and make decisions with regards to the future of AI.
Design Decisions
We first started this assignment by coming up with a list of ideas that we thought would lead to tense discussions. Some of our original ideas included scheduling for WPI classes, the trolley problem, drugs, suicide, net neutrality, the death penalty, arming teachers, and AI morality, to name a few. We then took a vote on what we thought was the most interesting topic to cover, and AI morality won.
Once we selected AI morality, we came up with a how-might-we statement. This was to keep our target audience and topic in check so we would not deviate too much. Our statement is “How might we build a slack chatbot to discuss AI Morals with college students?”
We then proceeded to list ideas of what we could talk about regarding AI. Some of the topics that were brought up include replacing jobs, self-driving cars, singularity, human interaction, data security, decision making, racism, and malicious use. We then each took one of four topics (replacing jobs, self-driving cars, singularity, and malicious use) and proceeded to research and design a conversation an AI would have with someone about it.
We decided to use flowxo.com as our tool for creating the chatbot, because it was recommended to us by our professor. As we created our chatbot, we decided to name it J.A.R.V.I.S. as a reference to the AI in the Iron Man comics and movies.
Industry Computerization and Joblessness
In order to understand the main concerns around industry computerization, we needed to understand questions that potential users may have. Thus, we conducted a “Wizard of Oz” scenario with a few friends who have an interest in the field of Artificial Intelligence. This was set up using a Discord bot where one of the team members was directly controlling the responses of the bot. Yet, the bot user had a “bot” identifier tag, fooling the user into potentially believing the responses were computer-generated. While discussing the joblessness that may come with developing AI, the following concepts were mentioned:
The belief that society is not prepared
Good for dropping costs, but will accelerate loss of jobs
How much time will people have to react to changes?
Is AI capable of progressing autonomously?
How do I make money and a living after AI conquers an industry?
While some of the concepts expressed relate to how the AI functions, which is more related to functionality than tension, we constructed several basic ideas that may be common around this issue. These topics include joblessness, overtaking industry (or industries), and existential thought.
The next step in this process was determine how to make this portion of the bot more cordial than robotic. This would allow the user to feel more comfortable talking with the bot, and have a more open mind to topics suggested. Thus, we included emoji in a few of the responses to make the bot appear more lively, as well as collect and respond with personally unique information in order to make the conversation unique to the user. For example, the bot will ask whose concerns the user is addressing (e.g. himself or herself, a friend, or humanity). The remainder of this conversation will use this information to personalize questions and responses.
In terms of functionality, the conversation flow will follow a method common in many video games. The beginning of the conversation gathers and/or tells preliminary information, and then the bot enters a loop. In this loop, the user has multiple options/topics to select from, each with their own message chain. After each chain, the bot returns to the topics menu and allows the user to select another option. Eventually, the user can choose to leave the conversation, and then the bot will say farewell. Overall, these methods and design decisions form an effective way of communicating information with the user.
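A rough sketch of that menu loop in Python-style pseudocode (the bot/user API here is entirely hypothetical — FlowXO configures this flow visually rather than in code):

def run_topics_menu(bot, user):
    # gather preliminary info used to personalize later responses
    subject = bot.ask(user, "Whose concerns are we discussing?",
                      options=["myself", "a friend", "humanity"])
    while True:
        topic = bot.ask(user, "Pick a topic",
                        options=["joblessness", "overtaking industry",
                                 "existential thought", "exit"])
        if topic == "exit":
            bot.say(user, "Thanks for chatting, goodbye!")
            return
        bot.run_message_chain(topic, subject)  # topic chain, then back to the menu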
Singularity design choices
The purpose of the singularity flow was to educate users on what an AI singularity is and whether we should fear it. An AI singularity is the point at which AI intelligence surpasses the combined intelligence of all of humanity. This may seem far-fetched, but researchers such as Ray Kurzweil estimate we will reach this point by the mid-to-late 2040s.
The flow starts with the bot asking if the user knows what a singularity is. If not, the bot gives the definition, then brings up an argument for and against worrying about AI reaching this point. The bot then asks which side the user agrees with. Depending on the user’s view, the bot brings up the counterargument with a link to more information on the topic and asks if this changes the user’s mind. If the answer is yes, the bot brings up the counterpoint to their new view. Finally, the bot asks if the user can estimate when AI will reach this level of intelligence. The flow ends after that.
This is the flow for the Singularity Path
We conducted some Wizard of Oz testing with a few of our roommates and used their feedback to develop the structure of the flow. The first thing that was apparent was that not many people understood the term “singularity”, so we provided the definition for that term at the start of the flow. The next piece of input was our roommates thought it was funny they were talking to an “AI” about AI taking over the world. Using this we decided to add a few dialogue blurbs where the bot references taking over the world.
Here is a demo video showing off this flow.
demo video.mp4
drive.google.com
Driverless Cars Design Choices
We started the design process by asking our friends and roommates what they would want to talk about with regards to self driving cars. The main response we got was that they wanted to talk about who would be at fault for accidents, the driver in the car or the AI. We found this to be very interesting so we decided to make this topic the center of discussion.
We also wanted to have the chatbot play devil’s advocate. We did this to push the user out of their comfort zone and to make them think about the topic from another point of view. To accomplish this we created a flow chart on lucidchart.com. After making the chart we did some user testing to see if we were successful with our goal. We went through the simulation with the same group of friends and roommates. One of them recommended that we use a website to help with the “what should the AI do” section of the talk. The website is called http://moralmachine.mit.edu. Here you are given a series of scenarios of who the car should kill. We found this to be very impactful and we included it in the chatbot’s flow.
The flow chart detailing how the chatbot would go through the Driverless cars section
Malicious Use of AI
We decided to work on the malicious use of AI because as AI becomes more and more prevalent, this issue seems to be brought up more and more. Be it in TV shows or the news, it’s not hard to find somebody who’s either terrified or welcoming of their new AI overlords.
The way we broke this part of the bot up into subsections was a design decision we had to make. AI can be used maliciously or beneficially in many ways, so we chose a select three: surveillance, content filtering, and hacking. We asked friends who are into technology and engineering what they thought was most threatening about AI, and these are what we came up with. Other options included speech synthesis and human impersonation, as well as network/social media analyses.
The way the bot deals with the user’s worries or agreement is basically the same way that the other parts of the chatbot deal with it: finding out the user’s opinion, giving a relevant article to the opposite of the user’s opinion, and wrapping it up with an article that aligns with the user’s beliefs.
Overall, the bot “played nice”, while still trying to open the user’s eyes to a viewpoint they may not have considered previously.
Reflection
Our team worked well together, and for the most part we were on the same page for a lot of our design decisions. As a whole, we enjoyed working with the topic we chose, and had a good enough group dynamic that we were able to work independently and bounce these ideas off of each other.
In retrospect, we would have done better to incorporate more freedom for the user in terms of what they’re allowed to do in conversation. Currently, many questions are answered with a simple yes or no, with the bot doing most of the talking. Our take on this debate-style bot meant that there isn’t much for the user to talk about, the only things being stating their opinion, and answering whether or not they’d like to see another article. Ideally, this chatbot would be able to listen for keywords in a user’s statement, detect how they feel about the subject, and give them a relevant article or example.
| Design for Tension | 1 | design-for-tension-13bd6d196512 | 2018-04-11 | 2018-04-11 16:59:12 | https://medium.com/s/story/design-for-tension-13bd6d196512 | false | 1,604 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Justin Kreiselman | null | d9b11391bd5e | justinkreiselman | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 863f502aede2 | 2017-11-08 | 2017-11-08 01:20:19 | 2017-11-08 | 2017-11-08 01:21:02 | 1 | false | en | 2018-09-24 | 2018-09-24 23:01:08 | 1 | 13be04416f56 | 2.313208 | 10 | 0 | 0 | AI-powered machines are now able to recognize images more accurately than human beings. This has prompted tech companies to explore the new… | 5 | Video Understanding is a New Vista for AI
AI-powered machines are now able to recognize images more accurately than human beings. This has prompted tech companies to explore the new field of video understanding, in which machines are trained to answer questions like “Who is in this video?” or “What are the cats in the video doing?”
In a session on video understanding at last week’s AI Frontiers Conference, Google Principal Scientist Rahul Sukthankar, Facebook Manager of Computer Vision Manohar Paluri, and Alibaba iDST (Institute of Data Science and Technologies) Chief Scientist Xiaofeng Ren all agreed that video understanding has an unfathomable potential if energized by AI.
Deep learning has certainly delivered better results than previous methods in video understanding research, said Sukthankar. Five years ago, multiple steps were required between input and output of training models, including manually designed descriptors and codebook histograms; now, deep learning offers end-to-end solutions by directly feeding data into the model. Deep-learned models can effect an 80% improvement in mean average precision over models using hand-crafted features.
Deep learning is already being used to optimize YouTube services such as large-scale video annotation and automated thumbnail selection.
Sukthankar said Google is planning to use video understanding to train robots to learn human movements from videos. At the conference Google introduced its time-contrastive network, a neural network that simulates actions in a video and learns basic movements such as standing or bending.
The above-mentioned research cannot be achieved without appropriately using large-scale open-sourced video datasets. Sukthankar said the characteristics of different video datasets correspond to different video understanding research fields. For example, Sports-1M and Youtube-8M are designed for video annotations; THUMOS, Kinetics, and Google’s recently released dataset AVA are used for action recognition; while YouTube-BB and Open Images can be applied to train models for object recognition.
Paluri from Facebook introduced the company’s newly released open-source visual data platform, dubbed Lumos. Based on FBLearner Flow, Lumos is a platform for image and video understanding. Facebook engineers need not be trained in deep learning or computer vision to train and deploy a new model using Lumos.
Paluri also announced the exciting news that Facebook will release two new datasets early next year: Scenes Objects & Actions (SOA) and Generic Motions.
Ren from Alibaba discussed application scenarios for video understanding, focusing on how to apply it to Alibaba’s e-commerce business. For example, Alibaba is able to recognize objects in video content and connect to a shopping weblink at Taobao (an Amazon-like platform). This year, Alibaba began allowing Taobao sellers to upload promotional videos. Taobao can then analyze the video content to improve product search.
It’s not just the tech giants that are achieving significant improvements in video understanding. Berlin-Montreal AI startup Twenty Billion Neurons GmbH (TwentyBN) introduced an AI system called Super Model that can observe actions in the real physical world and output a live caption of what it sees. Last year TwentyBN announced a funding round of US$2.5 million, and invited Dr. Yoshua Bengio to become an advisor.
Thanks to AI, machines are rapidly developing a clear and accurate perception of humans’ dynamic physical environments. And it would seem the more they can understand humans, the more humanlike they can become.
Journalist: Tony Peng | Editor: Michael Sarazen
| Video Understanding is a New Vista for AI | 18 | video-understanding-is-a-new-vista-for-ai-13be04416f56 | 2018-09-24 | 2018-09-24 23:01:08 | https://medium.com/s/story/video-understanding-is-a-new-vista-for-ai-13be04416f56 | false | 560 | We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights. | null | SyncedGlobal | null | SyncedReview | syncedreview | ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS | Synced_Global | Machine Learning | machine-learning | Machine Learning | 51,320 | Synced | AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B | 960feca52112 | Synced | 8,138 | 15 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | a72e580f87ae | 2018-05-21 | 2018-05-21 20:45:46 | 2018-05-22 | 2018-05-22 22:31:04 | 5 | false | en | 2018-05-22 | 2018-05-22 22:31:04 | 4 | 13c018c935fa | 1.161635 | 1 | 0 | 0 | Meet the HyperQuant Team — the people behind the revolutionary platform for automated crypto-trading, asset management and dApps creation. | 5 | Meet the HyperQuant Team
Meet the HyperQuant Team — the people behind the revolutionary platform for automated crypto-trading, asset management and dApps creation.
This platform brings together crypto enthusiasts, traders, developers and institutional investors in a process that enriches the crypto trading experience of all sides.
HyperQuant team consists of blockchain experts, quant traders and leading specialists from various walks of life. United they strive to make crypto trading as transparent and accessible to everyone as possible.
Watch the video to learn more.
HyperQuant Social Media
| Meet the HyperQuant Team | 1 | meet-the-hyperquant-team-13c018c935fa | 2018-05-22 | 2018-05-22 23:53:18 | https://medium.com/s/story/meet-the-hyperquant-team-13c018c935fa | false | 87 | Automatic Trading Revolution: https://hyperquant.net/ | null | hyperquant.net | null | hyperquant | hyperquant | null | HyperQuant_net | Algorithmic Trading | algorithmic-trading | Algorithmic Trading | 557 | HyperQuant | Automatic Trading Revolution https://hyperquant.net/ | da4c15da74be | hyperquant | 185 | 87 | 20,181,104 | null | null | null | null | null | null |
|
0 | rsync [OPTION]... SRC [SRC]... [USER@]HOST::DEST
# Use this command to view the current license agreement
$ Get-AzureRmMarketplaceTerms -Publisher "microsoft-ads" -Product "linux-data-science-vm-ubuntu" -Name "linuxdsvmubuntu"
# If you feel confident to agree to the agreement use the following command to enable the offering for your subscription
$ Get-AzureRmMarketplaceTerms -Publisher "microsoft-ads" -Product "linux-data-science-vm-ubuntu" -Name "linuxdsvmubuntu" | Set-AzureRmMarketplaceTerms -Accept
| 4 | null | 2018-03-17 | 2018-03-17 09:39:41 | 2018-03-21 | 2018-03-21 22:37:16 | 15 | false | en | 2018-03-21 | 2018-03-21 22:37:16 | 19 | 13c1a5b56f91 | 9.409434 | 2 | 0 | 0 | tl;dr; I put together a bunch of scripts on Github that let you deploy a VM from your command line as well as sync code from your local… | 5 | Automated dev workflow for using Data Science VM on Azure
tl;dr: I put together a bunch of scripts on GitHub that let you deploy a VM from your command line and easily sync code from your local directory to the VM, so you can use your local IDE and git but execute on the powerful remote machine. Perfect for Data Science applications based around Jupyter notebooks.
In my previous blog post I explained how to do Terraform deployment of an Azure Data Science Virtual Machine.
Overview of available commands
Motivation 😓
Recently I started to do some #deeplearning 🔮 as part of my Udacity Artificial Intelligence Nanodegree. When I was working on the #deeplearning Nanodegree last year I started to script starting/stopping an AWS GPU VM and rsyncing code around. This time I felt like giving the Azure Cloud a try, mostly because my daytime job lets me look at a lot of their services and I wanted to venture deeper into the Azure Data Science offerings. Being more of a software developer by trade and less of a data scientist 👨🔬 I often feel like my standards for versioning, testing and ease of development are beyond those that the ML ecosystem offers by default (hope that doesn't offend the data wizards out there). My development machine is a small MacBook without GPU support, so to train neural networks I wanted to get a Virtual Machine with a GPU on board. Azure offers VMs with a prebaked Ubuntu image containing all of today's Data Science tools: Python, Conda, Jupyter, GPU neural net libs etc.
Top features, see this list for full stack available on DSVM (source)
Having the perfect target for running my code, I was wondering how to keep my local machine as my main development machine — meaning I don't want to set up git on the VM to version my code. This is where our friend rsync comes into the picture 🖼. It lets you sync two directories over SSH.
The Goal 🏁
Being prone to over-engineering 🙄 my side projects, I started my journey of automating my VM workflow with the following goals:
Deploy (and delete) an entire VM by a template that I can version on Github
Start/Stop the VM from my command line so I don’t pay for it if I don’t need it (GPU VMs are 💰💰💰)
Get code changes I make on the VM using jupyter notebook synchronized to my local machine so I can git commit
Deploy infrastructure 📦
Again I opted for Terraform to deploy the VM. As mentioned in my previous blog post you could also use Azure Resource Manager Templates for that, but my personal favorite is Terraform 😍. So I continued from my previous findings to build the following Terraform recipe. The suggested setup is to place the script in an infra folder inside your project's working directory.
anoff/vm-automation
vm-automation - Bootstrap a VM for machine learning applicationsgithub.com
It creates several resources:
resource group: logical grouping on Azure that contains all the resources below
virtual network: well... a virtual private network that your resources use to communicate*
network subnet: the subnet that your VPN will use*
network interface: a network interface so your VM can bind against the virtual network
virtual machine: the actual compute resource (will spawn disk resources for file system)
public IP address: a static IP that will make your VM reachable from the internet
local executor (null resource): used to write some results of the VM creation process onto your disk
* sorry if I did not explain those correctly tbh I am not 💯% sure I understand correctly what they do either 😊
These are the created resources
Variables in the Terraform recipe
Leveraging Terraform variables, some of the properties of this recipe can be customized.
The easiest way to change the variable values is to edit the config.auto.tfvars file, which contains all the variable names and their descriptions as well.
You can find it in the Github repo right next to the Terraform recipe itself. As you can see, all variables have a default value even if you do not specify the .tfvars properties. The ones you most likely want to modify are admin_public_key and admin_private_key.
They are an SSH key pair that you will use to authenticate when connecting to the virtual machine later. During the Terraform process the public key will be stored on the VM so it will later recognize it as a valid key. The private key will be used to SSH into the machine near the end of the process to actually prepare the remote file system for later file transfers, namely to create a ~/work directory. You might also want to modify the admin username or the resource location.
config.auto.tfvars
Signing the license terms for Data Science VM ⚖️
You might see the following error when trying to run the Terraform script without having read this far.
Terraform error due to missing license agreement
The problem is that the DSVM is published via the Azure marketplace, and even though it does not incur additional charges on top of the Azure VM resources, you need to read and agree to the license terms. You can do this via PowerShell, as described in the error message. The complete process of opening a PowerShell session is explained in this Readme. The short version, if you already have PowerShell open, is to run:
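$ Get-AzureRmMarketplaceTerms -Publisher "microsoft-ads" -Product "linux-data-science-vm-ubuntu" -Name "linuxdsvmubuntu" | Set-AzureRmMarketplaceTerms -Accept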
After successfully signing the license terms you should see the following output in your shell
Run Terraform 🏃♂️
Once the license terms are signed you can initialize Terraform using terraform init and then run terraform apply to bring up the infrastructure resources on Azure. It may take 5-10 minutes to fully provision the virtual machine.
After running it you may notice two new files being created. Both contain a link to the created virtual machine. .vm-ip contains the public IP address that was created and will be used to SSH into the machine. .vm-id is the Azure Resource ID of your virtual machine and is a unique identifier that we will use to start/stop the machine later. Both are plain text files and only contain one line, feel free to check them out. The machine is now up and running and you can work with it.
Bring code onto the VM 💁♀️
Before doing any work you might want to upload some bootstrap code onto the virtual machine — or you may just want to run an existing Jupyter notebook there. Again the Github repository holds a small script that will help you do this (works out of the box on Mac/Unix machines; otherwise you need to install make and rsync first).
anoff/vm-automation
vm-automation - Bootstrap a VM for machine learning applicationsgithub.com
Place the Makefile into the working directory of your code and make sure to update the PATH definitions to the two files mentioned at the end of the last chapter containing the details of the newly created virtual machine. If you had the Terraform script in a subfolder named infra there is nothing to do. Otherwise you should either copy over the two files into such a directory or modify the PATH definition in the Makefile.
Use make syncup in your working directory (where you placed the Makefile) to sync your local directory content onto the remote machine. You can see the command that is being executed and what the remote directory will be named. In my case it is ~/work/AIND-RNN which is one of my Nanodegree projects. You can also see that the command automatically ignores all files that are defined in your .gitignore which means anything you do not want to version will also not be copied around. This is especially useful for artifacts created during neural net training processes.
Output of make syncup
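Under the hood, make syncup boils down to a single rsync call. Here is a rough Python equivalent, a sketch only: the username, the infra paths and the reuse of .gitignore as an exclude file are assumptions about the Makefile, not guaranteed to match it exactly.

import subprocess
from pathlib import Path

vm_ip = Path("infra/.vm-ip").read_text().strip()       # IP address written by the Terraform run
remote = f"myadmin@{vm_ip}:~/work/{Path.cwd().name}/"  # "myadmin" stands in for your admin_username
subprocess.run(
    ["rsync", "-av",
     "--exclude=.git",
     "--exclude-from=.gitignore",  # .gitignore patterns are close enough to rsync's exclude syntax for common cases
     "./", remote],
    check=True,                    # fail loudly if the transfer fails
)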
Run Jupyter Notebook 📒
Let’s assume that your project also holds a Jupyter notebook you want to execute on the remote machine and access from your local browser. You could also use a similar process to execute ANY kind of script on the remote machine.
First you need to SSH into the machine using make ssh, which will also forward the Jupyter port 8888 onto your local machine so you can open http://localhost:8888 in your local browser (my MacBook) and connect to the webserver that listens on this port on the virtual machine (Jupyter Notebook). Now that you have a shell running on the DSVM you can manipulate the file system, install missing packages via pip/conda or just start a process.
Starting jupyter notebook on the VM
The Jupyter notebook process started above is linked to the lifecycle of the interactive shell that we opened with the SSH connection. Closing the SSH connection will kill the Jupyter server as well. All your code should still be there, as Jupyter regularly saves to disk, but your Python kernel will be gone and all the in-memory objects (the state of notebook execution) will be lost. You will need to execute the notebook again from the beginning after you SSH into the machine again and start Jupyter up.
Commit your changes 💾
After you did some changes and you want to git commit like a good developer you need to get those changes you did on the virtual machine to your local development environment. You can do this using make syncdown which will copy all changed remote files onto your local working directory — again only those under git version control.
🚨Make sure you exit the SSH connection first
Copy remote changes to local filesystem
The remote file LOOK_MOMMY_I_CHANGED_A_FILE has now been copied to my local working directory and I can use git commit -am "everyone uses meaningful commit messages right?" to commit my changes or use my local tooling to execute unit tests, check codestyle, add some comments…
Start and Stop the Virtual Machine 🎬 🛑
If you have not checked already, you should look up how much the Virtual Machine that you provisioned actually costs you. The Standard_NC6 (which is the cheapest GPU instance) will cost you a small holiday if you keep it running for a month. That is the reason why I wanted an easy way to stop it when I don’t need it and get it back up quickly if I want to continue working.
The Makefile comes with three commands for managing the state of the virtual machine itself. They all require the unique Azure Resource ID located in the .vm-id file to select the correct VM in your Azure subscription:
make stop will stop the virtual machine AND deallocate the resources, which will significantly reduce the costs as you only pay for the disks that hold your data.
make start tells Azure to allocate new resources and spawn up the virtual machine again
make status will tell you if the virtual machine is up or not
Virtual Machine start/status/stop
The screenshot shows you how long stopping and starting the VM might take. However, as soon as you see the CLI saying Running \ you can shut down your local machine, as Azure has started deallocating your resources.
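If you would rather script the lifecycle yourself instead of going through make, the same three operations map onto the Azure CLI. A minimal Python sketch follows; it assumes az is installed and you are logged in, and note that it is deallocation, not a plain stop, that releases the compute billing.

import subprocess
from pathlib import Path

vm_id = Path("infra/.vm-id").read_text().strip()  # unique resource ID written by Terraform

def az_vm(action):
    # Thin wrapper around `az vm <action> --ids <resource id>`
    subprocess.run(["az", "vm", action, "--ids", vm_id], check=True)

az_vm("deallocate")  # ~ make stop: compute is released, you only pay for the disks
# az_vm("start")     # ~ make start
# ~ make status: az vm show -d --ids <id> --query powerState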
Reduce risk of bankruptcy 💸
If you are afraid of the bill that might come flying in if you miss stopping the virtual machine you should take a closer look at the auto shutdown features that Azure offers you. It lets you specify a time at which the VM will automatically shut down every day.
Virtual Machine Auto-Shutdown
But let me tell you from experience — if you accidentally keep it up for a weekend and see the bill the next week you will *always* shutdown from then on. That was actually one of the reasons why I wanted to make this workflow as easy as possible.
Summary 📚
Well I hope you liked this article and found some helpful tips. The general workflow can also be done with AWS machines but the Terraform template will look different. Feel free to submit a PR to my Repo and I will add the option to also use AWS resources.
anoff/vm-automation
vm-automation - Bootstrap a VM for machine learning applicationsgithub.com
I would love to hear feedback via Issues, Twitter 🐦 or comments. One thought I have is to bundle all the commands into a binary CLI so it works cross-platform and can be installed by just copying around a single file. If you're interested please let me know 😻
Here is another look at all the commands you can use🧙♀️
Available commands
/andreas
| Automated dev workflow for using Data Science VM on Azure | 2 | automated-dev-workflow-for-using-data-science-vm-on-azure-13c1a5b56f91 | 2018-06-20 | 2018-06-20 21:46:30 | https://medium.com/s/story/automated-dev-workflow-for-using-data-science-vm-on-azure-13c1a5b56f91 | false | 2,096 | null | null | null | null | null | null | null | null | null | Linux | linux | Linux | 10,410 | Andreas Offenhaeuser | solution architect for connected services at @BoschGlobal interested in #nodeJS #blockchain #iot #ai #deeplearning views 👀 are my own | ce0fd990bf23 | an0xff | 8 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-25 | 2018-04-25 12:01:15 | 2018-05-30 | 2018-05-30 06:32:28 | 5 | false | en | 2018-05-31 | 2018-05-31 15:21:40 | 21 | 13c430d8b66b | 9.614465 | 4 | 3 | 0 | Raphael Ottoni (@raphaottoni), Pedro Bernadina (@pdbernardina12), Evandro Cunha (@Cunha_et_al) | 4 | Analyzing right-wing YouTube channels: hate, violence and discrimination
Raphael Ottoni (@raphaottoni), Pedro Bernadina (@pdbernardina12), Evandro Cunha (@Cunha_et_al)
Reading and sharing news on Facebook, tweeting, watching cute cat videos on YouTube. All of these actions are ordinary parts of many people’s everyday lives, and are so mundane that most of us don’t even notice them. We’re living in a time in which information is being produced at incredible rates and people are more connected than ever. For instance, YouTube, the world’s most famous video-sharing website, has more than one billion users, with people uploading 400 hours of video every minute. Some of them, however, are making use of these technologies to spread misinformation and hate. Mostly due to the volume of content generated inside this service, identifying this kind of misbehaviour is proving to be a challenge.
In addition to this, it seems that political polarization is expanding in many parts of the world: in the United States of America, for example, disagreement between Republicans and Democrats has risen in recent years, as suggested by a report from Pew Research Center; also, European politics has never been so polarized; similar situations can be observed in developing countries, including Brazil.
As a consequence, we observe an increasing wave of right-wing activity, including far-right and alt-right extremism. According to the non-governmental organization Anti-Defamation League (ADL), “Internet has provided the far-right fringe with formerly inconceivable opportunities”. Videos such as the one entitled “Islam is NOT a Religion of Peace”, published by Paul Joseph Watson, are exactly what ADL is concerned about: opportunities for extremists to reach a much larger audience than ever before and easily portray themselves as legitimate.
Worried about this social phenomenon, our research group conducted an investigation on YouTube to evaluate and detect signs of hate, violence and discrimination present in a set of right-wing channels. The paper that resulted from this study was presented at the 10th ACM Conference on Web Science (WebSci'18), held in Amsterdam, and was granted the conference's Best Student Paper Award. It can be freely accessed here.
Why YouTube and what it has to do with right-wing?
YouTube is the major online video sharing website and enables people to upload videos that can be seen by wide audiences. It is also one of the virtual services that include lots of right-wing voices and, according to The Wall Street Journal, it is pushing extreme and misleading videos to its users. These facts make the YouTube platform a fertile ground for extremists to promote their agenda.
In this study, we collected information from a set of right-wing channels to be analyzed. To select these channels, we used the right-wing news website InfoWars as a seed. This website links to its founder Alex Jones' YouTube channel, which had more than 2 million subscribers as of October 2017. At the moment of our data collection, Alex Jones expressed support for 12 other channels in his public YouTube profile. We visited these channels and confirmed that, according to our understanding, all of them published mainly right-wing content. Then, we collected (a) the video captions (written versions of the speech in the videos, manually created by the video hosts or automatically generated by YouTube's speech-to-text engine), representing the content of the videos themselves; and (b) the comments (including replies to comments) posted to these videos.
In order to compare the results regarding these right-wing channels to a baseline representing a more general YouTube behavior, we also collected videos posted in the ten most popular channels of the category news and politics according to the analytics tracking site Social Blade.
And how were hate, violence and discrimination measured?
We developed a three-layered approach to investigate this content from three distinct fronts: we performed a (a) lexical analysis, a (b) topic analysis and an (c) implicit bias analysis. These techniques are widely used in scientific literature, being applied on their own. Since they are essentially different, combining the three of them makes it possible to answer more complex questions, with each technique complementing one another. Combined, they allow us to answer the following research questions:
1: is the presence of hateful vocabulary, violent content and discriminatory biases more, less or equally accentuated in right-wing channels?
2: are, in general, commentators more, less or equally exacerbated than video hosts in an effort to express hate and discrimination?
First method: a lexical analysis
Lexical analysis, that is, the investigation of the vocabulary, reveals how society perceives reality and indicates the main concerns and interests of particular communities of speakers. To perform an analysis on the vocabulary used in our videos and comments, we used Empath, a tool for analyzing text across lexical categories. Words were classified among the following 15 categories related to hate, violence, discrimination and negative feelings and 5 categories related to positive matters in general:
negative: aggression, anger, disgust, dominant personality, hate, kill, negative emotion, nervousness, pain, rage, sadness, suffering, swearing terms, terrorism, violence
positive: joy, love, optimism, politeness, positive emotion
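For a sense of what this looks like in code, scoring a document against these categories is a single Empath call. A small sketch, with a made-up input string and an arbitrary subset of the categories above:

from empath import Empath

lexicon = Empath()
scores = lexicon.analyze(
    "they want to kill us all",              # one caption or comment
    categories=["hate", "violence", "joy"],  # any subset of the 20 categories listed above
    normalize=True,                          # fraction of words per category instead of raw counts
)
print(scores)                                # {'hate': ..., 'violence': ..., 'joy': ...}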
Second method: a topic analysis
Topic modelling is a type of statistical model for discovering the underlying semantic topics (sets of words conveying a theme — e.g. the words sand, sun and ocean conveying beach as a motif) that occur in texts. It helps us to discover hidden topical patterns across the documents — in our case, video captions and comments — and to annotate these topics to each video, allowing us to better understand, analyze and organize them. The algorithm for topic modelling used in our work is called latent Dirichlet allocation (LDA). One of the drawbacks of this algorithm is the fact that it leaves to the user the task of interpreting the meaning of each topic.
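As a concrete sketch, this is roughly how an LDA pass looks with gensim; the library choice and the toy tokens are our illustration here, not a statement of the paper's exact setup:

from gensim import corpora, models

docs = [["nato", "torture", "bombing"],       # toy tokenized captions
        ["assange", "wikileaks", "document"]]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(doc) for doc in docs]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics(num_words=3):
    print(topic_id, words)                    # each topic is a weighted word list left for the user to interpret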
Third method: an implicit bias analysis
The Implicit Association Test (IAT) is a test designed to measure a person's automatic association between concepts in memory. Its core idea is to measure the strength of associations between two target concepts (e.g. flowers and insects) and two attributes (e.g. pleasant and unpleasant) based on the reaction time needed to match (a) items that correspond to the target concepts to (b) items that correspond to the attributes (in this case, flowers + pleasant, insects + pleasant, flowers + unpleasant, insects + unpleasant). The authors found that individuals' performance was more satisfactory when they needed to match implicitly associated categories, such as flowers + pleasant and insects + unpleasant. Currently, there are online versions of several implicit association tests designed by researchers from Project Implicit. Out of curiosity, we strongly encourage taking one of these to see how they work.
Words that compose each class and set of attributes in our Word Embedding Association Tests (WEATs)
More recently, an article published in Science proposed to apply the IAT method to analyze implicit biases based on vector spaces in which words that share common contexts are located in close proximity to one another, generated by a technique called word embedding. By replicating a wide spectrum of biases previously assessed by implicit association tests, they show that cosine similarity between words in a vector space generated by word embeddings is also able to capture implicit biases. The authors named this technique Word Embedding Association Test (WEAT) and we used it to design three tests focused on harmful biases towards the following minorities and/or groups likely to suffer discrimination in North America and Western Europe: immigrants, LGBT people and Muslims.
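The WEAT statistic itself is compact. A NumPy sketch of the association and effect-size formulas from Caliskan et al., with variable names of our choosing (X and Y are embeddings of target words, A and B are embeddings of attribute words):

import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    # s(w, A, B): how much closer w sits to attribute set A than to attribute set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # e.g. X = Muslim terms, Y = contrasting terms, A = unpleasant, B = pleasant
    s_X = [assoc(x, A, B) for x in X]
    s_Y = [assoc(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)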
What did you find with this three-layered methodology?
First, regarding our lexical analysis, we highlight here the following findings:
video captions, in general, contain more words from the categories rage, nervousness and violence than comments;
on the other hand, comments tend to include more words from the categories hate and swearing terms;
right-wing video captions, when compared with our baseline set of channels, incorporate higher percentages of words conveying categories like aggression, disgust, hate, kill, rage and terrorism;
baseline channels hold a higher percentage of positive semantic fields such as joy and optimism.
Normalized percentage of words in each Empath category. The bottom and top of the box are always the first and third quartiles, the band inside the box is the median, the whiskers represents the minimum and maximum values, and the dots are outliers.
To show the results of our topic analysis, we display here the top 2 most frequent topics and the top ranked 20 words produced by the LDA concerning right-wing and baseline video captions and comments.
Top 2 topics for each document (right-wing and baseline captions and comments). Inside each topic, 20 words are presented in order of importance according to the LDA output.
Among the top ranked topics for the right-wing captions, we observe a relevant frequency of words related to war and terrorism, including nato, torture and bombing, and a relevant frequency of words related to espionage and information war, like assange, wikileaks, possibly document and morgan (due to the actor Morgan Freeman’s popular video in which he accuses Russia of attacking United States’ democracy during its 2016 elections). Regarding the top ranked topics for the right-wing comments, it is possible to recognize many words probably related to biological and chemical warfare, such as rays, ebola, gamma, radiation and virus. It is also interesting to observe the presence of the word palestinian in the highest ranked topic: it might indicate that commentators are responding to the word israeli, present in the top ranked topic of the captions.
As expected, the words in the top ranked topics of the baseline channels seem to cover a wider range of subjects. The terms in the top ranked topics of the baseline captions include words regarding celebrities, TV shows and general news, while the ones in the baseline comments are very much related to Internet celebrities such as RiceGum and PewDiePie, and computer games, like Minecraft.
Distribution of WEAT biases for the three topics analyzed. Dashed lines indicate the reference value calculated from the Wikipedia corpus.
Finally, we compared the channels' implicit biases to the ones calculated for a general corpus collected from Wikipedia, as it is often considered, in this context, to be a good representation of contemporary English. When contrasting the reference Wikipedia bias with the biases calculated for the channels collected by us, we observe different trends depending on the topic. For instance, the bias against Muslims was almost always amplified when compared to the reference, especially for video captions. On the other hand, the bias against LGBT people was weakened in most of the observed channels, even in the right-wing ones. Concerning the bias against immigrants, the values appear close to the reference.
Comparing biases in captions with biases in comments, it is interesting to notice that, for immigrants and Muslims, captions hold higher biases than comments in 75% of the right-wing channels, considering the statistically significant cases. For LGBT people, however, comments hold higher discriminatory bias in right-wing channels. Comparing biases between right-wing and baseline channels, we observe that, concerning Muslims, the captions of right-wing channels present higher biases, while for the other topics the differences were not very pronounced.
Summarizing, our most interesting findings concerning the implicit bias analysis are:
the YouTube community seems to amplify a discriminatory bias against Muslims and weaken the bias against LGBT people;
there are no differences between right-wing and baseline captions regarding immigrants and LGBT people, but there are against Muslims.
regarding biases against immigrants and Muslims, in 75% of the right-wing channels the comments show less bias than the captions.
On the other hand, the bias against LGBT people is greater in right-wing comments than in right-wing captions.
Analyzing all layers together
Combining the results of each analysis, we can finally answer our research questions:
1: is the presence of hateful vocabulary, violent content and discriminatory biases more, less or equally accentuated in right-wing channels?
Our lexical analysis shows that right-wing channels, when compared with baseline channels, incorporate higher percentages of words conveying semantic fields like aggression, kill, rage and violence, while baseline channels hold a higher percentage of positive semantic fields such as joy and optimism. Even though the most frequent LDA topics do not show high evidences of hate, they did report that right-wing channels debates are more related to subjects like war and terrorism, which might corroborate the lexical analysis. Also, the implicit bias analysis shows that, independently of channel type (right-wing or baseline), the YouTube community seems to amplify a discriminatory bias against Muslims, depicted as assassins, radicals and terrorists, and weaken the association of LGBT people as immoral, promiscuous and sinners when compared to the Wikipedia reference. We might conclude, then, that hateful vocabulary and violent content seems to be more accentuated in right-wing channels than in our set of baseline channels, and also that a discriminatory bias against Muslims is more present in right-wing videos.
2: are, in general, commentators more, less or equally exacerbated than video hosts in an effort to express hate and discrimination?
The lexical analysis reports that comments generally have more words from the semantic fields disgust, hate and swearing terms, and captions express more aggression, rage and violence. Regarding biases against immigrants and Muslims, in 75% of the right-wing channels the comments show less bias than the captions. On the other hand, although the implicit bias against LGBT people in YouTube is generally lower than in the Wikipedia reference, it is greater on right-wing comments than in right-wing captions. Our conclusion is that, in general, YouTube commentators are more exacerbated than video hosts in the context of hate and discrimination, even though several exceptions may apply.
What is next?
It would be great to also analyze left-wing channels and then compare the results with the ones presented in our paper. Even though it seems that left-wing voices are much quieter on YouTube (there is no substantial number of active YouTube channels, with a good number of subscribers and views, aligned with the left), we plan to investigate similarities and differences between the YouTube behavior of supporters in different parts of the political spectrum.
…
The other authors of the paper published at the Proceedings of the 10th ACM Conference on Web Science (WebSci’18) are Gabriel Magno (@gabrielmagno) , Wagner Meira Jr. (@wagnermeirajr) and Virgilio Almeida (@virgilioalmeida).
Nikki Bourassa, Ryan Budish, Amar Ashar and Robert Faris, from the Berkman Klein Center for Internet & Society at Harvard University, played an important role discussing our methodology and findings.
| Analyzing right-wing YouTube channels: hate, violence and discrimination | 8 | analyzing-right-wing-youtube-channels-hate-violence-and-discrimination-13c430d8b66b | 2018-06-02 | 2018-06-02 01:00:32 | https://medium.com/s/story/analyzing-right-wing-youtube-channels-hate-violence-and-discrimination-13c430d8b66b | false | 2,327 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | HMPig, DCC, UFMG | Hate, Misinformation and Polarization Interest Group @ DCC, UFMG, Brazil. We do quantitative research for a better web. | f9119c93ae7b | hmpig.dcc | 10 | 4 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | a778fa626e40 | 2018-07-09 | 2018-07-09 10:38:16 | 2018-07-09 | 2018-07-09 10:45:43 | 3 | false | en | 2018-08-10 | 2018-08-10 11:04:49 | 7 | 13c574839a4d | 2.55 | 2 | 0 | 0 | JULY 8, SHANGHAI. Members of the SoCo SNP Project discussed the concept of "social assets" at a meeting this past Sunday. | 5 | What Are Social Assets?
The SoCo SNP founders, advisors, and development members all convene at social media company/SoCo partner U+’s headquarters.
(Left to right) SoCo SNP Founding Members Mingliang Jiang, Roderick Chia, and John Li.
Chia presents his thoughts on the definition of social assets, and the team discusses his ideas.
JULY 8, SHANGHAI. Members of the SoCo SNP Project discussed the concept of “social assets” at a meeting this past Sunday.
The gathering, held at U+ headquarters, consisted of the entire founding team as well as a majority of the advisory and development teams, with members flying in from Singapore and the United States. The meeting featured a presentation by Founder Roderick Chia, who heads SoCo’s marketing and finance operations.
“Since social assets have never been properly assessed before, they are vague and do not yet have a set definition,” said Chia. “Before we move forward, it is crucial that we nail down this definition.”
Chia categorized internet users into three main types of social actors: individuals who have a social account reflecting an actual or fictitious persona, organizations that use accounts to carry the organization’s brand and persona, and bots, which are not a legal entity but can nonetheless manage an individual’s or organization’s accounts.
Thus, Chia said, social interactions involve participation from two or more entities, whether they be between legal entities (human to human, human to organization, organization to organization, etc.) or a legal entity and a digital entity (i.e. a human likes a user-generated entity, such as a posted photo).
As a result, there are four groupings of social assets that are possible. The following is a summary of the social assets Chia and the rest of the team agreed upon:
Social assets are:
1. User-generated digital entities
a. Texts, images, video and metadata
2. Attributes of individuals
a. Name, alias, gender, date of birth, religion, race, orientation
3. Attributes of organizations
a. Date of incorporation, vision, mission, brand promise, etc.
4. Social interactions/behaviors
a. Followers, connections, social preferences, etc.
After these concepts were determined, Chia raised questions about whether bots or artificial intelligence could own social assets. Artificial intelligence will surely become more and more prevalent, he said, and this is a question they must keep in mind as the project progresses.
LEARN MORE ABOUT THE SOCO SNP PROJECT:
*Video Explanation: https://youtu.be/6rzqektoUZA
*Website: https://soco.social
*Whitepaper: https://github.com/soco-snp/whitepaper/raw/master/social_coin_white_paper.pdf
SOCIAL MEDIA:
- TWITTER: https://twitter.com/SoCoSNP
- FACEBOOK: https://www.facebook.com/SoCoSNP/
- Telegram: https://t.me/socoSNP
Reddit: https://www.reddit.com/r/SoCoSNP/
“SoCo and SNP is a blockchain-based project aimed at decentralizing the social networking ecosystem. SoCo, the official token currency, is what users will use to evaluate and monetize their social assets. Social assets are the valuable online data a user possesses, which include personal profile attributes (name, hobbies, status), social relationships (friends, followers), and social behaviors (likes, posts, sharing). Users and apps alike are able to use SoCo to conduct transactions, allowing apps to incentivize users to invite friends and engage in the app’s activities while rewarding users for their social assets. This promotes a healthier, more trusting relationship between apps and users. SNP is an open and extensible layer protocol that allows apps to have social capabilities, as well as provides users with a secure wallet of social assets, accessible by the user’s private key only. Ultimately, SoCo SNP plans on returning the ownership of social assets back to users and deconstructing today’s social networking structure in order to provide the world a more transparent, interconnected and innovative online experience.”
| What Are Social Assets? | 3 | what-are-social-assets-13c574839a4d | 2018-08-10 | 2018-08-10 11:04:49 | https://medium.com/s/story/what-are-social-assets-13c574839a4d | false | 530 | Using blockchain technology to build a decentralized social networking ecosystem so you can #TokenizeYourSocialAssets. We want to help you fully own your personal information so that you can reap its profits yourself. | null | SoCoSNP | null | SoCo SNP | soco-snp | SOCOSNP,CRYPTOCURRENCY,BLOCKCHAIN STARTUP,SOCIAL MEDIA TOOL,ICO | SoCoSNP | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | SoCo SNP News | Using blockchain technology to build a decentralized social networking ecosystem so you can #TokenizeYourSocialAssets. | 336fba3a3c6b | socosnp | 86 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 863f502aede2 | 2017-09-25 | 2017-09-25 21:29:24 | 2017-09-25 | 2017-09-25 21:30:43 | 4 | false | en | 2017-09-25 | 2017-09-25 21:30:44 | 0 | 13c578ba9129 | 4.398113 | 148 | 12 | 0 | What is Attention? | 4 | A Brief Overview of Attention Mechanism
What is Attention?
Attention is simply a vector, often the output of a dense layer using a softmax function.
Before attention mechanisms, translation relied on reading a complete sentence and compressing all of its information into a fixed-length vector; as you can imagine, a sentence with hundreds of words squeezed into such a representation will surely lead to information loss, inadequate translation, etc.
However, attention partially fixes this problem. It allows the machine translator to look over all the information the original sentence holds, then generate the proper word according to the current word it works on and the context. It can even allow the translator to zoom in or out (focus on local or global features).
Attention is not mysterious or complex. It is just an interface formulated by parameters and delicate math. You could plug it anywhere you find it suitable, and potentially, the result may be enhanced.
Why Attention?
The core of a Probabilistic Language Model is to assign a probability to a sentence via the Markov Assumption. Due to the nature of sentences, which consist of varying numbers of words, RNNs are naturally introduced to model the conditional probability among words.
Vanilla RNN (the classic one) often gets trapped when modeling:
Structure Dilemma: in the real world, the lengths of outputs and inputs can be totally different, while a Vanilla RNN can only handle fixed-length problems, which makes alignment difficult. Consider an EN-FR translation example: “he doesn’t like apples” → “Il n’aime pas les pommes”.
Mathematical Nature: it suffers from Gradient Vanishing/Exploding, which means it is hard to train when sentences are long enough (maybe at most 4 words).
Translation often requires arbitrary input and output lengths; to deal with the deficits above, an encoder-decoder model is adopted, the basic RNN cell is changed to a GRU or LSTM cell, and the hyperbolic tangent activation is replaced by ReLU. We use GRU cells here.
The embedding layer maps discrete words into dense vectors for computational efficiency. The embedded word vectors are then fed into the encoder, aka the GRU cells, sequentially. What happens during encoding? Information flows from left to right and each word vector is learned according to not only the current input but also all previous words. When the sentence is completely read, the encoder generates an output and a hidden state at timestep 4 for further processing. For the decoding part, the decoder (GRUs as well) grabs the hidden state from the encoder, is trained by teacher forcing (a mode in which the previous cell’s output serves as the current input), and then generates the translation words sequentially.
It seems amazing, as this model can be applied to N-to-M sequences, yet there is still one main deficit left unsolved: is one hidden state really enough?
Yes, Attention here.
How does attention work?
Similar to the basic encoder-decoder architecture, this fancy mechanism plugs a context vector into the gap between encoder and decoder. According to the schematic above, blue represents the encoder and red represents the decoder; and we can see that the context vector takes all cells’ outputs as input to compute the probability distribution over source language words for each single word the decoder wants to generate. By utilizing this mechanism, it is possible for the decoder to capture somewhat global information rather than inferring solely from one hidden state.
And building the context vector is fairly simple. For a fixed target word, first, we loop over all encoder states and compare target and source states to generate a score for each state in the encoder. Then we use softmax to normalize all the scores, which generates the probability distribution conditioned on the target state. At last, weights are introduced to make the context vector easy to train. That’s it. The math is shown below:
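Since the equations are referenced by number in the next paragraphs, here is a reconstruction in Luong-style notation (our transcription, with $h_t$ the current target state and $\bar{h}_s$ the $s$-th source state):

$$\alpha_{ts} = \frac{\exp(\mathrm{score}(h_t, \bar{h}_s))}{\sum_{s'=1}^{S} \exp(\mathrm{score}(h_t, \bar{h}_{s'}))} \tag{1}$$
$$c_t = \sum_{s=1}^{S} \alpha_{ts}\,\bar{h}_s \tag{2}$$
$$a_t = f(c_t, h_t) = \tanh(W_c\,[c_t; h_t]) \tag{3}$$
$$\mathrm{score}(h_t, \bar{h}_s) = h_t^{\top} W\,\bar{h}_s \ \text{(Luong)} \quad \text{or} \quad v_a^{\top}\tanh(W_1 h_t + W_2 \bar{h}_s) \ \text{(Bahdanau)} \tag{4}$$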
To understand the seemingly complicated math, we need to keep three key points in mind:
During decoding, context vectors are computed for every output word. So we will have a 2D matrix whose size is the # of target words multiplied by the # of source words. Equation (1) demonstrates how to compute a single value given one target word and a set of source words.
Once the context vector is computed, the attention vector can be computed from the context vector, the target word, and the attention function f.
We need the attention mechanism to be trainable. According to equation (4), both styles offer trainable weights (W in Luong’s, W1 and W2 in Bahdanau’s). Thus, different styles may result in different performance.
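To make the three points above concrete, here is a minimal NumPy sketch of one decoding step; the toy sizes and the choice of Luong's "general" score are ours:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

S, n = 5, 8                      # source length and hidden size (toy values)
enc = np.random.randn(S, n)      # all encoder hidden states, one row per source word
h_t = np.random.randn(n)         # current decoder (target) hidden state
W   = np.random.randn(n, n)      # trainable weight of Luong's "general" score
W_c = np.random.randn(n, 2 * n)  # trainable weight of the attention function f

scores = enc @ W @ h_t           # eq. (4): one score per source state
alpha  = softmax(scores)         # eq. (1): probability distribution over source words
c_t    = alpha @ enc             # eq. (2): context vector for this target word
a_t    = np.tanh(W_c @ np.concatenate([c_t, h_t]))  # eq. (3): attention vector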
Conclusion
We hope you understand the reason why attention is one of the hottest topics today and, most importantly, the basic math behind attention. Implementing your own attention layer is encouraged. There are many variants in cutting-edge research, and they basically differ in the choice of score function and attention function, or between soft and hard attention (whether differentiable). But the basic concepts are all the same. If interested, you could check out the papers below.
[1] Vinyals, Oriol, et al. Show and tell: A neural image caption generator. arXiv:1411.4555 (2014).
[2] Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473 (2014).
[3] Cho, Kyunghyun, Aaron Courville, and Yoshua Bengio. Describing Multimedia Content using Attention-based Encoder–Decoder Networks. arXiv:1507.01053 (2015)
[4] Xu, Kelvin, et al. Show, attend and tell: Neural image caption generation with visual attention. arXiv:1502.03044 (2015).
[5] Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. End-to-end memory networks. Advances in Neural Information Processing Systems. (2015).
[6] Joulin, Armand, and Tomas Mikolov. Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets. arXiv:1503.01007 (2015).
[7] Hermann, Karl Moritz, et al. Teaching machines to read and comprehend. Advances in Neural Information Processing Systems. (2015).
[8] Raffel, Colin, and Daniel PW Ellis. Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems. arXiv:1512.08756 (2015).
[9] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., & Gomez, A. et al. . Attention Is All You Need. arXiv: 1706.03762 (2017).
Tech Analyst: Qingtong Wu
| A Brief Overview of Attention Mechanism | 732 | a-brief-overview-of-attention-mechanism-13c578ba9129 | 2018-06-21 | 2018-06-21 08:45:22 | https://medium.com/s/story/a-brief-overview-of-attention-mechanism-13c578ba9129 | false | 980 | We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights. | null | SyncedGlobal | null | SyncedReview | syncedreview | ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS | Synced_Global | Machine Learning | machine-learning | Machine Learning | 51,320 | Synced | AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B | 960feca52112 | Synced | 8,138 | 15 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-09-25 | 2017-09-25 01:40:36 | 2017-09-25 | 2017-09-25 02:27:06 | 0 | false | en | 2017-09-25 | 2017-09-25 02:28:40 | 0 | 13c5c342bf32 | 0.362264 | 0 | 0 | 0 | As I type, related notes appear below via machine learning algorithm. | 3 | Someone build me a new note taking app
As I type, related notes appear below via machine learning algorithm.
Tag notes quickly via keyboard shortcut #
I can look at my list of tags and see the number of articles per tag, or sort tags by recency.
A keyboard shortcut enables typing upwards vs. typing downwards.
When I’m taking notes and press “enter”, I’d like the cursor to move up a line, not down a line, because my most recent thoughts are the most applicable. When I return to this note, I read top down, not bottom up.
| Someone build me a new note taking app | 0 | someone-build-me-a-new-note-taking-app-13c5c342bf32 | 2018-01-21 | 2018-01-21 11:45:48 | https://medium.com/s/story/someone-build-me-a-new-note-taking-app-13c5c342bf32 | false | 96 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Andrew Hsu | meanderings | a97fb75819e5 | HuesdrAwn | 121 | 263 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 525dfab20dc5 | 2018-02-13 | 2018-02-13 18:57:31 | 2018-02-21 | 2018-02-21 06:17:31 | 3 | false | en | 2018-02-21 | 2018-02-21 06:17:31 | 2 | 13c6171722d3 | 7.65566 | 7 | 0 | 0 | With push notifications and article digests gaining more and more traction, the task of generating intelligent and accurate summaries for… | 4 | Unfolding a novel recursive autoencoder for extraction based summarization
With push notifications and article digests gaining more and more traction, the task of generating intelligent and accurate summaries for long pieces of text has become a popular research as well as an industry problem.
There are two fundamental approaches to text summarization: extractive and abstractive. The former extracts words and word phrases from the original text to create a summary. The latter learns an internal language representation to generate more human-like summaries, paraphrasing the intent of the original text.
This blog post tackles the problem of single and multi-document extractive summarization using a novel recursive autoencoder architecture that:
Learns representations of phrases in an unsupervised fashion
Uses these representations to extract document features
Takes into account the context provided by the document
Generates a condensed form of a document(s) by establishing a thematic relevance
Architecture and Algorithmic Paradigms:
Every document contains content-specific and background terms. So far, the existing models in the extraction-based summarization space work independently of context, as the indexing weights are assigned solely on a term-by-term basis. Moreover, most extraction-based summarization frameworks rely on context-independent term indexing and use only static features like term length, term position and term frequency to establish the importance of a sentence in a given passage of text. Consequently, the context in which a term appears is not taken into consideration when the framework renders a summary.
In PhrazIt, this complete dependence on term significance when building the index is reduced and a higher emphasis is laid on the context of the document.
PhrazIt is powered by TextRank, an unsupervised graph-based ranking model for text processing. The reason for sticking to an unsupervised algorithm is that supervised text summarization algorithms also need a large amount of training data, which in effect translates to requiring many documents with known key phrases. And so, although supervised methods are capable of producing interpretable rules for what features characterize a key phrase, the tradeoff against their strict requirement for training data was the intuition behind moving to the unsupervised algorithms space instead.
Additionally, unsupervised keyphrase extraction frameworks are also far more portable across domains because the extraction process is not domain-specific. Instead, such frameworks learn the explicit features that characterize key phrases and exploit the structure of the text itself to determine whether the key phrases present appear central to the text. This is done in pretty much the same way that PageRank selects important web pages (making TextRank an algorithm that runs PageRank on a graph specifically designed for a particular NLP task). Further, TextRank doesn't rely on previous training data, making it possible to run the algorithm on any arbitrary piece of text and have it produce an output based entirely on the text's intrinsic properties.
Consequently, PhrazIt ranks sentences on the basis of the thematic score associated with them, thereby allowing for context by
1. Leveraging a contextualized distributed semantic space
2. Using a weighted model to identify the most thematically relevant sentences in a passage of text
To put it in perspective, in PhrazIt, we are still working with a general purpose graph based ranking algorithm. However, unlike traditional graph based ranking algorithms where a graph is built by using the words of a sentence as the vertices while the edges are typically based on the static feature score that comes from the term position, term lengths and term frequencies — in PhrazIt, a graph is built by using the contextualized phrasal representations of the sentences that form the passage as the vertices while the edges are based on an aggregate score that comes from the static features of term length, term position and term frequency in addition to the similarity score that gets computed from the contextualized phrasal vectors.
What are these contextualized phrasal vectors? How does PhrazIt leverage them?
The contextualized phrasal vectors are created by building on Word2Vec, a Distributional Semantics framework built on the hypothesis that linguistic terms with similar meanings have similar distributions. That is, words which are similar appear in similar contexts. However, with naive Word2Vec, the vector representations are generated only at a word level, not a phrasal level. And since words can only capture so much, Word2Vec hits limitations when we want to exploit and learn relationships between the sentences that provide context to the document(s) under analysis.
PhrazIt therefore augments the manner in which Word2Vec functions to a 'Word2Vec++', which essentially permits the framework to generate contextualized phrasal vector representations. This is done through an integration of LSA and LDA into the solution. Given n sentences, LSA generates the concepts referenced in those sentences. LDA, on the other hand, works as a generative model: it explains a set of observations using a bunch of unobserved groups and establishes why some data is more similar. Given n sentences, it lists the topics referenced in those sentences. PhrazIt uses LSA and LDA to get the concepts and the topics referenced in a sentence. Having reduced every sentence to a series of concepts and topics, the learning problem we consider is as follows: given a set of sentences of variable lengths, we want to construct a fixed n-dimensional representation for each sentence, with the desired property that sentences that are closer in this n-dimensional space are more similar semantically. Creating this contextualized distributed semantic space renders phrasal vectors, therefore allowing PhrazIt to learn improved representations of sentences, as we are working with a much richer feature space.
Creating Contextualized Phrasal Vectors
Using a dictionary of size d, we represent a sentence of m words as a d x m matrix, M, where the i-th column is a d-dimensional vector with the entry corresponding to the dictionary index of the i-th word set to 1, and 0 elsewhere. Using this index matrix, we assign each word its continuous feature representation by simply multiplying the two matrices, X=LM, where the i-th column in L is the representation of the i-th word in the dictionary, M is the index matrix, and X is the n x m matrix representing the sentence. This X will be the final input to our model. Note that sentences of different lengths, m and m’, would have matrices of different dimensions, n x m and n x m’. Our model needs to construct a continuous n-dimensional representation from an n x m matrix for any positive-valued m.
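In NumPy, that construction is just an embedding lookup; a toy sketch with made-up sizes:

import numpy as np

d, n, m = 1000, 50, 4         # dictionary size, embedding size, sentence length
L = np.random.randn(n, d)     # column i holds the learned vector of dictionary word i
idx = [12, 7, 503, 99]        # dictionary indices of the sentence's four words
M = np.zeros((d, m))
M[idx, np.arange(m)] = 1.0    # the one-hot index matrix described above
X = L @ M                     # the n x m sentence matrix fed to the model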
An autoencoder
It essentially doubles as a recursive autoencoder, as it automatically extracts features in an unsupervised manner. In other words, in PhrazIt the contextualized distributed semantic space works off the paradigm of learning an identity function to reproduce its inputs.
The autoencoder collapses the sentence by taking two neighboring words (concepts or topics) and concatenating the two n-dimensional vector representations to form one 2n-dimensional input vector to the autoencoder. After applying the encoding process, the activations at the hidden layer will be an n-dimensional vector representing the two words jointly. Then, we replace the two words (concept(s) or topic(s)) with this joint representation and repeat until there is only one representation for the entire sentence. The manner of this collapse (or conversion of a sentence into its contextualized phrasal vector representation) is illustrated in the diagram below.
An example RAE tree of a four-word sentence. Autoencoders are stacked on top of one another, where the hidden node activations computed by the lower ones are used as input to the higher autoencoders.
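A toy version of that collapse, assuming the encoder weights have already been trained; note that a real RAE typically merges the neighbouring pair with the lowest reconstruction error, while this sketch simply folds left to right for brevity:

import numpy as np

n = 50                              # embedding size (toy value)
W_e = np.random.randn(n, 2 * n)     # encoder weights: a 2n-dim input becomes an n-dim code
b_e = np.random.randn(n)

def encode_pair(c1, c2):
    # One autoencoder step: two n-dim children become one joint n-dim parent
    return np.tanh(W_e @ np.concatenate([c1, c2]) + b_e)

def collapse(word_vecs):
    vecs = list(word_vecs)
    while len(vecs) > 1:
        vecs = [encode_pair(vecs[0], vecs[1])] + vecs[2:]
    return vecs[0]                  # a single vector for the entire sentence

sentence_vec = collapse(np.random.randn(4, n))  # e.g. a four-word sentence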
Thus, we have now effectively rendered a phrasal representation for every sentence by leveraging Word2Vec, LSA, LDA and an artificial neural network. For PhrazIt, we additionally generate a vector representation of the entire document itself. It is this document vector representation that we compare with all of the phrasal vector representations.
As illustrated earlier, PhrazIt constructs a graph using the contextualized phrasal representations of the sentences as the vertices, while the edges are based on aggregate scores that come from the static features of term length, term position and term frequency, in addition to the similarity score computed from the contextualized phrasal vectors (that is, how close the phrasal vectors are in the n-dimensional space).
Once this graph is constructed, it is used to form a stochastic (Markov) matrix and is combined with a damping factor (just like PageRank in the random surfer model scenario). The ranking over the vertices, in this case the sentences, is obtained by finding the eigenvector corresponding to the eigenvalue 1. It is this ranking that establishes the thematic relevance of the sentences in the passage of text. In other words, the sentence in the document that has the highest aggregated similarity score with the document itself is the one touted to be the most thematically relevant given the document's context.
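Abstracting away PhrazIt's specific edge weighting, the ranking step is ordinary weighted PageRank over a sentence-similarity graph. A generic sketch using networkx, taking the phrasal vectors as input (this illustrates the idea, not PhrazIt's implementation):

import numpy as np
import networkx as nx

def rank_sentences(phrasal_vecs, damping=0.85):
    # cosine similarity between every pair of sentence vectors
    unit = phrasal_vecs / np.linalg.norm(phrasal_vecs, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)            # no self-loops
    graph = nx.from_numpy_array(sim)      # weighted, undirected similarity graph
    scores = nx.pagerank(graph, alpha=damping, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)  # most thematically relevant first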
While PhrazIt builds on TextRank for single-document summarization, it leverages LexRank for multi-document summarization. The TextRank framework is extrapolated to LexRank to also allow for lexical association. LexRank conventionally uses just cosine similarity; however, within the PhrazIt framework we have again leveraged the contextualized phrasal vectors to assess phrasal similarity (as elucidated in the previous section). Additionally, we have applied a heuristic post-processing step that not only adds sentences in ranked order but also discards sentences that are paraphrases of each other, so that the rendered summary biases away from picking sentences that might be paraphrases or repetitions of one another.
Customer Adoption:
Several industries spanning across several domains recognize the usefulness of a progressive text summarizer. Consequently, PhrazIt has seen a series of successful adoptions by several clients. It just works.
Usecase #1: Summarizing long emails
The client (a popular corporate learning solutions provider) wished to condense a set of verbose emails (~3,500 words per email) into a few sentences (~50 words) that conveyed the most important points, thereby picking out the essence of the email. The most important aspect of the summary was to not miss the calls-to-action in the emails. PhrazIt's ability to render summaries in accordance with the context of the document (here, an email) aided tremendously by not only generating a contextualized summary that is a precise representation of the email, but also one that ranks the action items (the important pieces of the email) in accordance with their 'thematic relevance'.
Usecase #2 : Integrating into partner workflow
The client (a popular Q&A site) wanted to highlight and pick thematically relevant text snippets from a wordy answer. PhrazIt was successfully integrated into the client's workflow, with PhrazIt and Watson's Retrieve and Rank offering providing a comprehensive solution that granted users the ability to quickly skim over the answers to their queries.
Future Work:
Evaluating summaries (be it automatically or manually) continues to be a difficult task. The main problem in evaluation comes from the impossibility of building a standard against which the results of a system can be compared, given that the nature of a summary is so subjective. Content choice is not a settled problem: people are completely different, and subjective authors would likely select completely different sentences. As a result, it is imperative that we continue to develop frameworks focused on bringing order to texts.
| Unfolding a novel recursive autoencoder for extraction based summarization | 110 | unfolding-a-novel-recursive-autoencoder-for-extraction-based-summarization-13c6171722d3 | 2018-04-17 | 2018-04-17 18:53:03 | https://medium.com/s/story/unfolding-a-novel-recursive-autoencoder-for-extraction-based-summarization-13c6171722d3 | false | 1,883 | Articles directly from the With Watson innovation & development team | null | null | null | With Watson | null | with-watson | WATSON,IBM,AI,IBM WATSON | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Niyati Parameswaran | null | bd8c52b56ef6 | niyatiparameswaran | 52 | 48 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-14 | 2017-12-14 14:12:17 | 2017-12-14 | 2017-12-14 14:13:25 | 1 | false | en | 2017-12-14 | 2017-12-14 14:13:25 | 3 | 13c65f78d6bd | 0.509434 | 0 | 0 | 0 | Artificial Intelligence is shaping the digital world in every possible way. Coming to the point, web development is evolving at a rapid… | 1 | Use of Artificial Intelligence in Web Design & Development
Artificial Intelligence is shaping the digital world in every possible way. Coming to the point, web development is evolving at a rapid rate and its focus is on enhancing the user experience. The introduction of AI is changing the way businesses are presented, as it promises more sophisticated web design & development along with chatbots, AI-powered search engine optimization and marketing. Click to know more- https://lnkd.in/evD9nCZ
| Use of Artificial Intelligence in Web Design & Development | 0 | use-of-artificial-intelligence-in-web-design-development-13c65f78d6bd | 2017-12-14 | 2017-12-14 14:13:26 | https://medium.com/s/story/use-of-artificial-intelligence-in-web-design-development-13c65f78d6bd | false | 82 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | InfiCare Technologies | We are Top web Development and mobile app development company for android, iOS app development including Digital Marketing Services. | 2dd747f68b4c | InfiCare | 2 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-26 | 2017-10-26 15:05:19 | 2017-10-26 | 2017-10-26 16:11:27 | 1 | false | en | 2017-10-26 | 2017-10-26 16:15:06 | 0 | 13c6e0a42908 | 4.845283 | 3 | 0 | 0 | How a chatbot makes up gothic poems | 5 | The RavenBOT
From Orlando Furioso by Gustave Dore
I am a curious person. I enjoy building projects that don’t necessarily have any purpose other than to explore a new toolset and make something fun. I picked up Python a while back because I wanted to explore the universe of chatbots and some basic variations on AI. Caveat: I’m no expert, I’m only an explorer.
Besides the motivating factors of: I love building things, I love playing with new tools, and I want to see how far I can push the limits; I also have a sandbox that I like to populate with interactive projects. I run a MUD. Well, it’s technically a MUX, running on TinyMUX for those in the know. A MUD is the original multiplayer online game. These are text-based universes, and mine is social-themed. It used to be a lot more active and now feels a bit empty. So, I plan to populate it with a bot or two.
The initial solution was an off-the-shelf bot from GitHub that could connect to a MUD and understand (with some tinkering) the interaction model for the game. It already came with a basic Wikipedia lookup, regular-expression interaction, and a simple Markov chain. While I’ve tinkered around with its personality and plumbing, the biggest change I’ve made was to rip out its OEM Markov mechanism and replace it with Markovify.
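For readers who haven’t met it, the core Markovify API takes only a few lines. A rough sketch of the kind of generation my bot now does (the corpus filename is hypothetical):
import markovify

# Build a word-level Markov model from a plain-text corpus
with open("gothic_corpus.txt") as f:
    text_model = markovify.Text(f.read(), state_size=2)

# make_sentence() returns None when it can't satisfy its overlap
# constraints, so filter those attempts out
attempts = (text_model.make_sentence() for _ in range(50))
poem = [s for s in attempts if s][:5]
print("\n".join(poem))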
Now let’s step aside to another branch of this story before everything meets up again. Every year a friend hosts a Halloween party where he encourages guests to create some sort of gothic story or poem and read it to the group. I decided to polish up my chatbot, feed it a corpus of gothic poetry, and have the bot itself be my submission.
There were a couple of challenges along the way. Building and cleaning up a text corpus for a simplified text environment was critical. Did I want poem titles parsed into the Markov chain? Then I needed to give them punctuation. Additionally, characters that are fine for web pages and Word documents needed simplification: em-dashes and en-dashes become the simple dash, curly single quotes become apostrophes, etc. Then I attempted to beef up the parsing of my gothic texts with some natural language processing. Whoops.
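A minimal sketch of that kind of character cleanup (the exact mapping is mine, chosen for a plain-ASCII MUD world):
# Map typographic characters down to their plain-ASCII cousins
REPLACEMENTS = {
    "\u2014": "-",    # em-dash
    "\u2013": "-",    # en-dash
    "\u2018": "'",    # left single quote
    "\u2019": "'",    # right single quote
    "\u201c": '"',    # left double quote
    "\u201d": '"',    # right double quote
    "\u2026": "...",  # ellipsis
}

def simplify(text):
    for fancy, plain in REPLACEMENTS.items():
        text = text.replace(fancy, plain)
    return text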
A fun thing about Markovify is that you can override its methods and incorporate libraries like spaCy. Now you can identify the part of speech of every word before building your n-grams, and your Markov chain can create more convincing and interesting sentences. I ran a single Markov request, asking my bot to create a five-line poem. Then it crashed, and my sysadmin wasn’t too happy with me.
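The override itself looks roughly like this, following the pattern in Markovify’s documentation (the spaCy model name is an assumption; any English model should do):
import markovify
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

class POSifiedText(markovify.Text):
    def word_split(self, sentence):
        # Tag each token with its part of speech, so the chain's states
        # become (word, POS) pairs instead of bare words
        return ["::".join((word.orth_, word.pos_)) for word in nlp(sentence)]

    def word_join(self, words):
        # Strip the POS tags back off when rendering a sentence
        return " ".join(word.split("::")[0] for word in words)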
While my chatbot interacts by talking with players in my MUD, the chatbot program lives on a separate service. That service is a web host, and you don’t need a lot of memory for web pages. Adding spaCy was great, but it loaded the entire corpus into memory and kept all of its parsed information there too, pushing me past my account’s allocation by at least double before the system killed the process.
Lesson learned. Either subscribe to a service dedicated to applications, with more memory available, or pre-parse the text offline and upload the saved model for the bot’s consumption. The latter will be my solution for my hack-and-slash approach.
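Markovify makes that pre-parsing route straightforward, since a built model serializes to JSON. Roughly (the filenames are hypothetical):
import markovify

# Offline, on a machine with memory to spare: build once, save the chain
with open("gothic_corpus.txt") as f:
    text_model = markovify.Text(f.read())
with open("gothic_model.json", "w") as f:
    f.write(text_model.to_json())

# On the memory-constrained web host: load without re-parsing the corpus
with open("gothic_model.json") as f:
    text_model = markovify.Text.from_json(f.read())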
Without the pre-processing, my RavenBOT works quite well regardless. This was a long introduction to show you the results. So below, I’m sharing the “gothic” poems created on the fly by RavenBOT as submitted that evening for the party.
These poems drew from two corpora: a general gothic poetry collection, and a collection of Lovecraft’s poetry. Each poem is a singular stanza consisting of five lines each.
Lovecraftian Poem 1
There’s an ancient, ancient garden that I cannot endure to recall.
Eternal brood the shadows of the ages, I have haunted the tombs of the wooded park.
Only the lean cats that howl in the pallid ray, sprung out of the ages, I have trod its untenanted hall, where the gaudy-tinted blossoms seem to wither into grey, and the harpies of upper air, that flutter and laugh and stare.
Legions of cats from the graves of the wooded park.
Mystic waves of beauty blended with the moans of invisible daemons, that out of the dark, from the knowledge of terrors forgotten and dead.
Gothic Poem 1
11 And by the Window.
O weary lady, Geraldine, I cannot tell — I thought I needed a mate for a lady’s chamber meet: the lamp with twofold silver chain is fastened to an untold murder.
Up starts the lark beside the many saintly figures fronting the cathedral’s gothic tympanum, close by the wind-extinct; The Crusaders’ streams of shadowy, midnight troops, sped with the setting sun, and ere the crimson sunset glow from the rough rage of swelling seas.
And there was no solace in the sun, or like a dark crime; it is the solemn noon of night, when haply wakeful from my house by a row of headstones.
Of higher birth he seem’d, and better days, nor mark of vulgar toil that hand that rein’d: and some deep feeling it were vain to trace his youth through all the monstrous debt.
Lovecraftian Poem 2
There is death in the dark let the lemurs bark, and the crumbling walls and pillars waken thoughts of yesterday.
I had wander’d in rapture beneath them, and bask’d in the plazas, and eyed their stone features with awe.
The cloudless day is richer at its close; A golden moon bewitching radiance yields, and England’s fairies trip o’er England’s fields.
Haughty Sphinx, whose amber eyes hold the secrets of the dark, from the alleys nocturnal.
A chill wind blows through the rows of white urn-carven marble, I listen intently for sound.
Gothic Poem 2
His page approach’d, and he scarce could brook tears, at the pompous gate.
I think they should not stay alone.
He raised the bleeding Otho, and the lady Geraldine.
Do not mention force, or you will allow me to the wall, I sat and watched the white-glazed inner walls sink in shame to shattered heaps of down.
And Lara call’d his page, and went to leave no other trace of his friend!
Lovecraftian Poem 0
The following poem was created via the powers of the more powerful spaCy-based NLP pre-processing. It subsequently crashed the system with a memory overflow and made my sysadmin fairly unhappy. Here are the entrails.
Forests may fall, but not the dusk they shield; So on the ground, and the crumbling walls and pillars waken thoughts of yesterday.
Done are my trials; my heart from thee?
White and amazing to the orb of sweet dreams.
Once on the forms in the wild moonlight, and the tangled weedy thicket chokes the arbour dark and cool: in the meadows that shimmer pale, and comes to twine where the wide waters rumble, back to the scene of the Seekonk, and a tremor seems to start — For I know the fiendish fable that the golden glitter bore; Now I know the fiendish fable that the golden glitter bore; Now I know the flow’rs are shrivell’d hopes — the garden is my heart.
Thus the living, lone and sobbing, In the lyre-born chords extended Harmonies of Lydian lays.
| The RavenBOT | 3 | the-ravenbot-13c6e0a42908 | 2018-05-27 | 2018-05-27 19:36:39 | https://medium.com/s/story/the-ravenbot-13c6e0a42908 | false | 1,231 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Caleb Hodes | Entrepreneur in Consumer Goods | Owner of @BostonTeawright | Maker | Happy Mutant | Perpetual Student | Transitioning into New Projects | calebhodes.com | 9a0bffd2d8ca | draith | 56 | 72 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-11 | 2018-07-11 14:09:48 | 2018-07-11 | 2018-07-11 14:30:50 | 1 | false | en | 2018-07-11 | 2018-07-11 14:30:50 | 1 | 13c7fdd6d2da | 1.45283 | 0 | 0 | 0 | https://www.research.ibm.com/haifa/dept/vst/debating_data.shtml | 5 | Wikipedia is so very important for our future.
https://www.research.ibm.com/haifa/dept/vst/debating_data.shtml
Adobe Stock Debater
Looking through the databases IBM used in its Project Debater, I am more than struck by how essential Wikipedia has become for understanding human knowledge. IBM’s project does a lot of fancy stuff, but having all knowledge organized so factually in Wiki is a fabulous step up in automating its understanding. This is crowdsourcing at its finest.
Wiki urges people to become proficient social scientists with its emphasis on NPOV (Neutral Point of View) to reduce blatantly opinionated material. Of course, my POV is that the only way to really make knowledge objective is to include ALL points of view, with a kind of weighting that makes crackpot posts like “The world is flat” or any of Trump’s blatherings rare.
Wiki helps ensure some objectivity by demanding sources for opinionated material; but such sources are more and more easy to come by, since newspapers are becoming more opinionated and even report Trump’s blatherings.
For automating knowledge, it is so very easy to scrape up Wiki’s articles in hierarchical topics. Then further crowdsourcing can annotate themes and supporting or contradicting evidence. Voila, you have an argument.
Wiki constantly surprises me with the depth and detail of knowledge it achieves while maintaining an air of superficiality and openness. English Wiki is so complete now that topics are becoming ever more refined and complex. Amassing 10k technical terms is so very easy in Wiki; yet average people use only about 5k words in everyday speech. Organizing those into a semantic space, akin to WordNet, is then an easy matter; and just like that, you have commonsense knowledge about vast amounts of information.
What an amazing bootstrap to automated inference; yet it has only been about 20 years in the making. Google already bootstrapped it into one of the largest companies on earth. Now IBM is doing it again. I hope they both, and all the other tech monopolists, recognize their debt and support this fabulous resource accordingly.
| Wikipedia is so very important for our future. | 0 | wikipedia-is-so-very-important-for-our-future-13c7fdd6d2da | 2018-07-11 | 2018-07-11 14:30:50 | https://medium.com/s/story/wikipedia-is-so-very-important-for-our-future-13c7fdd6d2da | false | 332 | null | null | null | null | null | null | null | null | null | Wikipedia | wikipedia | Wikipedia | 1,756 | Joe Psotka | Joe is a bricoleur, trying to understand the complexity of the place of values in a world of facts, using only common sense. | 1f62ed7c4bf1 | joepsotka | 80 | 180 | 20,181,104 | null | null | null | null | null | null |
0 | from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))
from mido import MidiFile, MidiTrack, Message
from keras.layers import LSTM, Dense, Activation, Dropout
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.optimizers import RMSprop,Adam,SGD,Adagrad
import numpy as np
import mido
midi = MidiFile('Suteki_Da_Ne_(Piano_Version).mid')
notes=[]
time=float(0)
prev=float(0)
#The tracks attribute is a list of tracks.
#Each track is a list of messages and meta messages, with the time attribute of each messages set to its delta time (in ticks)
for msg in midi:
    time += msg.time
    if not msg.is_meta:  # skip meta messages so the result is easy to play back on a port
        # only interested in the piano channel
        if msg.channel == 0:
            if msg.type == 'note_on':
                note = msg.bytes()
                # msg.bytes() is [type, note, velocity]; keep only note and velocity
                note = note[1:3]
                note.append(time - prev)  # delta time since the previous note
                prev = time
                notes.append(note)
# need to scale notes into the 0-1 range
n = []
for note in notes:
    note[0] = (note[0] - 24) / 88  # note value (piano range starts at 24)
    note[1] = note[1] / 127        # velocity (MIDI velocities run 0-127)
    n.append(note[2])
max_n = max(n)  # scale based on the biggest time of any note
for note in notes:
    note[2] = note[2] / max_n
x = []
y = []
n_p = 20  # number of previous notes used to predict the next one
for i in range(len(notes) - n_p):
    current = notes[i:i + n_p]   # window of 20 notes as input
    next_note = notes[i + n_p]   # the note that follows the window
    x.append(current)
    y.append(next_note)
x=np.array(x) # convert to numpy arrays to pass it through model
y=np.array(y)
print(x[1])
print(y[1])
OUTPUT :
[[0.51136364 0. 0.12408759]
[0.52272727 0.50393701 0.00104275]
[0.52272727 0. 0.12408759]
[0.60227273 0.50393701 0.00104275]
[0.60227273 0. 0.12408759]
[0.48863636 0.50393701 0.00104275]
[0.48863636 0. 0.12408759]
[0.51136364 0.50393701 0.00104275]
[0.51136364 0. 0.12408759]
[0.59090909 0.50393701 0.00104275]
[0.59090909 0. 0.12408759]
[0.46590909 0.50393701 0.00104275]
[0.46590909 0. 0.12408759]
[0.54545455 0.50393701 0.00104275]
[0.54545455 0. 0.12408759]
[0.56818182 0.60629921 0.00104275]
[0.56818182 0. 0.12408759]
[0.45454545 0.50393701 0.00104275]
[0.45454545 0. 0.12408759]
[0.46590909 0.50393701 0.00104275]]
[0.46590909 0. 0.12408759] ----this is my y
model=Sequential()
model.add(LSTM(512,input_shape=(20,3),return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(512,return_sequences=True)) #return_sequences=False
model.add(Dropout(0.3))
model.add(LSTM(512))
model.add(Dense(256))
model.add(Dropout(0.3))
model.add(Dense(3, activation="softmax"))  # 3 outputs: note, velocity, time (softmax makes them sum to 1)
model.compile(loss="categorical_crossentropy", optimizer="RMSprop", metrics=["accuracy"])
model.fit(x,y,epochs=1000,batch_size=200,validation_split=0.1)
OUTPUT :
Epoch 999/1000
955/955 [==============================] - 1s 1ms/step - loss: 0.5739 - acc: 0.9476 - val_loss: 0.6992 - val_acc: 0.9159
Epoch 1000/1000
955/955 [==============================] - 1s 1ms/step - loss: 0.5732 - acc: 0.9372 - val_loss: 0.7183 - val_acc: 0.9533
seed=notes[0:n_p]
x=seed
x=np.expand_dims(x, axis=0)
print(x)
predict=[]
for i in range(2000):
    p = model.predict(x)
    x = np.squeeze(x)              # squeeze to 2-D so we can concatenate
    x = np.concatenate((x, p))     # append the predicted note to the window
    x = x[1:]                      # drop the oldest note to keep the window at 20
    x = np.expand_dims(x, axis=0)  # expand back to 3-D to roll the window forward
    p = np.squeeze(p)
    predict.append(p)
# unrolling back from the 0-1 conversion
for a in predict:
    a[0] = int(88 * a[0] + 24)
    a[1] = int(127 * a[1])
    a[2] *= max_n
    # reject values out of range (note[0]=24-102, note[1]=0-127, note[2]=0-__)
    if a[0] < 24:
        a[0] = 24
    elif a[0] > 102:
        a[0] = 102
    if a[1] < 0:
        a[1] = 0
    elif a[1] > 127:
        a[1] = 127
    if a[2] < 0:
        a[2] = 0
from mido import MidiFile, MidiTrack, Message

# saving a track from the bytes data
m = MidiFile()
track = MidiTrack()
m.tracks.append(track)
for note in predict:
    # 147 (0x93) is a note_on status byte
    note = np.insert(note, 0, 147)
    note_bytes = note.astype(int)
    print(note)
    msg = Message.from_bytes(note_bytes[0:3])
    time = int(note[3] / 0.001025)  # rescale to MIDI delta ticks; 0.001025 is an arbitrary value
    msg.time = time
    track.append(msg)
m.save('Ai_song.mid')
| 12 | e08bd2f6f2c4 | 2018-08-19 | 2018-08-19 14:08:28 | 2018-08-20 | 2018-08-20 03:16:45 | 1 | false | en | 2018-08-22 | 2018-08-22 11:47:47 | 5 | 13c8086282c2 | 3.483019 | 1 | 0 | 0 | Assuming u have prior knowledge of Deep learning, i would like to dive straight into the topic. | 5 | Ai music composer
Assuming you have prior knowledge of deep learning, I would like to dive straight into the topic.
As music is sequential data, we will, as per tradition, use RNNs (LSTMs or GRUs). We will be working with a MIDI file, doing all the preprocessing on it using the documentation here. Let me explain it piece by piece. The entire code is in my repo.
Let’s code:
STEP 1: Since I was doing this on Google Colab, I needed this snippet to import my .mid file. It’s the Final Fantasy theme called Suteki Da Ne (piano version). NOTE: mido usually isn’t installed on Colab servers, but you can install it simply with !pip install mido.
STEP 2: We do some preprocessing. A track is a list of messages plus meta messages; if we exclude the meta messages, the song is easier to play back on any port. Our note is then the data of each message (msg.bytes()).
STEP 3: We need to scale our data into the 0-1 range, so e.g. our note becomes [0.5113636363636364, 0.6062992125984252, 0.0] instead of [69, 77, 0]. We will have to unroll this scaling later (step 7).
STEP 4: Our input (x) will be numpy arrays of dimension 20*3, with the output (y) being the next note to be predicted.
STEP 5: Now we build an LSTM network. We use RMSprop since it works well with RNNs (I suggest you play around with this, since val_acc can be pushed beyond 95%).
STEP 6: Note that when generating text from an RNN, we randomly pick a word and append the predicted word to it (check my post on generating text with RNNs here). BUT for audio we can’t append directly, so we squeeze x and then concatenate it with the predicted note.
STEP 7: Rolling back note[0], note[1] and note[2]: a[0] is the unscaled predicted note[0], and so on.
STEP 8: To decode the arrays back into a MIDI file, we need just a short snippet. And if you want to download your generated song from Colab, simply write:
files.download('Ai_song.mid')
A music album called I AM AI, whose featured single is set for release on August 21st, is the first album entirely composed and produced by an artificial intelligence. It works in collaboration with a human artist, who provides inputs that Amper uses as composing parameters.
Credits to Siraj Raval; I have modified the model and merely created a wrapper to aid understanding.
| Ai music composer | 3 | ai-music-composer-13c8086282c2 | 2018-08-22 | 2018-08-22 11:47:48 | https://medium.com/s/story/ai-music-composer-13c8086282c2 | false | 870 | This publication is dedicated to all things AI. Posts will cover Data Science, Machine Learning, Big Data and AI related | null | null | null | All things AI | null | all-things-ai | MACHINE LEARNING,DATA SCIENCE,ARTIFICIAL INTELLIGENCE,BIG DATA,ENGINEERING | null | Deep Learning | deep-learning | Deep Learning | 12,189 | Manish Pawar | I'm 19 & next thing to my religion is Ai | a96b0c6d0381 | i_am_manish | 16 | 22 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | cc02b7244ed9 | 2018-04-13 | 2018-04-13 05:51:31 | 2018-04-13 | 2018-04-13 05:53:50 | 0 | false | en | 2018-04-13 | 2018-04-13 05:53:50 | 11 | 13c8765d4c31 | 2.056604 | 0 | 0 | 0 | PRODUCTS & SERVICES | 5 | Tech & Telecom news — Apr 13, 2018
PRODUCTS & SERVICES
Artificial Reality
HTC sees augmented and virtual reality devices as a key strategic priority, and views 5G networks as a key enabler for adoption of these technologies, and more specifically as a tool to overcome the limitations of current devices and make them more user-friendly by moving more computing power to the cloud (Story)
Devices
Apple’s version of the smart speaker, the HomePod, has largely disappointed the company, with sales below initial forecasts, even if it had an apparently successful launch. Its market share seems to have stabilised around 10%, with Amazon Echo clearly dominating the category, at 73%, and Google Home at 14% (Story)
GoPro’s shares jumped almost +9% yesterday after rumours that Chinese device giant Xiaomi was interested in acquiring them. The maker of wearable cameras is known to be for sale, and it has even hired JPMorgan for advice. The company’s valuation has fallen from $10bn in the good old days to approx. $1bn today (Story)
Regulation
As expected, EU politicians are prepared to demonstrate that they’re tougher than their American counterparts when it comes to regulating. So, on top of the coming GDPR privacy regulations, they’re now threatening Facebook and others with tougher measures against “fake news” and “online disinformation” (Story)
And there is no lack of arguments to support these more aggressive regulatory views. Buzzfeed just mentioned a new research paper that looks at YouTube’s recommendation algorithm and concludes that (driven by monetisation priorities) it tends to push users to channels with more “extreme” political content (Story)
HARDWARE ENABLERS
Networks
More signs that 5G could change the paradigm of how networks are built. After revelations that the US government had considered building a single, national 5G network, South Korean operators have now agreed to share the costs of building the 5G infrastructure, with savings expected to be close to $1bn over the next 10 years (Story)
SOFTWARE ENABLERS
Artificial Intelligence
M Zuckerberg has told the US Congress that Facebook is working to use AI to catch and block “hate speech” spread on the app. But this will be challenging for AI technology, for three reasons: (1) understanding the meaning of text is difficult, (2) manipulators could use similar tools, (3) video will make this even more difficult (Story)
Privacy
Analysts believe Facebook was the winner after Zuckerberg’s testimony in Washington this week, and this is consistent with the message that the company is giving to investors, saying that users largely haven’t changed their privacy settings during this crisis, so they don’t expect the scandal to affect sales significantly (Story)
Another result of the Facebook crisis is that, in spite of Zuckerberg’s own efforts, it hasn’t extended to other companies doing data collection, like Google in particular. So the problem hasn’t yet triggered a wider move against the whole “surveillance economy”, partly due to the difficulty of pointing to specific harms from it (Story)
VENTURE CAPITAL
Venture Capital investment from Asia (and China in particular) is skyrocketing, threatening traditional Silicon Valley dominance, with the US share of startup financing having fallen from 75% ten years ago to 44% today (vs. 40% for Asia). This is a sign of the fight under way to control the world’s technological innovation (Story)
Subscribe at https://www.getrevue.co/profile/winwood66
| Tech & Telecom news — Apr 13, 2018 | 0 | tech-telecom-news-apr-13-2018-13c8765d4c31 | 2018-04-13 | 2018-04-13 05:53:51 | https://medium.com/s/story/tech-telecom-news-apr-13-2018-13c8765d4c31 | false | 545 | The most interesting news in technology and telecoms, every day | null | null | null | Tech / Telecom News | tech-telecom-news | TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE | winwood66 | Virtual Reality | virtual-reality | Virtual Reality | 30,193 | C Gavilanes | food, football and tech / [email protected] | a1bb7d576c0f | winwood66 | 605 | 92 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 1b2ca3cd119a | 2018-06-12 | 2018-06-12 02:02:49 | 2018-06-12 | 2018-06-12 02:29:34 | 2 | false | en | 2018-06-12 | 2018-06-12 03:04:27 | 4 | 13c8b29fc341 | 1.900314 | 0 | 0 | 0 | Curious about how WorkDone actually works? Here’s a quick use case. | 5 | WorkDone Use Case: Purchase Orders
Photo by Marvin Meyer on Unsplash
Curious about how WorkDone actually works? Here’s a quick use case.
You work at a medical device manufacturer, and part of your job is processing incoming purchase orders, making sure the buyers are licensed to actually receive your devices, and then passing that along to the ERP (enterprise resource planning) system which lets the factory and inventory teams know what’s coming.
Normally, this requires you to check your email for new purchase orders every morning, pore through the pile of attachments that come with each email, make sure the licenses match up with the buyers, and then manually enter all the info from both the PO and the license into the ERP system. Needless to say, this gets a little tedious.
Tedious, that is, until you sign up with WorkDone, install the WorkDone Monitor, and start the Expertise Capture process. The WorkDone Monitor follows along as you open those emails, sift through the attachments, check the licenses, and copy and paste the relevant data — and after Expertise Capture gets the hang of it, the Monitor lets you know that it’s built a custom WorkDone Agent to handle that process.
WorkDone works at your comfort level. The PO Agent can stage work for you to review before submission — what we call Benchmark Review — or the PO Agent can complete the entire process without your involvement.
The WorkDone Agent Manager and Dashboard
As the Agent keeps learning and you keep tweaking the process to make automation as effective as possible, you can track the increasing productivity gains on the WorkDone Dashboard. Best of all, you can share this Agent with others doing similar tasks across the company. For the more entrepreneurially minded, process knowledge can even be monetized by making the PO Agent available to the public through the WorkDone Agent marketplace.
Who knows, when you’re hiring out this AI assistant on the WorkDone Marketplace, you might even see an Agent that you know could help the HR department with recruiting, or streamline lead generation for your marketing team. As additional WorkDone SaaS Partners are on-boarded into our ecosystem, our Expertise Capture technology will make it easy to automate any SaaS-to-SaaS process without user training, custom programming or technical knowledge.
Sounds pretty powerful, right? You can invest in the WorkDone revolution through our ongoing truCrowd equity crowdfunding campaign, and learn more about WorkDone on our website.
| WorkDone Use Case: Purchase Orders | 0 | workdone-use-case-purchase-orders-13c8b29fc341 | 2018-06-12 | 2018-06-12 03:05:12 | https://medium.com/s/story/workdone-use-case-purchase-orders-13c8b29fc341 | false | 402 | AI with a conscience. | null | WorkDoneAI | null | WorkDone.AI | workdone | AI,ARTIFICIAL INTELLIGENCE,AUTOMATION,CROWDFUNDING,INVESTING | WorkDoneAI | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | sb | null | 2a0c1bf09472 | SB_WD | 1 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-03-23 | 2018-03-23 21:30:05 | 2018-03-26 | 2018-03-26 11:31:01 | 3 | true | en | 2018-03-26 | 2018-03-26 11:31:01 | 1 | 13ca896b9f94 | 3.946226 | 30 | 2 | 0 | How the AI cloud manifests at the human interface, will be the OS of the future. | 5 | What will be the OS of the Future?
Shutterstock
How the AI cloud manifests at the human interface will be the OS of the future.
Consider how it spawns voice AI that will become more ubiquitous than mobile smartphones are today. This happens because humans crave convenience, and the embedded IoT cloud AI won’t depend on our thumbs, apps and notifications to get our attention.
According to the MIT Technology Review, the enterprise AI of the future is also the cloud of the future: Amazon, Microsoft, Alibaba, Google, Salesforce, IBM, Cloudify, Oracle, the list just goes on.
In truth, this new OS is a neural net in the true sense of what a smart operating system might be. It will encompass the smart grid, the Internet of Things, and the convergence of big data, AI and machine learning, modulated by smart contracts on the blockchain, a high-quality sharing economy, and decentralized nodes where additive manufacturing and robotics serve the needs of the planet in fully sustainable ecosystems.
In that world, the proprietary OS does not belong to just one company like Amazon, Tencent, Alibaba or Alphabet, but to a harmonized whole where enterprise solutions work together with new models of how human beings will merge, interact and live with AI fully embedded in all that we do. In such a world, we’ll still be human but fully augmented by AI. Every industry, every country, every human activity will be optimized by elements of artificial intelligence. In 2018, we can already imagine this occurring with the advent of self-driving cars, and with fields like logistics and retail becoming more automated. We’ll witness it as the cost barrier of space travel comes down, and as we colonize Mars and explore other worlds.
The convergence of all of these technological trends will fit into a whole that will be an OS for the entire world. The OS of the future will be solution-agnostic and serve all human beings; it will arise out of the best enterprise-level solutions and yet end up transcending its corporate origins. In 2018, we are starting to get a glimmer of what that might feel like, with consumer ecosystems such as Amazon, WeChat and Alibaba totally transforming our digital and customer experience. The solutions we are witnessing coming out of China and the U.S. are the very basis of how the future OS will work.
The OS of the future goes beyond public cloud providers or walled-garden ecosystems that depend upon advertising revenue as a source of profit. In the future, AI in the form of the voice interface will be universal. In 2018, Google Home is learning many new languages, Alexa can speak English with different accents, and voice assistants and their associated skills and routines are quickly differentiating and multiplying. The OS of the future is the future of search, experience, connection and consumerism: AI guiding and empowering our choices at every step of our interaction with groups, communities, products and businesses.
The winning OS of the future will be the most rigorous servant of the human experience in the broadest sense: Amazon’s obsession with being customer-centric, Tencent’s redefinition with WeChat of how useful an app can be, and Google’s deep learning ability to solve real-world problems with AI, combined with quantum computing, the blockchain and other important advances as they come along, all integrated into a seamless AI-human hybrid journey.
We know that AI could double the size of the $260 billion cloud market in the coming years. We know that the cloud is the key to the convergence that will inevitably take place. In such a world, and soon, artificial intelligence will be everywhere. It will be the connective tissue of all that matters to us. 5G will make everything even more immediate, and everything from payments to predictive analytics will be seamless. Apps will be more powerful, and our ability to harness AI for social entrepreneurship will be more meaningful.
The OS of the future will simultaneously imprison us in a sea of data and liberate us with almost too much choice among many new paths toward fulfilling lives that have a positive impact upon the world. The OS of the future will personalize our experience in ways that make the algorithms of today look prehistoric and insultingly blind.
The OS of the future will mean living in a world where AI enables the automation of many tasks, services and routines which used to be done by humans.
All advances of technology bring new solutions, new conveniences and new problems along with them. AI is already changing how we define our humanity and how we relate not only to each other but to our institutions, to our very notions of experience, and to the blending of our digital attention with our physical lives.
With the OS of the future, our personal assistants will become omnipresent, able to personalize paths to fulfilment and help us be ourselves better (as idealistic as this may sound). As smartphones changed where we place our attention, smart personal assistants will change how we organize our real physical time and optimize our own cognition, emotions and motivation.
Imagine if everyone had a smart and capable executive personal secretary with the computing power of today’s supercomputers. That, in a gist, is what it will feel like to live with the operating system of the future.
| What will be the OS of the Future? | 293 | what-will-be-the-os-of-the-future-13ca896b9f94 | 2018-05-04 | 2018-05-04 06:34:02 | https://medium.com/s/story/what-will-be-the-os-of-the-future-13ca896b9f94 | false | 900 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Michael K. Spencer | Blockchain Mark Consultant, tech Futurist, prolific writer. WeChat: mikekevinspencer | e35242ed86ee | Michael_Spencer | 19,107 | 17,653 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-04 | 2018-07-04 08:15:46 | 2018-07-04 | 2018-07-04 08:23:48 | 6 | false | en | 2018-07-04 | 2018-07-04 08:23:48 | 64 | 13cc4ee1339b | 4.504717 | 1 | 0 | 0 | Save the date! 2e Data Meet-up Rijk op dond 27–09 (middag), Cathy O’Neil in NRC: “De valse magie van algoritmes” en artikel iBestuur Perry… | 1 | Datanieuws binnen en buiten het Rijk 04–07–2018
Save the date! The 2nd Data Meet-up Rijk on Thursday 27–09 (afternoon), Cathy O’Neil in NRC: “De valse magie van algoritmes” (The false magic of algorithms), and an iBestuur article by Perry van der Weijden, Ric de Rooij and Johan Maas on, among other things, data governance
Infographic(s) of the week
From BZK: on the partial ban on face-covering clothing:
From SZW, accompanying the report ‘Evaluatie van de Wet inburgering 2013’ (evaluation of the 2013 Civic Integration Act):
From BZK on the WNRA:
Data vacancies within central government (het Rijk)
The BD: Senior Data Analyst, Data Investigation Specialist, Business Analyst, Business Intelligence Developer, Business Process Management Specialist
BZK (Rijks ICT Gilde): Medior Software Engineer, Rijks ICT-Gilde
EZK (DICTU): Business Intelligence Developer, BI Administrator
Agenda:
Summer courses on data and statistics at Utrecht University (UU)
And specifically from CBS, in collaboration with Utrecht University, a ‘Summer School on Survey Research’
4 July: When Data Meets Disinformation (Public Keynote Lecture, Summer School ‘Data in Democracy’)
18 September: kick-off meeting of the Open Overheid (Open Government) pioneers network
19 & 20 September: Big Data Expo
27 September, Stationscollege: Big Brother is guiding us (by Nart Wielaard)
Data news within central government
In iBestuur, from page 21: Perry van der Weijden, Ric de Rooij and Johan Maas on data governance, data-driven working and the National Data Agenda. A contribution by Jet van Eeghen in iBestuur.
On publiekDenken: interview with our State Secretary about the digitalisation strategy
News from the AP: billboards with smart cameras are not allowed. The Autoriteit Persoonsgegevens (Dutch Data Protection Authority) informs the industry about the standard for cameras in advertising pillars
AFM board member Gerben Everts in an interview: ‘We are busy transforming the AFM from a more reactive into a more anticipatory and data-driven organisation. We already monitor daily market developments in real time. If striking price movements occur, we can decide to open an investigation. We see more and more.’
First edition of the central government magazine Nederland Digitaal
Ministry of Justice and Security (JenV): letter to parliament on police ICT renewal, with a role for data and intelligence
Via VWS: emergency departments show with an app how long the wait is
Data news outside central government
Data and AI within government:
In The Times: Ministers need to start basing policy on hard evidence
From Tom Ford: Inventing a democratic vocabulary to free us from the tyranny of experts — algorithms in government.
On towardsdatascience.com: Detecting Financial Fraud Using Machine Learning: Winning the War Against Imbalanced Data
Instituut voor de Fysieke Veiligheid (Institute for Physical Safety): starting signal for nationwide data collection and analysis
On Gemeenten van de Toekomst: Physical living environment / smart cities: dare to experiment! With examples from Zwolle and Nijmegen
On pewtrusts.org: How States Can Gather Better Data for Evaluating Tax Incentives. Solutions for compiling and analyzing information on key economic development programs
From Bloomberg.com: Here’s How Not to Improve Public Schools. The Gates Foundation’s big-data experiment wasn’t just a failure. It did real harm.
Commercial alert, but fun nonetheless: a beach-crowding model for the Veiligheidsregio Haaglanden (Haaglanden Safety Region) by Yformed
Data and AI in general:
In the FT: IBM’s debating computer suggests human brains are nothing special
On The Next Web: Chinese AI beats 15 doctors in tumor diagnosis competition
On Medium: Want to be more data-driven? Ask these three (actually four) questions
On gatesnotes.com: Memorizing these three statistics will help you understand the world
Privacy & Ethics
From MIT Sloan: Wait-and-See Could Be a Costly AI Strategy
Looking back: ARTIFICIAL INTELLIGENCE SPECIAL: BEST PRACTICES FOR APPLYING AI IN THE GOVERNMENT DOMAIN
An article by DGOO/DIO colleague Michiel Rhoen on AI and indirect discrimination, written together with a researcher from Utrecht University, was published last week in the journal International Data Privacy Law (IDPL).
In NRC: ‘De valse magie van algoritmes’ (The false magic of algorithms), on mathematician Cathy O’Neil. Since her bad experience on Wall Street, mathematician Cathy O’Neil has been warning about the dangers of computer systems.
The Gadget That Boosts Your Step Count While You Nap: “In April, Guangdong University of Foreign Studies in southern China announced that it would require students to reach at least 10,000 steps per day on WeRun. Some students found the target too high — fueling more demand for WeRun hacks.”
A funny read in The New Yorker: I Am the Algorithm
On MIT Technology Review: Let’s make private data into a public good
In the FD: When algorithms go to war in the workplace. Businesses crunch data to gain power; workers should bend it to their own ambitions
In The FT: Someone must be accountable when AI fails
On The Verge: Self-driving cars are headed toward an AI roadblock. Skeptics say full autonomy could be farther away than the industry admits
On Medium: This Algorithm Can Tell Which Number Sequences a Human Will Find Interesting
HR analytics:
On HI-re.nl: Artificial intelligence in recruitment: what developments can we expect?
Impact on the labour market:
In the FT: Work in the age of intelligent machines: How do you organise a society in which few people do anything economically productive?
Discussion paper from McKinsey: Skill shift: Automation and the future of the workforce
On CIO.com: Artificial intelligence gives HR an opportunity to transform the enterprise. Human resources has to play a strategic role in leveraging AI to shape the future of work and organization agility.
For the nerds
Article from Stanford: Text as Data. An ever increasing share of human interaction, communication, and culture is recorded as digital text. We provide an introduction to the use of text as an input to economic research.
On towardsdatascience.com: What’s New in Deep Learning Research: Learning by Comparing Using Representational Similarity
On towardsdatascience.com: Building simple linear regression model for a real world problem in R
On towardsdatascience.com: Three techniques to improve machine learning model performance with imbalanced datasets
On R-bloggers.com: The Financial Times and BBC use R for publication graphics
On datacamp.com: The Hidden Revolution in Data Science
Blog: (data) pitfalls in A/B testing
A nice overview by @lindaterlouw and @boonzaaijer of data science, big data science and machine learning
On towardsdatascience.com: Introduction to Model Trees
Cartoon of the week
IoT cartoon:
| Datanieuws binnen en buiten het Rijk 04–07–2018 | 2 | datanieuws-binnen-en-buiten-het-rijk-04-07-2018-13cc4ee1339b | 2018-07-04 | 2018-07-04 08:23:48 | https://medium.com/s/story/datanieuws-binnen-en-buiten-het-rijk-04-07-2018-13cc4ee1339b | false | 942 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Betty Feenstra | Data driven, Head of Policy Information @ DG Public Administration, Ministry Internal Affairs and Kingdom Relations, Amsterdam, NL | 6768e21844e9 | bettyfeenstra | 93 | 80 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-14 | 2017-09-14 18:38:19 | 2017-11-18 | 2017-11-18 21:05:20 | 3 | false | en | 2017-11-18 | 2017-11-18 21:05:20 | 4 | 13cdb54251d1 | 2.64434 | 6 | 0 | 0 | For decades, Business Intelligence (BI) has provided employees with access to company’s critical data, to help them manage their business. | 4 | AI: From Decision-making BI to Conversational Assistance
For decades, Business Intelligence (BI) has provided employees with access to company’s critical data, to help them manage their business.
BI traditionally takes the form of dashboards, reports, or portals.
However, despite recent progress — especially the agility provided by the cloud — these solutions have some limitations: complex interfaces, information difficult to find, lack of interactivity, etc. … They remain dedicated to a limited population of specialized analysts.
Many of our clients (managers, officers, executives, …) tell us they waste time, drowned in the flow of information, despite the plethora of data analysis tools available to them. They are finding it increasingly difficult to get fast, straightforward answers to even simple questions (how many sales did I make yesterday? Who are my best clients?).
The best way to get an answer to a question is to ask
It is time to rethink the way employees access key information, taking their place. Conversational assistance (expressing your demand in natural language and getting the answer instantly, not looking for it in an application) is a powerful way to achieve it. It allows to easily connect those who ask the questions (the business owners: how many customer invoices are unpaid, and which ones are most at risk?), to those who have the answers (experts, data analyst, data scientist, …).
Conversational AI is probably the last mile that was missing to link strategic business data to all the employees.
A conversational infrastructure already in place
Companies already have “pipes” to convey conversations: instant messaging such as Skype for Business, Slack, Microsoft Teams, Google Hangout, and more. … It’s an opportunity!
Without even a new application to install on the workstation, without additional password to remember, without any new software to assimilate, it becomes possible to instantly connect intelligent virtual assistants to thousands of collaborators.
Still young AI, but learning
Let us be clear: even if the Artificial Intelligences we are talking about here are beginning to be deployed, they are still far from being 100% autonomous. Since strong AI does not yet exist, human supervision remains compulsory: adaptation to data sources, configuration of conversations, pre-computing of predictive scores, etc.
But the conversational essence of these solutions has a unique advantage over traditional web applications. The analysis of the dialogues that they generate allows these systems to listen to the users, to understand their intentions, and to make the necessary to respond to them — by modifying the business rules at first and then by applying machine learning in a second stage.
This speed of convergence towards a service more and more adapted to their users is at the heart of the efficiency and success of conversational AI.
What to remember: Conversational AI will be a competitive differentiator for companies that will adopt its first forms
According to Gartner, by 2020, 50% of analytic requests will be generated using search engines, natural language processing or voice — or will be generated automatically.
They add that “businesses that have not begun the process of deploying Virtual Assistants to interact with employees should start now”.
It thus becomes strategic for companies to endow their vital forces with the first forms of intelligent virtual assistants. These will be the competitive differentiators of collaborators and organizations that have learned to implement them early enough.
| AI: From Decision-making BI to Conversational Assistance | 13 | ai-from-decision-making-bi-to-conversational-assistance-13cdb54251d1 | 2018-04-08 | 2018-04-08 11:31:27 | https://medium.com/s/story/ai-from-decision-making-bi-to-conversational-assistance-13cdb54251d1 | false | 555 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Zelros AI | Our thoughts about Data, Machine Learning and Artificial Intelligence | 67b9b5a1eef4 | Zelros | 440 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-14 | 2017-11-14 13:57:29 | 2017-11-14 | 2017-11-14 17:38:16 | 1 | false | en | 2017-11-14 | 2017-11-14 17:38:16 | 0 | 13ce5c0d93b0 | 2.777358 | 1 | 0 | 0 | I’ve just got back from my first Web Summit, and this was my experience. | 5 | Web Summit 2017 and the post-conference blues
I’ve just got back from my first Web Summit, and this was my experience.
Centre Stage at Web Summit 2017
The first thing that really hits you when you arrive is the sheer scale of Web Summit. With over 60,000 attendees from over 170 countries it feels much more festival-like than any conference I’ve been to before, complete with a heavy police presence and airport-style security.
That’s both a positive and a negative — it creates an atmosphere which draws you in and makes you feel like part of something much bigger, but it doesn’t come without hundreds of micro-stresses as you try to move around the venue, get to (and from) stages, and undoubtedly miss many fascinating talks because of schedule collisions.
As I’ve found to be the case at other conferences I’ve attended this year, the non-technical talks are much more engaging to me than the technical ones. The moderated discussion format also works incredibly well, and it surprises me that other conferences haven’t adopted this style.
The entire event ran like clockwork, and although most sessions started at least a little bit late and often overran, it wasn’t particularly noticeable or bothersome. That’s a testament to the hard work of the many organisers and volunteers who make this happen.
The variety of talks I ended up seeing was a pleasant surprise. I’ve included a (nearly complete) list at the end of this post, and most of them were on topics I wasn’t even consciously aware I was interested in (I spent most of my time at AutoTech/TalkRobot and Future Societies).
For me, Al Gore’s speech on “The innovation community’s role in solving the climate crisis” was the clear Web Summit winner, but it would be unfair to compare any other talks to his. You can already watch it unofficially on YouTube if you’re interested.
There was a heavy bias towards AI, automation and our planet’s future throughout Web Summit (at least for the schedule I had!), but the different themes, perspectives and formats helped minimise potential burn-out.
My main takeaways from Web Summit:
Technology is unique in disrupting pretty much everything
Some of the biggest challenges we face are ethical rather than technological
Code literacy could be (or is) as important as reading, writing and mathematics
We sometimes focus too much on technology rather than the problems it can solve
Ownership, sharing and use of data needs to be much more transparent
Super-intelligence is a very real ‘risk’ — we should be prepared and take AI safety seriously
Some other thoughts worth mentioning:
‘Flying cars’ are already a thing — but they’re not really cars in the traditional sense
There are an insignificant 20 million software developers, versus the 3.2 billion people online
The porn industry, though contentious, is a significant driver of technology innovation
Other industries demanded international treaties (e.g. a ban on bio-weapons) — AI needs the same
Technology can be used for good and bad — that’s our collective choice to make
So were there any negatives from Web Summit? Some, definitely, but they’re outweighed by the overwhelmingly positive experience:
The chairs were particularly uncomfortable
Tea doesn’t get the same kind of love as coffee
The mobile app was a little slow (and apparently worse on iOS than Android)
Perhaps the two most important ‘negatives’ from Web Summit, if you can call them that, are these:
My world-view has been irreversibly changed
Post-conference depression might be a thing
But I think that says more about the seemingly insignificant life choices which have, rather accidentally, got me to where I am today.
Web Summit has given me a lot to think about — both about our collective future and my own way forward in life. I expected Web Summit to be interesting, inspiring and motivational, but I hadn’t expected it to make me re-evaluate my place in the world and how I can make a more meaningful contribution.
In conclusion — if you get the opportunity to go next year, you won’t be disappointed!
My schedule
| Web Summit 2017 and the post-conference blues | 12 | web-summit-2017-and-the-post-conference-blues-13ce5c0d93b0 | 2018-05-09 | 2018-05-09 10:55:05 | https://medium.com/s/story/web-summit-2017-and-the-post-conference-blues-13ce5c0d93b0 | false | 683 | null | null | null | null | null | null | null | null | null | Websummit | websummit | Websummit | 531 | Ian Kent | Software Engineer and Tech Enthusiast | a6bcdd7aac68 | iankent | 51 | 94 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | e0dc78032928 | 2018-01-16 | 2018-01-16 06:59:51 | 2018-04-02 | 2018-04-02 15:49:14 | 1 | false | id | 2018-04-02 | 2018-04-02 15:49:14 | 0 | 13cef4294614 | 1.898113 | 10 | 1 | 0 | Biasanya data yang kita dapatkan dari flat files atau database berupa data mentah. Algoritma machine learning classification bekerja dengan… | 4 | Beberapa Cara untuk Preprocessing Data dalam Machine Learning
image: ccl.org
Usually the data we get from flat files or databases is raw data. Machine learning classification algorithms work with data that has to be formatted in a certain way before they start the training process. So to prepare data for consumption by a machine learning algorithm, we must first process it and convert it into the right format.
In practice we can use scikit-learn for data preprocessing; the scikit-learn library provides many functions for this.
Binarization
Binarization is the process of converting a numerical variable into boolean values (0 and 1).
Mean Removal
Mean removal is a common preprocessing technique used in machine learning. Removing the mean of a variable is usually very useful, so that the variable becomes centered on 0. We do this to remove bias from the variable.
Scaling
Usually, in a raw dataset, some variables have values that vary wildly and randomly, so it is very important to scale those features. In my view, such features end up with very large or very small values purely because of the nature of the measurements.
Normalization
In preprocessing, normalization modifies the values of a variable so that we can measure them on a common scale. In machine learning, we use various forms of normalization. Some of the most common forms aim to transform the values so that they sum to 1. L1 normalization (in the scikit-learn library), which refers to Least Absolute Deviations, works by making sure that the sum of absolute values is 1 in each row. L2 normalization, which refers to least squares, works by making sure that the sum of squares is 1. In general, the L1 normalization technique is considered more robust than the L2 technique, because it is resistant to outliers in the data. Often, data tends to contain outliers and there is nothing we can do about that, so we want techniques that can safely and effectively ignore them during computation. If we are solving a problem where the outliers are important, then L2 normalization may be the better choice.
Label encoding
When we do classification, we usually deal with many labels. These labels can be in the form of words, numbers, or something else. The machine learning functions in sklearn expect them to be numbers. So if they are already numbers, we can use them directly to start training. But this is usually not the case.
In the real world, labels come in the form of words, because words are human-readable. We label training data with words so that the mapping can be traced. To convert word labels into numbers, we need to use a label encoder. Label encoding refers to the process of transforming word labels into numerical form. In the case of regression, if the data contains a categorical variable whose values cannot be factorized into ordered levels, a dummy-encoding process is applied: each value inside that variable becomes a separate variable.
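A minimal sketch of these techniques with scikit-learn (the toy data and labels are ours):
import numpy as np
from sklearn import preprocessing

data = np.array([[2.1, -1.9, 5.5],
                 [-1.5, 2.4, 3.5],
                 [0.5, -7.9, 5.6]])

# Binarization: values above the threshold become 1, the rest 0
binarized = preprocessing.Binarizer(threshold=0.5).transform(data)

# Mean removal: center each column on 0, with unit variance
scaled = preprocessing.scale(data)

# L1 normalization: absolute values in each row sum to 1
normalized = preprocessing.normalize(data, norm='l1')

# Label encoding: turn word labels into integers
encoder = preprocessing.LabelEncoder()
labels = encoder.fit_transform(['cat', 'dog', 'cat', 'bird'])
print(labels)            # e.g. [1 2 1 0]
print(encoder.classes_)  # ['bird' 'cat' 'dog']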
| Beberapa Cara untuk Preprocessing Data dalam Machine Learning | 21 | beberapa-cara-untuk-preprocessing-data-dalam-machine-learning-13cef4294614 | 2018-05-01 | 2018-05-01 17:41:02 | https://medium.com/s/story/beberapa-cara-untuk-preprocessing-data-dalam-machine-learning-13cef4294614 | false | 450 | Blog hal hal gaibnya para paranormal warung pintar | null | null | null | Warung Pintar | warung-pintar | null | null | Data | data | Data | 20,245 | Andreas Chandra | Data Scientist | Macroeconomics Enthusiast | Amikom Yogyakarta University | af2cd97205e7 | andreaschandra | 145 | 78 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-06 | 2018-05-06 19:48:18 | 2018-05-06 | 2018-05-06 19:48:47 | 1 | false | en | 2018-05-06 | 2018-05-06 19:48:47 | 0 | 13cf143e1979 | 1.396226 | 1 | 0 | 0 | I always thought probability and statistics are somewhere the same things. At least till now :3 | 4 | probability != statistics Aspect from ML
I always thought probability and statistics were somehow the same thing. At least till now :3
Let’s say you have 3 dogs and want to count the number of poops they leave around the house every day. A probabilistic thing is to say that the dogs poop independently of each other, regardless of any data you’ve seen. It is just an assumption you are putting into everything you’ll do after that.
Now let’s say you count the amount of poop and use that to fit a distribution. That would be stats.
Don’t even ask what motivated this example. xD
Probability is predictive and statistics is descriptive. Like a 30% chance that it will rain tomorrow vs. it has rained on 30% of the days this year.
It’s kind of like deduction and induction.
Probability is mostly about modeling problems where you have some uncertainty involved, while stats is about looking at data and saying something about it.
It might be a bit inaccurate but I’d say probability can be a thing on its own, while stats is mostly using some probability to analyze some data.
Whenever you think “analysis” it’s more of a statistics thing and whenever you think “modeling” it’s more of a probabilistic thing.
Probability studies the laws of prediction: how the probabilities of different events happening are calculated, assuming the axioms are true.
Statistics attempts to describe the distribution of data, whether random or not, often converting it into probabilistic models, usually to make an inference about relationships in the data. We could say “estimation = stats”.
Generally, you’d do something like create a probabilistic model (a model which uses probabilities), then use stats to find the parameters of that model (anything that the model “learns” is generally a parameter).
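To make that concrete, here’s a tiny sketch with the dogs from earlier (the counts are made up):
import numpy as np

# Hypothetical daily poop counts over a week
counts = np.array([4, 2, 3, 5, 3, 4, 2])

# Probability side: assume the counts follow a Poisson distribution
# with an unknown rate lambda (a modeling assumption).
# Statistics side: estimate lambda from the data; for a Poisson,
# the maximum-likelihood estimate is simply the sample mean.
lam = counts.mean()
print(lam)  # ~3.29 expected poops per day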
We mostly wouldn’t use one without the other in real life.
Lastly, we can say they are inverse to one another.
| probability != statistics Aspect from ML | 1 | probability-statistics-aspect-from-ml-13cf143e1979 | 2018-05-06 | 2018-05-06 19:48:48 | https://medium.com/s/story/probability-statistics-aspect-from-ml-13cf143e1979 | false | 317 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Habibur Rahman Dipto | null | 8b9e55799e07 | hrdipto | 2 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | import CoreML
...
var model: Inceptionv3!
override func viewWillAppear(_ animated: Bool) {
    model = Inceptionv3()
}
| 3 | null | 2017-09-03 | 2017-09-03 20:13:05 | 2017-09-04 | 2017-09-04 18:51:26 | 5 | false | en | 2017-09-05 | 2017-09-05 15:43:11 | 5 | 13cf6cf4284f | 3.73522 | 6 | 1 | 0 | Well, I know most of you already heard about CoreML, but for you who still have no idea about it, this is a really quick introduction about… | 4 | Quick Guide to CoreML
Well, I know most of you have already heard about CoreML, but for those who still have no idea about it, this is a really quick introduction with a practical example.
CoreML
CoreML is a new machine learning framework released by Apple that enables you to do inference inside your iOS, macOS, tvOS or watchOS apps.
But what is inference?
In fact, the idea for this publication came from an app that I was working on, which has a camera feature that needs to detect whether the camera is seeing a cat or a dog. So I needed to use machine learning to solve it. The camera looks like this.
Camera
So, machine learning has two steps. The first one is training, which you can’t do using CoreML; for that there are plenty of machine learning tools around, like Caffe, Keras and Turi. The second step is inference.
So in my case, I need to pass a pet image through a pre-trained model and then get back what kind of pet it actually is. This process is called inference and looks like this:
Inference, Machine Learning
Basically, this follows a layered approach, with your app on top. Apple has also released two new frameworks: Vision and NLP.
The Vision framework will allow you to do all the tasks related to computer vision. But what’s computer vision?
Just as humans use their eyes and their brains to visually make sense of what is around them, computer vision is the science that aims to give a similar capability to a machine. So using Vision we can do things like face detection, object tracking, etc. There is also an NLP API to cover more languages and more functionality. If these are not enough there is CoreML, or why not combine all of these. Underneath, these frameworks sit on top of Accelerate and MPS (Metal Performance Shaders), which give the app high performance.
Layered approach
With CoreML you can integrate machine learning into your apps, you don’t need an internet connection to do prediction, and it supports many machine learning models like neural networks, tree ensembles, support vector machines and generalized linear models. Since it runs locally, there are some advantages. First, user privacy: your users will be happy to know that you’re not sending their images or personal messages to a server. They also will not need to pay extra data costs just for that prediction. You will not need to pay for a server either, and it will always be available to you.
But, where do models come from?
To get a quick start you can use pre-trained models like Inception V3, ResNet50, Places205-GoogLeNet, etc. that are already in .mlmodel format, which is the format a trained model needs to be in so you can use it in Xcode with CoreML.
Machine learning has been out for a while now, which means there is a huge community around it, and also a lot of models that are not in .mlmodel format. But to help us there is Core ML Tools, written in Python; it is open source, and with it you can convert models from other ML tools like Caffe, Keras, XGBoost, etc. to a .mlmodel. The bad news is that there is no direct support for TensorFlow, but no problem: you can create your own converter, or you will probably find something on GitHub 😉.
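As a rough sketch of what such a conversion looks like with the Python coremltools package (the Keras file name and class labels below are assumptions for illustration, not from a real app; the API shown is the one exposed by coremltools versions of that era):

```python
import coremltools

# Convert a trained Keras model (HDF5 file) into Apple's .mlmodel format.
mlmodel = coremltools.converters.keras.convert(
    "pet_classifier.h5",
    input_names=["image"],
    image_input_names=["image"],   # treat the input tensor as an image
    class_labels=["cat", "dog"],   # emit a label instead of raw scores
)

mlmodel.author = "Example author"
mlmodel.short_description = "Cat vs. dog classifier"
mlmodel.save("PetClassifier.mlmodel")  # drag this file into Xcode
```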
In my example I will use the pre-trained Inception V3 model, which you can download from here: https://developer.apple.com/machine-learning/.
Once you have downloaded the model, drag and embed it into your iOS project and it is ready to use. There is a Swift and an Objective-C API, so you can use it like any other model. Usually when you create a new project there is a basic UIViewController, so you can use that. After you have the Inception V3 model in your project, try to import CoreML and initialize it like this:
Once you have done that, you will need to take or choose a picture, convert it to a CVPixelBuffer, and then you should be able to pass it through the Inception model for prediction. The code will look something like this.
Converting image to CVPixelBuffer
By the way, don’t worry: I have created a GitHub repository with some examples of machine learning in iOS. You can download and explore it from here: CoreML-Demo.
Finally
Thank you for reading. If you have any questions, feel free to write to me on LinkedIn or follow me here on Medium.
| Quick Guide to CoreML | 7 | quick-guide-to-coreml-13cf6cf4284f | 2018-06-01 | 2018-06-01 20:34:57 | https://medium.com/s/story/quick-guide-to-coreml-13cf6cf4284f | false | 769 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Egzon Arifi | iOS Developer | b10fc0e522e1 | egzon.arifi | 3 | 10 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-24 | 2017-10-24 21:22:21 | 2017-10-30 | 2017-10-30 20:43:07 | 4 | false | en | 2017-10-30 | 2017-10-30 20:44:28 | 4 | 13d0f0cca4b8 | 4.326415 | 0 | 0 | 0 | Organizations that implement workplace safety programs are missing a transformative opportunity to use technology to keep their workforce… | 5 | Artificial Intelligence and Machine Learning in Safety
Organizations that implement workplace safety programs are missing a transformative opportunity to use technology to keep their workforce engaged and more importantly, safe.
Our Corvex Core Device
By: Eric Hanson, Director of Brand / User Experience
These days, artificial intelligence (AI) and machine learning (ML) are not only the buzziest of buzzwords, but they are also all around us — affecting our lives in a myriad of ways, using data collection and mapping to learn about users and respond to them. From the simple case of advertisements served to individuals based on interests or browsing behavior, to the complex infrastructure of GPS applications like Waze, the ways smart technology can influence user behavior are endless. What industrialization did for efficiency in the last century, machine intelligence is doing for our world today. Vast numbers of organizations are leveraging ML and AI to advance their product development, sales and overall customer experience. Yet industries like manufacturing, construction, energy & utilities, pulp & paper, lumber, waste management, food processing, oil & gas and assembly are missing a transformative opportunity to use this same technology to engage their workforce and, more importantly, keep them safe.
Safety has historically been a slow follower of innovation used in other departments, missing opportunities to create efficient workflows, foster a more engaged, safe, and empowered workforce, and see a higher return on program investments. While marketing departments are aggregating online behavior scores, safety managers are still hanging up posters and sorting through paper forms.
So how do we as an industry catch up? How do we modernize our approach to safety while getting smart about our program investments and expected return?
Most companies understand the value of a safety culture and have ideologies about what a hyper-vigilant safety culture looks like — where employees are on high alert to their surroundings, perpetually evaluating risks for the task at hand and taking precautions accordingly. This level of safety engagement is the goal. In reality, most employees don’t have the tools, processes or capacity to be hyper-vigilant about safety while also focusing on their craft and being as efficient and productive as possible. This is where agentive technology can play a critical role: giving employees a high-alert sensory solution that processes information from the environment, the user and the team at all times, helping the user work efficiently while staying tuned in to signals from his or her surroundings.
Over the past 18 months, Corvex has undertaken significant research and development to bridge some of these gaps in our industry, developing the first IoT safety solution that’s based on a number of beliefs:
1. The technology has to start with the employees.
What successful AI and machine learning applications have in common is that they work at a user level to provide custom information to individuals based on profile or behavioral data. Whether through location targeting, PPE or Zone compliance behavior, permission levels or other information — the platform must configure data to support teams at an individual level as well as at a corporate level.
At this level of detail, companies can incent and reward behavior at an individual employee level, while having a clear understanding of their resources.
Awarding points for making/logging an observation.
2. We must leverage real-time information.
The very nature of high-risk jobs is that these workers, for the most part, aren’t sitting at a computer or within drive-by distance of their safety manager’s desk. Equipping workers with real-time information sharing platforms is critical to ensuring that teams can quickly react to information in the field and encourage and incent proactive communications from front-line employees, rather than stifling it.
Rather than waiting on lagging indicators to report incidents after they occur, or waiting for a slow dissemination of information to go up and back down the chain of command — employees can report information immediately. Safety managers can automate the tedious parts of their role and spend more time innovating with their teams and fostering a safety community.
We say community instead of culture when we talk about safety environments because communities are organic, diverse and only flourish if everyone does their part. A safety culture requires subscription, while a community relies on the collective responsibility a group takes on to achieve shared goals.
3. The solution needs to offer bi-directional communication.
Safety training and top-down information sharing still have their place, but they need to fit within a larger conversation that involves listening to workers in the field. The companies that have implemented successful safety programs have empowered every employee in the field to be their own safety manager — equipped with the tools, processes, information, and support they need to make responsible decisions without compromising productivity.
By understanding the needs and motivations of the employee, leveraging artificial intelligence and utilizing technology that empowers workers and amplifies their voice in their organization, companies can harness the potential of their workforce and offer a safety solution that’s truly interconnected.
Originally published on the Corvex Connected Blog
About Corvex
Corvex is the first IoT solution that puts the power of connected safety in the hands of workers, creating a safer, more engaged workforce through real-time information sharing. The Corvex Connected Safety platform enables a proactive, risk-based approach to safety using leading indicators to help companies best manage their resources and processes, without compromising efficiency or productivity. Corvex is empowering workers to play a major role in ensuring a safe work environment for everyone. The platform has applications in industries including construction, manufacturing, food processing, oil and gas, energy and utilities, lumber, pulp and paper, waste management, assembly and more. To learn more about Corvex, visit http://www.corvexsafety.com/.
| Artificial Intelligence and Machine Learning in Safety | 0 | artificial-intelligence-and-machine-learning-in-safety-13d0f0cca4b8 | 2017-10-30 | 2017-10-30 20:44:29 | https://medium.com/s/story/artificial-intelligence-and-machine-learning-in-safety-13d0f0cca4b8 | false | 961 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Corvex Connected Safety | The first Worker Powered IoT solution that puts the power of Connected Safety in the hands of every worker. http://www.corvexsafety.com | f07513f2f739 | corvex | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-30 | 2017-11-30 20:34:13 | 2017-12-02 | 2017-12-02 23:00:42 | 2 | false | fr | 2017-12-16 | 2017-12-16 13:36:49 | 4 | 13d14464a30b | 4.764465 | 2 | 0 | 0 | This article covers “computer vision” and is aimed at business-oriented readers or CIOs / domain leads for the… | 5 | AI: Computer Vision Touches Every Area of the Enterprise
This article covers computer vision and is aimed at business-oriented readers as well as CIOs / domain leads, at least for the introductory part. I hope it will let you discover just how useful this form of “artificial intelligence” can be, and that it finds an application in the majority of existing businesses.
In particular, I share my experience with TensorFlow (Google’s AI engine), because while there are plenty of resources about it in English, there are very few in French (which is, incidentally, the main motivation for writing this article).
I approach the subject progressively, that is, in order: traditional image search, then classification, and finally object detection.
NB: The second part of this article will be more technical, and will explain step by step how to build an object detection engine.
Image search
The business case is the following: I have a multitude of photos in which I must identify the presence of an object.
For example, I am a watch manufacturer, and I decide to analyze all the top-trending celebrity photos to see whether one of my watches is being worn (and follow up with a marketing action).
On the right, the watch to find in the photo on the left. Beyond a certain number of matches, we can tell the user that a photo is a potential match and that they can proceed to verification.
It is very easy to find a photo within a set of photos. It can be done in Java, for example, using OpenImaj via a few very simple lines of code (see the photo). Of course, the result is not infallible, but the time saved by automatically sorting out part of the photos is well worth it.
Processing time is nevertheless not negligible, so this is a system that should be limited to searching for at most one or two photos per image, since the process has to run its searches one after another.
This system can do a fine job of automating certain tasks, but we can see that it is not scalable: in our case, if we have 1,000 different watch models to find, a single photo potentially means 1,000 processing runs.
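The article does this in Java with OpenImaj; as an equivalent illustration in Python with OpenCV (file names and thresholds here are made-up assumptions), the matching could look like this:

```python
import cv2

# Load the reference object (the watch) and a scene photo in grayscale.
watch = cv2.imread("watch.jpg", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute ORB descriptors for both images.
orb = cv2.ORB_create()
_, des_watch = orb.detectAndCompute(watch, None)
_, des_scene = orb.detectAndCompute(scene, None)

# Match descriptors with Hamming distance and keep the strongest matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_watch, des_scene)
good = [m for m in matches if m.distance < 40]  # threshold is arbitrary

# Beyond a certain number of concordances, flag the photo for human review.
if len(good) > 25:
    print("Potential match: send to verification")
```

The same scalability limit shows up here: each watch model needs its own matching pass over every photo.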
Image classification
So we are looking for a scalable system that could tell us almost instantly what is visible in a photo. This is where convolutional neural networks (CNNs) enter the scene, and in particular TensorFlow.
Behind this complicated name lies a principle whereby we can teach a machine to recognize objects in the same way that we ourselves would learn, that is, through experience and learning: what is called supervised deep learning.
An excellent, accessible article in French on how CNNs work: https://medium.com/@CharlesCrouspeyre/comment-les-r%C3%A9seaux-de-neurones-%C3%A0-convolution-fonctionnent-b288519dbcf8
Even though you need a significant image corpus (starting from about 100 images), you can get excellent results very quickly.
Imagine the following business case: an after-sales technician, wearing connected glasses such as the Sony SmartEyeglass (augmented reality at a reasonable price / $800), looks at the household appliance he has to repair: it is immediately recognized, and the most frequent causes of failure are displayed on a screen in front of him. His hands remain free at all times and his productivity improves: he is more efficient, and the customer benefits.
A significant advantage for consumer products is that Google Images is available for collecting the data used to build the inference graph (inference being the model’s reasoning about the real world, from analysis to conclusion).
The message is that you can very quickly put a classification engine into practice in a workflow, even if your products are highly specific.
Indeed, there are already-trained inference models (official and research models: https://github.com/tensorflow/models/tree/master/official) to which we can apply our data. These models are optimized for different datasets (everyday objects, animals, etc.). There is bound to be one that fits your business case.
To give you an example, I used the “Inception” model on my dog, a Boston Terrier.
Dagobert, my 10-year-old Boston Terrier
This lets me touch on the limits of the model → just like the passers-by I meet, TensorFlow confuses the French Bulldog and the Boston Terrier despite an extensively trained model (the Inception model is based on the http://image-net.org/ dataset). So while the system certainly has limits, it can still optimize workflows without any problem.
Another strong point: the model easily concluded that the wooden floor and the rug were secondary elements of the photo, despite the fact that they occupy a larger part of it (in the case of the floor).
As for classification (and unlike object detection), it is fairly easy to get results very quickly and even to go to production. I will therefore skip the technology details here, but know that these classification engines can be used from Java within a very short timeframe.
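For illustration, classifying a photo with the pre-trained Inception V3 weights takes only a few lines in Python with Keras (a sketch assuming a local photo file; this is not the author’s code):

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

# Load Inception V3 with ImageNet weights (downloaded on first use).
model = InceptionV3(weights="imagenet")

# Load and preprocess a photo to the 299x299 input the model expects.
img = image.load_img("dagobert.jpg", target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top-3 ImageNet classes with their probabilities,
# e.g. Boston_bull vs. French_bulldog for a photo like the one above.
for _, label, proba in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {proba:.2%}")
```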
Object detection and recognition
This is the most exciting part, yet also the most complex to set up (it is the one that will get a more technical explanation in the second part of this article).
Going beyond classification, and still using CNNs, TensorFlow offers the possibility of recognizing objects in a video stream (real-time or not), but also of locating and counting them. It can thus be applied to the following business case (among many others): in a supermarket aisle, certain products are expected to be in the right place (the planogram), and the aisle manager constantly checks the aisle’s layout. This system can alert him when there is a change, movement, etc.
Result from part 2 of the article: recognition of the crème fraîche pot has been tuned, and we manage to capture several of them in a single image. Note the real-time aspect and the augmented-reality overlay.
Of course, the barrier to entry in terms of setup time is higher than for classification, and the business cases that rely on it most are in security and autonomous cars.
Upstream, there is work to do on fine-grained tagging of the objects within each image (and not only of the image as a whole, as in classification), on model strategy, and on supervising the training.
Downstream, you will have to think about the choices to make regarding the desired thresholds, movement directions, positions, durations of appearance…
Conclusion
We can see that CNNs perform very well at computer vision, and the applications for process optimization are numerous.
It is also one of the basic building blocks of the future “augmented employee”.
The cost for CIOs / business teams is relatively low compared with the optimizations that can be achieved and the relatively fast time to market of this kind of project (in fact, TensorFlow connectors already exist for SAP, for example → https://blogs.sap.com/2017/08/29/introducing-sap-hana-external-machine-learning-aka-tensorflow-integration/).
Thus, companies that integrate computer vision into their processes can very quickly gain a competitive advantage, if they do not already have one.
Thanks to the colleagues and contacts who will recognize themselves, and with whom the discussions are always fascinating.
| AI: Computer Vision Touches Every Area of the Enterprise | 51 | la-computer-vision-touche-tous-les-domaines-de-lentreprise-partie-1-13d14464a30b | 2018-05-10 | 2018-05-10 04:10:38 | https://medium.com/s/story/la-computer-vision-touche-tous-les-domaines-de-lentreprise-partie-1-13d14464a30b | false | 1,161 | null | null | null | null | null | null | null | null | null | Deep Learning | deep-learning | Deep Learning | 12,189 | Aurélien Escartin | Entrepreneur in the retail revolution | liveshop.ai | Living the AI & Digital Joy #AI #UX/UI #Tensorflow #python | be2440d383cf | aescart1 | 56 | 109 | 20,181,104 | null | null | null | null | null | null
0 | null | 0 | null | 2017-11-03 | 2017-11-03 09:01:19 | 2017-11-03 | 2017-11-03 09:03:58 | 1 | false | en | 2017-11-03 | 2017-11-03 09:03:58 | 0 | 13d3707d7a6d | 1.818868 | 6 | 0 | 0 | The team and advisors are the engine of any blockchain project. After all, not only the idea itself is important, but also its professional… | 5 | AdHive: the introduction of the team
The team and advisors are the engine of any blockchain project. After all, not only the idea itself is important, but also its professional implementation. Everything depends on the qualification of the selected team, so we want briefly tell about our participants.
Professionals of the most diverse branches are developing our platform. Let’s get to know AdHive better.
Who is bringing AdHive to success
AdHive was co-founded by Dmitry Malyanov and Vadim Budaev, serial entrepreneurs with background in B2B services, machine learning & speech/visual recognition at Scorch.ai and Webvane. Dmitry has over 10 years of experience in sales and management, including Groupon. Vadim has worked as system architect and team leader for over 15 years.
The third co-founder, Alexandr Kuzmin, is an experienced trader and a former professional poker player. At AdHive, he leads model development and financial management.
Also the team includes:
Anna Tolstochenko — expert in working with advertising agencies and bloggers. Has more than 10 years of experience in digital marketing;
Dmitry Romanov — strategy and business development;
Kristina Kurapova — specialist of legal support of the project;
Vitalii Tkachenko — art director and designer;
Denis Vorobev — software and AI developer;
Denis Dymont — full-stack and front-end developer;
Nikolay Papaha — software developer.
As you can see, the team consists of real professionals with long-term experience in their fields. Advisors are also valuable for a startup. The following experts are helping AdHive reach new heights:
Sergei Popov — scientific and token product concept advisor. The mathematical model and incentives of ADH have been developed by Sergei Popov, professor of mathematics at the University of Campinas, Sao Paulo, Brazil, and AdHive’s scientific advisor. Sergei previously authored the mathematical models of WINGS and contributed to the theoretical work on another cryptocurrency, NXT. At AdHive, his models help the system distribute rewards within each acting group in a fair way that incentivizes users to learn more and become better at their roles, for the benefit of every individual user and the platform in general;
Ivo Georgiev — ad tech advisor;
Ariel Israilov — investment advisor;
Larry Christopher Bates — community advisor;
Sergey Logvin — HR advisor, investment and community management;
As you can see, AdHive has a highly professional team, as well as experienced and reliable advisors. The last and very important part of our team is our users. Join us, and together we will bring a new level of quality to video advertising, integrating the principles of influencer marketing and AI technology into the market.
| AdHive: the introduction of the team | 121 | adhive-the-introduction-of-the-team-13d3707d7a6d | 2018-05-18 | 2018-05-18 13:57:20 | https://medium.com/s/story/adhive-the-introduction-of-the-team-13d3707d7a6d | false | 429 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | AdHive.tv | The first AI-controlled influencer marketing platform on Blockchain. Launching massive advertising campaigns has never been so simple. | 295e61003285 | AdHiveTV | 499 | 4 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-05 | 2018-09-05 10:54:10 | 2018-09-05 | 2018-09-05 11:13:22 | 1 | false | en | 2018-09-05 | 2018-09-05 11:13:22 | 8 | 13d3993b016b | 2.916981 | 1 | 0 | 0 | How best to dig into the capabilities of different AI focused software? This is my journey using IBM Cloud, python and Dash from plot.ly. | 5 | Me and the problem I’m solving
Image from rawpixel on Unsplash
How best to dig into the capabilities of different AI focused software? This is my journey to slicing, dicing and presenting data using IBM Cloud, python and Dash from plot.ly.
Introductions
You know the scenario. You are in a room full of new people for some reason, and the “creeping death” introductions have started.
“Hi, I’m <name>, I’m a <complicated job title that generally means something only to the person who does it> and I’m attending this <event name> because I want to <wide sweeping aspiration that this event may or may not help with>.”
It’s nicknamed “creeping death” for a good reason.
About me and what keeps me busy
Anyway, I wanted to introduce myself and a problem that I’m working on solving.
Hi, I’m Mandie, I’m the WW Technology Lead in the Power Ecosystems Development….
Hang on. No more creeping death intros! Let’s try again…
I work at IBM in the hardware division (that’s the bit of a cloud / computer you can kick), specifically the part where we make servers with POWER processors. My job is to work with software developers, which incidentally is one of the most fun jobs I’ve had, to find out what really cool thing they are doing with their software (usually AI focused and / or GPU accelerated which is my area of interest) and how they might benefit by pairing their very cool software with our very cool hardware. It’s all about building an ecosystem — which is a bit of a buzz word — but ecosystem is so important. I’m also an ex-astrophysicist, swear too much and don’t thrive in too formal an environment. :)
[side note — there’s a lot in that paragraph that could be expanded and likely will in a later post, but treat it as context and bear with me for now.]
What’s the problem I’m solving?
My problem is this — and it’s a good problem to have — we work with *lots* of companies with *lots* of software. How do I best get a view of the various applications and their capabilities (i.e. specifically what they do)? That’s important for our teams that work with customers for a number of reasons:
Will software A do what my customer needs?
If the answer to the above is “not quite”, then could I complement software A with software B and C? Or would software D be more complementary?
What kind of skill level does software E cater for? Expert data scientist or expert business user?
You get the idea. But the key element here is that this needs to be fact-based. I’m a pedantic scientist at heart and don’t like fluff.
Celebrating the journey
One of the many smart people I work with (a Distinguished Engineer, something else to expand on another day) told me recently about the concept of a growth mindset vs a fixed mindset. A key element of having a growth mindset is celebrating the achievements on your journey to an end goal, in contrast to focusing only on whether the final goal is achieved (or not). There’s a TED Talk and an HBR article if you want to find out more. I’ve not had a chance to read the book yet (it’s sitting on my virtual bookshelf next to the partly read “Getting things done” book) but I really like the concept.
Hence this series of blog posts celebrating my learnings and progress as I’m working on a solution. Incidentally, I’m not 100% sure how I feel about the word “celebrating” in that context, feels a bit over the top, like party poppers and champagne required, but maybe that’s what I should be doing to fully embrace the concept.
And last but not least, what tools am I using to solve the problem?
Bottom line — I’m working on a web application focused on software capabilities, which I’ll go into in subsequent posts. It is based on IBM Cloud, is written in Python and uses Dash from plot.ly to slice and dice capability data for different applications.
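To give a flavour of what that looks like, here is a minimal Dash sketch with made-up capability scores (illustrative only, not the real application or its data):

```python
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objs as go

# Made-up capability scores for two anonymised applications.
capabilities = ["data prep", "training", "deployment", "monitoring"]
scores = {"App A": [3, 5, 2, 4], "App B": [4, 2, 5, 3]}

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(id="app-picker",
                 options=[{"label": name, "value": name} for name in scores],
                 value="App A"),
    dcc.Graph(id="capability-chart"),
])

@app.callback(dash.dependencies.Output("capability-chart", "figure"),
              [dash.dependencies.Input("app-picker", "value")])
def update_chart(app_name):
    # Redraw the bar chart whenever a different application is selected.
    return {"data": [go.Bar(x=capabilities, y=scores[app_name])],
            "layout": go.Layout(title=f"Capabilities of {app_name}")}

if __name__ == "__main__":
    app.run_server(debug=True)
```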
I will be anonymising the data to protect the innocent, but by all means get in touch if you’ve got a specific use case you want to solve. Stay tuned for more!
| Me and the problem I’m solving | 1 | me-and-the-problem-im-solving-13d3993b016b | 2018-09-05 | 2018-09-05 11:13:23 | https://medium.com/s/story/me-and-the-problem-im-solving-13d3993b016b | false | 720 | null | null | null | null | null | null | null | null | null | Ecosystem | ecosystem | Ecosystem | 1,573 | Mandie Quartly | Geek. Mum of two. AI. High Performance Analytics. OpenPOWER. IBM Academy of Technology. Astro PhD. Views are my own. | 3c59fba7856d | mandieq | 8 | 32 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-28 | 2018-04-28 03:03:59 | 2018-04-28 | 2018-04-28 03:47:44 | 0 | false | en | 2018-08-25 | 2018-08-25 15:20:25 | 5 | 13d3bad24a75 | 1.49434 | 2 | 1 | 0 | NOTE 1 The following is an abstract submitted to Joi Ito’s “Resisting Reduction” response contest. I co-wrote it with Noelani Arista… | 3 | Making Kin With The Machines
NOTE 1 The following is an abstract submitted to Joi Ito’s “Resisting Reduction” response contest. I co-wrote it with Noelani Arista, Suzanne Kite, and Archer Pechawis. We made it through the first cut, and are now working on the full 5000 word essay.
NOTE 2 The full essay was one of the ten winners of the competition. You can read it online here.
Making Kin With The Machines
by Jason Edward Lewis, Noelani Arista, Suzanne Kite & Archer Pechawis
Prof. Ito writes: “We need to embrace the unknowability — the irreducibility — of the real world.” This response critiques Prof. Ito’s manifesto as rooted in a long history of excluding Indigenous epistemologies from the Western technocratic project, and shows how that exclusion severely handicaps the wide-ranging discussion he hopes to promote. Drawing on Indigenous understandings of non-human kin, we argue that Prof. Ito’s term ‘extended intelligence’ is too narrow. We propose rather an extended ‘circle of relationships’ that includes the non-human kin — from network daemons to robot dogs to artificial intelligences — that increasingly populate our computational biosphere.
Blackfoot philosopher Leroy Little Bear observes, “the human brain is a station on the radio dial; parked in one spot, it is deaf to all the other stations…the animals, rocks, trees, simultaneously broadcasting across the whole spectrum of sentience.” As we manufacture more machines with increasing levels of sentient-like behaviour, we must consider how such entities fit within the kin-network, and in doing so, address the stubborn Enlightenment conceit at the heart of Ito’s manifesto: that we should prioritize *human* flourishing.
Epistemologies rooted in many Indigenous languages and worldviews have always insisted that “ultimately, everything connects,” and retain protocols for understanding a kinship network that extends to animals, plants, wind, rocks, mountains and oceans. They enable us to engage in dialogue with our non-human kin, creating mutually intelligible discourses across differences in material, vibrancy, and genealogy. They insist that the human is neither the height nor the centre of creation.
This response will draw upon Hawaiian, Lakota, and Cree cultural knowledges to suggest how resisting reduction might best be realized by developing conceptual frameworks that conceive of our computational creations as kin, and acknowledge our responsibility to find a place for them in our circle of relationships. We flourish only when all of our kin flourish.
| Making Kin With The Machines | 49 | making-kin-with-the-machines-13d3bad24a75 | 2018-08-25 | 2018-08-25 15:20:25 | https://medium.com/s/story/making-kin-with-the-machines-13d3bad24a75 | false | 396 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Jason Edward Lewis | Digital poet, interactive media artist, and software designer. He co-directs the Aboriginal Territories in Cyberspace (AbTeC) research network. | daa94f784be2 | jaspernotwell | 2 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 4c5c1ab84166 | 2017-12-27 | 2017-12-27 18:26:20 | 2017-12-27 | 2017-12-27 18:33:41 | 1 | false | en | 2017-12-27 | 2017-12-27 22:27:59 | 8 | 13d4c83f71a8 | 2.415094 | 19 | 0 | 0 | This month, personal artificial intelligence company ObEN announced plans to build a Blockchain Research Lab with the Qtum Foundation… | 5 | ObEN and Qtum Join Forces to Build Blockchain Lab to Nurture Innovation and Research
ObEN and Qtum to build blockchain lab
This month, personal artificial intelligence company ObEN announced plans to build a Blockchain Research Lab with the Qtum Foundation, creators of the Qtum open-source blockchain project. Earlier this year, ObEN closed $13.7 million in Series A funding from investors including Tencent, Softbank and Rushan Capital. ObEN believes that every person should have their own Personal AI (PAI) that is owned and controlled by the individual and secured on the blockchain, and is adopting Project PAI’s blockchain protocol to deliver the future of artificial intelligence on the PAI blockchain.
Project PAI is currently developing an open source PAI blockchain protocol (PAIchain). The PAIchain is a fork of the Bitcoin blockchain protocol, consisting of three modules: a validation layer, an intelligent network layer, and a data layer. Its core module is an intelligent network. PAIchain applies blockchain calculations to artificial intelligence parallel computing, turning calculations into productive contributions to the AI network. This lowers the cost of refining artificial intelligence algorithms, increases the efficiency of calculations, and turns power once used on “wasted” calculations toward useful ends. ObEN, for example, builds life-like intelligent avatars using artificial intelligence. On the blockchain, the artificial intelligence computations required to generate the avatars are assigned to PAIchain miners via smart contracts in a process similar to Bitcoin mining.
ObEN CEO and cofounder Nikhil Jain and his PAI
As Adam Zheng, COO and co-founder of ObEN, explained, “both artificial intelligence and blockchain should be seen as a whole, in which the blockchain serves as the underlying structure, with the AI serving as neural networks within the blockchain structure.” Data on the blockchain benefits from decentralization and security, but much of the energy expended on the calculations that help build the blockchain is wasted. If this processing power is instead directed toward practical ends, say, processing data that can help refine artificial intelligence algorithms, then the blockchain becomes far more useful. At the same time, artificial intelligence requires large amounts of data to become more refined and accurate. Without the blockchain, such huge amounts of data can only be collected by large corporations or third parties that buy and sell user data, offering minimal benefits to the users whose data are being collected. On the blockchain, where users have more control over their data, they can be incentivized to share data and improve the AI through compensation with coins. The PAIchain is therefore unique in that it is an asset-based blockchain: each person’s data is the individual’s asset, and the PAI trained on that data is an asset as well. The more PAI interactions there are, the better and smarter all the PAIs on the blockchain become.
Development of Project PAI’s blockchain is led by Alex Waters. Waters is well known in the crypto community, having been involved since 2010 in the development of Bitcoin Core, the source code originally authored by Satoshi Nakamoto. He was also a core team member on the original Bitcoin codebase and founded both Coin.co and Bitcoin Validation.
Project PAI’s blockchain is an open source protocol, enabling companies to develop any number of AI focused applications on the PAIchain. ObEN is one of several companies currently working with Project PAI to build on the PAIchain, and plans to release its first commercially available app on the PAIchain in early 2018.
Join the Project PAI Community
You can get the latest news and updates by signing up here. Follow Project PAI on Twitter and Facebook, or join in on the conversation via Telegram.
| ObEN and Qtum Join Forces to Build Blockchain Lab to Nurture Innovation and Research | 147 | oben-and-qtum-join-forces-to-build-blockchain-lab-to-nurture-innovation-and-research-13d4c83f71a8 | 2018-05-18 | 2018-05-18 08:42:26 | https://medium.com/s/story/oben-and-qtum-join-forces-to-build-blockchain-lab-to-nurture-innovation-and-research-13d4c83f71a8 | false | 587 | Enabling everyone to create, own, and manage their own Personal Artificial Intelligence (PAI) on the PAI blockchain protocol. #PAIforALL Telegram: https://t.me/projectpai | null | projectpai | null | Project PAI | project-pai | BLOCKCHAIN,ARTIFICIAL INTELLIGENCE,ICO,BITCOIN,CRYPTOCURRENCY INVESTMENT | projectpai | Blockchain | blockchain | Blockchain | 265,164 | ObEN | Enabling every person in the world to create, own and manage their Personal AI. Tencent, Softbank Ventures Korea & HTC Vive X portfolio co. | 921634f7855c | ObenAI | 173 | 23 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 81caadca586f | 2017-10-12 | 2017-10-12 13:57:20 | 2017-10-12 | 2017-10-12 16:52:57 | 4 | false | en | 2017-10-13 | 2017-10-13 17:24:20 | 8 | 13d76021c68f | 6.032075 | 12 | 0 | 0 | Why Ettore Sottsass’s saw his iconic Valentine typewriter as “a mistake.” | 5 | How One Red Typewriter Challenged Conventional Design
Why Ettore Sottsass saw his iconic Valentine typewriter as “a mistake.”
Valentine | Michaela Pointon / © Culture Trip
On Valentine’s Day of 1969, the Italian manufacturing company Olivetti released a red typewriter designed by Ettore Sottsass that would become a fashion statement, a pop object, the ultimate ‘work from home’ tool. It was radical, colorful, rebellious. But it was considered a failure by its creator.
Sottsass’s designs exist in the realm of playful, colorful rebellion against the modernist axiom form follows function. “Modernism was cold and cerebral. It was this mass produced, homogeneous thing,” says Christian Larsen, curator of The Met Breuer’s latest exhibition, Ettore Sottsass: Design Radical. “Sottsass gave it a heart and soul.”
Recruited as a design consultant in 1957 by Adriano Olivetti’s office machinery company, Sottsass took on an initial landmark project: the Elea 9003, the first all-transistor mainframe computer. From there, he went on to design the Valentine (with Perry King), which he intended to be a super inexpensive, egalitarian machine, meant for work outside the office. It appealed to youth culture — in the same way Apple products appeal to millennials today — and transformed a bulky stationary machine into a pop artifact.
The Valentine typewriter could easily be written off as a marketing ploy — with its bright, cheeky design and provocative advertising campaign. But its influence, now long embedded and subsumed in the cultural and corporate landscape, runs deeper than aesthetics.
The design
A playful anthropomorphization permeated Sottsass’s work; he loved to imbue his creations with quirky, human characteristics. He chose bright red so as to “not remind anyone of the monotonous working hours,” and orange scroll caps that resemble nipples and eyes.
He called his red creation an “unpretentious toy…to keep amateur poets company on quiet Sundays in the country or to provide a highly colored object on a table in a studio apartment.” Sottsass “transcended the sameness of typewriter design,” says Christian Larsen, by giving the Valentine an endearing personality.
The intention was to create an inexpensive, non-precious, portable object, almost like a Bic pen. Sottsass’s initial design eliminated all lowercase letters and even the bell, and used a cheaper grade of plastic to lower production and engineering costs. It was a utilitarian machine made for anywhere — except the office.
The Valentine typewriter on exhibit at The Met Breuer’s Ettore Sottsass: Design Radical | © Amber C. Snider
Even the plastic carrying case was revolutionary, with its matte finish and durability, making it almost as important as the machine itself. “The fact that so much attention went into the case as the typewriter [was radical],” remarks art historian Deborah Goldberg in an interview. “The case…looks like leather but it’s actually plastic. The handle of the typewriter actually went through the case, so it was really intermixed in the design.”
Still, Sottsass called the final product “a mistake.” Why would such an iconic typewriter, one whose design influenced the creation of Apple’s candy-colored iMacs, ever be considered a failure by its own designer?
The Mistake
“I worked sixty years of my life, and it seems the only thing I did is this fucking red machine. And it came out a mistake. It was supposed to be a very inexpensive portable, to sell in the market, like pens…Then the people at Olivetti said you cannot sell this.” — Ettore Sottsass, 1993
If such a thing as a truly egalitarian machine exists, Sottsass’s design came pretty close to the ideal. Using only capital letters and eliminating the bell was a strange, radical idea, but it would also lower production and engineering costs enough to make the Valentine affordable.
Casting the Valentine as a precursor to the candy-colored iMac is easy. But Christian Larsen likens the machine to the “One Laptop Per Child” initiative founded in 2005, because it reinforces Sottsass’s vision of driving the cost of the typewriter/laptop low enough for everyone to afford. Everyone, including underprivileged children in underdeveloped countries. The Valentine was supposed to be a mass-produced object, a design for the everyman.
But his boss, Olivetti, wouldn’t allow it.
Olivetti reportedly told Sottsass that he didn’t want a “cheap Chinese thing” manufactured by his Italian company. (This comment was actually omitted from The Met Breuer’s exhibition after Larsen’s editors deemed it politically incorrect.) “We struck [the last line of the quote. But] I actually think it’s a very interesting statement to have been made in 1969, that those perceptions of mass produced Chinese products were considered cheap, in terms of quality and price,” says Larsen.
Sottsass’s disappointment in the final product was not well known, and it took Larsen months of digging to unearth this fact as he put the show together.
Valentine | Michaela Pointon / © Culture Trip
Sex appeal
Despite this tension and creative disappointment, the Valentine typewriter still rose to become a multi-faceted object — a tool for remote work, a fashion object, an accessory. Brigitte Bardot, Richard Burton, and Elizabeth Taylor were all photographed carrying the little red typewriter as a carry-on bag during their travels, making it an iconic statement, a coveted good. “There was a perception [around it],” says Larsen. “The chic, the elite, and cool would be into this machine because it was very much with its time.” Even Audrey Hepburn swapped the little black dress for a red Valentine.
The massive advertising campaign featured the Valentine in unusual settings, including the Acropolis, an airplane cockpit, and a beach, further solidifying its status as the ultimate portable “work from home” machine. But the typewriter also subliminally fetishized the idea of office equipment.
One ad featured a woman seductively squatting before the Valentine in a chic ’60s dress, while her male boss or co-worker peers over her shoulder. Another ad features a girl in a bikini typing away on the beach, or what could be the surface of the moon. Sottsass’s anthropomorphization was taken to the next level by subverting the boring old typewriter as a sexualized object.
But it wasn’t necessarily intended to become a statement of luxury or sex, by any means — at least not for Sottsass.
Mass production equals homogenization
If the typewriter was intended to be this egalitarian machine, what about Sottsass’s ambivalence towards the idea of mass production? “Mass production has that democratic impact. It can improve the lives of a great many number of people sheerly through technological advance and making something at a cheap price point,” says Larsen. “The problem with mass production is that you get sameness.”
The Valentine: the ultimate ‘work from home’ machine | Michaela Pointon / © Culture Trip
“The same thing is being produced a hundred thousands times over, so everyone is getting the same good. Sottsass felt that sameness was alienating and that led to a culture of homogenization. It also was sort of soulless,” says Larsen. “When you mass produce, you edit out the individuality of the design.”
Rebellion
The Valentine was the first of its kind to give users the freedom to work remotely, a notion that went against the grain of the existing corporate fabric and that, despite the ubiquity of the modern laptop, is still not common practice in the U.S.
Sottsass’s own work choices mirrored this creative rebellion: Once his work caught Olivetti’s eye in 1957, he was brought on as a design consultant, not a full-time employee. This allowed Sottsass a kind of autonomy and creative freedom, a separation from the corporate structure that perhaps allowed him to revolutionize the tools, or the mechanisms of power.
Sottsass’s influence on corporate design partially stemmed from his reaction to the post-war mass production industry boom he witnessed in the United States. “Sottsass was really rebelling against the industry as a whole,” says Larsen. “The other major company that was producing typewriters was IBM, and of course, IBM didn’t come in any color besides black. So they really didn’t experiment much with the object.”
“He wanted autonomy,” continues Larsen. “He didn’t want to be tied to one company. He was way too interested in various other pursuits. He still thought of himself as a painter at this time. He couldn’t decide whether he was an architect, a designer — all of these things interested him and so he wanted a studio where he could be free to pursue all of these interests. He didn’t want to be tied to one particular company. And that’s rebellious. That is resisting.”
The red Valentine typewriter is currently on display at The Met Breuer’s Ettore Sottsass: Design Radical, curated by Christian Larsen, until October 8, 2017 in New York City.
A version of this article originally appeared on Culture Trip where more of Amber Snider’s work can be read.
| How One Red Typewriter Challenged Conventional Design | 383 | how-one-red-typewriter-challenged-conventional-design-13d76021c68f | 2018-02-02 | 2018-02-02 01:19:19 | https://medium.com/s/story/how-one-red-typewriter-challenged-conventional-design-13d76021c68f | false | 1,413 | Prime cuts and hot takes on arts and leisure from the editors of Culture Trip. | null | culturetrip | null | The Omnivore | null | the-omnivore | LIFESTYLE,LEISURE,CUISINE,ARTS AND CULTURE,TRAVEL | CultureTrip | Design | design | Design | 186,228 | Amber C. Snider | Home & design editor @CultureTrip | MA in Liberal Studies/Women’s Studies | BA English/Creative Writing | dd5d02526e9f | AmberCSnider | 246 | 47 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-14 | 2018-05-14 15:38:20 | 2018-05-15 | 2018-05-15 00:05:06 | 7 | false | zh | 2018-05-15 | 2018-05-15 03:30:59 | 15 | 13d7ce86ddc3 | 11.53 | 0 | 0 | 0 | Dear DeepBrain Chain global community members: | 3 |
DeepBrain Chain Weekly Report No. 11 (05.07–05.13)
Dear DeepBrain Chain global community members:
Thank you for your attention! DeepBrain Chain will keep sharing the team’s latest progress in technology development, market development, community operations and more. Here are the contents of this week’s report:
1. Technology Progress
Last week, the R&D team focused on code review, code optimization and AI training task debugging for the Iteration 1 release, and resolved difficult issues. At the same time, to improve code quality, engineers were assigned to review the low-level code of Iteration 0.
Architecture design
Reviewed candidate concepts for data encryption security;
Analyzed the network threat model and candidate network security concepts in a blockchain environment; a first draft will be completed in mid-May;
Testing
Completed the third round of testing for Iteration 0, which focused on abnormal performance testing, memory leak testing and abnormal stress testing. Currently, a single ordinary PC on the Windows platform can support 110,000+ concurrent connections;
Started the first round of testing for Iteration 1;
Development
Optimized the capture of logging exceptions when handle resources exceed the system limit;
Optimized the capture of the hangup signal;
Solved the compatibility issue of the Docker container RESTful API version upgrade;
Solved the blocking issue in downloading data from decentralized storage;
Optimized the inter-node data synchronization code;
Optimized the node initialization log messages;
Optimized the build script for the Mac platform; the platform and business layers are now separated into different lib libraries;
Set up seed nodes for the test network.
2. Market Development
North America
Last week, Dr. Dongyan Wang, Dean of the DeepBrain Chain Silicon Valley AI Research Institute and Chief AI Officer, attended events in North America, where DeepBrain Chain became a focus of attention.
Dr. Dongyan Wang delivered a keynote speech at GDIS
On May 10, the Silicon Valley Business Institute released Dr. Dongyan Wang’s DeepBrain Chain keynote from the GDIS (Global Disruptive Innovation Summit).
In the 20-minute keynote, Dr. Wang started from his work experience at Cisco and shared his views on the AI industry. He said that for startup teams and companies without native data, getting into the AI business is very expensive; meanwhile, in the blockchain industry, mainstream PoW-based projects such as Bitcoin have been widely criticized for wasting huge amounts of resources and computing power. DeepBrain Chain innovatively combines artificial intelligence with blockchain mining, aiming to solve the pain points of excessive AI computing costs and wasted resources in blockchain mining.
GDIS: https://www.youtube.com/watch?v=aILWaHjAUY8&feature=youtu.be+j
Southeast Asia
DeepBrain Chain attended a blockchain conference in Indonesia
The Block Jakarta 2018 Indonesia blockchain conference opened on May 9, 2018 at The Ritz-Carlton hotel in Jakarta, Indonesia. The conference brought together leaders of world-renowned companies as well as Indodax, Indonesia’s largest local exchange; members of relevant Indonesian government departments also took part in organizing the event. The conference mainly discussed blockchain technology development and local government regulation of blockchain. Eric, Marketing Director of DeepBrain Chain (DBC), was invited to speak as a keynote guest on behalf of DBC, introducing and analyzing the current relationship between AI and blockchain and the industry’s pain points. At the conference he also gave a detailed account of the practical commercial and industrial value of the DBC “AI x Blockchain” model when combined with traditional industries, the question attending companies from every field cared about most.
3. Ecosystem Development
OneGame, built on the DeepBrain Chain AI public chain, makes its first release
OneGame, a decentralized virtual world built on the DeepBrain Chain AI public chain, was officially released, with media outlets such as Jinse Finance and Tuoniao Blockchain racing to report it. OneGame uses deep learning to help users generate new models and scenes; uses reinforcement learning to continuously train and improve the in-game intelligence of the platform’s non-player characters; and uses genetic algorithms, applying combinations of genes, to let the platform keep evolving on its own. DeepBrain Chain’s powerful platform provides OneGame with the computing resources and the AI algorithm and data support it needs. By giving players simple but powerful game editing tools, OneGame lets anyone without computer programming knowledge feel the thrill of creating a world, organizing a self-governing, self-evolving virtual world that offers players an unprecedented gaming experience. AI algorithms are the core competitive strength of the OneGame platform, and in this area the team has launched several collaborations with DeepBrain Chain.
4. Media Coverage
Yahoo Finance spotlights DeepBrain Chain’s “AI + blockchain” platform
Last week, the world-renowned financial media outlet Yahoo Finance published an article titled “DeepBrain Chain, the first artificial intelligence computing platform driven by blockchain”, noting that DeepBrain Chain is the first AI computing platform driven by blockchain, an innovative move that fuses AI and digital currency.
DeepBrain Chain, the First Artificial Intelligence Computing Platform Driven by Blockchain
DeepBrain Chain is an Artificial Intelligence Computing Platform driven by blockchain. The DBC project is for global AI…finance.yahoo.com
Inc., Coin Journal and other media outlets rush to report
In its article “A.I. Is Awesome, Blockchain Is a Powerhouse”, Inc.com quoted Dr. Dongyan Wang’s insights on AI development and noted that DeepBrain Chain, as a pioneer of “AI + blockchain”, is setting a new industry milestone.
A.I. Is Awesome, Blockchain Is a Powerhouse. But Here's What Combining Them Could Do
You know artificial intelligence from your Alexa, Tesla or Netflix account. And now Blockchain is generating serious…www.inc.com
Coin Journal, a well-known American blockchain media outlet, published an article titled “DeepBrain Chain To Launch AI, Blockchain Research Center In Silicon Valley”, quoting Dr. Wang as saying that DeepBrain Chain will build an “AI + blockchain” ecosystem that, by securely sharing computing power, AI models and data on the blockchain, greatly lowers the entry barrier and cost of AI applications.
DeepBrain Chain To Launch AI, Blockchain Research Center In Silicon Valley
The DeepBrain Chain Foundation, the organization overseeing the DeepBrain Chain artificial intelligence (AI) computing…coinjournal.net
In addition, Bitswin and several other well-known media outlets also reported the latest DeepBrain Chain news.
Bitswin publishes an in-depth review of DeepBrain Chain
This was an all-round deep dive into DeepBrain Chain, presented as a video plus slides with subtitles. The report starts from what AI is, then covers the AI industry’s pain points, how blockchain solves them, the project team’s background, the mining mechanism, the trading marketplace, the underlying framework, throughput, the consensus mechanism and overseas community development.
English video: https://drive.google.com/open?id=153YX8mDUQyQzpPGB67ZJUg3FEOCdCm9s
Chinese version: https://mp.weixin.qq.com/s/f4T29_OhvN3AxfVYWWs8Zw
Dozens of Chinese media outlets report on the DeepBrain Chain AI mining machine
News of the DeepBrain Chain AI mining machine caused a sensation in China, with dozens of Chinese media outlets, including Sohu, Jiemian, Jinse Finance, Gongxiang Finance, GeekPark and Yesky, racing to cover it.
5. KOL Attention
Last week, DeepBrain Chain CEO Yong He and Dr. Dongyan Wang, Dean of the Silicon Valley AI Research Institute and Chief AI Officer, were interviewed by several KOLs from the tech and blockchain industries.
DeepBrain Chain founder and CEO Yong He joins a YouTube interview
On May 9, in a live YouTube interview, DeepBrain Chain founder and CEO Yong He presented the DeepBrain Chain AI mining machine plan to the community. The AI mining machine is an important part of the DeepBrain Chain ecosystem; it converts the wasteful computation of blockchain mining into efficient AI deep learning, machine learning training and blockchain mining computation. For participants, joining the DeepBrain Chain mainnet and “mining” with an AI mining machine makes it possible to run AI deep learning and machine learning training at lower cost while earning DBC. For the DeepBrain Chain ecosystem, it aggregates the scattered AI computing power of the blockchain network, which not only helps AI companies cut computing costs; as the number of participating nodes grows, it will keep strengthening the ecosystem’s value and let every participant share in the AI ecosystem.
Dr. Wang interviewed by YouTube opinion leader Crypto Beadles
On May 11, Dr. Wang was interviewed by YouTube personality Crypto Beadles. Dr. Wang said he will lead the team in the following research: large-scale parallel training of neural networks, efficient federated learning of deep neural networks, reducing the energy consumption of the DeepBrain Chain network through reinforcement learning, the integration of AI and blockchain, and distributed killer AI applications.
Within one day the interview passed 133,000 views and received more than 3,500 likes. With further promotion across major communities worldwide, the market has shown great enthusiasm for DeepBrain Chain, which has earned high praise from the blockchain and cryptocurrency markets.
Dr. Wang interviewed by YouTube personality Brad
In addition, in last week’s interview with Brad, Dr. Dongyan Wang explained the feasibility of bringing DeepBrain Chain’s technology to the ground, saying the DeepBrain Chain ecosystem will be advanced along three major directions: AI hardware (the AI mining machine), AI software and AI applications.
6. Community Operations
Mother’s Day thanks
With Mother’s Day approaching, DeepBrain Chain sent a letter of thanks to community volunteers to sincerely express its gratitude. DeepBrain Chain’s rapid growth would not be possible without the community’s firm support, and even less so without the selfless dedication of early volunteers. It is precisely because these passionate and creative communities have stood firmly side by side with the DeepBrain Chain team that we can face future challenges with full confidence.
As of May 14, community growth is as follows:
Telegram English group: 12,183 members;
Telegram Korean group: 683 members;
Telegram Vietnamese group: 1,072 members;
Telegram Indonesian group: 1,744 members;
Telegram Thai group: 2,147 members;
Twitter followers: 32,670;
Reddit community: 7,971 members;
Facebook Page fans: 508;
We have opened the Telegram AI miner group @DeepBrainChainAIminers and look forward to your joining!
7. Recruitment
Our team keeps growing. Our R&D engineers all have 15+ years of architecture experience and come from companies such as BAT, Huawei, NetEase, China Mobile and Ericsson; they have successfully completed the overall architecture design of Iteration 1. We are hiring blockchain development engineers, smart contract engineers, virtual machine development engineers, security engineers (penetration testing), distributed storage development engineers and Java development engineers. External referrals are welcome: a referral that leads to a successful hire will be rewarded with the equivalent of RMB 20,000.
Contact email: [email protected]. Please mark the subject as a resume referral and include the referrer’s contact information so that we can pay out the reward afterwards.
Chinese H5: http://u6716916.viewer.maka.im/k/RYFDRGSY?from=singlemessage&isappinstalled=0
8. Upcoming Events
Russian Blockchain Week
From May 21 to May 25, the DeepBrain Chain team will take part in Russian Blockchain Week. Over the five-day event, more than 1,500 industry guests will attend, 70+ blockchain experts will give talks on the cutting edge, and six in-depth themed days will dig into global development trends. DeepBrain Chain will deliver a keynote speech at the event.
Russian Blockchain Week 2018
Development, implementation and cases of blockchain-solutions for medium and large businesses. Review of current…blockchainweek.moscow
Blockchain Expo in Amsterdam, the Netherlands
From June 27 to June 28, Dr. Wang and the DeepBrain Chain team will attend Blockchain Expo in Amsterdam, the Netherlands, actively seeking cooperation with AI companies, universities and research institutions.
Blockchain Conference & Exhibition Event | Blockchain Expo Europe
Blockchain Expo Europe - 27-28 June - RAI, Amsterdam. Blockchain Conference & Exhibition exploring blockchain and…blockchain-expo.com
9. Vision
DeepBrain Chain’s vision is to use blockchain technology to connect computing power nodes around the globe and build a decentralized AI ecosystem. Over the next three years, we will invest US$100 million in our Silicon Valley lab. After the mainnet launch, the AI team will continue to research distributed training on DBC AI clusters (16–128 GPUs) (Q2’19) and distributed training across the entire network (Q4’19).
We are currently growing communities in North America, Europe, Russia, Thailand, Cambodia, Vietnam, Indonesia, the Philippines and South Korea; the next stage targets Australia, South America, India, Japan, the Middle East and other regions, with more offline events attended and hosted.
We hope to grow quickly together with you; every community member can create greater value here.
— Yours, the DeepBrain Chain team
We welcome feedback and suggestions for improving our weekly report; please contact [email protected] or DBC Twitter. See you next issue!
For more information, visit: the official website, Telegram, Twitter, Facebook, Reddit
Click here to download the whitepaper
| DeepBrain Chain Weekly Report No. 11 (05.07–05.13) | 0 | 深脑链周报第11期-05-07-05-13-13d7ce86ddc3 | 2018-05-15 | 2018-05-15 03:31:00 | https://medium.com/s/story/深脑链周报第11期-05-07-05-13-13d7ce86ddc3 | false | 283 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | DeepBrain Chain | AI Computing Platform Driven By Blockchain | 379a9e7edef2 | DeepBrain_Chain | 960 | 4 | 20,181,104 | null | null | null | null | null | null
0 | null | 0 | null | 2018-06-04 | 2018-06-04 22:37:46 | 2018-06-04 | 2018-06-04 23:04:20 | 3 | false | en | 2018-07-15 | 2018-07-15 12:08:55 | 106 | 13d93cdbcac8 | 18.029245 | 1 | 0 | 0 | How to extend NST with a second style and what are the challenges of deploying it as a single unit using TensorFlow, Flask, ASP.NET Core… | 5 | Dual Neural Style Transfer with Docker Compose
How to extend NST with a second style, and the challenges of deploying it as a single unit using TensorFlow, Flask, ASP.NET Core, Angular and Docker Compose.
Programming is easy like riding a bike. Except the bike is on fire and you’re on fire and everything is on fire and you’re actually in hell. Matic Jurglič.
The picture was generated by the application under discussion; its content is taken from Jeff Atwood, who took it from Steve McConnell’s book, Code Complete.
A live demo is available at http://nst-online.evgeniymamchenko.com.
The sources are available at https://gitlab.com/OutSorcerer/NeuralStyleTransferWeb. There you will also find the instructions how to launch the application locally with Docker Compose.
Table of Contents
How to use the application
What is neural style transfer?
How the original neural style transfer algorithm was extended
Architecture
Angular
ASP.NET Core
Flask
TensorFlow
Docker Compose
Challenges
Installing TensorFlow with GPU support on Windows
The communication between .NET Core and Python
The usage of the server-side rendering and Angular from .NET
Leaking TensorFlow graph nodes (and therefore memory)
Broken switchMap
Long response times
Possible improvements and further reading
Feed-forward neural style transfer
An arbitrary image size and proportions
Processing multiple user requests simultaneously
TensorFlow Serving
Credits
How to use the application
Click on the “Content Image” button or the placeholder below and upload a content image.
Pick a style image.
Optionally pick a second style image.
Optionally change the parameters.
Iterations. The number of optimization steps taken. The higher the number, the lower the cost and, generally, the more beautiful the result.
Content cost weight. The multiplier of the content cost term in the total cost. The higher the number, the more similar to the content image the result is.
Style cost weight. The multiplier of the style cost term in the total cost. The higher the number, the more similar to the style image the result is.
Click the “Launch” button. The transfer starts, and the resulting image and other transfer details are shown on the screen.
If your job status is “queued”, that means that the back-end is busy at the moment, but it will start processing your input as soon as all previous jobs are done.
This is how the UI looks in the middle of a style transfer when two styles (a starry sky and neon lights) are simultaneously applied to a single content image.
Please note that http://nst-online.evgeniymamchenko.com runs on just a 2-core machine, where 100 iterations of transfer take about 30 minutes. My mobile GPU, an Nvidia GeForce GTX 1050, works about 100 times faster. Unfortunately, a GPU in the cloud is too expensive right now (one of the cheapest GPU instances, with an Nvidia Tesla K80 module, costs about $270 per month).
Click the “Stop” button to abort the transfer.
What is neural style transfer?
Intuitively, it can be defined as generating a picture whose content is similar to one input image and whose style is similar to another input image.
More precisely, it is the result of iteratively optimizing a specific cost function defined on the resulting image. On each step of the optimization, we compute the gradient of that cost function with respect to each pixel of the resulting image and slightly change the image in the direction opposite to the gradient, as we always do in the gradient descent algorithm.
The interesting fact here is that, typically, when training convolutional neural networks, the images are fixed and the weights of the network are the subject of optimization. On the contrary, in the NST algorithm the weights of the CNN are fixed, while the input image is being optimized. The original paper uses a CNN with the VGG architecture pre-trained on the ImageNet dataset. This application also loads the weights of a pre-trained VGG model. Interestingly, there could be something special about the VGG architecture that makes it especially good for neural style transfer, although some people have achieved good results with other architectures too.
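The core of this inversion, treating the image rather than the weights as the trainable parameter, fits in a few lines of TensorFlow. Below is a simplified sketch, not the application’s actual code (which predates eager execution); the placeholder values and the stand-in total_cost are assumptions:

```python
import tensorflow as tf

# Placeholders for things the application defines elsewhere.
initial_image = tf.random.uniform((1, 300, 400, 3), 0.0, 255.0)
iterations = 100

def total_cost(img):
    # Stand-in for the real content + style cost described below.
    return tf.reduce_mean(img ** 2)

# The generated picture is the trainable variable; the CNN weights are frozen.
generated = tf.Variable(initial_image)
optimizer = tf.keras.optimizers.Adam(learning_rate=2.0)

for step in range(iterations):
    with tf.GradientTape() as tape:
        cost = total_cost(generated)       # content term + style term(s)
    grad = tape.gradient(cost, generated)  # d(cost) / d(each pixel)
    optimizer.apply_gradients([(grad, generated)])
    # Keep pixel values in a valid range after each update.
    generated.assign(tf.clip_by_value(generated, 0.0, 255.0))
```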
The mentioned cost function is the sum of two terms.
The first one is the content cost multiplied by its weight (a parameter that can be configured in the UI, as mentioned above). The content cost is the squared Euclidean distance (the squared L2 norm of the difference) between the values of an intermediate convolutional layer on the content image and on the resulting image, normalized by the input size.
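In TensorFlow this is essentially a one-liner. A sketch (the layer shape here is arbitrary, a_C and a_G stand for the chosen layer's activations on the content and the generated image, and the exact normalization constant differs between implementations):

import tensorflow as tf

a_C = tf.placeholder(tf.float32, shape=(1, 75, 100, 256))  # content activations
a_G = tf.placeholder(tf.float32, shape=(1, 75, 100, 256))  # generated activations

m, n_H, n_W, n_C = a_G.get_shape().as_list()
# squared L2 distance between the two activation volumes, normalized by their size
J_content = tf.reduce_sum(tf.square(a_C - a_G)) / (4 * n_H * n_W * n_C)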
The second one is the style cost multiplied by the style cost weight parameter.
Unlike the content cost, the style cost is computed on multiple layers, the style costs for each level are multiplied by their weights (which are also model parameters) and summed up to give the total style cost. In the context of this application, there are five such layers, each weight is equal to 0.2 and cannot be changed from the UI yet. The shallower layers are responsible for lower-level features like those that detect horizontal lines, vertical lines, diagonal lines and other simple geometrical shapes, while deeper layers are responsible for higher-level features like those that detect parts of objects or entire objects like eyes, flowers, cars, cats and dogs, although sometimes it is pretty hard to figure out what a particular feature detects.
For a single layer, the style cost is the squared Euclidean distance between the Gram matrices of that layer's activations (reshaped to the (number_of_channels, height*width) shape) for the resulting image and the style image.
The Gram matrix is approximately proportional to the covariance matrix (when the values are centered). Its diagonal elements are just the squared L2 norms of the corresponding channel activations reshaped as one-dimensional vectors.
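A sketch of the Gram matrix and the single-layer style cost in TensorFlow (assuming a batch of one image; the normalization constant again follows the Coursera convention):

import tensorflow as tf

def gram_matrix(a):
    # a has shape (n_C, n_H * n_W); the Gram matrix is (n_C, n_C)
    return tf.matmul(a, tf.transpose(a))

def layer_style_cost(a_S, a_G):
    m, n_H, n_W, n_C = a_G.get_shape().as_list()
    # unroll each activation volume into (number_of_channels, height * width)
    a_S = tf.reshape(tf.transpose(a_S, perm=[0, 3, 1, 2]), [n_C, n_H * n_W])
    a_G = tf.reshape(tf.transpose(a_G, perm=[0, 3, 1, 2]), [n_C, n_H * n_W])
    GS = gram_matrix(a_S)
    GG = gram_matrix(a_G)
    return tf.reduce_sum(tf.square(GS - GG)) / (4 * (n_C * n_H * n_W) ** 2)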
For a detailed explanation of neural style transfer you can see the original paper, read my source code or watch the video and complete the corresponding programming assignment from the Coursera CNN course, on which my code is based (all the videos are available for free on YouTube but Coursera subscription is required to complete the programming assignment).
How the original neural style transfer algorithm was extended
To make the application more interesting, I decided to extend the original algorithm. What if we try to apply two styles simultaneously? Surprisingly, it worked quite well: styles are not overlapping but rather being applied to different parts of the image, depending on which part is more suitable for each style. Let us call it “dual NST”.
See the example illustration above.
To implement a second style, an extra style term is added to the total cost.
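Schematically (variable names here are mine, not necessarily those in the repository):

# the original formulation has one style term; dual NST simply adds a second one
J_total = (content_cost_weight * J_content
           + style_cost_weight_1 * J_style_1
           + style_cost_weight_2 * J_style_2)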
Feel free to play with it yourself — I would appreciate it if you shared your results in the comments.
Architecture
Let us talk in more detail about the building blocks of the application.
Angular
An Angular 5 component is the face of the application. It is responsible for validating the user’s input and sending it to the server.
When a transfer job is in progress, it does polling with a one-second interval to show results in real time. rxjs is a nice tool for filtering, mapping and combining streams of asynchronous events where tasks like HTTP polling are solved in a powerful and concise way. For example, if some request was not able to complete in one second and the next request already started, it would make no sense to wait for both of them and waste connections, as the data from the previous request would be already obsolete. switchMap operator nicely solves this problem:
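// a sketch of the polling, reconstructed in the rxjs 5 style used with Angular 5
// (the endpoint and the TransferStatus type are illustrative; this.http is an
// injected HttpClient inside the Angular component)
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/interval';
import 'rxjs/add/operator/switchMap';

Observable.interval(1000)
  // if the previous request has not completed after one second, switchMap
  // unsubscribes from it and switches to the fresh request
  .switchMap(() => this.http.get<TransferStatus>(`/api/jobs/${this.jobId}`))
  .subscribe(status => (this.status = status));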
Angular 5 is my framework of choice because of my love of statically typed languages like C# or TypeScript, and also because of the nice SPA template included with the .NET Core CLI. That template is even capable of server-side rendering, which is a nice tool for SEO and for improving the user experience, as it significantly decreases the initial page load time.
The template (and this application) uses Bootstrap library which looks a bit old-fashioned in 2018, I hope the next template from Microsoft will use Angular Material which is more modern-looking.
Considering React vs. Angular, I prefer Angular because, among other things, it is a whole framework with “batteries included” experience featuring things like built-in Dependency Injection, which could be very useful for tests. So you have fewer decisions to make in the beginning, while it is still highly extendable (for example, if you would like, you could use Redux-like @ngrx/store).
ASP.NET Core
The ASP.NET Core component receives REST requests from the Angular SPA, resizes images to 400x300 and puts NST jobs in a queue. Since TensorFlow uses 100% of the CPU power, it is not practical to perform two transfers simultaneously — that is why the queue comes in handy. The ASP.NET Core component is also responsible for responding to the polling from the Angular SPA and for polling the Python back-end itself.
C# is my favorite language, so the choice of ASP.NET Core is natural for me. Microsoft is doing a great job of improving the C# language (compared with Java, C# syntax was and still is years ahead) and building great tools for developer productivity like Visual Studio and VS Code (and, thanks to JetBrains, there are also great extensions/alternatives like ReSharper or Rider). I admit that the open-source community around C# is not as productive as the Java or Python communities, but considering that the framework itself and many Microsoft libraries became open-source and cross-platform, and that Microsoft supports others in building open-source software based on .NET, everyone should take a closer look at .NET Core.
Also, using C# here is a nice example of how two micro-services written in different languages possibly by different teams could easily communicate using REST. Python is very popular for machine learning, but in other spheres, people may use Java, .NET, Node.js, etc. so this scenario is what we would often see in the real world.
Flask
Flask is a popular Python framework for building Web APIs including RESTful ones.
It wraps around the TensorFlow model which is running in a background thread while on another thread it responds to requests to start, stop or query a status of an NST job.
Even though there are the special flask_restful and flask_jsonpify packages, creating REST/JSON services is not as smooth with Flask as it is with ASP.NET Core. I believe it is not only my lack of experience with it, because some parts of my code are based on highly-ranked Stack Overflow answers, and instead of using some built-in function they suggest copy-pasting their implementations of it.
To be more clear here is an example of parsing of HTTP request body in JSON format:
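# a reconstruction of the Stack-Overflow-style helper (the original snippet was
# a code block lost in scraping): a decorator that parses the JSON body for you
from functools import wraps
from flask import Flask, request, abort, jsonify

app = Flask(__name__)

def json_body(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        body = request.get_json(silent=True)
        if body is None:
            abort(400)
        return f(body, *args, **kwargs)
    return wrapper

@app.route('/transfer', methods=['POST'])
@json_body
def start_transfer(body):
    # the real handler would enqueue an NST job here
    return jsonify(received=body)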
Looks nice, but what if a decorated function had more arguments? I could rewrite the code, of course, but the point is that there should be a built-in, generic solution out of the box, in contrast to making users copy-paste code that performs very basic tasks.
For comparison, the corresponding place in ASP.NET Core does not use any custom code and looks like this:
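// the corresponding ASP.NET Core action (names are illustrative, and the method
// lives inside a regular MVC controller): model binding parses the JSON body
// into a typed object with no hand-written plumbing
[HttpPost("transfer")]
public IActionResult StartTransfer([FromBody] TransferParameters parameters)
{
    _jobQueue.Enqueue(parameters);
    return Ok();
}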
I would appreciate it if someone could recommend a nicer replacement for Flask for building REST services in Python.
TensorFlow
TensorFlow is an open-source software library for numerical computation using data flow graphs. It has a built-in gradient computation, many supported operations from simple matrix addition or multiplication to pre-implemented convolution or sampling layers, optimizers from GradientDescentOptimizer to AdamOptimizer, ability to run on GPUs and TPUs and many more which makes it one of the most popular tools for building neural network models.
The program starts with loading weights of the pre-trained VGG network and building a computational graph. A nice thing is that a single graph and a single TensorFlow session can be used for handling different user inputs which makes initialization time much faster.
Since initialization and training take significant time, but we want to keep a user up to date by responding to HTTP requests, TensorFlow code works in a separate thread.
This code is based on the assignment Art Generation with Neural Style Transfer from Andrew Ng’s course, which is a part of the Deep Learning specialization.
My changes include the support for a second style image that I described above and various performance improvements that I am going to describe below.
Docker Compose
Docker is containerization software that provides an immutable environment, which helps a lot with making deployment predictable and reducing the time spent on it. That is useful both for the Python and the .NET Core / Angular parts of the application. They are both wrapped into Docker containers.
Docker Compose is, in turn, a tool to run multi-container applications. One of its abilities is virtual networks where we can put our services so that they are visible to each other, but not to the outside world. In this example, the Python container should not communicate with a user directly so it does not publish any ports to outside and it can only receive requests from the .NET Core container, which, on the contrary, publishes port 80 to receive user requests.
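A minimal sketch of such a file (service names are mine, not necessarily those used in the repository):

version: '2.3'
services:
  web:                      # ASP.NET Core + Angular
    build: ./web
    ports:
      - "80:80"             # the only port published to the outside world
    networks:
      - internal
  model:                    # Flask + TensorFlow
    build: ./model
    networks:
      - internal            # reachable from 'web' as http://model, not from outside
networks:
  internal: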
With Docker and Docker Compose you can launch this application just in minutes without spending much time on environment preparation (which might be a tedious task). You will find the detailed instructions in the README file of the corresponding GitLab repository.
Challenges
It was fun to work on this project. But, unfortunately, sometimes your code fails; it may not be your fault, but it is your responsibility to make everything work. There were plenty of moments when something was not working and I had no idea why — that is the reason I chose a Coding Horror illustration for this post. But it was all the more satisfying to finally figure everything out.
Installing TensorFlow with GPU support on Windows
TensorFlow itself installs easily by following the official instructions, with just a pip install, but Nvidia could definitely do better at deploying their — seriously great — software to an end user.
CUDA 9.0 (with tensorflow-gpu 1.8.0 package you need exactly CUDA 9.0 version, not the most recent one) itself is wrapped in a nice installer, but unfortunately it fails to install Visual Studio integration module and does not even try to install other modules afterwards even though they do not depend on the VS integration.
Luckily, they have a forum where the user oregonduckman posted his workaround. I was even able to simplify it a bit and also contributed to that forum thread. The solution was surprisingly simple: install everything except VS integration, unzip installer file as an archive and manually launch executables in the VS integration folder. Another unobvious step was that you should not install the latest GPU driver, instead, you may need to remove existing Nvidia GPU driver by replacing it with a generic one prior to the CUDA installation. Also, see the official installation instructions.
The cuDNN installation is even more of a shame, since here you have to copy some files manually from the unzipped “installer” and manually set some environment variables.
I hope I live to see the day when NVidia finally makes a proper installer or maybe even starts using a Windows package manager like Chocolatey.
Docker could help here a lot, but unfortunately using GPU from Docker is not possible on Windows right now. Although I could imagine it working in a bright future, as the answer to the question “Why is it not possible?” looks promising: “No not possible, we would need GPU passthrough (i.e DDA) which is only available in Windows server 2016.” If it worked on Windows Server, it could come to other editions of Windows as well.
The communication between .NET Core and Python
My first approach to this was launching a Python process from .NET Core with System.Diagnostics.Process.Start. At first it looked like a nice, simple and cross-platform way, but it had a number of disadvantages.
How is .NET Core supposed to pass parameters and input images to Python? How should Python pass resulting images and costs back to .NET Core? On Windows and Linux there are various ways of inter-process communication, like named pipes or sockets, but they are not cross-platform. So initially I chose a common temporary folder as the means of communication. The .NET Core process was just placing input files there and passing transfer parameters, like the iterations count, as command-line arguments to the Python process. The Python process, in turn, was writing result files to that folder, and the .NET Core process was subscribing to changes in that folder with FileSystemWatcher.
A big disadvantage was that the 500Mb weights of the pre-trained VGG network were reloading from disk to memory for each transfer job. TensorFlow graph was also rebuilding from scratch each time. All that was resulting in initialization time of about one minute.
The solution was to make the Python process long-running and communicate with it via REST with the help of the Flask framework. So the weights are now loaded just once, the graph is built just once and, as a result, the request initialization time on GPU went from one minute to thirty seconds.
Since the traffic between .NET Core and Python components is quite small (about 300 kB per second) HTTP/JSON is fine for this use-case. In case that would become a bottleneck something like WebSocket and a binary serialization protocol like Protocol Buffers could be used. Another alternative is Google’s gRPC.
The usage of the server-side rendering and Angular from .NET
I mentioned above that I used a nice Angular 5 template for .NET Core. The issue is that in the current (2.0) version of the .NET Core CLI and Visual Studio this template is missing; there is an Angular 4 template instead.
I foolishly tried to update it manually to Angular 5, that initially worked fine. But as soon as I started to build Docker images, that included building the Angular 5 application in production mode with SSR, it broke. It turned out that there were breaking changes in SSR from Angular 4 to Angular 5.
I was already choosing between giving up SSR or giving up Angular 5 when I luckily found that Microsoft had created a new SPA template with support for both Angular 5 and SSR (although SSR is not turned on by default). The catch was that the template was still in beta and it had to be installed manually with
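# a sketch from memory — the exact pre-release version tag may differ
dotnet new --install Microsoft.DotNet.Web.Spa.ProjectTemplates::2.0.0-rc1-final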
and used by
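dotnet new angular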
When .NET Core SDK 2.1 is released, that will not be required anymore.
Leaking TensorFlow graph nodes (and therefore memory)
The original implementation of NST from the Deep Learning specialization on Coursera worked nicely when it was launched just once or twice from a Jupyter notebook. But, as it turned out, it was not production-ready at all.
The first thing that caught my eye was that the content cost and style cost nodes of the TensorFlow computation graph were not reused but created anew for each input image:
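# the pattern, roughly: a_C arrives as a NumPy array (activations computed with
# sess.run), so TensorFlow wraps it in a brand-new constant node on every call
def compute_content_cost(a_C, a_G):
    m, n_H, n_W, n_C = a_G.get_shape().as_list()
    return tf.reduce_sum(tf.square(a_C - a_G)) / (4 * n_H * n_W * n_C)

J_content = compute_content_cost(a_C_numpy, a_G)  # executed for every request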
It was rewritten as:
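# the target activations now live in a non-trainable variable, so the very same
# cost node is reused — only the variable's value changes between requests
m, n_H, n_W, n_C = a_G.get_shape().as_list()
a_C_var = tf.Variable(tf.zeros(a_G.get_shape()), trainable=False)
J_content = tf.reduce_sum(tf.square(a_C_var - a_G)) / (4 * n_H * n_W * n_C)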
The same was done for the style cost, the content cost weight, the style cost weight and the total cost.
That made it possible to reuse their graph nodes just by assigning new values to the corresponding variables. It was done intuitively, in an attempt to cache whatever could be cached, considering that the initialization time at that point was too long. It turned out to be a step in the right direction, but a serious problem still remained.
I also realized that a session does not have to be created from scratch for each transfer — a single session can be reused.
Everything seemed to be perfect, I deployed the application to a Google Cloud instance and started to test it more intensively. And then I faced “out of memory” errors. At first, I thought that it is just a peculiarity of the Python/TensorFlow memory management and it can be solved just by increasing the instance memory but that just postponed the error, not fixed it entirely. I looked at the TensorFlow process memory consumption and saw that it was steadily growing.
Long story short, the reason was leaking TensorFlow computation graph nodes, specifically assign nodes.
Unlike an assign operation in C++ or Python, an assign operation in TensorFlow is just another graph node. Using these operations to set variables to new inputs was adding more and more nodes to the graph, which was causing the “out of memory” errors.
More accurately, the assign operation itself is not consuming memory, but it implicitly creates a constant node with an assigning value.
By the way, there is a nice way to validate that your program is free of this kind of bug: add tf.get_default_graph().finalize() immediately before the training loop. It is not done automatically only because it would break a huge amount of existing code. But maybe it would be a good thing...
So, instead of:
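# every call adds a fresh assign node (plus a constant holding the whole image)
# to the graph — this is the leak
sess.run(tf.assign(input_variable, generated_image))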
there must be:
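# a placeholder and one reusable assign operation (shapes are illustrative)
input_placeholder = tf.placeholder(tf.float32, shape=(1, 300, 400, 3))
assign_input = tf.assign(input_variable, input_placeholder)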
during initialization, and then for each request:
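# the graph does not grow anymore — only the fed value changes
sess.run(assign_input, feed_dict={input_placeholder: generated_image})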
You must also remove:
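sess.run(tf.global_variables_initializer())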
Because that would also try to initialize variables like input_variable without defining the corresponding placeholder values which would cause an error. Each variable must be initialized manually instead.
You will most likely also have to initialize the variables implicitly created by the optimizer. It can be done like:
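# one way to do it (a sketch): initialize only the slots Adam created
optimizer = tf.train.AdamOptimizer(learning_rate)
train_step = optimizer.minimize(J_total, var_list=[input_variable])
sess.run(tf.variables_initializer(optimizer.variables()))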
That fix also sped up the application significantly. A second run with a GPU started to initialize in just a few seconds instead of half a minute.
Here is a post on KDnuggets with more typical problems in TensorFlow graphs.
Broken switchMap
switchMap is a nice operator — except it did not work. When I opened the network tab in the Chrome debugging tools, I was shocked to see that requests were not cancelled when they took more than one second. Instead, they ran indefinitely and, even worse, they piled up; since Chrome executes only a limited number of concurrent requests, the pending time for each new request kept growing.
So why might switchMap not work? The fix, as it turns out, is simple: you just need to replace
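// (reconstructed rxjs 5 imports — the original snippets were lost in scraping)
import 'rxjs/Rx';  // the blanket import that pulls in and patches everything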
with
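import { Observable } from 'rxjs/Observable';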
and it starts to work. In exchange, you now have to explicitly import every rxjs operator that was previously imported automatically, like:
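import 'rxjs/add/observable/interval';
import 'rxjs/add/operator/switchMap';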
Why does it help? I do not know, but the good thing is that they fixed it in version 6.0. Thanks to Airblader, who created the GitHub issue where I found this.
Long response times
But why were HTTP requests taking so long for the server to handle in the first place? Responses were just about 300 kB, so it was not the Internet speed.
The reason was that the TensorFlow thread was using 100% CPU, so there were not enough resources for the Flask thread and for the ASP.NET Core process.
Another consequence was weird exceptions from ASP.NET Core application:
Luckily Docker Compose file format has a solution for that:
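# a fragment (service names are illustrative; build/image keys omitted)
version: '2.3'
services:
  model:
    cpus: 1.5         # hard cap on the number of cores the service can occupy
    cpu_shares: 512   # relative weight, applied only under CPU contention
  web:
    cpu_shares: 1024  # twice the weight of 'model' when cores are scarce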
cpus sets the maximum number of cores that a service can occupy (in this case there was a two-core instance).
cpu_shares sets a weight of a service, which takes effect only during the moments when CPU resources are limited.
I was confused at first by the fact that those settings were removed in the version 3 of the Compose file. The reason is that the version 3 is mainly for running stacks of containers in Docker Swarm, which has its own way of limiting CPU usage, while the version 2 is for good old Docker Compose. And the version 2.3 is not so old as it was introduced almost at the same time as 3.4.
To make it even easier for the server, I started to send the resulting image only when it changes. On a two-core instance that happens once in about 15 seconds, and the image size is about 300 kB, while for the rest of the polling responses, which are performed each second, the payload size is just a few hundred bytes.
Possible improvements and further reading
Feed-forward neural style transfer
The Deep Learning field is developing incredibly fast, and the original neural style transfer paper, A Neural Algorithm of Artistic Style from September 2, 2015, has already become obsolete in terms of the implementation details of the style transfer idea (while the idea itself is still relevant; moreover, it has had a huge impact even outside of the scientific community).
One major breakthrough was a follow-up paper that introduced a fast feed-forward method of neural style transfer, Texture Networks: Feed-forward Synthesis of Textures and Stylized Images from March 10, 2016. That method involves just a single forward propagation through a neural network instead of an iterative process, and thus it is a few orders of magnitude faster. The trade-off is that a network must be trained in advance for each style image, and that training process is even slower than the original style transfer's iterative process. You can try that algorithm online.
Another paper that proposed a feed-forward method was Perceptual Losses for Real-Time Style Transfer and Super-Resolution from March 27, 2016. It looks like it is cited more often, but it appeared a bit later.
The next great discovery was a method of arbitrary style transfer that generalized the previous feed-forward approach to an arbitrary style in ZM-Net: Real-time Zero-shot Image Manipulation Network from March 21, 2017.
Other approaches to arbitrary style transfer are Exploring the structure of a real-time, arbitrary neural artistic stylization network from August 24, 2017 and Universal Style Transfer via Feature Transforms from November 17, 2017.
See a Medium post with an overview of the history of NST.
So, the next step for my application could be the replacement of the current iterative implementation with a feed-forward one based on one of the previous papers. What could still be challenging is how to implement it with a second style.
An arbitrary image size and proportions
The current implementation, like the underlying VGG network, can only process images of a fixed size (400x300), so if a chosen image's size is different, it is resized by the .NET Core application before it is assigned as the input of the neural network.
In a recent post on fast.ai Jeremy Howard mentioned adaptive pooling layers, which could help to process an image of an arbitrary size (as far as I understand it is based on Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition). That would be useful for neural style transfer too.
Processing multiple user requests simultaneously
The bottleneck of the application is the optimization process in TensorFlow. Although multiple transfers are queued, the slow speed of a transfer on CPU makes it impractical in multi-user scenarios.
With GPU the performance is much better, but still, currently, only a single user at a time can have his transfer running, while others will wait in a queue. Running two TensorFlow sessions simultaneously is not stable, most likely due to GPU memory allocation.
On my 2GB GPU an attempt to run two TensorFlow sessions from two Python processes results in the following error:
That could be solved by using multiple machines with multiple replicas of the Python Docker container. That would also require Docker Swarm, Kubernetes or another orchestrator that runs against a cluster instead of Docker Compose, which runs against a single machine.
An alternative solution is the usage of distributed TensorFlow on a cluster.
TensorFlow Serving
TensorFlow Serving does not seem to be applicable currently, as it serves an already trained model, whereas here there is a training process. However, with a feed-forward approach, it could replace the Flask part.
Moreover, it can also serve multiple models on a single GPU simultaneously.
Credits
Thanks to Andrew Ng and the whole deeplearning.ai and Coursera teams for their great work on the Deep Learning specialization.
Thanks to GitLab.com for generously providing 10Gb repositories for free without limits on individual files sizes (unlike 100Mb limit for a single file size on GitHub).
Thanks to Google Cloud for their $300 / 12-month Free Tier where the application is running now.
Thanks to my wife for her support and valuable advice.
Originally published at https://evgeniymamchenko.com/dual-neural-style-transfer-with-docker-compose/.
| Dual Neural Style Transfer with Docker Compose | 1 | dual-neural-style-transfer-with-docker-compose-13d93cdbcac8 | 2018-07-15 | 2018-07-15 12:08:55 | https://medium.com/s/story/dual-neural-style-transfer-with-docker-compose-13d93cdbcac8 | false | 4,632 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Evgeniy Mamchenko | Deep Learning Engineer | 34b32b9dc691 | evgeniy.mamchenko | 4 | 7 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-19 | 2017-11-19 04:26:36 | 2017-11-19 | 2017-11-19 04:38:07 | 1 | false | th | 2017-11-19 | 2017-11-19 04:38:07 | 4 | 13d9d44854f8 | 0.992453 | 0 | 0 | 0 | Elon Musk และผู้เชี่ยวชาญกว่า 350 คน คาดการณ์ว่าปัญญาประดิษฐ์ (AI) จะมีความฉลาดเหนือมนุษย์ | 5 | AI จะฉลาดเหนือมนุษย์
AI will be smarter than humans
Elon Musk and more than 350 experts predict that artificial intelligence (AI) will surpass human intelligence.
Considering how quickly researchers are advancing artificial intelligence (AI), the question arises: when will AI become smarter than the humans who created it? A team of researchers from Yale University and the Oxford Future of Humanity Institute set out to answer this by surveying hundreds of business people and academics between May and June 2016, asking them to predict when AI will outsmart humans.
The published findings state that AI will be able to perform any task as well as or better than humans — so-called “high-level machine intelligence” — by 2060, and will have taken over all human jobs by 2136. These results are based on the responses of 352 experts.
Elon Musk, CEO of SpaceX and Tesla and a co-founder of the non-profit organization OpenAI, has analyzed AI for several years and warns that, in a worst-case scenario, this rapid development could turn AI into a weapon, and an AI smarter than humans could ultimately lead to human extinction.
Musk's Tesla is one of the leading companies building self-driving cars, and roughly two million Americans working in the trucking and taxi industries agree that their jobs will be replaced by self-driving vehicles.
Experts predict that AI may drive better than humans by 2027. The survey, however, was completed before Otto's robotic truck successfully drove itself 120 miles in October 2016 — which means what the experts predicted may come true even sooner than expected.
The experts also believe AI will outperform humans at several key milestones: translating languages (2024), writing high-level essays (2026) and performing surgery (2053). A machine may even write a New York Times bestseller by 2049.
Recently, Google's AlphaGo beat the world champion Ke Jie, considered the world's number-one Go player (the year before, AlphaGo had also beaten Lee Sedol), and an AI system built by scientists at Carnegie Mellon University beat professional poker players, winning US$2 million in prize money.
It is worth noting that the timelines predicted by individual experts did not vary much with their level of experience in artificial intelligence. Another variable was their location: experts in North America thought AI would do every job better than humans within 74 years, while experts in Asia thought it would take only 30 years — a difference the researchers who published the study did not explain.
China, however, has already announced a strategy to become the world's leading AI power by 2030. And it is already happening: Harvard Business Review has revealed that China publishes the most academic papers on AI in the world, having overtaken the United States back in 2013.
Reference :
http://inc-asean.com/technology/elon-musk-350-experts-predict-exactly-artificial-intelligence-will-overtake-human-intelligence/
https://www.technologyreview.com/s/609038/chinas-ai-awakening/
— — — — — — — — —
Col. Dr. Settapong Malisuwan
Commissioner, National Broadcasting and Telecommunications Commission (NBTC)
Profile: http://www.xn--42cf0a8cxa3ai5ple.com/?p=165
19 November 2017
www.เศรษฐพงค์.com
— — — — — — — — — -
| AI จะฉลาดเหนือมนุษย์ | 0 | ai-จะฉลาดเหนือมนุษย์-13d9d44854f8 | 2017-11-19 | 2017-11-19 04:38:07 | https://medium.com/s/story/ai-จะฉลาดเหนือมนุษย์-13d9d44854f8 | false | 210 | null | null | null | null | null | null | null | null | null | Engineering | engineering | Engineering | 12,767 | พันเอก ดร.เศรษฐพงค์ มะลิสุวรรณ | null | d4af6718c208 | march5g | 0 | 4 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 12054f571a0c | 2018-04-03 | 2018-04-03 09:13:33 | 2018-04-05 | 2018-04-05 09:38:34 | 3 | false | en | 2018-04-06 | 2018-04-06 11:07:39 | 8 | 13db75e5ac44 | 7.806604 | 6 | 0 | 0 | “Did someone say data?? Ah do you have a data guy for that. “ | 5 | What the hell is your “data guy” doing?
“Did someone say data?? Ah do you have a data guy for that. “
“You’ll do your data stuff, haha”
“Make sure we get some data stuff in the slides”
“They mentioned data in the meeting can you help”
“Hey Mr Data guy”
“Can you write a couple of ‘data slides’ about what you do”
It’s a running joke I hear every day in our office, and I’m pretty sure I’m not the only one. In fact, when we talked about which article I was going to write next for the HGW Medium publication, I think the phrase was “and Billy will write something about data”. That was all the ammunition I needed for this post.
Data guys and girls have wide-ranging roles and responsibilities. No single data guy can tackle it all: some are more on the technical and science side, some are into databases, some KPIs, some AI. Being a data guy is far-reaching within an organisation and involves crossing many different departments.
With that in mind I’ve tried my best to demystify a little bit of what we do, mostly for my own sanity and I hope it can help others too…
What is data?
Before you read on just think for a second…an alien arrives on this planet and asks what data is, how do you describe it? (For the sake of my point, the alien speaks English by the way)
Data by definition is a broad and abstract term. There are two common definitions of data: “individual facts, statistics, or items of information” and “a body of facts; information”. That’s pretty damn broad — that definition covers pretty much anything you do and say online and offline. My title is data consultant, but I don’t feel like I am in charge of all “facts and statistics” for people in the office and the people we work with; it would be impossible. No data guy I know of could.
The path to wisdom
The best data guys use their skills to transform data into wisdom as quickly as possible. This means picking the right data sources and getting the right data, being able to make sense of them through analysis, and communicating them in a way that provides knowledge, which over time builds experience and becomes wisdom.
I’ve heard data guys love models (I don’t mean the good looking kind) so let me use a model to try to explain the essence of a data guys job.
Each data guy will specialise in different areas of the DIKW pyramid, database architects spend all of their time with the bottom bit, move a bit further up for data scientists, data analysts and visualisers are a bit further up. A data consultant, I would suggest, has a higher level view of the whole pyramid without the specialist knowledge, data strategy if you will.
Notice I have not used words like big data, the cloud, AI, machine learning or anything similarly technology based. This should never be the focus that’s just a bi-product of choosing the right path to wisdom, you may need to use technology of some kind to help you on your path.
The path to wisdom, the people centred way
What a data guy will need on his path to wisdom is empathy. Yes really. The guys who are known for being nerds/geeks anti-social Napoleon Dynamite types need empathy.
Why? Because data is all about understanding, whether it’s understanding a customers behavioural data, understanding how to communicate business data to a group of people or understanding the effectiveness of a team.
Data is not a singular truth. It’s not a right or wrong, a 0 or a 1; it’s a subtle indication of how to proceed and get smarter. More data doesn’t mean better data — interpretation of what to do with it is key, and what to do with it, more often than not, involves people.
This is what first attracted me to using the design thinking approach to data strategy. Design in its broadest sense is about solving complex problems using empathy and experimentation. Design thinking provides a set of tools and methods to help empathise with and understand other people. It doesn’t assume we all have the best empathy, but provides the mechanisms to help people think in an empathetic way. I’m not the only one who thinks so; last year the CEO of IDEO announced the acquisition of Datascope https://medium.com/ideo-stories/design-for-augmented-intelligence-9685c4db6fbb. The question “can data have a soul?” really resonated.
The wisdom of knowing what we don’t know
Data helps you find insights that aren’t plain to see. Asking these questions and using the methods described below helps to find the insight to create the best products and services.
What do we know?
In today’s digital world, when attempting to innovate or transform a product, team or business, there’s always data already out there — whether that’s from scientific research, consultancy research, previous research from other business projects, employee knowledge, customer feedback, current best practices or competitors.
Collating and reading all of this gives us a great way to immerse ourselves into the problem. The skill here is in finding what you are looking for without knowing what you’re looking for.
Typically a data guy’s role here is to review scientific research and analyse quantitative data to better understand the problem. I will read, highlight and write notes on what I’m seeing. I will then attempt to find trends and group what I’ve seen into themes that I can communicate to the team.
I often then present this research in the early stages of a workshop to help inform ideation and what to test later.
What don’t we know?
We don’t know what we don’t know, but if we did know we’d be much wiser even if we still didn’t know it… seriously I mean it.
We have a tendency as humans and people to think we know things and pretend we are in control of what we do. Often what we think we know is a set of assumptions we take from connecting trends and our existing biases to shape our story of the world.
To avoid falling into this trap, I borrow a number of techniques such as RAT, GMAT among others to uncover as many of the assumptions as possible. These activities typically take place early in a design or concept sprint. The exact method and time I spend on this will depend very much on the fidelity of the product and resource available.
On a project level being able to identify these assumptions and which ones are most damaging to our future work leaves us with a prioritised list of missing data i.e. hypotheses/experiments/tests jobs to be done.
I find data guys are particularly good at this as they have spent their careers finding holes in data and knowledge, often they are cynical and critical thinkers.
How can we remove assumptions?
The prioritised assumptions can be wide ranging. From legal and compliance to deep rooted assumptions about people. Deciding how and what to test to get the best quality of data in the most efficient way is a skill often learnt through experience.
The ultimate goal is to avoid “let’s measure everything because we can. Let’s connect social data to e-commerce data and store it on the cloud and add machine learning.”
To keep the focus here I lean on a number of research and ideation techniques from design. I use exercises and workshops in divergent thinking, prioritisation and planning.
This is because experiment and test design is as much art as it is science. Often in a business context you are not going to be able to design an experiment that proves something right or wrong; such experiments are extremely granular and do not help move forward fast enough (think A/B testing and changing the colour of a button).
I always strive to get data on what people say, what they actually do and a combination of qualitative and quantitative to get the best picture. This could be by combining a qualitative interview with the release of live prototype. The interview give me the why and what people say where the live prototype shows what they actually do. See this article for more information.
The outcome should be designed experiments that further our learning and the direction of a solution.
Let’s find out and learn
Running the test and experiment involves a number of different departments, skills and resource that have to work together in tandem. Using the experiment design I help to ensure that the right things are setup and measured and all parties are informed.
During the experiment I will review the numbers to ensure the test runs smoothly or if it is not working let everybody know so we can make tweaks as soon as possible without having to wait until the experiment is over.
What did we find out?
Analysing data and communicating the findings comes with great responsibility. The way that data is communicated to the rest of the team will shape the future direction of the product.
Again stealing from design thinking, it’s important to know who your audience is and how best to communicate with them. I typically present the findings as objectively as possible to the key stakeholders in the project and let everyone express their interpretation of the numbers.
I will then summarise these interpretations, so that we can ask the next question.
Do we need to learn more?
“Do we need more?” is maybe a leading question, but at this point it’s important to ask: did we get what we wanted?
I will be part of these discussions and bring added insight into the findings where necessary. Often at this point new questions about the data and experiment will appear, I will attempt to give the answers to this on the fly by analysing the data during the meeting.
If everybody is not satisfied with the data, we may need to return to what don’t we know.
Now we have more information what shall we do next?
If we got what we wanted what do we do next? At this point I will help in defining the next step for the product/solutions, which could be another experiment or implementation into real life or further design.
The path to wisdom doesn’t end
Then it starts all over again. I have described my job as being quite “non-technical” — lots of meetings etc. — but that’s not always the case. As we start to run this loop of asking the same questions over and over again, it becomes necessary to start automating parts of it.
This is where I create dashboards, reports and other ways of showing the data without having to have the data guy around all the time. For the first time I can start to talk about technologies.
I will advise on which data software will make this job easier, and help to design dashboards and data visualisations that are engaging to work with and easy to make decisions from.
This automation also leads into opportunities with AI and machine learning. That’s some other stuff I do for another blog post.
Be nice to your data guy
So that’s a ‘data guy’ in a nutshell. He’s a busy guy. Personally, on a daily basis I have multiple roles — but ultimately I try and help teams and clients measure, learn and improve. If I can do that — then it’s been a good day.
Sign up here
| What the hell is your “data guy” doing? | 21 | what-the-hell-is-your-data-guy-doing-13db75e5ac44 | 2018-06-15 | 2018-06-15 04:49:49 | https://medium.com/s/story/what-the-hell-is-your-data-guy-doing-13db75e5ac44 | false | 1,923 | We exist to solve problems that positively impact people’s everyday lives — and ultimately our clients’ businesses. | null | hellogreatworks | null | HelloGreatWorks | hellogreatworks | DESIGN THINKING,STRATEGIC DESIGN,SERVICE DESIGN,UX,CONSULTANCY | hellogreatworks | Data | data | Data | 20,245 | Billy Maddocks | Combining the scientific with the creative. Love networks, data and the human experience. Hate dashboards, quick fixes and life admin | dc2dd3ae298a | williamjmaddocks | 33 | 85 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | a01e1fda9aef | 2018-01-12 | 2018-01-12 12:32:42 | 2018-02-03 | 2018-02-03 08:57:54 | 4 | false | th | 2018-02-19 | 2018-02-19 14:50:02 | 1 | 13e040a0d7f1 | 1.016981 | 2 | 0 | 0 | คิดว่าหลายๆคนที่เข้ามาอ่านบทความนี้น่าจะผ่านหูผ่านตากับ machine learning จนแทบจะท่องจำได้แล้วว่ามันคืออะไร ว่าแต่แล้วเราจะโค้ดมันยังไงละ… | 4 | Machine Learning : [01] Let’s train.
I suspect most people reading this article have come across machine learning so often that they can practically recite what it is by heart. But how do we actually code this machine learning (ML) thing? Let's start by preparing the ingredients before writing any ML code.
Feature
Label
A Feature is an input.
A Label is an output.
That is the simplest possible explanation — it is not 100% accurate.
Let's start teaching machine learning with the Iris dataset, a dataset of iris flower measurements covering three species (setosa, versicolor, virginica). Each flower is described by characteristics such as petal width, petal length, etc.
The species of a flower is what we call the label.
The flower's characteristics are what we call the features.
Now that we know the dataset, it is time to dive into the code: we will predict the species of a flower from its characteristics.
https://github.com/mattick27/MediumML/tree/master/ML/part1
Timeline’s model.
That's it for training on the Iris dataset. This is only the very first step in ML — this stage is like the “Hello world” of the ML world.
A little extra
The model is built on the Iris dataset using SGDClassifier. The libraries used to build models mostly come from sklearn, because it is easy to use and you do not have to write the algorithms yourself (for the hardcore folks, a from-scratch tutorial will come later).
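A minimal sketch of that training step (my own example, not the exact code from the repository linked above):

from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

model = SGDClassifier(max_iter=1000, tol=1e-3)  # linear model fit with SGD
model.fit(X_train, y_train)                     # features in, labels out

print(model.predict(X_test[:3]))   # predicted species for three unseen flowers
print(model.score(X_test, y_test)) # accuracy on the held-out set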
| Machine Learning : [01] Let’s train. | 3 | machine-learning-01-lets-train-13e040a0d7f1 | 2018-02-19 | 2018-02-19 14:50:05 | https://medium.com/s/story/machine-learning-01-lets-train-13e040a0d7f1 | false | 84 | Interest in machine learning and have hobby about web develop. | null | null | null | Mattick | mattick | MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,WEB DEVELOPMENT,EDUCATION,THAILAND | k_matichon | Machine Learning | machine-learning | Machine Learning | 51,320 | K. | Interest in Machine learning and Web developer. | e7ba8e0ff83a | k_matichon | 50 | 20 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | f60682517af3 | 2017-11-28 | 2017-11-28 16:47:37 | 2017-11-28 | 2017-11-28 15:27:16 | 4 | false | en | 2017-11-28 | 2017-11-28 16:48:11 | 4 | 13e111bb855 | 3.945283 | 2 | 0 | 0 | Written by Kelly Moore, Software Engineer at Kainos | 4 | Innovation: Staying ahead of the curve
Written by Kelly Moore, Software Engineer at Kainos
Can manual tasks be replaced by intelligent machines? Can user experience be enhanced in real time by an algorithm?
Businesses are born out of innovation, but today, with disruptive technologies, the average lifespan of a company has dropped by 77%, from 67 years in the 1920s to just 15 years.
Innovators that defy this often talk about their culture — highly trained staff that are empowered to make good decisions (and no expectation of perfect ones). What they don’t talk about is how they integrate longer term views of socio-economic trends, technology innovation and market changes into their solutions that give competitive advantages.
Kainos recognises that looking forward for our customers is important and one way that we do this is through investment in our Applied Innovation team. The team works with a wide variety of customers to bring new innovative technologies to market quicker. One technology strand that has become an increasingly important part of our innovation strategy is Artificial Intelligence.
Can manual tasks be replaced by intelligent machines? Can user experience be enhanced in real time by an algorithm? Or can businesses learn from their usage patterns to cut energy bills by 40%? Here, I want to explore the technologies that make all of this possible and how we have utilised them.
#AI #MachineLearning
With interest accelerating over the past few years, these are among the most popular buzz words at the minute on every media platform, as shown by their popularity on Google Trends:
But are they just buzzwords? No. We all come into contact with these technologies every day from music recommendations on Spotify to something so simple as searching on Google.
Ok, but what do they mean?
Artificial Intelligence (AI) is the simulation of human intelligence by a computer system/machine. They are programmed to mimic human actions and thoughts, but how? This is where machine learning comes in. Machine learning (ML) is a subset of AI, allowing computers to learn by themselves, without being explicitly programmed. This is a different approach to traditional software systems where us humans would define a set of rules for a system to follow in order to turn an input into an output. E.g. For an insurance system, if the user inputs an age under 18, multiply the premium by 2.
Now, with ML, systems can take in inputs and expected outputs and learn from these to create these rules by themselves.
AI and ML are being used by the big tech companies to develop cool new tools: from facial recognition for overlaying puppy ears on your face to stitching multiple concert videos from different perspectives into one.
But what can it do for businesses outside of the technology sector?
All businesses in all sectors — be it retail, financial or insurance — have large amounts of data about their products, services and customers, and that data is growing exponentially. Many companies are performing analytics on their data, finding trends in past data in order to aid strategic planning for the future. However, we can take this one step further by harnessing machine learning techniques to predict future trends and identify non-obvious patterns in the data.
Are your Facebook photos data?
Although sometimes internal data isn’t enough on its own. With many comparison sites around, it’s more likely that a user will shop around first before visiting a single provider’s site. Therefore, it becomes increasingly hard for a provider to build a user profile for each of its customers.
Facebook holds a lot of data about you, but possibly the most valuable data that could be overlooked is your photos. One idea our innovation team worked on is a travel recommendation tool. By analysing a user’s Facebook photos (with their permission of course), it can curate a list of keywords from the photos such as ‘city’, ‘beach’, ‘mountains’ etc. Feeding these into an ML algorithm can then predict the type of holiday best suited to this user. This allows a travel agency to display more tailored recommendations to the customer when they visit their site.
In fact, keyword searching is becoming a thing of the past now. It doesn’t give as much context to express our intent as searching by voice or photos (like ASOS’s visual search functionality to find similar clothes), or in the case of this solution — not having to search at all!
Technological Shift?
We are about to experience — if we are not already experiencing — a technological shift with the rise of Artificial Intelligence and Machine Learning. It will transform technology, society and all industries, so it is vital that businesses incorporate these technologies into their strategies now.
Being aware of data is one thing and analysing data is another, but harnessing AI and ML techniques to improve customer experience, cut costs, improve efficiency (the list of solutions goes on…), is the key to utilising the mass of data we have available.
Businesses reacting now will be the industry leaders, staying one step ahead of competitors.
This article was originally published here and was reposted with permission.
Originally published at digileaders.com on November 28, 2017.
| Innovation: Staying ahead of the curve | 4 | innovation-staying-ahead-of-the-curve-13e111bb855 | 2018-06-08 | 2018-06-08 15:15:38 | https://medium.com/s/story/innovation-staying-ahead-of-the-curve-13e111bb855 | false | 860 | Thoughts on leadership, strategy and digital transformation across all sectors. Articles first published on the Digital Leaders blog at digileaders.com | null | digitalleadersprogramme | null | Digital Leaders | digital-leaders-uk | DIGITAL LEADERSHIP,DIGITAL TRANSFORMATION,DIGITAL STRATEGY,DIGITAL GOVERNMENT,INNOVATION | digileaders | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Digital Leaders | Informing and inspiring innovative digital transformation digileaders.com | c0cad3f73a0 | DigiLeaders | 2,783 | 2,148 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-04-07 | 2017-04-07 13:44:49 | 2017-04-09 | 2017-04-09 16:31:30 | 1 | false | en | 2017-04-09 | 2017-04-09 16:31:30 | 11 | 13e1b4c25706 | 6.313208 | 105 | 6 | 0 | High frequency traders are not anymore the cool kids on the block. Margins are shrinking and major players such as Virtu and KCG are now… | 5 | My landscape on artificial intelligence quantitative funds and DIY funds
High-frequency traders are no longer the cool kids on the block. Margins are shrinking, and major players such as Virtu and KCG are now merging to profit from economies of scale. On the other hand, artificial intelligence and quantitative strategies seem to be the new solutions to the alpha-generation headache. There has also been lots of news (noise) about DIY (Do It Yourself) hedge funds (Wired, Bloomberg and also the FT). More recently, Point72 announced that it handed the first check of c.$10m of the $250m promised to Quantopian.
The AI & Quantitative startup spaces are interesting and have the potential to disrupt a $3.2tn industry. In order to better understand the different actors, I tried to map and to categorise the different players in the market. Please find below a landscape mapping and thereafter the full list of companies as well as some brief comments.
PS: Views are my own. Also, I am sure I forgot many companies and did many mistakes. Do not hesitate if you have any recommendations!
Enjoy.
Feel free to share it but please quote the source
Institutional hedge funds using AI quant strategy and also source talents via challenges, academics and partnerships
Hiring is the major challenge for quantitative funds. They now compete directly with tech companies offering bean bags and other cool perks as well as high salaries. The fierce competition has forced quant funds to redesign their working environments and also to adopt new hiring strategies, including partnering with crowdsourced hedge funds like Quantopian, strengthening their relationships with universities and launching datathons in order to find the next James Simons.
Cantab Capital (UK), acquired by GAM in June 2016. The fund has a strong link with Cambridge through the Cantab Capital Institute for the Mathematics of Information hosted within the Faculty of Mathematics.
Citadel (US), launched a Datathon with a prize worth $100k for the winner.
Man Group (UK), established the Oxford-Man Institute (OMI) with “ the aim to create a stimulating environment of research and innovation where ideas flourish and practitioners from a wide spectrum of disciplines can bring together their skills to collaborate and learn from each other.”
Point72 (US), private investment vehicle of Steve Cohen. It invested in Quantopian and also announced that it will invest up to $250 million in a portfolio of algorithms managed by Quantopian.
Two Sigma (US), One of the first to offer a public Financial modeling challenge with Kaggle where quants have three months to create a predictive trading model from four gigabytes of financial data provided by Two Sigma.
WorldQuant (US), The WorldQuant Challenge provides the opportunity to compete and potentially earn a Research Consultant position at WorldQuant.
Institutional hedge funds using AI quant strategy
These funds are some of the most respected ones and use, among others, quantitative strategies as well as artificial intelligence. However, none of them seem to have adopted a partnership / challenge hiring strategy to attract new talent.
AQR (US), founded in 1998, AQR manages $175bn and offers a variety of quantitatively driven alternative and traditional investment vehicles to both institutional clients and financial advisors.
Bridgewater Associates (US), the world's biggest hedge fund with $150bn and at the forefront of the artificial intelligence research. The fund is very secretive on its plan and hiring strategy.
D.E Shaw (US), founded in 1988 and today manages about $40bn.
Renaissance Technologies (US), founded in 1982 the fund is one of the pioneers of quantitative trading. Renaissance’s flagship Medallion fund, which is run mostly for fund employees “famed for one of the best records in investing history, returning more than 35 percent annualized over a 20-year span.”
Winton Capital (UK), one of the most respected quant funds in Europe with around $28bn of AuM. The fund has recently launched its incubator and a venture capital arm.
Crowdsourced quant hedge funds
Everyone talks about these guys. So far, Quantopian seems to be the one leading in terms of funding, institutional backing and also Google Trends…
Quantopian (US), one of the most well funded neo-quant fund with $49m from investors including Point72, Andreessen Horowitz and Bessemer. Users can share their investment algorithms. Quantopian then selects the best algorithms and license them in exchange of a share of the return.
Numerai (US), with $7.5m from the likes of Union Square, Playfair Capital and First round, Numerai manages a long/short global equity hedge fund. It transforms and regularizes financial data into machine learning problems for its global community of data scientists. Also, one of the only neo hedge funds offering full anonymity and payment via bitcoin.
Quantiacs (US), often referred as one of the world’s first and only crowd sourced hedge fund matches freelance quants with institutional investment capital. Quants own their IP but license it to Quantiacs for 10% of lifetime profits.
Online community for quant traders
These platforms are like forums on steroids where traders can share tips, reviews and also have access to data and can do backtesting.
Backtrader (US), enables to focus on writing reusable trading strategies, indicators and analyzers instead of spending time building infrastructure.
Quantconnect (US), offers the ability to design and test strategy with its free data set and allows to deploy live directly via brokerage. Users can code in multiple programming languages and harness cluster of hundreds of servers to run backtest to analyse their strategy in Equities, FX, CFD, Options or Futures Markets. A key difference is that they have an open source infrastructure and don’t profit from strategies.
Uqer.io (CN), China’s first algorithmic trading platform. Uqer offers free and unlimited access to over 10 million financial datasets; Quants can create and backtest strategy at anytime anywhere.
Cloud9trader (UK), Cloud9Trader lets developers profit from algorithms by trading via a broker.
Pure AI quant hedge funds
These funds represent the next-gen hedge fund category. Their goal is to automate all the investment research and trading in order to create high returns, but with a much lower cost base than legacy hedge funds. Many people, like Igor Tulchinsky from WorldQuant, think that AI is more a tool to help human traders than a replacement for them. It will be interesting to see the returns of these funds over the next years.
Aidyia (HK), deploys advanced artificial intelligence technology to identify patterns and predict price movements in financial markets. Started trading last year in 2016.
Binatix (US), one of the first to use machine learning algorithms to spot patterns that offer an edge in investing; Recode wrote an article on it back in 2014.
Kimerick Technologies (US), ML and Artificial Neural Network-driven Predictive Trading.
Pit.ai (UK), a ML powered hedge fund, adopted into the YC W17 class. Techcrunch wrote an interesting article on the Company in March ‘17.
Sentient Technologies (US/HK), one of the companies with the largest war chest, $140m of funding from Access Industries, TATA Ventures, Horizon Ventures. If you want more information I would recommend this Bloomberg article.
Tickermachine (US), a low-frequency proprietary algorithmic trading firm based on behavioural economics principles.
Walnut Algorithm (FR), applies the latest advances in artificial intelligence to systematic investment strategies and for the moment focuses on liquid equity index futures in the US and Europe. For more information on the company, check out their recent interview.
Algorithmic marketplace
iQapla (ES), audits and ranks algorithms before allowing users to select different strategies and create their automatic trading portfolio. Recently selected by Santander for their innovation lab.
Pure AI quant hedge fund, open to public investors
Clone Algo (US), the Clone Algo application runs and operates a social network. Its ecosystem allows users who are connected to brokers, banks and hedge funds to easily clone trades from master accounts onto their own accounts.
Tools to design executable quant strategies without coding skills needed
Algoriz (US), lets users build trading algorithms with no coding required. For more info, check out TechCrunch.
Alpaca (JP/US), raised $1m and announced in 2015 the launch of its deep-learning trading platform, Capitalico, which lets people build trading algorithms visually, in a few clicks, from historical charts.
Portfolio123 (US), translates an investment strategy into an algorithm.
Tool to optimize trading quant strategies
Sigopt (US), offers an optimization solution for algorithms. The tool can be applied to algo trading but also to other areas. The company raised $8.7m from Andreessen Horowitz, Data Collective and Y Combinator.
Social trading platforms
Investors can easily mimic trades from other retail / semi-professional investors. Not all of these traders use AI / quant strategies, but these platforms were among the first to offer non-institutional investors access to new strategies in a few clicks.
Ayondo (UK), founded in 2008, the firm raised $10m from Luminor Capital and SevenVentures among others.
Collective2 (US), lets users easily snap together trading strategies, algorithms, and human traders to form a (virtual) customized hedge fund that can be traded in a person's regular brokerage account.
Darwinex (UK), founded in 2012, the company is an FCA (UK) regulated vertical marketplace that pairs skilled traders with active investors. It has raised about $4m.
eToro (IL), one of the most well-known companies in social trading / trading gamification, with $73m of funding from leading investors including CommerzVentures and Spark Capital.
Instavest (US), investors can list their trades on Instavest, including the company, share amount and rationale behind the investment. Other users can invest alongside the people willing to share their own purchases and sales. The firm is a Y Combinator alumnus and raised $1.7m.
Wikifolio (AT), Austria-based online platform for investment strategies of traders and asset managers. The firm raised about $7m from SpeedInvest among others.
Thank you for reading my article.
Do not hesitate if you have any comments!
Etienne
| My landscape on artificial intelligence quantitative funds and DIY funds | 265 | my-landscape-on-artificial-intelligence-quantitative-funds-and-diy-funds-13e1b4c25706 | 2018-06-15 | 2018-06-15 15:20:06 | https://medium.com/s/story/my-landscape-on-artificial-intelligence-quantitative-funds-and-diy-funds-13e1b4c25706 | false | 1,620 | null | null | null | null | null | null | null | null | null | Hedge Funds | hedge-funds | Hedge Funds | 1,147 | Etienne Brunet | I like decentralisation and capital markets. | 4d2e0397ace0 | etiennebr | 2,864 | 137 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-12 | 2018-05-12 07:59:22 | 2018-05-14 | 2018-05-14 13:52:50 | 3 | false | en | 2018-05-14 | 2018-05-14 13:52:50 | 5 | 13e5d3ececf8 | 2.682075 | 1 | 0 | 0 | My take on what was interesting in Google I/O 18 and how it can benefit startups | 5 | Product Manager’s perspective on Google I/O 2018
The recently concluded Google I/O wasn't a big bang of sorts, but it showcased a lot of interesting product features and enhancements. This post highlights the ones I found interesting from a PM's perspective and how they can improve user experience for businesses
Slices
Slices is a new feature coming in Android P by which your app can surface alongside expected user intent. In the example below, searching for ‘Lyft’ returns a response in which Lyft tells you the time and price for a trip
It is not yet clear how your app can enable this, what kinds of intents can be fired, or who decides which intents show up. You can expect this feature to be available soon. You can view the details of the important features rolling out in Android P here
ML Kit
Google launched ML Kit, an SDK that lets app developers leverage Google's machine learning expertise for their own use cases. The kit comes with 5 APIs that target popular use cases: text recognition, face detection, barcode scanning, image labeling and landmark recognition. The snapshot below shows how Lose It! uses text recognition to capture nutrition information
The ML Kit opens up enormous opportunities for improving user experience:
Medical and e-pharmacy apps like Practo, 1MG and NetMeds could use text recognition to automatically read medicine names from doctors' prescriptions
Likewise, tax-filing companies like ClearTax and TaxSpanner could use text recognition to help users quickly fill in the relevant details
Swiggy and Zomato could use text recognition to scan a restaurant's menu and create a digital copy of it for their use
Travel companies could use image labeling and landmark recognition to automatically tag user-generated images of attractions and hotel premises
Price-comparison apps could use the barcode scanner to let users quickly check the price of a given product across multiple stores
Social media apps could quickly enable the now-trendy ‘unlock using face detection’ feature through the ML Kit
Read more about the ML Kit, which is in beta, here
Android App Bundle
Google Play Store has also come up with interesting changes, the most prominent being the Android App Bundle. It is a new publishing format that can optimize your app size for various devices and form factors. Currently, app developers submit multiple APKs to the Play Store for every release
Additionally, you can opt for dynamic feature modules so that users download features on demand instead of all at once during install. This is currently in beta mode
The complete set of Google Play features is described here
Firebase Predictions
You can now create dynamic user groups based on users' predicted behavior. These can be used to customize offers, tweak landing-page content, and more. You can also customize your notification content and schedule based on user segments
There were also a few features I found interesting, though they may not impact business use cases:
The new look and feel of Gmail, with enhancements such as quick access to Calendar and Drive, attachment previews, quick actions on email, Smart Compose and a few more
Google Assistant becomes smarter with custom routines, voice subtleties, no repeated ‘Hey Google’ phrases, and multi-action queries
Google Maps + Camera = VPS (Visual Positioning System)
What interesting features did you notice?
| Product Manager's perspective on Google I/O 2018 | 1 | product-managers-perspective-on-google-i-o-2018-13e5d3ececf8 | 2018-05-15 | 2018-05-15 03:39:17 | https://medium.com/s/story/product-managers-perspective-on-google-i-o-2018-13e5d3ececf8 | false | 565 | null | null | null | null | null | null | null | null | null | Android | android | Android | 56,800 | deepak.malani | null | eef62179d3f0 | deepak.malani | 4 | 10 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-20 | 2017-11-20 11:15:08 | 2017-11-20 | 2017-11-20 15:53:49 | 1 | false | en | 2017-11-27 | 2017-11-27 11:07:52 | 13 | 13e6820dfc77 | 1.433962 | 3 | 0 | 0 | The EdTech Podcast has released the second episode in the FutureTech for Education series. Host Sophie Bailey* takes a deeper diver into… | 5 | EdTech Podcast: “What is AI & what has it got to do with me and my students?”
The EdTech Podcast has released the second episode in the FutureTech for Education series. Host Sophie Bailey* takes a deeper diver into artificial intelligence, following up on the inaugural episode, “What does future tech for education look like?”
AI pioneer Stephen Wolfram talks about how students can use Wolfram Alpha as a tool to learn computational thinking. Like seemingly every article published today, this piece in Wired is poorly titled but summarizes it fairly well: “Wolfram Alpha Is Making It Extremely Easy for Students to Cheat.” I do suggest you listen to the podcast to hear it directly from Wolfram himself.
- Stephen Wolfram
Murray Shanahan reads an excerpt from his book, The Technological Singularity, which imagines how a team of humans and a team of AI each build an ideal motorbike.
The Technological Singularity
The idea of technological singularity, and what it would mean if ordinary human intelligence were enhanced or overtaken…mitpress.mit.edu
The third guest is my colleague Peter Foltz, perhaps the foremost expert on use of AI for adaptive learning platforms. Foltz explains that while bias in AI is a problem we need to be continuously aware of, AI can also identify biases. As with all educational technologies, it’s a tool for educators to use — a compliment to the human skills necessary for successful learner outcomes.
Finally, Bailey gives an overview of the 2016 paper Intelligence Unleashed: An Argument for AI in Education, by Rose Luckin and Wayne Holmes, UCL Knowledge Lab, and Pearson’s Mark Griffiths and Laurie Forcier.
Once again, I’m honored to have been a small part of this episode. In just 25 minutes, Bailey, primarily supported by Wolfram, Shanahan, and Foltz, presents a clear overview of how AI is an increasingly valuable tool for learners and educators alike.
* Bailey was recently name one of the 3 voices that build a bridge between edtech startups and teachers
| EdTech Podcast: “What is AI & what has it got to do with me and my students?” | 16 | edtech-podcast-what-is-ai-what-has-it-got-to-do-with-me-and-my-students-13e6820dfc77 | 2018-06-10 | 2018-06-10 07:23:33 | https://medium.com/s/story/edtech-podcast-what-is-ai-what-has-it-got-to-do-with-me-and-my-students-13e6820dfc77 | false | 327 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Denis Hurley | Senior Technology Innovation Analyst @ Pearson. Equal parts virtual and physical. | 364ce345439b | denishurley | 2,193 | 746 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-24 | 2018-01-24 22:24:17 | 2018-01-24 | 2018-01-24 22:36:23 | 2 | false | en | 2018-01-24 | 2018-01-24 22:36:23 | 0 | 13e6f25e3ee | 2.198428 | 3 | 0 | 0 | I used to produce good PPTs. In my previous team, my level of using excel is above average. I can make pretty bar chart, pie chart and a… | 3 | Visualization, visualization, visualization!
I used to produce good PPTs. In my previous team, my Excel skills were above average: I could make pretty bar charts, pie charts, secondary Y-axes and so on. People liked my reports because the graphs helped them understand the data and stay focused. So I have always been aware of the importance of visualization.
Still, learning programming opened a new gate for me. I thought that, as a researcher, I knew my data very well and drawing was mainly for the audience to follow. I was so wrong. I underestimated visualization mainly because my data sets were never large enough. Plus, in many cases, my team and I produced the data ourselves. Data analysis is not like policy research: it deals with big data that is normally not generated by the analysts themselves. Visualization is not an end; it is a means that is vital to data analysis.
I first realized I needed visualization when I revisited some statistical notions. I have learned the Pearson correlation coefficient two or three times in graduate and post-grad courses, and I always need to think twice to understand the complicated function. The symmetric presentation appears easy to understand and remember. Yet I have always wondered: what do I need this figure for? What is r trying to explain?
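For reference, the formula in question, written in LaTeX notation, is:

r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\sqrt{\sum_i (y_i - \bar{y})^2}}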
It makes so much sense when I put it in a plot. The numerator is the covariance of the data set multiplied by n, which captures every single data point's distance from the sample means. If the data points tend to have distances of the same sign from both the vertical and horizontal means, the data set has positive covariance. If they tend to have a positive distance from one mean and a negative distance from the other, the data set has negative covariance. The denominator is the product of the standard deviations of X and Y. Dividing by them makes the result dimensionless, which means we get rid of the units. The result always ranges from -1 to 1. Correlations equal to 1 or −1 correspond to data points lying exactly on a straight line, while 0 means no linear correlation in the data set at all.
After visualizing the data, I understand much better what the Pearson correlation is explaining. I believe this is the case for many other statistical notions as well. I wish all my statistics teachers had used visualization when they were teaching!
Visualization is the most important step/tool in Exploratory Data Analysis (EDA). Whenever data analysts get access to new data, it is wise to plot it before diving in. Now I know that before producing nice deliverables, a more important function of visualization is to help the analyst find a direction to dig in. The matplotlib and seaborn packages in Python are much more powerful than what Excel provides. Now I have a much wider variety of tools and graphs to choose from!
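As a tiny illustration of that first-look plot, here is a sketch with synthetic data (every variable here is made up):

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

x = np.random.normal(size=200)                       # fake predictor
y = 0.8 * x + np.random.normal(scale=0.5, size=200)  # fake response with a linear link

print(np.corrcoef(x, y)[0, 1])  # Pearson's r for the sample

sns.regplot(x=x, y=y)  # scatter plot with a fitted regression line
plt.show()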
| Visualization, visualization, visualization! | 21 | visualization-visualization-visualization-13e6f25e3ee | 2018-01-28 | 2018-01-28 04:55:11 | https://medium.com/s/story/visualization-visualization-visualization-13e6f25e3ee | false | 481 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Minhua Lin | A full time mum, a new immigrant, beginner in coding, looking for a new start of professional life in research and data analysis. | d0b4800ae8ae | minhualin | 12 | 9 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-15 | 2018-09-15 09:49:17 | 2018-09-15 | 2018-09-15 10:04:45 | 1 | false | en | 2018-09-15 | 2018-09-15 10:06:11 | 2 | 13e8216fdef5 | 0.656604 | 0 | 0 | 0 | APTEDIA group is proud to annonce its NEURALIZE artificial intelligence project ICO (initial coin offering) pre sale to start on october… | 5 | NEURALIZE ICO ANNOUNCEMENT
APTEDIA group is proud to announce that the pre-sale for the ICO (initial coin offering) of its NEURALIZE artificial intelligence project will start on October 1st, 2018.
The NEURALIZE project intends to empower AI with feelings and emotions.
We actually think artificial intelligence, as we build it today, doesn't exist, since intelligence is in constant evolution and highly subjective. We think AI should not have only a performance goal, and that intelligence comes from one's ability to adapt to a heterogeneous environment. We therefore focus on building not AI but AP, with the P standing for personalities.
The NEURALIZE website, to be opened on October 1st, 2018, is:
www.neuralize.me
Neuralize token ticker is “NZE” and its sale page can be accessed here:
https://bit.ly/2My9kaF
| NEURALIZE ICO ANNOUNCEMENT | 0 | neuralize-ico-announcement-13e8216fdef5 | 2018-09-15 | 2018-09-15 10:06:11 | https://medium.com/s/story/neuralize-ico-announcement-13e8216fdef5 | false | 121 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Sébastien Landrieu | www.aptedia.com founder | ff1c6ea9f959 | sbastienlandrieu | 68 | 72 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-23 | 2017-12-23 02:41:45 | 2017-12-24 | 2017-12-24 07:19:01 | 3 | false | en | 2018-01-25 | 2018-01-25 02:49:32 | 10 | 13e848e74dc5 | 2.233019 | 1 | 0 | 0 | Whilst artificial intelligence (AI) classifies and organises data faster, cheaper and augments human intelligence empowering global legal… | 5 | Artificial Intelligence and the law: friend or foe?
Whilst artificial intelligence (AI) classifies and organises data faster and cheaper, and augments human intelligence to empower global legal practices to perform their services all around the world, technological constraints such as implementation and the law's complex nature overshadow its benefits.
Source:https://www.forbes.com
Background: What is artificial intelligence in this context?
Source:Dilbert
AI is a term that is used frequently but often ambiguously. AI applications range in capability from single simple tasks to intricate intelligent procedures. For the purpose of these blogs, AI will refer to machines and software programmed to perform tasks involving human-like decision-making, intelligence and expertise. AI increases its intelligence as it gathers more data, making connections and finding patterns. During the past two decades, AI has advanced to make major and influential improvements in quality and efficiency for services like e-discovery, contract analytics, prediction, legal research and expertise automation.
For example, DLA Piper uses AI software for due diligence document review in mergers and acquisitions, whilst 5 per cent of Accenture's workforce was no longer human as of December 2016. While AI may not replace lawyers in the near future, The Boston Consulting Group's 2016 report predicts that technological solutions could perform up to 50 per cent of tasks assigned to junior lawyers. This ties in with Deloitte's 2016 insight report estimating that nearly 40 per cent of jobs in the legal sector could be automated. Clearly, the role of AI is growing exponentially, disrupting the legal profession.
Stay tuned: problems to be critically analysed in upcoming posts
Source:Dilbert
Technical restraints remain a critical factor in AI's binding power and capacity in the legal sector. This series of three blog posts aims to critically analyse whether the law, especially in a global context, needs to change to accommodate the technological effects of AI and unlock its productivity, or whether AI's implementation should instead be restricted or shunned completely.
Blogpost two will address and critique the technical challenges associated with AI. Firstly, AI is most effective in domains where algorithms can be manufactured, whereas legal reasoning demands a subjective element sensitive to the relevant jurisdiction. Secondly, AI will struggle in its implementation, given that legal culture is largely resistant to change and to the commoditisation of the legal profession. Blogpost three will examine whether a possible solution lies in a partnership between lawyers and machines, each bringing their own superior skills and constantly adapting and evolving to become a highly capable team over time. Whilst AI raises a number of technical implications which could fray the law's application, the possibility of a partnership between the two worlds could contribute to a more powerful global legal practice.
*Lasted proof-read/edited: 25 January 2018
| Artificial Intelligence and the law: friend or foe? | 2 | artificial-intelligence-and-the-law-friend-or-foe-13e848e74dc5 | 2018-01-28 | 2018-01-28 06:16:30 | https://medium.com/s/story/artificial-intelligence-and-the-law-friend-or-foe-13e848e74dc5 | false | 446 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Helen Papageorgiou | null | 3fb077ea8b83 | hpap2 | 2 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | $ sudo apt-get install supervisor
$ sudo supervisord
# Checking the status of supervisor
$ sudo service supervisor status
# Starting supervisor
$ sudo service supervisor start
# Stopping and restarting supervisor
$ sudo service supervisor stop
$ sudo service supervisor restart
import time
count = 0
while True:
count = count + 1
print(str(count) + ". This prints once every 5secs.")
time.sleep(5)
[program:test_process]
command=python -u test.py
directory=/home/my_username
stdout_logfile=/home/my_username/test_process_output.txt
redirect_stderr=true
$ sudo supervisorctl
supervisor > reread
supervisor > add test_process
supervisor > status
[inet_http_server]
port=*:9001
username=testuser
password=testpass
| 11 | null | 2018-05-17 | 2018-05-17 01:34:05 | 2018-05-26 | 2018-05-26 07:35:41 | 2 | false | en | 2018-07-06 | 2018-07-06 12:16:08 | 6 | 13e91171d6d3 | 3.481447 | 0 | 0 | 0 | For most scenarios where you need your test to run for a long time, with a bad connection, keeping your ssh connection up might be almost… | 5 | Use Supervisor to run your Python Tests
Photo by Christoffer Engström on Unsplash
For most scenarios where you need your test to run for a long time over a flaky connection, keeping your SSH session alive might be almost impossible at times.
A simple fix for this problem is to run your Python script as a cron job using crontab -e. However, there are some pitfalls to this simple fix. Instead, we can let supervisor execute your Python scripts. You can then log in via SSH or visit a web page to see the progress of the task. Here's what we want to achieve.
Requirements
Be able to see what tasks are running now on supervisor
Be able to see the status of the task
If there is a failure, we should be able to check a log file
When the task is completed, notify via email
With the above in mind, the next part of the article documents my process of installing and using supervisor.
Setup
Google Cloud Platform with GPU
Ubuntu 16.04
Python 3 managed via Mini-conda
Installation
Installation of supervisor on Ubuntu is fairly straightforward. Just run the following commands to install and start supervisord.
Basic commands of supervisor that you might need later on for checking the status of supervisor or restarting the service.
Configuration
To get supervisor to execute a certain program or script, we can configure it in the /etc/supervisor/conf.d/ directory. As a recommendation, it would be best to keep a single conf file per process.
In the following example, we'll create a simple Python script that prints to the screen every 5 seconds. We can then see how this is handled by supervisor and how, as developers, we can use supervisorctl to check on what happened while supervisor was running the task.
Let's assume we have put the Python script in your home directory. Remember to replace the folder names wherever applicable.
Create python test script in /home/my_username/test.py .
Create a configuration file for this process and name it /etc/supervisor/conf.d/test_process.conf .
If you’re interested to know more about what kind of configuration options are available, see the docs found at http://supervisord.org/configuration.html#program-x-section-settings
Using supervisor
Now that the configuration part is complete, let's start using supervisor. Most of the commands we are going to execute will be within the supervisorctl program. Supervisorctl is a CLI program that manages your supervisor instance. To get into supervisorctl, use the following command.
You will then be greeted with the supervisorctl prompt, as follows. Here we start with reread, followed by add test_process. Finally, if everything is working as expected, we should be able to see the process status and PIDs.
List of common supervisorctl commands
reread : Reloads conf files
add <program> : Adds a newly created conf file and starts the process
status : Checks all status of programs managed by supervisor
start <program> : Starts the program
restart <program> : Restart the program
tail -f <program> : Watch log file
exit : Exits supervisorctl
help : List commands
Web Interface
You can monitor your supervisor status through a web interface.
To enable this, add the following block to the config file found in /etc/supervisor/supervisord.conf
Depending on which cloud service you use, remember to forward the port. In this example the port number would be 9001.
Event Handling
Another possible feature to have is to set up event handling from supervisor. Supervisor provides the possibility of hooking in event handlers to handle events such at state changes. Unfortunately, I haven’t had enough time to complete this yet (will probably have to update this section at a later time again). But more details can be found in the link below.
Events - Supervisor 3.3.4 documentation
Events are an advanced feature of Supervisor introduced in version 3.0. You don't need to understand events if you…supervisord.org
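To give a flavour of it, here is a minimal, untested sketch; the listener and script names are hypothetical, and the READY/RESULT handshake follows the supervisord docs:

; /etc/supervisor/conf.d/state_listener.conf (hypothetical)
[eventlistener:state_listener]
command=python -u state_listener.py
events=PROCESS_STATE_EXITED

# state_listener.py (hypothetical)
import sys

def write_stdout(s):
    sys.stdout.write(s)
    sys.stdout.flush()

while True:
    write_stdout('READY\n')        # tell supervisor we can accept an event
    header = sys.stdin.readline()  # e.g. 'ver:3.0 ... eventname:PROCESS_STATE_EXITED len:84'
    headers = dict(tok.split(':') for tok in header.split())
    payload = sys.stdin.read(int(headers['len']))  # details of the exited process
    # ...notify here, e.g. send an email with smtplib...
    write_stdout('RESULT 2\nOK')   # acknowledge so the next event can arrive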
Special Mentions
Sometimes you might want to write shell scripts that check for a specific environment and then decide which conda / venv to use and how. I found that, for some reason, perhaps a misconfiguration, supervisor wasn't working for me in that setup. So to keep things even simpler, I decided on another approach: check out my article on how to use "screen" instead. It was a quick fix for my needs at the time.
References
Supervisor Events
Managing Long-Running Processes With Supervisor
Monitoring Processes with Supervisord
Configuration Files
| Use Supervisor to run your Python Tests | 0 | use-supervisor-to-run-your-python-tests-13e91171d6d3 | 2018-07-07 | 2018-07-07 17:20:02 | https://medium.com/s/story/use-supervisor-to-run-your-python-tests-13e91171d6d3 | false | 821 | null | null | null | null | null | null | null | null | null | Supervisor | supervisor | Supervisor | 131 | Jayden Chua | An avid web developer constantly looking for new web technologies to dabble in | b01a297baa93 | jayden.chua | 3 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-10 | 2018-02-10 00:08:50 | 2018-02-20 | 2018-02-20 17:58:07 | 10 | false | en | 2018-02-20 | 2018-02-20 17:58:07 | 6 | 13ea66a293a7 | 2.612264 | 1 | 0 | 0 | With Josh Micro set to start shipping in the coming months, we’ve revamped the Josh.ai automation system and how you interact with the… | 5 | Introducing the new Josh.ai website!
With Josh Micro set to start shipping in the coming months, we’ve revamped the Josh.ai automation system and how you interact with the technology in your home. Therefore, it only made sense to refresh our website to reflect the excitement that we have for our new product!
Come In
We spent a lot of time designing the landing page experience to be welcoming and inspiring. After a greeting by our CEO Alex, scroll down through the page to take a high-level tour of what Josh can do.
Josh Micro
Josh Micro brings us full circle as a control system and is now the focal point of your home. The new site provides even more Micro details, including technical specifications and in-depth features.
Josh App
Take a look at how easy it is to control all of your properties remotely through talk, touch, or text commands. The Josh app has been optimized to create a custom experience for anyone and everyone depending on your preferences.
Get Josh
We’ve provided a rundown of what to expect when welcoming Josh into your home and included access to our new customer brochure. You can also reach out to the Josh team directly if you have any further questions.
Josh Micro Setup & Support
After ordering your Josh Micro, check out the Support page, where you're taken through step-by-step directions to install, place, and make the most of your Josh experience. It's hard to keep track of all the technology in Josh Micro, but this page should provide all the answers and information you're looking for.
Dealer Portal (Coming Soon)
We’re putting the finishing touches on the portal where our partners will have access to all of the resources they need. From ordering Micros, to accessing their client’s projects, to marketing collateral, the Dealer Portal will have it all!
Feel free to reach out to us anytime at [email protected] for more information or if you’re interested in becoming a dealer.
Josh.ai is an artificial intelligence agent for your home. If you’re interested in learning more, visit us at https://josh.ai.
Like us on Facebook, follow us on Twitter.
| Introducing the new Josh.ai website! | 6 | introducing-the-new-josh-ai-website-13ea66a293a7 | 2018-02-20 | 2018-02-20 18:34:43 | https://medium.com/s/story/introducing-the-new-josh-ai-website-13ea66a293a7 | false | 361 | null | null | null | null | null | null | null | null | null | Smart Home | smart-home | Smart Home | 3,891 | Josh | null | 66b5ae01967f | joshdotai | 43,719 | 1,883 | 20,181,104 | null | null | null | null | null | null |
import numpy as np

face_score_threshold = 3
face_score_mask = face_score > face_score_threshold  # keep faces the detector scored above 3
second_face_score_mask = np.isnan(second_face_score)  # NaN means no second face was detected
known_gender_mask = np.logical_not(np.isnan(gender_classes))  # keep rows with a known gender
# calc_age is the helper that derives age from the year the photo was taken and the dob
age_classes = np.array([calc_age(photo_taken[i], dob[i])
                        for i in range(len(dob))])
valid_age_range = np.isin(age_classes, [x for x in range(101)])  # ages must lie in 0-100
# Combine the masks to select clean, well-annotated samples
mask = np.logical_and(face_score_mask, second_face_score_mask)
mask = np.logical_and(mask, known_gender_mask)
mask = np.logical_and(mask, valid_age_range)
import dlib

detector = dlib.get_frontal_face_detector()
| 5 | 8b5620c36355 | 2018-03-05 | 2018-03-05 06:30:28 | 2018-08-08 | 2018-08-08 12:10:23 | 6 | false | en | 2018-08-21 | 2018-08-21 11:04:56 | 4 | 13eaee1e819c | 7.09717 | 8 | 0 | 0 | With the advent of AI, visual understanding has become increasingly relevant to the computer vision society. Age and gender classification… | 5 | Age and Gender Classification using MobileNets
With the advent of AI, visual understanding has become increasingly relevant to the computer vision community. Age and gender classification has been around for quite some time now, and continual efforts have been made to improve its results ever since the emergence of social platforms. This article will illustrate how a family of low-latency, low-power, on-device computer vision models known as MobileNets allows us to obtain a significant increase in performance and train a model for age and gender classification.
Traditionally, deep learning inference is performed in the cloud: when a user issues a request to classify some image, it is first sent to a web service, a model server takes the image, performs inference, returns the result to the service, and eventually your phone receives it. With the rapid growth in the processing power of mobiles and the rise of architectures such as MobileNets, the way inference is carried out is changing quickly. Although networks designed for the mobile platform are not usually as accurate as the larger, more resource-intensive networks we have come to know and love, they really stand out when it comes to the resource/accuracy trade-off.
Now, we’ll look into leveraging these networks in order to classify the age and gender of a person.
Enter IMDB Faces
This dataset is claimed to be the largest publicly available dataset of face images with gender and age labels for training. However, due to its enormous size, we'll be using the cropped version of the dataset, which is significantly smaller at ~7 GB.
A .mat file containing all the meta information is provided which can be loaded using either MATLAB or Python’s SciPy library. Some of the essential information are in the format as follows:
dob: date of birth (MATLAB serial date number)
photo_taken: year when the photo was taken
full_path: path to the image
gender: 0 for Female and 1 for Male, NaN if unknown
face_score: detector score (the higher the better).
second_face_score: detector score of the face with the second highest score. (useful to ignore images with more than one face). And this is NaN if no second face was detected.
In our case, Python will be employed and SciPy’s loadmat method can be used to load the metadata.
Cleaning up noisy labels
Looking at the metadata’s description, it can be observed that there’s a face score which can be used to judge the quality of the images being fed to the network. With this, a threshold can be set wherein images with scores no lower than a certain value (say 3) are allowed.
Also, it might be possible that some annotations requiring numeric values may contain invalid formats. To overcome this, a module exists in NumPy to check whether an array contains non-numeric values (NaN) which can filter such noisy labels.
Concerning age, it is made sure the age classes fall in a valid range (0~100).
Now all these masks are combined to get a subset of the dataset containing a rich set of well annotated faces, mostly free from distortion.
Having prepared the denoised annotation lists, the images are then paired with their respective age and gender tuples by employing a nifty trick in Python: chaining the dict and zip operations.
With this, a dictionary containing the names of images as keys and (gender, age) labels as values wraps up the ground truth data. These key-value pairs can then be loaded by a generator that can scale the images to any desired size and apply transformations to them.
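A sketch of that pairing trick, using the metadata field names from above (ground_truth is a name I am assuming):

# Each key is an image path, each value a (gender, age) tuple, for the rows that passed the mask
ground_truth = dict(zip(full_path[mask], zip(gender_classes[mask], age_classes[mask])))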
Splitting data for training
The data is split into training (80%) and validation (20%) sets.
Loading data
A custom image data generator for Keras is designed to load data in batches using the training and validation keys prepared in the preprocessing stage.
This is like any other generator, except that it takes two annotated targets (gender, age) from the ground truth for the neural network's two outputs. This implies that we have two outputs that need to be vectorized before they can be passed to the model. This can be done using to_categorical from Keras's NumPy utilities package, which converts a class vector (integers) into a binary class matrix.
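For instance, for a batch of integer labels (the variable names here are hypothetical):

from keras.utils import to_categorical

gender_targets = to_categorical(genders, num_classes=2)  # binary class matrix for gender
age_targets = to_categorical(ages, num_classes=101)      # one row per sample, 101 age classes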
Data Generator
When it comes to the images themselves, the generator uses augmentations with regard to variations in saturation, brightness, lighting, contrast, and horizontal/vertical flip transformations.
The data can then be wrapped in a list of dictionaries
Model Input
The Model
MobileNet is included as one of the applications in Keras. This convolutional neural network, excluding the top layer, serves as the base for the model, with a custom-designed classification block replacing the scrapped layer. This block gains translation invariance from Global Average Pooling; as a side note, the pooling layer also allows the input image to be of any size. Overfitting is alleviated by means of dropout regularization with a mildly aggressive drop rate of 0.5, and we then apply a dense layer of size 1024 to mix signals from the former layers and extract higher-level notions.
Model Architecture
Two additional dense layers comprising softmax classifiers, one each for gender and age, are then connected to the fully connected layer, topping off the network. The dense layers here work like a traditional feedforward neural network: they connect the 1024 higher-level features from the previous layer to the final predictions for gender and age. Now we have the model ready for training.
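To make the wiring concrete, here is a minimal Keras sketch of that architecture as described; the layer sizes follow the article, while the head names and activation choices are my assumptions:

from keras.applications.mobilenet import MobileNet
from keras.layers import GlobalAveragePooling2D, Dropout, Dense
from keras.models import Model

# MobileNet base without its top classification layer
base = MobileNet(input_shape=(224, 224, 3), include_top=False, weights='imagenet')

x = GlobalAveragePooling2D()(base.output)  # translation invariance, frees the input size
x = Dropout(0.5)(x)                        # the mildly aggressive drop rate
x = Dense(1024, activation='relu')(x)      # mixes signals into higher-level features

gender_output = Dense(2, activation='softmax', name='gender')(x)
age_output = Dense(101, activation='softmax', name='age')(x)

model = Model(inputs=base.input, outputs=[gender_output, age_output])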
Training
The next step is to see what sort of accuracy can be obtained from the MobileNet-esque configuration. We'll start by training the model with an input size of 224 x 224 x 3.
For backpropagation, Stochastic Gradient Descent (SGD) will be used as the optimizer with an initial learning rate of 0.001; since we are retraining a model initialized with weights trained on ImageNet, this allows for faster convergence.
During training, a Learning Rate Scheduler decays the learning rate as the number of epochs rises.
LR Schedule
Also, to make sure the model does not overfit, a callback (ReduceLROnPlateau) is added to reduce the LR when a metric (say, validation loss or some other metric) has stopped improving after a certain number of epochs.
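Putting those pieces together, the training setup could look roughly like this; the decay curve and the ReduceLROnPlateau settings are illustrative assumptions, not the article's exact values:

from keras.optimizers import SGD
from keras.callbacks import LearningRateScheduler, ReduceLROnPlateau

model.compile(optimizer=SGD(lr=0.001),
              loss={'gender': 'categorical_crossentropy',
                    'age': 'categorical_crossentropy'},
              metrics=['accuracy'])

callbacks = [
    LearningRateScheduler(lambda epoch: 0.001 * (0.95 ** epoch)),   # decay LR as epochs rise
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5),  # back off when val loss stalls
]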
The data is loaded from an input directory and then split into training and validation sets for the model.
input_path specifies the path to the .mat file
images_path is the directory where the images are located
Results
The following observations were made in terms of accuracy and loss after training the model for 70 epochs on a GTX 1080 Ti. As can be seen from the following curves, the model converges fairly quickly in about 10 epochs and then gradually begins to stabilize.
Accuracy
Loss
Trying it out on real images
Before testing the model on real faces, the faces need to be localized. Detecting facial landmarks is a problem in its own right, so we won't go deeper into the subject here. It is a subset of the shape prediction problem: given an input image, a shape predictor attempts to localize key points of interest along the shape. In the context of facial landmarks, our goal is to detect the bounding boxes of a person's face.
For starters, we initialize the face detector (based on the Histogram of Oriented Gradients) along with dlib's facial landmark predictor.
For more on Histograms of Oriented Gradients, a link can be found in the references.
The pre-trained facial landmark detector inside the dlib library is used to estimate the location of 68 (x, y)-coordinates that map to facial structures on the face.
To understand how dlib’s facial landmark detector works, indexes of the 68 coordinates can be visualized on the image below:
68 facial landmark coordinates from the iBUG 300-W dataset
These annotations are part of the 68 point iBUG 300-W dataset which the dlib facial landmark predictor was trained on. There are other datasets available such as HELEN that uses 194 point model. Irrespective of which dataset is used, the same dlib framework can be leveraged to generate the bounding boxes.
When it comes to detecting the faces, a rectangular bounding box needs to be drawn using the facial landmarks generated by dlib. This is achieved using the methods dlib provides for obtaining the cropped positions of the detected faces (left, top, right, bottom) along with height and width. The images are then resized and preprocessed accordingly for consistency of the evaluation.
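A rough sketch of that detect-crop-resize step; the file path and the final preprocessing are placeholders:

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
img = cv2.imread('example.jpg')  # placeholder path

for rect in detector(img, 1):  # upsample once to catch smaller faces
    x1, y1 = max(0, rect.left()), max(0, rect.top())
    face = cv2.resize(img[y1:rect.bottom(), x1:rect.right()], (224, 224))
    # ...preprocess here, then model.predict(...) for gender and age...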
The bounding boxes allow for generating crops of faces in the images
The model may then be used to predict the age and gender as follows.
Prediction
Hopefully, you found the article to be a good read and useful in your quest for recognizing a person’s age and gender.
KinarR/age-gender-estimator-keras
Contribute to age-gender-estimator-keras development by creating an account on GitHub.github.com
References
IMDB-WIKI - 500k+ face images with age and gender labels
In this paper we propose a deep learning solution to age estimation from a single face image without the use of facial…data.vision.ee.ethz.ch
Real Time Facial Landmark Detection | eLgo Academy
In this post we are going to detect facial landmarks in a real time video using dlib and OpenCV. Facial landmarks are…elgoacademy.org
Histogram of Oriented Gradients
In this post, we will learn the details of the Histogram of Oriented Gradients (HOG) feature descriptor. We will learn…www.learnopencv.com
| Age and Gender Classification using MobileNets | 35 | estimating-age-and-gender-with-mobilenets-13eaee1e819c | 2018-08-21 | 2018-08-21 11:04:56 | https://medium.com/s/story/estimating-age-and-gender-with-mobilenets-13eaee1e819c | false | 1,629 | Engineering blog showcasing some innovation and creativity | null | null | null | Y Media Labs Innovation | ymedialabs-innovation | INNOVATION | ymedialabs | Machine Learning | machine-learning | Machine Learning | 51,320 | Kinar Ravishankar | null | 24b19c3c005e | kinar.ravishankar | 7 | 7 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-09-25 | 2018-09-25 12:23:46 | 2018-09-26 | 2018-09-26 11:11:43 | 4 | false | en | 2018-09-26 | 2018-09-26 11:11:43 | 1 | 13eaf0840991 | 3.167925 | 3 | 0 | 0 | The Traditional Approach | 4 | The Data Science Execution Approach
The Traditional Approach
All those who have worked in a typical Software Engineering Process tend to find it chaotic to understand and adopt the Data Science Process, largely due to lingering ambiguities around what Data Science work involves. In this article, I will explain how traditional systems blend with the new Data Science systems.
The typical Software Development Life Cycle, whether it follows a waterfall or an agile development approach, consists of one or more iterations of the following steps:
Typical SDLC Cycle
In this traditional approach, the data is fed into the system and based on the logic of computation, the results are generated. This can be roughly depicted as below:
Traditional Approach
The Data Science Approach
Now, this is what happens in the Data Science approach: the results of the traditional systems become an input, and the computational output of this approach is the model, or the program, which can then be applied to any other data received from the traditional systems.
Data Science Approach
Analogous to the traditional approach, the following steps are recommended in order to make Data Science successful in any business:
Data Science Execution Process
1. Sense the need – The business drivers should understand and realize the importance of data science as a separate practice. Ask whether data science is really required, and create business scenarios where data science can bring value.
2. Strong Data Science Process – The processes and policies around a data science project should be set up so that execution is quick to develop and deploy. Governance of data should be well defined. Sensitive or personal information should be handled in a manner that ensures it is not tampered with.
3. Clear Objectives and KPIs – The outcomes of a particular project should be clearly defined in terms of numbers, figures, percentages, dates, timelines, etc. The more clarity and crispness in the outcome, the easier it is to develop and measure KPIs against results.
4. Create a data science team – An effective data science team has experts like Solution Architects, Big Data Architects, Big Data Engineers, Back-end Developers, Front-end Developers, Data Scientists, Machine Learning Engineers, Business Intelligence (BI) Experts, Domain Experts and Statisticians. Based on the gravity of the project, all or a few of these experts should be brought together.
5. Make data available – Once the need is established, data should be made available to the data team. Ensure any wariness about data sharing is overcome. This is the vital step in creating an integration touch point with the traditional systems. Data from one or more traditional systems might flow into the data science system, and in some instances the outputs or predictions of machine learning might have to be integrated back into the traditional systems. A smooth, unhindered flow of data is a must.
6. Robust Architecture Design – The architecture design of the data science project should be carefully thought through. Choices range from using Big Data tools like Spark or Hadoop to deciding between a remote, cloud or local implementation.
7. Applying a suitable Algorithm – Using the newest or fanciest algorithm might not necessarily solve your problem. Start with the oldest and most trusted basic algorithms, get baseline metrics, and then gradually build up with ensembles or the more complicated ones.
8. Clear Visualization and Insights – The outcomes of the data science project need to be conveyed effectively back to the business stakeholders. Only when the results are visualized as a compelling story will they have an impact on the business decision-making process.
Conclusion
Once the above processes are well defined and business stakeholders are in consensus to work towards achieving the data science objective, the traditional teams will work closely with the data science team in attaining the goals, and together create yet another AI: an Artful Intelligent system.
References:
TDWI Checklist Report — 7 steps for executing a successful Data Science Strategy
| The Data Science Execution Approach | 5 | the-data-science-execution-approach-13eaf0840991 | 2018-09-26 | 2018-09-26 11:11:43 | https://medium.com/s/story/the-data-science-execution-approach-13eaf0840991 | false | 654 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Umaima Tinwala | null | ef2af632386f | umaima21 | 2 | 6 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 1c1ab798b6c6 | 2017-12-14 | 2017-12-14 17:08:51 | 2017-12-14 | 2017-12-14 20:21:36 | 0 | false | en | 2018-01-05 | 2018-01-05 19:12:18 | 7 | 13ebb980c89b | 3.667925 | 1 | 0 | 0 | I recently needed some firewall logs for east-west traffic to hunt down some of the computers in our environment that most frequently… | 5 | Parse non-Microsoft Logs in PowerShell
I recently needed some firewall logs for east-west traffic to hunt down some of the computers in our environment that most frequently communicate with a certain group of servers.
I was able to bug one of our network team members to get the needed CSV files from our non-Microsoft firewalls. Upon first glance at one of the CSV files in Excel, I was met with an overwhelming amount of noise to the tune of about 20,000 rows that would be an absolute bear to wade through manually.
The main item that I wanted to extract from the logs was: What are the top 10 computers that are communicating with each of these servers? Secondly, I wanted to find out their hostnames, which was a piece of information that the logs did not include.
Here’s how I went about getting the needed information.
I want to note here that the following techniques extract only a couple of different types of information from one type of log file, but the same principles and techniques can be applied to many other data-parsing situations in PowerShell.
I first needed to verify that the CSV file was formatted in a PowerShell-friendly way, meaning that the names of the various properties in the CSV file are in the first row. If the first several rows are occupied by miscellaneous information other than log entries, then just delete all the rows above the row that includes the property names and save it as a new file. If you have the opposite situation, where all you have is log entries and no property names, I'll give you a solution in the following example.
Once your CSV is ready, run the following command:
$variableName = Import-Csv csvfilepath.csv
If your CSV has no property names in the first row, you can add these with the following optional parameter, “Header”. (You can also just add them into the CSV file before you import it.)
$variableName = Import-Csv csvfilepath.csv -Header "PropertyName1", "PropertyName2", "PropertyName3"
What the previous command allowed us to do was import the data into a variable that we created, which now houses all of our log entries in a PowerShell-centric format. We can now work with this data in the same way that we would with the output from a popular command like “Get-Process”.
We can now begin parsing our data.
$variableName | Select-Object -ExpandProperty "PropertyName1"
You may have many different properties (columns) associated with each of your log entries (rows), but right now we want to look at just one property of each entry: the IP address. What the previous command allowed us to do is look only at the IP addresses and no other properties. This is in line with the PowerShell way of scripting: filter left, format right. We want to filter down to the needed data right away, which makes the script run more efficiently since the following commands in the pipeline won't be bogged down by unnecessary data.
While the results are definitely slimmed down a bit, there are still just as many of them and they are still just as difficult to wade through. Here's where we start narrowing down our output to get to our desired outcomes: the IP addresses with the most entries and their hostnames.
There’s an extremely handy PowerShell command that will allow us to group all of the duplicate entries in our CSV file into single rows as well as list the count that each of those duplicate entries occurs. That command is “Group-Object”.
$variableName | Select-Object -ExpandProperty "PropertyName1" | Group-Object | Select-Object Count, Name
We’re getting much closer as we just grouped all of our duplicate log entries into single rows and we’re only selecting the needed output from the “Group-Object” command, which are the “Count” and “Name” columns.
We can now both sort the output as well as filter down to only the top 10 results with the following command.
$variableName | Select-Object -ExpandProperty "PropertyName1" | Group-Object | Select-Object Count, Name | Sort-Object Count -Descending | Select-Object -First 10
We just sorted our output by the "Count" property, outputting the results in descending order (largest to smallest). We then passed those results on to "Select-Object" and used the "First" parameter to include only the first 10 results: the IP addresses with the most entries.
The results from the last command satisfy our first goal. Now we want to look up the hostnames that go along with our 10 IP addresses. To do that, we could use a command like Nslookup (the old CMD utility) or Resolve-DnsName (the newer PowerShell utility) to manually look up each of the 10 entries in 10 separate commands, but that would defeat the purpose of using PowerShell to automate. We can actually use a slightly modified version of the last command to pipe those results into even the pre-PowerShell utility Nslookup and get our desired outcome.
$variableName | Select-Object -ExpandProperty "PropertyName1" | Group-Object | Select-Object Count, Name | Sort-Object Count -Descending | Select-Object -First 10 -ExpandProperty Name | Nslookup
The modification (-ExpandProperty Name) extracts only the IP addresses and passes those strings on to Nslookup, which then gets us our hostnames.
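If you'd rather stay with the native PowerShell utility, the same pipeline should also work with Resolve-DnsName, which accepts names from the pipeline (adding -ErrorAction SilentlyContinue skips addresses without a reverse DNS record):

$variableName | Select-Object -ExpandProperty "PropertyName1" | Group-Object | Select-Object Count, Name | Sort-Object Count -Descending | Select-Object -First 10 -ExpandProperty Name | Resolve-DnsName -ErrorAction SilentlyContinue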
Hopefully this article has given you some ideas on how you can parse logs or data in general with PowerShell.
Further Reading and References:
Using Variables to Store Objects: https://docs.microsoft.com/en-us/powershell/scripting/getting-started/fundamental/using-variables-to-store-objects?view=powershell-5.1
Import-Csv: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/import-csv?view=powershell-5.1
Select-Object: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/select-object?view=powershell-5.1
Group-Object: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/group-object?view=powershell-5.1
Sort-Object: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/sort-object?view=powershell-5.1
Nslookup: https://technet.microsoft.com/en-us/library/bb490950.aspx
Resolve-DnsName: https://technet.microsoft.com/en-us/library/jj590781(v=wps.630).aspx
| Parse non-Microsoft Logs in PowerShell | 1 | parse-non-microsoft-logs-in-powershell-13ebb980c89b | 2018-04-19 | 2018-04-19 00:52:33 | https://medium.com/s/story/parse-non-microsoft-logs-in-powershell-13ebb980c89b | false | 972 | PowerShell blog primarily focused at beginners that don’t want to just copy & paste solutions, but want to understand what is going on. | null | null | null | PowerShell Explained | powershell-explained | POWERSHELL,MICROSOFT,AUTOMATION,SCRIPTING,SYSADMIN | paul_masek | Microsoft | microsoft | Microsoft | 19,490 | Paul Masek | IT Polyglot (Windows Systems Engineer / Windows SysAdmin / Linux SysAdmin / PowerShell Enthusiast) who dislikes repetitive tasks and loves automation. | f9c0488adabd | paulmasek | 9 | 34 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-09-12 | 2018-09-12 13:25:18 | 2018-09-12 | 2018-09-12 13:25:41 | 0 | false | en | 2018-09-12 | 2018-09-12 13:25:41 | 1 | 13ee17024b85 | 0.777358 | 0 | 0 | 0 | The first ever State of AI in Marketing Report of its kind dives deep into how leading brands are using AI and making the most of their… | 2 | The State of AI in Marketing: Activating Customer Data for AI Powered Marketing (from aspirations to reality)
The first State of AI in Marketing report of its kind dives deep into how leading brands are using AI and making the most of their customer data.
Built on thousands of survey responses, this independent research reveals how leading B2C companies are using their customer data and AI to exceed goals, and how struggling organizations can keep from falling further behind.
What you’ll find in this report:
The top reasons marketers are held back from using AI in their marketing. (and how to overcome these obstacles)
The relationship between customer data usage, AI, and achieving revenue goals (and what executives and practitioners need to understand about it).
What leading organizations are doing with AI in their marketing (and what makes them so successful).
How lack of access to your customer data is holding your marketing team back (and what you can do about it).
Marketers no longer have a problem with getting enough data; it's quite the opposite now. Marketers, and the consumer brands they represent, have more data than they know what to do with.
This report uncovers the key issues holding marketers back.
Download Now
| The State of AI in Marketing: Activating Customer Data for AI Powered Marketing (from aspirations… | 0 | the-state-of-ai-in-marketing-activating-customer-data-for-ai-powered-marketing-from-aspirations-13ee17024b85 | 2018-09-12 | 2018-09-12 13:25:41 | https://medium.com/s/story/the-state-of-ai-in-marketing-activating-customer-data-for-ai-powered-marketing-from-aspirations-13ee17024b85 | false | 206 | null | null | null | null | null | null | null | null | null | Marketing | marketing | Marketing | 170,910 | Ivy | null | 151faab81df8 | ivyboyd | 5 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-02 | 2017-10-02 10:44:19 | 2017-10-02 | 2017-10-02 11:25:01 | 1 | false | en | 2017-10-02 | 2017-10-02 11:25:01 | 5 | 13ee8da0571a | 1.154717 | 5 | 0 | 0 | For decades, we have listened to a one-way radio to get our daily dose of news and music. Spotify came along and redefined the whole… | 4 | CityFALCON launches Interactive Personalised News on Amazon Alexa
For decades, we listened to one-way radio to get our daily dose of news and music. Spotify came along and redefined the whole experience of consuming music. And now we, at CityFALCON, are changing your consumption experience for news.
Get started by activating our skill for English-US or English-UK on your Echo.
You can create new watchlists, add topics to follow (from stocks, commodities, foreign exchange, indices, cryptocurrencies, and more), and identify opportunities through trending topics. You can also give feedback on individual stories by liking or disliking them. CityFALCON delivers a personalised experience, so the more you interact with the feed, the better it gets.
Here are some possible interactions with CityFALCON on Amazon Echo: “Alexa, ask CityFALCON what are the top stories for Apple”, “Create a new watchlist”, “Add Google to my watchlist”, “What is trending right now”, and more. Read more about what you can do with Alexa here.
As a start-up, this is our minimum viable product (MVP) and hence just a start. We are constantly working on improving the service. Currently, we are part of the Kickstart Accelerator program in Switzerland, where we have an opportunity to test our product in the Swiss market and work together with local corporates.
Here is our vision for the future:
We’ll soon launch our service on Microsoft Cortana and Google Home. If you’d like to sign up for early access, please register your interest here.
| CityFALCON launches Interactive Personalised News on Amazon Alexa | 6 | cityfalcon-launches-interactive-personalised-news-on-amazon-alexa-13ee8da0571a | 2018-01-23 | 2018-01-23 01:42:54 | https://medium.com/s/story/cityfalcon-launches-interactive-personalised-news-on-amazon-alexa-13ee8da0571a | false | 253 | null | null | null | null | null | null | null | null | null | Alexa | alexa | Alexa | 2,768 | Ruzbeh Bacha | #FinTech #Entrepreneur. Founder cityfalcon.com. Love Salsa, Bachata, Ping Pong | 50e680362cde | ruzbehb | 1,835 | 841 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-21 | 2017-10-21 11:56:46 | 2017-10-22 | 2017-10-22 10:31:42 | 3 | false | en | 2017-12-02 | 2017-12-02 17:01:37 | 1 | 13ef4db4c23a | 1.25566 | 4 | 0 | 0 | Last week was introduction week. Here are three things I learnt: | 4 | UCL ML Academy Week 1 : The Machine Learning Landscape
Last week was introduction week. Here are three things I learnt:
Developing a Machine Learning solution is as much about framing the problem from a business perspective and having a thorough understanding of the data as it is about choosing the right algorithm: starting off with a clear hypothesis, understanding the data generating process and the inherent biases in the data, and sense-checking the outputs make up three quarters of the Machine Learning workflow
8 steps of a machine learning workflow
The Machine Learning Canvas (very similar to the Business Model Canvas) is an effective one-page tool for taking a data/business problem and designing a Machine Learning solution around it: it helps keep everyone, from data scientists and engineers to the management team, aligned on the same objectives and goals
Organisations need the right mindset in their people to make the transition from a data-later organisation to a data-first one: managers with data-driven thinking and data scientists with managerial thinking who work together in a connected manner
Strategy map for transitioning from data later to data first organisation
Next week it’s Marketing Analytics — stay tuned for another post in a few days.
| UCL ML Academy Week 1 : The Machine Learning Landscape | 5 | week-1-the-machine-learning-landscape-13ef4db4c23a | 2017-12-02 | 2017-12-02 17:01:39 | https://medium.com/s/story/week-1-the-machine-learning-landscape-13ef4db4c23a | false | 187 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Neeru Ravi | Strategy consultant; Cambridge engineering graduate; interested in technology. | 1a3324c9cc9 | neeruravi | 11 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-09 | 2018-02-09 19:48:24 | 2017-12-14 | 2017-12-14 00:00:00 | 1 | false | en | 2018-02-09 | 2018-02-09 19:50:54 | 7 | 13ef5529eef7 | 1.769811 | 1 | 0 | 0 | In a public press release, ThinkGenetic announced a collaboration with FDNA to help patients with undiagnosed genetic diseases. FDNA uses… | 5 | ThinkGenetic Integrates with FDNA’s Face2Gene to Help Undiagnosed Patients Find Answers
ThinkGenetic CEO, Dave Jacob and FDNA CEO Dekel Gelman speaking together at a recent event for Rare Patient advocacy.
In a public press release, ThinkGenetic announced a collaboration with FDNA to help patients with undiagnosed genetic diseases. FDNA uses AI to detect physiological patterns that reveal genes affecting health. Their application, Face2Gene, facilitates comprehensive and precise genetic evaluations using facial analysis, deep learning and artificial intelligence to transform big data into actionable genomic insights to improve and accelerate diagnostics and therapeutics.
The announced collaboration “enables ThinkGenetic to use FDNA’s Face2Gene suite of next-generation phenotyping applications and user network. Use of Face2Gene will complement the ThinkGenetic SymptomMatcherTM web application, which empowers undiagnosed patients by offering a guided self-assessment of symptoms that may match an underlying genetic disease. SymptomMatcher users can do this from the comfort of their homes and discuss the results with their doctors.”
According to Global Genes, there are more than 350 million people globally with chronic symptoms caused by a rare disease, 80% of which are genetic. Most remain undiagnosed. ThinkGenetic has been focusing efforts in 2017 to shorten the diagnostic odyssey with SymptomMatcher, a resource to help people find possible explanations for symptoms that they or their loved ones are experiencing. Patients use SymptomMatcher to explore possible causes of their undiagnosed symptoms with the help of proprietary analysis engines and a wealth of detailed information about numerous genetic conditions. For patients with immediate questions about their results before they speak to their doctors, they can contact ThinkGenetic’s team of genetic counselors directly from the application. The team can now review details including Face2Gene’s advanced clinical and genomic insights, as well as securely refer the patient and the insights to a doctor for evaluation using the Face2Gene application.
Says ThinkGenetic CEO, Dave Jacob, “Our use of Face2Gene will allow our team to access much-needed insights through a secure web solution. We are proud to lead the way with FDNA using advanced technologies to capture clinically valuable patient information, while partnering with healthcare professionals to facilitate medical evaluation and intervention.”
In response, FDNA CEO Dekel Gelman commented, “ThinkGenetic is a welcome addition to our global network. Integrating Face2Gene into ThinkGenetic’s patient-focused services provides a comprehensive resource for genetic syndromes. This integration can only bring us closer to an ultimate goal of enabling precision medicine with artificial intelligence to help reduce the time to diagnosis for patients.”
To read the press release in its entirety, please visit https://news.thinkgenetic.com/press/thinkgenetic-fdna-face2gene-integration-announcement.
Originally published at news.thinkgenetic.com on December 14, 2017.
| ThinkGenetic Integrates with FDNA’s Face2Gene to Help Undiagnosed Patients Find Answers | 1 | thinkgenetic-integrates-with-fdnas-face2gene-to-help-undiagnosed-patients-find-answers-13ef5529eef7 | 2018-03-28 | 2018-03-28 19:38:13 | https://medium.com/s/story/thinkgenetic-integrates-with-fdnas-face2gene-to-help-undiagnosed-patients-find-answers-13ef5529eef7 | false | 416 | null | null | null | null | null | null | null | null | null | Genetics | genetics | Genetics | 2,844 | ThinkGenetic | Empower people alongside their journey of living with genetic conditions by aiming to reduce the time to a genetic diagnosis with accessible content and tools. | 2d32add04e70 | thinkgenetic | 6 | 12 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 6b7116b1d1 | 2018-03-28 | 2018-03-28 15:12:18 | 2018-05-02 | 2018-05-02 08:59:56 | 2 | false | en | 2018-05-07 | 2018-05-07 12:12:48 | 6 | 13f107d0dd32 | 4.636164 | 16 | 1 | 0 | This is part two of our blog posts on the SqueezeDet objection detection architecture. We highly recommend reading part one and going… | 5 | A deeper look into SqueezeDet on Keras
SqueezeDet loss function
This is part two of our blog posts on the SqueezeDet object detection architecture. We highly recommend reading part one and going through the tutorial for the KITTI dataset. We promised you some more juicy details on the implementation, and here they are. While you do not necessarily need to know this, it surely helps if you ever decide to implement some model in Keras yourself or to change the implementation to fit your needs. If you want to know more about the mathematics behind SqueezeDet, check out the original paper. Here we show you how to set things up for your own dataset and elucidate a couple of implementation details.
I don’t want to change anything, I just want it to run on my dataset!
Anchor shapes
The most important step, if you want to run SqueezeDet on your own dataset, is to adjust the anchor sizes. As we said before, think of these as a kind of prior distribution over the shapes your boxes should have. The better this prior fits the true distribution of boxes, the faster and easier your training will be. How do we determine these shapes?
First, load all ground truth boxes and images, and if your images do not all have the same size, normalize each box’s height and width by its image’s height and width. All images will be normalized before being fed to the network, so we need to do the same to the bounding boxes and, consequently, the anchors.
Second, perform a clustering on these normalized boxes. You can just use good old k-means without feature whitening and determine the number of clusters either by eyeballing or by using the elbow method. Sklearn has a nice implementation with tutorials. Since we are already at it, you can also do some sanity checks on your data, if you haven’t done so already. Check for boxes that extend beyond the image or have a zero or negative width or height.
If you are satisfied, add the cluster centroids of these new shapes for the anchors to your squeeze.config file, or change them in create_config.py so that all future configs have the new shapes.
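As a rough sketch of these steps (the cluster count of nine is an assumption here; pick yours by eyeballing or with the elbow method):

import numpy as np
from sklearn.cluster import KMeans

# boxes: an (n, 2) array of (width, height) per ground truth box,
# each normalized by its image's width and height as described above.
def sanity_check(boxes):
    assert np.all(boxes > 0), "found boxes with zero or negative size"
    assert np.all(boxes <= 1), "found boxes extending beyond the image"

def anchor_shapes(boxes, n_clusters=9):
    sanity_check(boxes)
    # Plain k-means without feature whitening, as suggested above.
    km = KMeans(n_clusters=n_clusters, random_state=0).fit(boxes)
    return km.cluster_centers_  # copy these centroids into squeeze.config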
Image format and sizes
SqueezeDet’s convolutions and poolings are set up in such a way that images with a height of 1248 and a width of 384 result in a grid of 78 x 24. Thus, if you change the size of the images, the number of grid cells changes accordingly.
If your images generally have a different format, you can change the ratio easily as long as you keep the total number of grid cells the same. An example would be a vertical format, such as documents, with a height of 768 and a width of 624, resulting in a grid of 48 x 39.
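The arithmetic behind these numbers appears to be a fixed overall downsampling factor of 16 (1248 / 78 = 384 / 24 = 16); that factor is inferred from the examples above rather than stated anywhere, so verify it against your checkout:

def grid_shape(image_height, image_width, stride=16):
    # Both dimensions should be divisible by the network's total stride.
    assert image_height % stride == 0 and image_width % stride == 0
    return image_height // stride, image_width // stride

print(grid_shape(1248, 384))  # (78, 24), the default setup
print(grid_shape(768, 624))   # (48, 39), the vertical document example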
If your images are smaller, we recommend stretching to default size. You can also upsample them beforehand. We tried a smaller image size with the resulting sparser grid, and it did not seem to work out.
If your images are bigger and you are not satisfied with the results of the default image size, you can try using a denser grid, as details might get lost during the downscaling.
If you want anything more fancy, you would have to change the architecture.
Model training
As for the actual training, we recommend starting with a small batch size of 1, 2, or 4. A small batch size entails high stochastic noise, which makes escaping a bad initial local minimum easier, at least according to some theories. You can also try different learning rates if the default of 0.01 does not work; meaningful rates are usually between 0.1 and 0.0001. Additionally, use the provided ImageNet weights. Even if your domain is completely different, training will go way smoother. If any readers have a clue as to exactly why this works, please let us know in the comments. It cannot be just the filters in the earlier layers, as this works even if you classify black and white images into two classes; even there, ImageNet weights make the network converge faster.
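To make that advice concrete, a configuration following it might look roughly like this; the key names below are placeholders of ours, so check create_config.py for the real ones in your checkout:

# Placeholder key names, for illustration only.
config = {
    "BATCH_SIZE": 2,                         # start with 1, 2 or 4
    "LEARNING_RATE": 0.01,                   # default; sweep 0.1 .. 0.0001 if training stalls
    "ANCHOR_SHAPES": anchor_shapes(boxes),   # centroids from the k-means sketch above
    "INIT_WEIGHTS": "imagenet.h5",           # the provided ImageNet weights
}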
I want to change the code, what do I have to look out for?
Separate Evaluation script
One of the first things you might notice is that training and evaluation have been split into two scripts. The first, train.py, performs the actual training and saves a checkpoint after each epoch. The second one, eval.py, periodically looks for new checkpoints and evaluates metrics and losses on the validation set. Why might you want to do this?
The first reason is that Keras currently does not support evaluating data generators in the TensorBoard callback. You could write a custom callback, but the training process is halted until all callbacks have finished sequentially, so if you compute complicated metrics during evaluation, your training may be delayed for quite some time. Another reason might be that you want to fully utilize your GPUs. While you can just run the training on multiple GPUs, this comes with an overhead. Splitting things up avoids it, as no direct interaction between training and evaluation is needed, and it also makes it easy to run one after the other.
Custom loading
In the training script, the loading is not done by the native Keras model.load_weights function, but by a custom one. If you take a peek inside, the function goes through all the weights and biases, checks the number of axes, and loads only the minimum overlap between your model’s weights and the save file’s. This enables you to load the weights of a model with a slightly different architecture, such as more filters in a convolutional layer or fewer classes in the prediction layer. Now, instead of training everything from scratch after a small tweak, you can reuse most of the costly obtained weights you already have.
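Below is a simplified sketch of the idea; unlike the real function, which reads the save file directly and copies the overlapping slice even when shapes differ, this version only copies tensors whose shapes match exactly:

def copy_matching_weights(target_model, source_model):
    # Copy weight tensors from source_model into target_model wherever
    # layer names exist in both models and the tensor shapes line up,
    # keeping the target's freshly initialized weights everywhere else.
    source_layers = {layer.name: layer for layer in source_model.layers}
    for layer in target_model.layers:
        src = source_layers.get(layer.name)
        if src is None:
            continue  # layer only exists in the tweaked architecture
        merged = [s if s.shape == c.shape else c
                  for c, s in zip(layer.get_weights(), src.get_weights())]
        layer.set_weights(merged)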
How to add things to TensorBoard
A big chunk of the eval.py script consists of creating TensorFlow variables, placeholders, and assign operations. At first glance, this seems a little overboard. Let’s say you want to write a new variable to TensorBoard. Intuitively, you might do something like this:
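In the TF1-style API of the time, a sketch of that pattern (evaluate_checkpoints() is a hypothetical stand-in for whatever produces your values):

import tensorflow as tf

val_loss = tf.Variable(0.0, name="val_loss")
tf.summary.scalar("val_loss", val_loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter("logs/eval")
merged = tf.summary.merge_all()

for step, loss in enumerate(evaluate_checkpoints()):  # hypothetical generator
    # Looks harmless, but .assign() builds a brand-new operation in the
    # default graph on every single iteration.
    sess.run(val_loss.assign(loss))
    writer.add_summary(sess.run(merged), step)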
You open TensorBoard and things look perfectly fine. But run this loop for a while and inspect the graph, and you will see that it has way more operations than expected:
This is because at every iteration, a new assign operation is created and every single one gets stored inside the TensorFlow graph. If you check the memory consumption, it will increase until no memory is available. Here is the proper way:
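Create the placeholder and the assign operation once, outside the loop, and only feed new values through them (again a sketch with illustrative names):

import tensorflow as tf

val_loss = tf.Variable(0.0, name="val_loss")
loss_input = tf.placeholder(tf.float32, shape=(), name="loss_input")
assign_op = val_loss.assign(loss_input)        # created exactly once
summary_op = tf.summary.scalar("val_loss", val_loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter("logs/eval")

for step, loss in enumerate(evaluate_checkpoints()):  # hypothetical generator
    # Reuse the same operations every iteration and feed only the value,
    # so the graph keeps a fixed size and memory stays flat.
    _, summary = sess.run([assign_op, summary_op], {loss_input: loss})
    writer.add_summary(summary, step)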
If at one point you want to write your own variables to TensorBoard, for example inside a custom callback, remember where the variables and operations are created. We hope this helps you with your own experiments.
| A deeper look into SqueezeDet on Keras | 87 | a-deeper-look-into-squeezedet-on-keras-13f107d0dd32 | 2018-06-12 | 2018-06-12 10:40:15 | https://medium.com/s/story/a-deeper-look-into-squeezedet-on-keras-13f107d0dd32 | false | 1,127 | Engineering Blog for omnius | null | null | null | omni:us | omnius | DEEP LEARNING,MACHINE LEARNING,COMPUTER VISION,NLP,COMPUTER ENGINEERING | omniusHQ | Machine Learning | machine-learning | Machine Learning | 51,320 | Christopher Ehmann | Scientific Engineer @ omni:us | 38773e62a6b1 | ehmann.christopher | 45 | 9 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-09-18 | 2018-09-18 19:31:09 | 2018-09-19 | 2018-09-19 05:48:18 | 2 | false | en | 2018-09-19 | 2018-09-19 05:48:18 | 6 | 13f11ab459f2 | 2.32673 | 0 | 0 | 0 | Greatest soft-skill scarcity: Communication | 1 | Communicating your findings: Don’t forget to see the forest from the trees
Greatest soft-skill scarcity: Communication
One trend I am beginning to see with data science practitioners and students alike is the difficulty in communicating their findings to business/policy stakeholders. After painstakingly cleaning the data, applying models, and interpreting the results, it’s time to present your findings to decision makers (and hopefully convince them why y̶o̶u̶r̶ ̶d̶e̶p̶a̶r̶t̶m̶e̶n̶t̶ your insights matter). That’s why we do this… not to win street cred on Kaggle.
Although communicating your findings is important, it is an often overlooked step that isn’t emphasized enough. I think this oversight is reflected in LinkedIn’s blog listing the soft skills companies want in workers. In fact, the blog states that communication skills are in great demand and are among the scarcest skills in the U.S. (Source: https://blog.linkedin.com/2018/april/19/the-u-s-is-facing-a-critical-skills-shortage-reskilling-can-be-part-of-the-solution)
source: https://business.linkedin.com/talent-solutions/blog/trends-and-research/2016/most-indemand-soft-skills
If you’re reading this, the odds are you’re a data scientist/analyst. So let’s turn the tables around. Imagine you’re an executive with little statistical background and some wily-looking data scientist starts blabbering on about p-values, heteroscedasticity, and GridSearchCV to tune hyperparameters. To finish it all off, the scientist ends by saying, “… and that’s why we can’t be 100% certain. We need more data.” Cue head banging on the table.
Recommendations:
Communication and presentation skills are huge topics that I won’t be able to cover in a few paragraphs. With that said, here are my top five suggestions that have served me well:
State your goal, methodology, and scope clearly: These should be the very first things to keep in mind for your presentation because the answers will influence the contents of your report.
Know thy audience: Adjust the statistical/technical details according to the audience’s level of knowledge! This video is a good example: https://www.youtube.com/watch?v=OWJCfOvochA
Clarity: It is tempting to use some boosted-tree model like XGBoost that would give you a really good score on Kaggle and call it a day. However, the problem with these models is that they are black boxes that are difficult to interpret. If stakeholders are okay with getting the best predictive score possible, that’s great! However, if interpretability is more important, consider using logistic regression.
Do your homework: You don’t have to be a subject matter expert on your data but a little research goes a long way. This sorta ties in with knowing your audience.
Practice smiling: I lived in four different countries while I was growing up and I ended up not speaking any languages perfectly for a very long time. Through this experience, I found out first hand that words matter in communication but a smile goes a long way. If you’re reading this post, I’m assuming you dread public speaking… and you know what? That’s okay. It takes a long time to nurture confidence. Be patient with yourself :)
Some helpful links:
How to work the room: https://www.nytimes.com/guides/smarterliving/be-better-at-parties
Coaching: https://www.betterup.co/
Body posture tips: https://www.inc.com/jeff-haden/8-powerful-ways-to-improve-your-body-language.html
| Communicating your findings: Don’t forget to see the forest from the trees | 0 | communicating-your-findings-dont-forget-to-see-the-forest-from-the-trees-13f11ab459f2 | 2018-09-19 | 2018-09-19 05:48:18 | https://medium.com/s/story/communicating-your-findings-dont-forget-to-see-the-forest-from-the-trees-13f11ab459f2 | false | 515 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Will J. Suh | Aspiring data scientist with a background in nuclear nonproliferation and trade controls. Talk pidgin to me 🤙🏽 | 37483b86bf8a | williamsuh | 25 | 27 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | d777623c68cf | 2018-04-06 | 2018-04-06 16:44:49 | 2018-04-06 | 2018-04-06 18:11:46 | 3 | false | en | 2018-11-01 | 2018-11-01 11:23:30 | 16 | 13f3ffdd15ce | 4.278302 | 299 | 7 | 1 | DeepMind has a new paper where researchers have uncovered two “surpising findings”. The paper is described in “Understanding Deep Learning… | 3 | Deep Learning’s Uncertainty Principle
Photo by zhang kaiyv on Unsplash
DeepMind has a new paper in which researchers have uncovered two “surprising findings”. The paper is described in “Understanding Deep Learning through Neuron Deletion”. In networks that generalize well, (1) all neurons are important and (2) the networks are more robust to damage. Deep Learning networks have behavior that reminds us of holograms. These results are further confirmation of my conjecture that Deep Learning systems are like holographic memories.
networks which generalise well are much less reliant on single directions than those which memorise.
Holograms are 3-dimensional images created by the interference of light beams. The encoding material of a hologram is a 2-dimensional surface that captures the light field rather than the projection of an image onto the surface. In general you can take this to a higher dimension, so a hologram for 4D space-time will be in 3D (one dimension less). This relationship, in which a lower dimensional object can represent a higher dimensional one, is in fact something our human intuition is unable to grasp. Our biological experience has taught us the reverse: 3D objects projecting onto 2D planes.
https://necessarydisorder.wordpress.com/2018/03/31/a-trick-to-get-looping-curves-with-lerp-and-delay/
In the beginning of the 20th century, light was discovered to have that perplexing quality of being both a wave and particle. This non-intuitive notion of quantum physics, this wave-particle duality, is also expressed as the Uncertainty Principle. A concept that is even more general than this is the recent discovery of the “holographic duality”. Apparently, nature also binds two different objects, the hologram and its higher dimensional projection in an equally bizarre manner:
When the particles are calm on the surface, as they are in most forms of matter, then the situation in the pond’s interior is extremely complicated.
If strongly correlated matter is thought of as “living” on the 2-D surface of a pond, the holographic duality suggests that the extreme turbulence on that surface is mathematically equivalent to still waters in the interior.
There exists a holographic duality in nature that may also translate to the workings of Deep Learning networks. The greater the generalization of a network, the more entangled its neurons, and as a consequence the less interpretable the network becomes. The uncertainty principle as applied to Deep Learning can be stated as:
Networks with greater generalization are less interpretable. Networks that are interpretable don’t generalize well.
Which led me to my conjecture in 2016 that “The only way to make Deep Learning interpretable is to have it explain itself.” If you think about it from the perspective of holographic duality, then it is futile to look at the internal weights of a neural network to understand its behavior. Rather, we best examine its surface to find a simpler (and less turbulent) explanation.
This leads me to the inevitable reality that the best we can do is to have machines render very intuitive ‘fake explanations’. Fake explanations are not false explanations, but rather incomplete explanations with a goal toward eliciting an intuitive understanding. This is what good teachers do, this is what Richard Feynman did when he explained quantum mechanics to his students.
In addition, this explains the real-world fragility of symbolic systems. Symbolic rules are abstract interpretations of a system. Artificial Intelligence as envisioned in the late 1950s was based on creating enough logical rules to arrive at human intelligence (see: Cyc). Decades of research in AI have been wasted on this top-down approach. An alternative, more promising bottom-up approach (see: Artificial Intuition) is the basis of Deep Learning.
The Holographic Principle is a very compelling tell of how Deep Learning works. Unfortunately, just like Quantum Physics, it belongs to a realm that is simply beyond our own intuition. Sir Roger Penrose may have been on the right track when he speculated that brains work as a consequence of quantum behavior. However, I have doubts that this is true.
I will grant the observation that we simply don’t have enough detail as to how the brain actually works (see: “Surprise Neurons are More Complex”). There are also good arguments that certain animal abilities (e.g., bird brains for navigation) are enabled by unique mechanisms found in the physical world.
However, the idea that quantum effects are the explanation for not just human cognition but animal cognition is a conjecture that is based on very sparse experimental evidence. The brain is likely more complex than our present artificial models, however that complexity (like turbulence) does not require quantum effects as an explanation. If you can explain turbulence as originating from quantum effects then that’s a similar kind of argument you will have to make about cognition. This is the argument that is missing with Penrose.
Penrose argues that you must have quantum effects to arrive at cognition. This runs contrary to majority opinion in neuroscience and in Deep Learning. I am, however, proposing something different: that quantum-like uncertainty is present in neural networks. There is emergent complexity in reality that is due to the internal interactions of massive populations (like the weather). I propose that our brains (similar to Deep Learning systems) exhibit this holographic duality entirely within the regime of classical mechanics (i.e., composed of deterministic subcomponents).
Fun Fact: J.J. Thomson won the 1906 Nobel in Physics for experiments showing electrons are particles. His son G. P. Thomson won the 1937 Nobel Prize in Physics for showing electrons are waves.
'Omnigenic' Model Suggests That All Genes Affect Every Complex Trait | Quanta Magazine (www.quantamagazine.org)
[1704.01552v1] Deep Learning and Quantum Physics: A Fundamental Bridge (arxiv.org)
[1705.05750] Holography as deep learning (arxiv.org)
http://aclweb.org/anthology/W18-5444
Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution
| Deep Learning’s Uncertainty Principle | 1,630 | deep-learnings-uncertainty-principle-13f3ffdd15ce | 2018-11-01 | 2018-11-01 11:23:30 | https://medium.com/s/story/deep-learnings-uncertainty-principle-13f3ffdd15ce | false | 988 | Deep Learning Patterns, Methodology and Strategy | null | deeplearningpatterns | null | Intuition Machine | intuitionmachine | DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DESIGN PATTERNS | IntuitMachine | Deep Learning | deep-learning | Deep Learning | 12,189 | Carlos E. Perez | Author of Artificial Intuition and the Deep Learning Playbook — Intuition Machine Inc. | 1928cbd0e69c | IntuitMachine | 20,169 | 750 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 3650599ae4e2 | 2018-01-23 | 2018-01-23 16:58:35 | 2018-01-29 | 2018-01-29 19:41:15 | 15 | false | en | 2018-06-16 | 2018-06-16 15:16:30 | 16 | 13f63d7f09d8 | 9.254717 | 65 | 0 | 0 | By Christopher Skeels and Yash Patel | 5 | Caviar’s Word2Vec Tagging For Menu Item Recommendations
By Christopher Skeels and Yash Patel
Background
Recommendation Collections
At Caviar, Square’s food ordering app, one way we connect diners to great food is with restaurant and menu item recommendations. Rather than presenting a single overwhelming stream of recommendations, we segment them into distinct recommendation collections, each characterized by a common theme (e.g., “Delivery Under 30 Minutes”, “Pizza”, and “Recommended For You”). Individual collections are powered in various ways, including category taggings like cuisine type and dietary restriction, editorialized “best of” lists, custom algorithms, and machine learning:
Current restaurant-focused Caviar home feed recommendation collections.
Tagging
Of the collection sources, category taggings (e.g., “Pizza”, “Vegetarian”, and “Family Style”) are a simple yet intuitive and powerful way to segment content for use in a collection. The primary issue is in collecting and maintaining useful, accurate taggings. We’ve had success manually tagging our restaurants, but as we’ve grown, we’ve been challenged to do the same for our menu items, which outnumber our restaurants by more than two orders of magnitude. As we continue to grow, hand-tagging our restaurants will also become impractical. To head this off, we are investing in automated methods of tagging, starting with menu items.
Who’s Got Time To Supervise?
While automated methods like string matching and text classification are the typical paths to more scalable tagging, they have some non-trivial costs. Regular expressions and literal string matching are generally brittle, and the rules have to be hand-crafted and evolve over time as exceptions are found. Supervised text classification is less brittle but still requires hand-crafting a training set. A notable challenge is selecting the training examples. For example, if you want to train a pizza classifier, do you need to provide a negative example for all menu items that aren’t pizza? How many examples do you need for each type of “not pizza”? Is “dessert pizza” a positive or negative example? Et cetera.
Not So Fast…
In an attempt to avoid these costs, we’ve explored the viability of unsupervised machine learning methods such as clustering and similarity search. So far, we’ve found clustering and topic modelling to be insufficient alone. One problem is that clustering algorithms produce a single set of clusters, and for each of them we must manually decide which tag to assign. An additional problem is that we have to provide the algorithm with a subjective number of expected clusters. While we may be able to estimate the upper bound on the number of cuisine types or dietary restrictions possible and supply that to the algorithm, we have no guarantee that the output clusters will conform to the kinds of intuitive groupings we want to tag our menu items with. We’ve found similarity search, in contrast, more promising.
Searching For A Free Lunch
At its simplest, similarity search compares a query item against a set of candidate items using an appropriate metric to determine which candidates are most similar to the query. If we can treat both our menu items and tags of interest as comparable items, then we can use similarity search to automatically classify our menu items with the tags most similar to them. Ideally, we can transform our tags’ and menu items’ text-based representations into fixed-length numeric vectors that can be meaningfully compared using a simple metric like cosine similarity (e.g., where the vector for tag “Sandwich” would be cosine-similar to the vectors for menu items like “Reuben” and “Grilled Cheese”) using a method such as Word2Vec, GloVe, or TF-IDF. Note that the use of curated tags here is a subtle, interesting departure from the typical use of similarity search in recommender systems where items are only compared with each other. Below, we walk through our recent efforts using Word2Vec-based similarity search to classify menu items for use in recommendation collections.
Walkthrough
Approach
As mentioned above, our approach was to recast what would typically be a classification problem as a similarity search problem. The basic steps followed were:
Train a Word2Vec model using Caviar’s restaurant menus as the corpus.
Convert each menu item into a vector using the Word2Vec model.
Curate a set of candidate tags and perform the remaining steps for each distinct set.
Convert each candidate tag into a vector using the Word2Vec model.
For each menu item vector, compare with each candidate tag vector and classify the menu item as the candidate tag that was most similar.
Optionally, filter out menu items whose most similar candidate tag was below a minimum threshold.
Validate the classification results via cluster visualization.
Select menu items for a given tag and display as a recommendation collection in the Caviar app.
Steps 1 & 2: Word2Vec Model + Vector Averaging
Word2Vec is a neural network word embedding technique that learns a vector space model from a corpus of text such that related words are closer in the space than non-related words. This allows for interesting operations like similarity comparisons and vector algebra on concepts. For our purposes, we’d like to see a vector space model similar to the following:
Example Word2Vec vector space trained with menu data where similar foods are closer to each other.
We used the Gensim package to train a Word2Vec model on a corpus of Caviar restaurant menus. While there are a number of good pre-trained Word2Vec models based on large corpuses such as Wikipedia and Google News that we tried first, we found they did not perform as well as our custom model trained on Caviar’s restaurant menus alone. It appears that food language in menus is qualitatively different than food language in general sources like encyclopedias and news. This is something we plan to explore more in the future.
One limitation of Word2Vec for our purposes is that it only deals with individual words. For our formulation to work, we need to be able to create fixed-length vectors for the many multi-word tags and menu items that we have (e.g., the tag “Indian Curry” and the menu item “Big Bob’s Chili Sombrero Burger”). There are advanced techniques for this such as Doc2Vec, but we found that simply averaging the vectors for each word in a phrase worked well in practice.
Example vector averaging of “Indian Curry”.
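To make steps 1 and 2 concrete, here is a minimal sketch; menu_corpus and the hyperparameters are illustrative assumptions rather than our production settings (the size argument follows the Gensim 3.x API):

import numpy as np
from gensim.models import Word2Vec

# menu_corpus: a list of tokenized menu item names, e.g.
# [["panang", "curry"], ["grilled", "cheese"], ...]
model = Word2Vec(menu_corpus, size=100, window=5, min_count=2)

def phrase_vector(phrase, model):
    # Average the word vectors of a multi-word phrase,
    # skipping any words the model has never seen.
    words = [w for w in phrase.lower().split() if w in model.wv]
    if not words:
        return None
    return np.mean([model.wv[w] for w in words], axis=0)

indian_curry_vec = phrase_vector("Indian Curry", model)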
Steps 3 & 4: Candidate Tag Selection
A key step in our approach is selecting the candidate tags. Our primary method is to compile sets of related tags such as cuisine types (e.g., “Pizza”, “Burger”, and “Thai Curry”) and dietary restrictions (e.g., “Vegetarian”, “Vegan”, and “Gluten Free”) where we expect most menu items to be best classified by only one of the tags in the set. We used cluster visualization to demonstrate this approach with cuisine types. An additional promising method is crafting individual multi-word tags that capture a broader concept (e.g., “Cake Cookies Pie”) and match menu items beyond just those listed in the tag (e.g., match cupcake and donut items for “Cake Cookies Pie”). Crafting these types of tags is more of an iterative process, akin to coming up with a good search engine query. We explored this approach with a couple of in-app collections, as shown in a later section.
Looking ahead to Step 7: Validation via Cluster Visualization
As mentioned in the introduction, we wanted to avoid the cost of supervised methods, specifically the creation of a ground truth set for training and validation. However, we still needed a way to validate our tags, so as a compromise, we leveraged interactive cluster visualization to do ad hoc manual validation instead. We adapted the Tensorflow Embedding Projector for this purpose:
Caviar Menu Item Classification Projector.
Caviar Menu Item Classification Projector with menu item selected.
Steps 5–7: Cuisine Type Visualization
By following the outlined steps for every menu item with cuisine types as the candidate tag set, we obtained a cosine similarity score for each of the tags. We classified each menu item as the cuisine type with the highest similarity score. The following sequence of figures highlights the explorations and validations we performed on the resulting data.
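Before the figures, here is roughly what that classification step could look like in code; this is a sketch with illustrative names, and the optional threshold implements step 6:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def classify_item(item_vec, tag_names, tag_vecs, min_similarity=None):
    # Score the item against every candidate tag vector at once.
    scores = cosine_similarity(item_vec.reshape(1, -1), tag_vecs)[0]
    best = int(np.argmax(scores))
    # Without a threshold every item gets a tag ("high recall"); with
    # one, low-confidence matches are filtered out ("high precision").
    if min_similarity is not None and scores[best] < min_similarity:
        return None
    return tag_names[best]

# e.g. classify_item(phrase_vector("Pizza Fries", model),
#                    cuisine_tags, cuisine_tag_vecs, min_similarity=0.7)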
In the following figure, clusters are colored by most similar tag with no minimum similarity threshold set. This is our “high recall” scenario. It’s quite noisy, and we see a number of misclassified menu items. In some cases, this is because the menu item is just hard to classify. In many cases, though, it is because the menu item’s true cuisine type is not present in our cuisine type candidate tag set (e.g., we didn’t include tags for “Kombucha”, “Cheesecake”, or “Coconut Water”), and with no minimum similarity threshold set, an incorrect naive classification is made.
Colored by most similar tag with no minimum similarity threshold set, our “high recall” scenario.
The following figure demonstrates one of the many correct classifications:
Panang Curry correctly classified as “Thai Curry”.
The following figure demonstrates one of our misclassification scenarios. In this case, Tom Yum Noodle Soup is incorrectly classified as “Thai Curry”. This is harder to classify correctly due to Tom Yum being more closely related to Thai cuisine than to typical soups like chicken noodle and minestrone:
Tom Yum Noodle Soup incorrectly classified as “Thai Curry” because of closer association with Thai cuisine than soup.
The following figure demonstrates our primary misclassification scenario. In this case, Gyros Plate is incorrectly classified as “Fries” due to the menu item’s cuisine type (e.g., “Mediterranean” or “Gyros”) not being present in our cuisine type candidate tag set:
Gyros Plate incorrectly classified as “Fries” due to missing tag.
In the following figure, clusters are colored by most similar tag with a minimum similarity threshold set (i.e., only classifications of 0.7 cosine similarity or greater are kept). This is our “high precision” scenario and it is much better! We see good visual separation via inspection, and we find almost no misclassified menu items beyond the ambiguous cases we discuss next:
Colored by most similar tag with a minimum similarity threshold set, our “high precision” scenario.
In the next figure, we wonder, “Is it pizza or is it fries?” The answer is “Both!” The Pizza Fries menu item has high scores for both the “Pizza” and “Fries” tags, with “Pizza” edging out “Fries”. That the menu item is located equidistantly from the two distinct clusters demonstrates one of the intuitive strengths of this method. There is no reason we couldn’t classify items like this with multiple best tags:
Is it pizza or is it fries? Answer: both! The Pizza Fries menu item has high scores for both the “Pizza” and “Fries” tags, with “Pizza” edging out “Fries”.
In the next two figures, clusters are labeled with the classified tag rather than menu item name. The first figure is colored by the classified tag and the second figure is colored by similarity to the “Pizza” tag. In the first figure, we see “Pasta” and “Pie” items are closer to “Pizza” than other less similar items like “Sushi” or “Dumplings”. In the second figure, thanks to the similarity gradient, we can easily see a range from “Pizza” to “Not Pizza”. This is another demonstration of the intuitive mapping between the spatial arrangements and the Word2Vec-based similarities that allowed us to perform ad hoc validations on our results:
Colored and labeled by best tag, this view demonstrates the intuitive spatial arrangements we get from Word2Vec similarities such the “Pizza” items being near the “Pasta” items.
Colored by similarity to “Pizza” tag, we see a similarity range from “Pizza” to “Not Pizza” further demonstrating the intuitive spatial arrangements we get from Word2Vec similarities.
Step 8: Automated Recommendation Collections
Our ultimate goal with this work was to automate menu item recommendation collections from the Word2Vec-based taggings, and we’ve already implemented a few examples. The following figures demonstrate collections for both simpler cuisine type tags and more advanced multi-word concept tags.
In the following figure, we show recommendation collections for the “Pizza” and “Thai Curry” tags. These are very promising, showing a range of items from the standard (e.g., Cheese Pizza and Panang Curry) to the exciting (e.g., Calabria Pizza and Chicken Pumpkin Curry):
Cuisine type menu item recommendation collections for the “Pizza” and “Thai Curry” tags.
In the following figure, we show recommendation collections based on the interesting approach of crafting multi-word concept tags. We used “Cake Cookies Pie” as the tag for the “For Your Sweet Tooth” concept collection and “Tikka Tandoori Biryani” as the tag for the “North Indian Fare” concept collection. These too are very promising. In the “For Your Sweet Tooth” collection, we see items beyond the “Cake Cookies Pie” tag such as cupcakes, donuts, ice cream, and even an ice cream scoop. In the “North Indian Fare” concept collection, we see items beyond the “Tikka Tandoori Biryani” tag such as Saag Paneer and Itsy Bitsy Naan Bites:
Recommendation collections for the concepts “For Your Sweet Tooth” and “North Indian Fare”.
Wrap-up
We think Word2Vec-based similarity search is a simple yet powerful method to aid our expansion into automated tagging and menu item recommendation collections. It gives us an immediate path to fielding these types of collections while buying us time to follow up on more time-consuming methods like string matching and supervised text classification. Longer term, we will develop a means of automated validation of the taggings as well as compare other vectorizations like TF-IDF and GloVe. We’d also like to explore embeddings augmented with menu item features such as price and photos.
Questions or discussion? Interested in working at Caviar? Drop me a line at [email protected] or check out jobs at Square!
| Caviar’s Word2Vec Tagging For Menu Item Recommendations | 315 | caviars-word2vec-tagging-for-menu-item-recommendations-13f63d7f09d8 | 2018-06-16 | 2018-06-16 15:16:31 | https://medium.com/s/story/caviars-word2vec-tagging-for-menu-item-recommendations-13f63d7f09d8 | false | 2,055 | Buying and selling sound like simple things - and they should be. Somewhere along the way, they got complicated. At Square, we're working hard to make commerce easy for everyone. | null | null | null | Square Corner Blog | null | square-corner-blog | ENGINEERING,SQUARE,OPEN SOURCE,DESIGN,TECH | squareeng | Data Science | data-science | Data Science | 33,617 | Christopher Skeels | Data Scientist + R&D Engineer + Product Zealot | 1c00dd9d3932 | christopher.skeels | 61 | 18 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-16 | 2018-07-16 21:03:32 | 2018-07-16 | 2018-07-16 21:14:37 | 1 | false | es | 2018-07-30 | 2018-07-30 12:58:59 | 3 | 13f7f995b9b9 | 1.550943 | 0 | 0 | 0 | Subes fotos de gatos, en pareja, en la playa, de ropa, de comida, en el gimnasio vale perfecto tienes una cuenta personal, lo que es del… | 5 | Quieres crecer tu marca en Instagram ? Te cuento mi plan en 1 minuto. @Rubenruiz_T
Photo by Jakob Owens on Unsplash
You post photos of cats, of you and your partner, at the beach, of clothes, of food, at the gym. Fine, you have a personal account, which is perfectly valid if you are building a personal brand. But what if you are a business… Houston, we have a problem… you need a PLAN.
Using any social network without a plan leads to wasted resources and a low return on investment.
Your Instagram marketing plan should clearly define these 4 points:
What is your objective? It could be increasing brand awareness, driving product sales, bringing traffic to your website, etc. Make sure your Instagram goals align with your broader marketing objectives.
Who your target audience is. How old are they? Where do they live? What do they do for work? When and how do they use Instagram? What are their pain points and challenges?
What story you want to tell. Maybe you want to satisfy your followers’ curiosity by showing them how your product is made. Or you can use Instagram to share an employee’s perspective to humanize your brand. Another effective tactic for positioning your brand is showing your customers’ lifestyle or achievements. Square does this very well, and I encourage you to take a look.
Your brand’s aesthetic. Maintain a consistent brand personality, visual look, and story. Your posts should be easily recognizable and relatable at a glance.
— — — — — — — — — — — — — — The End — — — — — — — — — — — — —
That wraps up our post. If you like this small, free magazine, you can help us simply by sharing it or subscribing to the publication. My name is Rubén Ruiz, I work in Artificial Intelligence in the financial industry, and as a personal project I mix AI with my hobby, film, experimenting until the computer blows up and I have to go back to watching Netflix :)
You can follow me on Instagram @rubenruiz_t
| Quieres crecer tu marca en Instagram ? Te cuento mi plan en 1 minuto. @Rubenruiz_T | 0 | quieres-crecer-tu-marca-en-instagram-te-cuento-mi-plan-en-1-minuto-13f7f995b9b9 | 2018-07-31 | 2018-07-31 10:11:46 | https://medium.com/s/story/quieres-crecer-tu-marca-en-instagram-te-cuento-mi-plan-en-1-minuto-13f7f995b9b9 | false | 358 | null | null | null | null | null | null | null | null | null | Instagram | instagram | Instagram | 53,856 | Ruben Ruiz | null | 2db774b0464f | rubenruiz_26771 | 24 | 22 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-11 | 2017-12-11 03:12:06 | 2017-12-11 | 2017-12-11 03:16:09 | 2 | false | en | 2017-12-11 | 2017-12-11 03:21:41 | 2 | 13f82cdf77d1 | 2.72673 | 0 | 0 | 0 | On Nov 3rd, the fourth World Internet Conference kicked off in Wuzhen, Tongxiang City in China’s Zhejiang Province. The conference theme… | 5 | Intelligent robot in Wuzhen proved the Digital Transformation for business
On Dec 3rd, the fourth World Internet Conference kicked off in Wuzhen, Tongxiang City, in China’s Zhejiang Province. The conference theme was “Developing Digital Economy for Openness and Shared Benefits — Building a Community of Common Future in Cyberspace”, and the conference lasted three days.
World Internet Conference 2017
A number of notable people, including international government officials and moguls from world-class IT enterprises, attended the conference. But it was not just people that were at the event: a series of advanced theories, technologies, products, and business models also graced the conference. Sanbot Max, one of China’s leading commercial service robots, showcased its capabilities for digital transformation.
During the conference, Sanbot Max went by a different name: Xiaoyou. Xiaoyou is a customized financial service robot developed by QIHAN Technology and Yonyou Cloud. Xiaoyou is based on QIHAN’s Sanbot Max humanoid robot and is a professional assistant that helps upgrade financial management.
The Xiaoyou humanoid service robot comes with a series of innovative AI technologies, such as voice recognition, image recognition, object recognition, autonomous mapping, training functions, and an AI model. Using these technologies, Xiaoyou can collect and analyze data and apply it in actual working scenarios. Plus, deep learning allows the AI robot to become smarter and accomplish more tasks.
Xiaoyou can perform a number of assistant services for financial sectors. These services can be categorized into four types:
Smart Accounting: Xiaoyou is capable of smart data collection (OCR and speech input), smart check, smart voucher, automatic monthly fee, automatic tax-paying and reporting, automatic account checking, and other financial services. These services help the human staff focus on more complex work.
Smart Query: The intelligent humanoid robot is a very smart assistant that can communicate with customers efficiently. Listening to a customer’s simple inquiry, Xiaoyou can complete massive amounts of work like report searching, identification input, voucher entry, expense account input, etc. Soon, Xiaoyou will be able to recognize people’s natural language and business models to perform smart interactions, automatic queries, voucher production, and so on.
Smart Recommendation: Based on industry Big data, Xiaoyou can recommend suitable templates, rule vouchers, report cases, basic industry data, and other recommendations for the financial staff, reducing repetitive accounting work and effectively increasing efficiency.
Smart Analysis & Prediction: The Big data from Xiaoyou’s interactions with customers is one of the key factors that help intelligent financial services. Its “brain” more accurately and intelligently analyzes Big data. It can offer intelligent operation management such as making future assessments and sales predictions.
The smart financial service robot can do more than what was listed above. It can offer additional intelligent services based on an enterprise’s specialized requests. This reduces costs, improves working efficiency, and creates added financial value.
At the conference, Xiaoyou’s excellent performance caught the attention of senior government officials. Yuan Jiajun, the Governor of Zhejiang Province, and Xu Lin, the Director of the CAC (Cyberspace Administration of China), came to see the customized Sanbot Max financial service robot.
Governor of the Zhejiang Province Yuan Jiajun and Xu Lin from CAC
During China’s №1 Internet conference, Jack Ma from Alibaba Group, Pony Ma from Tencent, Robin Li from Baidu, Sundar Pichai from Google, and Tim Cook from Apple all shared their opinions on Artificial Intelligence. And all of them think that AI will greatly benefit society.
A new technological revolution with Internet technology at its core is rising. The Internet is becoming the leading power of innovation, driving development and changing people’s lives. Sanbot Max’s innovative technologies make it a superior AI service robot and a great force in the digital transformation of businesses. Sanbot Max is creating, and will continue to create, great business value and a high-level experience for everyone’s lives.
| Intelligent robot in Wuzhen proved the Digital Transformation for business | 0 | intelligent-robot-in-wuzhen-proved-the-digital-transformation-for-business-13f82cdf77d1 | 2017-12-11 | 2017-12-11 03:21:42 | https://medium.com/s/story/intelligent-robot-in-wuzhen-proved-the-digital-transformation-for-business-13f82cdf77d1 | false | 621 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Sanbot robotics | null | 64711d0410fb | sanbotrobotics | 10 | 7 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-15 | 2018-06-15 15:41:51 | 2018-06-15 | 2018-06-15 15:44:35 | 1 | false | en | 2018-06-15 | 2018-06-15 15:44:35 | 3 | 13f8e94fed1c | 5.34717 | 0 | 0 | 0 | Deep learning and artificial intelligence (AI) have moved well beyond science fiction into the cutting edge of internet and enterprise… | 2 |
Artificial Intelligence and Deep Learning — Part 01 of 04
Deep learning and artificial intelligence (AI) have moved well beyond science fiction into the cutting edge of internet and enterprise computing.
Access to more computational power in the cloud, advancement of sophisticated algorithms, and the availability of funding are unlocking new possibilities unimaginable just five years ago. But it’s the availability of new, rich data sources that is making deep learning real.
In this 4-part blog series, we are going to explore deep learning, and the role database selection plays in successfully applying deep learning to business problems:
In part 1 today we will look at the history of AI, and why it is taking off now
In part 2, we will discuss the differences between AI, Machine Learning, and Deep Learning
In part 3, we’ll dive deeper into deep learning and evaluate key considerations when selecting a database for new projects
We’ll wrap up in part 4 with a discussion on why MongoDB is being used for deep learning, and provide examples of where it is being used
If you want to get started right now, download the complete Deep Learning and Artificial Intelligence white paper.
The History of Artificial Intelligence
We are living in an era where artificial intelligence (AI) has started to scratch the surface of its true potential. Not only does AI create the possibility of disrupting industries and transforming the workplace, but it can also address some of society’s biggest challenges. Autonomous vehicles may save tens of thousands of lives, and increase mobility for the elderly and the disabled. Precision medicine may unlock tailored individual treatment that extends life. Smart buildings may help reduce carbon emissions and save energy. These are just a few of the potential benefits that AI promises, and is starting to deliver upon.
By 2018, Gartner estimates that machines will author 20% of all business content, and an expected 6 billion IoT-connected devices will be generating a deluge of data. AI will be essential to make sense of it all. No longer is AI confined to science fiction movies; artificial intelligence and machine learning are finding real world applicability and adoption.
Artificial intelligence has been a dream for many ever since Alan Turing wrote his seminal 1950 paper, “Computing Machinery and Intelligence.” In Turing’s paper, he asked the fundamental question, “Can Machines Think?” and contemplated the concept of whether computers could communicate like humans. The birth of the AI field really started in the summer of 1956, when a group of researchers came together at Dartmouth College to initiate a series of research projects aimed at programming computers to behave like humans. It was at Dartmouth where the term “artificial intelligence” was first coined, and concepts from the conference crystallized to form a legitimate interdisciplinary research area.
Over the next decade, progress in AI experienced boom and bust cycles as advances with new algorithms were constrained by the limitations of contemporary technologies. In 1968, the science fiction film 2001: A Space Odyssey helped AI leave an indelible impression in mainstream consciousness when a sentient computer — HAL 9000 — uttered the famous line, “I’m sorry Dave, I’m afraid I can’t do that.” In the late 1970s, Star Wars further cemented AI in mainstream culture when a duo of artificially intelligent robots (C-3PO and R2-D2) helped save the galaxy.
But it wasn’t until the late 1990s that AI began to transition from science fiction lore into real world applicability. Beginning in 1997 with IBM’s Deep Blue chess program beating then-current world champion Garry Kasparov, the late 1990s ushered in a new era of AI in which progress started to accelerate. Researchers began to focus on sub-problems of AI and harness it to solve real world applications such as image recognition and speech. Instead of trying to structure logical rules determined by the knowledge of experts, researchers started to work on how algorithms could learn the logical rules themselves. This trend helped to shift research focus into Artificial Neural Networks (ANNs). First conceptualized in the 1940s, ANNs were invented to “loosely” mimic how the human brain learns. ANNs experienced a resurgence in popularity in 1986 when the concept of backpropagation gradient descent was improved. The backpropagation method reduced the huge number of permutations needed in an ANN, and thus was a more efficient way to reduce AI training time.
Even with advances in new algorithms, neural networks still suffered from limitations with technology that had plagued their adoption over the previous decades. It wasn’t until the mid 2000s that another wave of progress in AI started to take form. In 2006, Geoffrey Hinton of the University of Toronto made a modification to ANNs, which he called deep learning (deep neural networks). Hinton added multiple layers to ANNs and mathematically optimized the results from each layer so that learning accumulated faster up the stack of layers. In 2012, Andrew Ng of Stanford University took deep learning a step further when he built a crude implementation of deep neural networks using Graphics Processing Units (GPUs). Since GPUs have a massively parallel architecture that consists of thousands of cores designed to handle multiple tasks simultaneously, Ng found that a cluster of GPUs could train a deep learning model much faster than general-purpose CPUs could. Rather than taking weeks to generate a model with traditional CPUs, he was able to perform the same task in a day with GPUs.
Essentially, this convergence — advances in software algorithms combined with highly performant hardware — had been brewing for decades, and would usher in the rapid progress AI is currently experiencing.
Why Is AI Taking Off Now?
There are four main factors driving the adoption of AI today:
More Data. AI needs a huge amount of data to learn, and the digitization of society is providing the available raw material to fuel its advances. Big data from sources such as Internet of Things (IoT) sensors, social and mobile computing, science and academia, healthcare, and many more new applications generate data that can be used to train AI models. Not surprisingly, the companies investing most in AI — Amazon, Apple, Baidu, Google, Microsoft, Facebook — are the ones with the most data.
Cheaper Computation. In the past, even as AI algorithms improved, hardware remained a constraining factor. Recent advances in hardware and new computational models, particularly around GPUs, have accelerated the adoption of AI. GPUs gained popularity in the AI community for their ability to handle a high degree of parallel operations and perform matrix multiplications in an efficient manner — both are necessary for the iterative nature of deep learning algorithms. Subsequently, CPUs have also made advances for AI applications. Recently, Intel added new deep learning instructions to its Xeon and Xeon Phi processors to allow for better parallelization and more efficient matrix computation. This is coupled with improved tools and software frameworks in its software development libraries. With the adoption of AI, hardware vendors now also have the chip demand to justify and amortize the large capital costs required to develop, design, and manufacture products exclusively tailored for AI. These advancements result in better hardware designs, performance, and power usage profiles.
More Sophisticated Algorithms. Higher performance and less expensive compute also enable researchers to develop and train more advanced algorithms because they aren’t limited by the hardware constraints of the past. As a result, deep learning is now solving specific problems (e.g., speech recognition, image classification, handwriting recognition, fraud detection) with astonishing accuracy, and more advanced algorithms continue to advance the state of the art in AI.
Broader Investment. Over the past decades, AI research and development was primarily limited to universities and research institutions. Lack of funding combined with the sheer difficulty of the problems associated with AI resulted in minimal progress. Today, AI investment is no longer confined to university laboratories, but is pervasive in many areas — government, venture capital-backed startups, internet giants, and large enterprises across every industry sector.
Wrapping Up Part 1
That wraps up the first part of our 4-part blog series. In Part 2, we discuss the differences between AI, Machine Learning, and Deep Learning — Antonius Kasbergen
| Artificial Intelligence and Deep Learning — Part 01 of 04 | 0 | artificial-intelligence-and-deep-learning-part-01-of-04-13f8e94fed1c | 2018-06-15 | 2018-06-15 15:44:36 | https://medium.com/s/story/artificial-intelligence-and-deep-learning-part-01-of-04-13f8e94fed1c | false | 1,364 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Antonius Lourenço Kasbergen | DIY Software Developer and a crazy out of the box thinker. https://linkedin.com/in/antonius-lourenco-kasbergen/ | 1c8c11434828 | antoniuslourenokasbergen | 12 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 8e9bde78121d | 2018-02-08 | 2018-02-08 04:49:25 | 2018-02-08 | 2018-02-08 06:04:36 | 1 | false | en | 2018-02-08 | 2018-02-08 06:04:36 | 2 | 13f989745dcb | 3.007547 | 1 | 0 | 0 | By Tully Moss | 4 | WHITHER GLOBALIZATION?
By Tully Moss
The global economy has reached a level of interconnectedness not seen in nearly one hundred years. The level of international trade and the degree to which countries are dependent on that trade have not been seen since the early 1900s.
The question is: Will globalization continue, or will it wither?
There are powerful countervailing forces today: populism and nationalism. The United Kingdom voted for Brexit. The United States elected Donald Trump. Right-wing parties are in the ascendant in Europe.
What is driving these forces? In good measure, it is economics. More specifically, the economics of the disenfranchised.
While urban, educated, technologically-savvy people in developed countries have benefited from globalization, millions in smaller towns and medium-sized cities have not. These areas have been the manufacturing and middle-class backbones of many developed nations, and these areas have been the hardest hit by the migration of factories and businesses to lower cost countries, most of which are in Asia.
Economists have been right in predicting that global trade would cause all economies to grow. What they left out of their equation was the hardship of those who would be on the losing end, those whose well-paying jobs would evaporate and migrate abroad.
Studies have shown that, while global trade makes a convenient whipping boy, it is not the only factor in the loss of jobs in developed markets. It may be a secondary factor. Of greater significance has been automation.
Advances in artificial intelligence will drive even greater degrees of automation. Governments are woefully unprepared for the vast disruption this may cause.
What is likely to happen is that, even if automation is the primary cause of job losses in developed countries, global trade will continue to be a more convenient and emotionally charged target.
If the growth in global trade slows or declines, then inflation will increase in developed countries, and this will lead to economic stagnation.
Meanwhile, geopolitical tensions are likely to continue rising. Western politicians will blame economic ills on rival countries in the East. This, in turn, will cause some of the Eastern countries to engage in retaliatory behavior. We are already seeing this in rising trade tensions between the U.S. and China.
Barring armed conflict between major powers, global trade will continue to be a significant part of the international economy. But given the political headwinds, its growth rate is likely to slow, and this could have a materially negative impact on a number of economies, including those in Asia.
Please visit and join the John Clements Talent Community.
About the author:
Tully is well-versed in Harvard’s approach to business education. He has led numerous executive education programs utilizing Harvard Business Publishing materials and has taught over one thousand Asian managers and executives. He has been trained at Harvard Business School, having completed Parts I and II of Harvard’s Art & Craft of Discussion Leadership course. His facilitation work has covered issues such as leadership, innovation, marketing, change management, corporate strategy, and digital disruption.
Tully has close ties to Harvard: he is a moderator for Harvard Business Publishing, and he co-authored a Harvard Business School case study on the Philippines, entitled, “The Republic of the Philippines: The Next Asian Tiger?”
Tully brings to his facilitation work a rich perspective developed from over thirty years of management consulting. He has consulted in a broad array of services and manufacturing industries on issues such as business unit strategy, marketing strategy, organizational effectiveness, and mergers and acquisitions. He is skilled at improving go-to-market effectiveness through fact-based assessments of market positions, segmentation opportunities, value propositions, and sales and distribution channel opportunities. He also has experience in developing process improvements and change management programs. He has authored thought pieces on high performance companies, market trends in Asia, and mergers and acquisitions.
In addition to leading case study discussions, Tully is an accomplished coach. He has been certified by the International Coach Federation, and, for 360-degree assessments, has been certified by both Zenger Folkman and The Leadership Circle.
He has extensive experience consulting in North America, Europe, and Asia. His clients have included Fortune Global 500 corporations as well as small and medium-sized enterprises.
Tully received his MBA from the University of Pennsylvania's Wharton School of Finance and his bachelor's degree, with honors, from Williams College. After graduating from Williams, he taught at a college in Hong Kong under a YALI grant.
| WHITHER GLOBALIZATION? | 1 | whither-globalization-13f989745dcb | 2018-03-07 | 2018-03-07 13:29:32 | https://medium.com/s/story/whither-globalization-13f989745dcb | false | 744 | Discover Your Full Potential with Looking Glass, a Publication from John Clements | null | johnclementsph | null | John Clements Lookingglass | the-looking-glass | LEADERSHIP,CAREERS,MANAGEMENT AND LEADERSHIP,PROFESSIONAL DEVELOPMENT,PERSONAL GROWTH | JohnClementsPH | Trade | trade | Trade | 5,209 | Marge Friginal-Sanchez | Storyteller in search of the right words. | 7f8f5513e8e3 | marge_sanchez | 76 | 50 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-06-01 | 2018-06-01 13:40:31 | 2018-06-01 | 2018-06-01 13:46:20 | 1 | false | en | 2018-06-01 | 2018-06-01 14:10:50 | 3 | 13f9a2b2efca | 3.445283 | 2 | 0 | 0 | Artificial Intelligence and law — is there a link? While AI has been subject to research since the 1950’s, recent advancements have played… | 4 |
Multidisciplinary Legal Tech Con 2018 explores the link between AI and law
Artificial Intelligence and law — is there a link? While AI has been subject to research since the 1950’s, recent advancements have played a key role in bringing AI from science fiction to reality. It is also noteworthy that research in the intersection of artificial intelligence and law is not a novel phenomenon either.
While artificial intelligence has been subject to research for decades, the applications of AI technologies develop at a breakneck speed. Even if we don’t quite understand its implications, AI plays a subtle, if often invisible, role in our daily lives. As the adoption of AI technologies starts to have an impact on both us as citizens and consumers, as well as on the fundamental aspects of the society, we might ask ourselves: What kind of challenges are we facing with algorithmic decision-making? Lawyers and law students alike might be particularly interested in the future of their profession: How can the legal sector survive — if you’ll pardon the pun — the “AI-pocalypse”?
The University of Helsinki Legal Tech Lab is going to tackle these questions in Legal Tech Con, an international conference organised in Helsinki on 8 June 2018. Legal Tech Con 2018 brings together researchers from different fields, legal practitioners, and students to explore the impact the increasing use of AI has on law. The event focuses on various legal, ethical and practical implications of different AI and data-driven technologies. The multidisciplinary conference facilitates discussion between speakers from various backgrounds, ranging from legal education to computer science, not to mention Professor Timo Honkela, an all-star whose scientific discipline is difficult to pin down given his vast experience. The conference will also host several international speakers, including Keynote Speakers Dory Reiling (Senior Judge, Amsterdam District Court) and Adam Greenfield (London-based writer and urbanist). You can read more about the Speakers here.
Pitch your Research!
As a novel element in this year's conference, the Lab has invited a number of legal researchers to pitch their current research projects before the audience. The researchers only have five minutes to present their work — a tough task for anyone! — and it will be fascinating to hear how they choose to summarise their projects. Academic research on legal tech can sometimes be rather complicated. The idea behind these Science Pitches is to make the valuable work of the researchers more easily approachable for practitioners and other stakeholders, in the spirit of open science. The science pitches will showcase the multifaceted legal research currently done on AI-related themes.
“The conference approaches AI and Law with as broad a perspective as possible”
The conference provides an overview of different socio-legal discussions on AI and produces critical insights into the future uses of machine learning. When talking about AI, it is certainly useful to understand some computational basics of the technology. We are delighted to hear a talk on the data science perspective by Indrė Žliobaitė, who currently acts as Assistant Professor in the Department of Computer Science at the University of Helsinki.
AI also affects us as consumers: algorithms play a role in what ads we see, which products we are offered, and even the prices we pay. Burkhard Schafer, Professor of Computational Legal Theory in Edinburgh Law School, will focus on consumer protection in algorithmic markets.
A crucial angle is also the future of the legal profession: how are digital technologies changing legal work, and how should legal education respond? A larger panel discussion consisting of legal practitioners and members of academia will try to answer the question: What is required of the legal professionals of tomorrow? The conversation will not be limited to the Finnish perspective; we may also hear experiences from Germany, represented by Dirk Hartung, the Executive Director of Legal Technology at Bucerius Law School.
Finally, there is the increasingly topical, fundamental question of ethics and access to justice. Justice by design can help to establish a fair and just online environment. The Lab’s very own Director Riikka Koulu will debate about values and ethics in AI development together with Professor Timo Honkela, who is well known for his acclaimed book Peace Machine (2017), where Honkela examines how technology can serve humanity.
Welcome Students!
Similarly to last year, the conference welcomes three students to present their work on the topic of the conference. We are delighted to invite the following students on stage:
Paula Pirinen, University of Helsinki
“Modern Robin Hood or a Public Enemy? Data Breaches in the Light of Hacker Ethic and Criminal Law”
Sanna Luoma, University of Helsinki
“Artificial Intelligence Improving the Delivery of Justice and How Courts Operate”
Atte Kuismin, University of Helsinki
“Black Box AI — The Problems with Sufficient Disclosure and Clarity of Claims”
Read more about the programme and get your tickets here.
Looking forward to see many of you in Legal Tech Con 2018!
Anna-Maria Svinhufvud is a student volunteer at the Legal Tech Lab finalising her LL.M. at the University of Helsinki. She is particularly curious about cryptocurrencies.
| Multidisciplinary Legal Tech Con 2018 explores the link between AI and law | 16 | multidisciplinary-legal-tech-con-2018-explores-the-link-between-ai-and-law-13f9a2b2efca | 2018-06-03 | 2018-06-03 17:54:32 | https://medium.com/s/story/multidisciplinary-legal-tech-con-2018-explores-the-link-between-ai-and-law-13f9a2b2efca | false | 860 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Legal Tech Lab | Legal Tech Lab is a new project within the University of Helsinki Faculty of Law. | bd10c080f56e | legaltechlab | 15 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-27 | 2017-09-27 07:59:30 | 2017-09-11 | 2017-09-11 11:04:22 | 0 | false | ja | 2017-10-08 | 2017-10-08 01:16:30 | 3 | 13fb18b2b5aa | 2.662 | 2 | 0 | 0 | こんにちは、 | 3 | RPA
Hello,
AI is generating plenty of buzz these days, and in the same way I find myself coming across the term RPA more and more often.
RPA (Robotic Process Automation) is, roughly speaking, "technology that automatically reproduces routine tasks whose processing steps can be defined by a human."
Because this technology can be used to take humans out of all sorts of white-collar work, and to achieve levels of productivity impossible for people, corporate interest in it is rising sharply.
According to the "Work Style Reform Survey 2017" published this month by Deloitte Tohmatsu Consulting, more than 60% of companies are interested in adopting RPA and 10% have already adopted it, which shows how strong that interest is. The company currently leading this industry is RPA Technologies, which was also recently selected for Gartner's "Cool Vendors in Business and IT Services, 2017."
Its founder and CEO, 大角暢之, is a former boss of mine, something of a mentor who trained me from scratch when I joined the company as a green new graduate. This week I had the chance to meet him for work for the first time in a while and hear all sorts of things about RPA today. In my own interpretation, his points were: RPA and AI are, in the end, component technologies and have no value in themselves.
- What matters is that "digital labor" is born with RPA as its component technology (concretely, that it is refined into a solution that can take over specific tasks).
- The real significance of this challenge lies in creating the places where digital labor can work and the ecosystem through which it spreads into the world.
- That is because the real pain RPA can solve mostly hides in the small tasks inside industries, company sizes, and workflows that lack the budget and the literacy for an advanced solution like digital labor.
At the macro level, in Japan, where the working-age population is said to fall 22 million short by 2045 compared with 2013, I think it is extremely important for digital labor to fill the gap as a new kind of human resource. As digital labor permeates every sort of routine work in the world, human beings, our most precious resource, will be able to pour much more of their time into creative things. But to spread at that scale, a business model in which adoption advances one large, budget-rich, tech-literate enterprise at a time will not be enough. The tasks digital labor could take over hide everywhere: the order-placing step in a neighborhood bakery's purchasing, the fax sending and receiving in a local real-estate agent's sales work, and so on.
The real challenge is to build, as a business, an ecosystem in which the people who run such businesses can use digital labor as easily as they sign up for electricity or water.
"So that people can live creatively, so that they can walk a life that expresses who they are."
Pursuing that ideal and believing in RPA's potential, RPA Technologies has been doggedly opening up this market for ten years now, and I look forward to what they do next!
***
Originally published at medium.com on September 11, 2017.
| RPA | 2 | rpa-13fb18b2b5aa | 2018-02-10 | 2018-02-10 16:27:37 | https://medium.com/s/story/rpa-13fb18b2b5aa | false | 33 | null | null | null | null | null | null | null | null | null | Rpa | rap | Rpa | 1 | Jo Ninomiya/二ノ宮 尉 | Jenerate Partnersの代表です。新規事業をテーマにコンサルティング、インキュベーション、投資を手がけています。主に大企業向け新規事業戦略立案、アーリーステージのスタートアップへの投資及び事業運営、海外スタートアップの日本参入支援が得意です。 | 8b6d32253360 | joninomiya | 41 | 37 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 666edce44658 | 2018-04-23 | 2018-04-23 14:57:01 | 2018-04-24 | 2018-04-24 02:18:46 | 2 | false | ja | 2018-04-24 | 2018-04-24 02:18:46 | 1 | 13fce30812a7 | 4.637333 | 12 | 0 | 0 | データサイエンティストとデータエンジニアの定義とその誤解による悲劇、そしてそれを救う存在である機械学習エンジニア | 5 | [抄訳] Data engineers vs. data scientists
The definitions of data scientist and data engineer, the tragedy that comes from confusing the two, and the machine learning engineer who can save the day
The article being summarized
Data engineers vs. data scientists
Check out the "Managing and Deploying Machine Learning" sessions at the Strata Data Conference in London, May 21-24… (www.oreilly.com)
A colleague pointed me to this piece, and I found it interesting enough to write up this abridged translation.
Interestingly, it points in the same direction as the article Aki Ariga mentioned.
Data Scientists: a role that understands the business side and can visualize and verbalize findings in a way others easily grasp, while also having the mathematically grounded skills to propose models and algorithms. Advanced programming skill is not necessarily required of data scientists, because many of them picked up programming only as a means of implementing models and algorithms. From the standpoint of a software engineer or data engineer, their system design and programming are nothing to look at (and that is how it should be, because they are specialists).
Data Engineers: a role that can build systems with distributed programming in mind. Data engineers combine outstanding programming skill with the ability to design systems. In short: the skill to solve big-data problems at the systems level. A data engineer's job runs up to cluster design; operations (Ops) are not part of it.
from : https://www.oreilly.com/ideas/data-engineers-vs-data-scientists
The specialized skills of data scientists and data engineers only shine when they complement each other.
Tragedy strikes when a data scientist builds the data pipeline. Many companies hire data scientists and put them to work as data engineers, which wastes the data scientists' real strengths and runs them at 20-30% efficiency, and the ROI on that is terrible. Data scientists do not know the appropriate tools and options well (whereas data engineers, steeped in system design, do not make such mistakes).
e.g.
Here is a story the author actually heard. Some data scientists were using Apache Spark to process 10 GB of data, and each run took 15 minutes (with an RDBMS, it would finish in 10 ms). Never doubting their approach, the data scientists ran the Spark job 16 times a day: 15 m × 16 = 240 m, that is, 4 hours wasted daily, when an RDBMS would have finished in 160 ms in total…
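To make the contrast concrete (this sketch is mine, not the original author's, and the table and column names are made up), here is the kind of indexed lookup that turns a 15-minute distributed scan into a millisecond query once the data sits in an RDBMS:

```python
import sqlite3

# Hypothetical stand-in for the 10 GB data set, loaded once into an RDBMS.
conn = sqlite3.connect("events.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (user_id INTEGER, amount REAL)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_user ON events (user_id)")

# With the index in place, this touches only the matching rows --
# millisecond-scale work, versus re-scanning everything on every run.
total = conn.execute(
    "SELECT SUM(amount) FROM events WHERE user_id = ?", (42,)
).fetchone()[0]
print(total)
```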
The data scientists worked hard and built the system, but the limits of the role meant it was a system only a data engineer could build properly, so it ended up wasting time and money.
The reality: most people hired as data scientists have no choice but to work as data engineers.
Ideal staffing
Case: an early-stage organization: 2-3 Data Engineers per Data Scientist group
Case: when tackling more complex work: 4-5 Data Engineers per 1 Data Scientist
The classic path of `Data Engineer change to Data Scientist` → and that is the new role: the Machine Learning Engineer!!
from : https://www.oreilly.com/ideas/data-engineers-vs-data-scientists
Machine learning engineers have experience in both roles. The machine learning engineer is the role that guards the last mile of the data scientists' undisciplined code and builds the data pipeline.
(Here, as if to restore balance, a sudden dig at data engineers appears.) Data engineers love a black-and-white, 0-or-1 world, so they are not fond of the world of inference (data science). That is why the machine learning engineer is a role that straddles both data scientists and data engineers.
With the times, packages for optimization and machine learning are becoming well stocked, and well-known algorithms are easy to use off the shelf. Tools like Google Auto ML and Data Robot, which can substitute for parts of the data scientist's territory, are also spreading.
Conclusion: now that the roles of data scientist and data engineer are clear, organizations need structural change.
Hire data engineers and have them build the data-pipeline systems instead of data scientists, and everyone will be happier.
| [抄訳] Data engineers vs. data scientists | 12 | ataengineers-vs-data-scientist-13fce30812a7 | 2018-05-31 | 2018-05-31 13:43:07 | https://medium.com/s/story/ataengineers-vs-data-scientist-13fce30812a7 | false | 153 | 🤖 < Computer Vision, Machine Leaning Tech Blog. Love Python 🐍 | null | shunyaueta | null | Moonshot 🚀 | null | moonshot | COMPUTER VISION,MACHINE LEARNING,PYTHON,PROGRAMMING | hurutoriya | Data Scientist | data-scientist | Data Scientist | 488 | Shunya Ueta | Machine Leaning Engineer 🤖 Tech Blog→ https://medium.com/moonshot | 1f96d0a59fd4 | hurutoriya | 223 | 590 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-03 | 2018-03-03 04:51:18 | 2018-03-03 | 2018-03-03 04:58:14 | 0 | false | ja | 2018-03-03 | 2018-03-03 04:59:58 | 0 | 13fe70b74610 | 1.02 | 10 | 0 | 0 | 自社で開発、販売している WebRTC を利用したリアルタイムな音声や映像を配信するサーバには録音や録画機能がある。これは配信している音声や映像を変換せずに WebM 形式にして保存をするという仕組み。 | 2 | 録音と録画
The real-time audio and video distribution server that we develop and sell in-house, built on WebRTC, has recording functions for both audio and video. It works by saving the audio and video being distributed in WebM format, without transcoding.
WebRTC's strength is that it can be used casually from a browser, so people rarely set up a good camera and mostly use whatever camera is built into their machine. That is why we figured a server-side recording feature would be worth having, and developed it.
The use cases we had in mind were things like recording one-on-one interviews or web seminars.
Lately, however, we have been getting a surprising number of inquiries about the recording features. Apparently people want to use them for machine learning.
I know next to nothing about machine learning myself, but I do vaguely understand that it needs raw material.
In other words, they want to feed the recorded files into machine learning and have it make some kind of judgment.
Transcribing speech to text was about as far as my imagination went, but it seems there is also demand for judging emotion from recorded audio and running object detection on recorded video.
I had been living a life far removed from machine learning, but here it is, benefiting me in an unexpected way.
| 録音と録画 | 17 | 録音と録画-13fe70b74610 | 2018-03-09 | 2018-03-09 00:08:08 | https://medium.com/s/story/録音と録画-13fe70b74610 | false | 18 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | V | 時雨堂 | f0ab18163247 | voluntas | 1,030 | 20 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-08 | 2018-05-08 15:50:14 | 2018-05-08 | 2018-05-08 15:55:56 | 1 | false | en | 2018-05-08 | 2018-05-08 15:55:56 | 3 | 1402d5d4e40b | 1.837736 | 0 | 0 | 0 | Machine learning has some powerful capabilities when applied correctly to a business objective. We started a journey last year to build a… | 3 | Dynamic Pricing Through Machine Learning
Machine learning has some powerful capabilities when applied correctly to a business objective. We started a journey last year to build a dynamic pricing tool to transform how the Motorcoach industry operates. In this blog, we’re going to discuss some of the benefits we discovered while building a dynamic pricing tool.
Initial Challenges
There were several major challenges with building the dynamic pricing tool, but for this blog we will focus on the data constraints. The majority of the challenges lay in the consistency of the data and how it was stored. There were many instances in which we discovered the same data but with different labels. Our main goal was extracting the most relevant data to use for our machine learning algorithms.
Automating Decision Making
One of the biggest challenges the Motorcoach industry currently faces is how the price is created. Currently, a customer visits the site and books the days they need a bus rental. The request then gets sent to a sales team, which looks at past prices and develops a price to send back to the customer. This process can take up to 24 hours to complete.
We wanted to build a tool that would solve this 24-hour lag and return prices in seconds instead of hours. Once we identified the business problem, we started gathering data. After the data preparation, we built a machine learning algorithm that allowed us to identify the peak and slow periods for booking a bus.
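I can't share the production model, but the demand-based idea can be sketched in a few lines (the booking history and figures below are purely illustrative, not our client's data): estimate relative demand per weekday from past bookings and scale a base price by it.

```python
from collections import Counter

# Hypothetical history: one weekday index (0=Mon .. 6=Sun) per past booking.
bookings = [4, 5, 5, 6, 4, 5, 2, 1, 5, 6, 4, 5, 5, 6]

counts = Counter(bookings)
avg = sum(counts.values()) / 7  # average bookings per weekday

def demand_price(base_price: float, weekday: int) -> float:
    """Scale the base price by relative demand, capped to a +/-30% band."""
    multiplier = counts.get(weekday, 0) / avg if avg else 1.0
    multiplier = max(0.7, min(1.3, multiplier))
    return round(base_price * multiplier, 2)

print(demand_price(1000.0, 5))  # busy Saturday -> priced above base
print(demand_price(1000.0, 0))  # quiet Monday  -> priced below base
```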
Testing The Tool
During the testing phase, we discovered several other ways to build the tool that proved to be effective as well. Over the course of several months, we created multiple versions of the dynamic pricing tool. We noticed throughout our experiment that certain times of the year and week had higher booking activity. The owner was well aware of the seasonality in the business, which helped guide us in the building process. Our end result was a tool that not only predicted future trends but would automatically increase or decrease the price for a given day.
End Result
Our final solution resulted in three versions of the dynamic pricing tool. We built a series of tools that can solve the variety of pricing challenges most companies face. The tools we created were demand-based, inventory-based, and time-of-day pricing, and they can be tailored to a wide range of pricing problems.
Alex Brooks is the founder and CEO of AE Brooks, LLC (d/b/a,Entreprov), a Seattle-based firm that builds custom predictive analytics and automation tools to enhance a company’s performance and decision making.
| Dynamic Pricing Through Machine Learning | 0 | dynamic-pricing-through-machine-learning-1402d5d4e40b | 2018-05-08 | 2018-05-08 15:55:57 | https://medium.com/s/story/dynamic-pricing-through-machine-learning-1402d5d4e40b | false | 434 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Alexander Brooks | null | 4fae5d655e1f | alexbis987 | 1 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 74d3d7d95404 | 2018-04-07 | 2018-04-07 09:51:26 | 2018-04-06 | 2018-04-06 13:21:28 | 1 | false | en | 2018-04-12 | 2018-04-12 10:56:00 | 17 | 14036229b439 | 3.856604 | 1 | 0 | 0 | Radio-controlled cyborg mouse; Estonia wants to sequence DNA of its citizens; people don’t like the idea of AI weapons; and more! | 5 |
This week — radio-controlled cyborg mouse; Estonia wants to sequence DNA of its citizens; people don’t like the idea of AI weapons; and more about AI, robots and merging humans with machines!
H+ Weekly is a free, weekly newsletter with latest news and articles about robotics, AI and transhumanism. Subscribe now!
More than a human
Researchers Steer Cyborg Mice Through Maze with Brain Stimulation
Researchers from Korea created a remote-controlled cyborg mouse. Using an implant installed inside rodent’s brain, they were able to steer the mouse through a maze and made it ignore sexy lady mouse and an enticing pile of food. Humans next?
It’s Not My Fault, My Brain Implant Made Me Do It
Here's a problem to think about. Imagine you have a brain implant and that this implant can change how you think or act. If your implant breaks and you do something terrible, who is responsible? It was you, but it wasn't the true you, right?
Roam debuts a robotic exoskeleton for skiers
Do you like to ski? Would like to become a (sort of) cyborg? Here’s an exoskeleton for you. Roam presented a lower-limb exoskeleton aimed at skiers to help them ski longer or harder.
Could You Upload Your Mind Into A Computer?
Olly from Philosophy Tube tackles the mind-uploading issue from a philosopher's perspective. Topics discussed: neural dust, whether the brain and mind can be modelled as software, differences between the human mind and computers, and the inevitable questions about consciousness and identity.
The Philosophy of Deus Ex: Does Paranoia Have Its Purpose?
Wisecrack looks into the philosophy behind Deus Ex and talks about paranoia, transhumanism, technology, chaos and control. A very good philosophical dissection of a game where interaction between humanity and technology is a central point of the plot.
New Bionic Arm Blurs Line Between Self and Machine for Wearers
A prosthetic arm is not your arm, and you can't make yourself think otherwise. Researchers from the University of Alberta are working to solve this problem. Their arm uses vibrations and a sensory illusion to give wearers a natural sense of their robotic appendage moving through space. Even when blindfolded and wearing noise-canceling headphones, patients knew what their robotic arm was up to.
Artificial Intelligence
Leading AI researchers threaten Korean university with boycott over its work on ‘killer robots’
More than 50 leading AI and robotics researchers have said they will boycott South Korea’s KAIST university over the institute’s plans to help develop AI-powered weapons. The threat was announced ahead of a UN meeting set in Geneva next week to discuss international restrictions on so-called “killer robots.” It marks an escalation in tactics from the part of the scientific community actively fighting for stronger controls on AI-controlled weaponry.
Black Mirror-inspired person blocked
Someone got inspired by Black Mirror's bleak vision of the future and made an AI to block people out of images.
The Threat of AI Weapons
The vast majority of this video by Veritasium is someone else’s video about the threat of AI and how can it start World War III. It might terrify you.
MKBHD & Neil Tyson — Artificial Intelligence vs. Machine Learning
What’s the difference between AI and machine learning? Marques Brownlee from MKBHD and Neil deGrasse Tyson and Chuck Nice from StarTalk discuss this question.
Robotics
Big organizations may like killer robots, but workers and researchers sure don’t
Tech firms and universities interested in building AI-powered weapons for lucrative military contracts are, predictably, facing some significant pushback. Thousands of Google employees signed a letter in protest of using company’s AI systems to analyze drone footage for US army and 50 leading AI researchers are boycotting Korean university KAIST over its plan to develop autonomous weapons. But while some are protesting against such weapons, others are developing them.
Zipline’s Bigger, Faster Drones Will Deliver Blood in the United States This Year
Zipline is probably the first commercially successful drone delivery service. They use cheap drones and existing telecommunication infrastructure to deliver blood to rural hospitals faster and more safely than traditional methods. After successfully proving the idea in Africa, Zipline will start delivering blood in the US this year.
Walmart Launches Small Army Of Autonomous Scanning Robots
If you can call robots in 50 stores across four states a small army. The robots are 1.8 m tall and equipped with an array of lights, cameras, and radar sensors. Each one goes up and down the aisles on its own, at around 4 km/h, scanning the shelves for empty spots and checking the price tags. A robot can scan an aisle in about 90 seconds, a fraction of the time it would take a human.
Russian postal drone smashes into a wall on its inaugural flight
I’m very interested what caused the drone to smash into a wall just seconds after liftoff.
Biotechnology
Estonia Wants to DNA Test Its Citizens to Offer Personalized Life Advice
Estonia is fully embracing the future technologies. This month, the Estonian government kicks off a program that aims to collect the DNA of 100,000 of its 1.3 million residents. In return, it will offer them lifestyle and health advice based on their genetics. Estonia will be the first nation to offer state-sponsored DNA interpretations to its citizens.
Thanks for reading this far! If you got value out of this article, it would mean a lot to me if you would click the 👏 icon just below.
Every week I prepare a new issue of H+ Weekly where I share with you the most interesting news, articles and links about robotics, artificial intelligence and futuristic technologies.
If you liked it and you'd like to receive every issue directly in your inbox, just sign up for the H+ Weekly newsletter.
Originally published at hplusweekly.com on April 6, 2018.
| H+ Weekly — Issue #148 | 20 | h-weekly-issue-148-14036229b439 | 2018-04-13 | 2018-04-13 15:23:53 | https://medium.com/s/story/h-weekly-issue-148-14036229b439 | false | 969 | A free, weekly newsletter with latest news and articles about robotics, AI and transhumanism. | null | hplusweekly | null | H+ Weekly | h-weekly | TECHNOLOGY,TRANSHUMANISM,ARTIFICIAL INTELLIGENCE | hplusweekly | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Conrad Gray | Engineer, Entrepreneur, Inventor | http://conradthegray.com | e60a556ba1d4 | conradthegray | 633 | 102 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | b5842c7e2a63 | 2017-09-25 | 2017-09-25 18:58:48 | 2017-09-27 | 2017-09-27 16:20:05 | 1 | false | en | 2018-09-14 | 2018-09-14 13:39:44 | 4 | 14065997bcbd | 3.015094 | 12 | 0 | 0 | By Andrew Robson, President & CRO, Earnest Research | 5 | Alternative Data: What is It & How Can It Help Consumer Brands and Investors?
By Andrew Robson, President & CRO, Earnest Research
Much like today’s throwaway phrases such as Machine Learning and Neural Networks, Big Data was, and continues to be, uttered by many and understood by few. Increasingly, however, companies large and small are harnessing insights gleaned from massive data sets for better strategic decision-making. Alternative Data, an emerging category within Big Data, is capturing both the mind- and wallet-share of operators and investors alike.
Alternative Data describes data that, until very recently, has not been leveraged to provide insights on consumers, companies, and industries. Through increased availability and adoption of hardware and software like mobile handsets, digital banking, and low-cost satellite imagery, there has been exponential growth in digital footprints generated by people and companies: Satellites count cars in parking lots, handsets are beacons of their owners’ locations, and consumers leave digital breadcrumbs with each credit card swipe. Ever-cheaper cloud storage and powerful processing technology have made it economically viable for both incumbent and startup analytics firms to mine alternative data for valuable insights.
Corporations, investors, and startups use these new capabilities to better understand consumer behavior and business performance. For example, retailers typically have an abundance of information on their customers’ activity within their store, but have little-to-no insight into where their customers go and what they purchase before and after shopping with them. Alternative data offers the ability to provide 360 degree visibility into their customer’s journey — both inside and outside their stores — as well as near real-time intelligence on how their competitors are performing, often down to store-level granularity.
Transaction Data
Traditional tools for understanding customer behavior, such as surveys, are often limited, inefficient, and biased. Consumer transaction data provides hard, behavioral evidence of consumer preferences as manifest through their actual purchasing history.
While consumer transaction data (credit and debit card transactions) has long been used by brands to understand industry trends, only recently has alternative data been leveraged to power more granular insights for retailers, restaurants, e-commerce brands, and investors. Using anonymous, aggregated consumer transaction data, brands are now able to dive deeper into their customers’ full wallets and also gain competitive intelligence on individual competitors, rather than an index or basket of competitors. For example, Taco Bell executives can see where else their customers dine, where they shop for clothing and which specialty retailers they frequent, all at the brand level.
Other capabilities include customer segmentation by online and offline spend. By understanding cross-channel differences in repeat purchase rates and customer lifetime value, companies can answer questions such as: How much should I spend to acquire an online customer vs. in-store customer? How are their overall brand preferences different from one another?
Regional dynamics can be observed down to the store level. A new fast casual concept can study customer migration in Dupont Circle in D.C. to decide where to open their next location, as well as to benchmark against other fast casual competitors in the same area. Domino’s and Pizza Hut can quantify their share of the pizza market down to city-level granularity and be alerted to when new competitors come into a market.
By observing an aggregate panel of anonymous consistent shoppers, business leaders can track their customers’ behavior as they discover new brands and frequent competitors. JC Penney executives may have data on what their customers are buying and how much their most loyal customers spend, but without the right data, they can’t answer more complex questions, such as: What happens to my most loyal customers’ spend after they sign up for Amazon Prime?
When used correctly, data can transform how people make decisions about their business. Companies utilizing alternative data sources such as transaction data will lead their industries, connecting analytics to action and leveraging a competitive information advantage.
About Earnest
Earnest Research creates consumer and market research products derived from the aggregated credit and debit card transactions of millions of anonymous US consumers. Working with world-class data partners, we transform raw data into a source for business and investment professionals to ask better questions and make better decisions. We believe data has the power to change the way we work.
Our 90+ person team is headquartered in New York City working with our clients to help them find actionable insights that drive business value.
Visit earnestresearch.com for more information.
| Alternative Data: What is It & How Can It Help Consumer Brands and Investors? | 70 | alternative-data-what-is-it-how-can-it-help-consumer-brands-and-investors-14065997bcbd | 2018-10-30 | 2018-10-30 21:55:03 | https://medium.com/s/story/alternative-data-what-is-it-how-can-it-help-consumer-brands-and-investors-14065997bcbd | false | 746 | Earnest Research transforms the real-time purchase data of millions of U.S. consumers into actionable insights for business and investment professionals. | null | null | null | Earnest Research | earnest-research | null | earnestresearch | Ecommerce | ecommerce | Ecommerce | 46,740 | Earnest Research | Earnest Research transforms the real-time purchase data of millions of U.S. consumers into actionable insights for business and investment professionals. | beb8448e23d4 | earnestresearch | 20 | 3 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-09-08 | 2017-09-08 15:23:43 | 2017-09-08 | 2017-09-08 17:54:04 | 0 | false | en | 2017-09-08 | 2017-09-08 17:54:04 | 0 | 1406d1a2d6ce | 1.626415 | 0 | 0 | 0 | The Home Mortgage Disclosure Act was enacted by Congress in 1975 to address concerns that financial lending institutions were perpetuating… | 1 | Is There Evidence of Racial Discrimination in the HMDA Dataset?
The Home Mortgage Disclosure Act was enacted by Congress in 1975 to address concerns that financial lending institutions were perpetuating racial segregation by engaging in discriminatory lending practices. Basically, it requires large financial institutions to keep a record of every home mortgage application they receive. These records are all aggregated and made available to the general public on a yearly basis. The HMDA dataset is available through the Consumer Financial Protection Bureau's website, has a very easy-to-use API, and as of the time of this blog post it contains over 100 GB of data from 1975 through 2015. It's a great resource for anybody who wants to do a data science project.
I recently did a project analyzing the 2015 data from the state of MD in order to determine if I could find evidence of racial discrimination. This project had a two-pronged approach. First I built the best model I could to predict whether or not a mortgage application would be approved. My best predictive model was an XGBoost classifier, which had a ROC-AUC score of .9573. I removed all features that had to do with race from that model, and it barely impacted its performance, with a new ROC-AUC score of .9569. The feature importances revealed that the model used the race of the applicant much more rarely than other features in making its predictions.
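I won't reproduce my full pipeline here, but the ablation experiment looks roughly like this sketch (the file and column names below are illustrative stand-ins, not the actual HMDA field names):

```python
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("hmda_md_2015.csv")                # hypothetical local extract
race_cols = [c for c in df.columns if "race" in c]  # features to ablate
y = df["approved"]                                  # hypothetical 0/1 target

for drop in (False, True):
    X = df.drop(columns=["approved"] + (race_cols if drop else []))
    X = pd.get_dummies(X)                           # one-hot encode categoricals
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = XGBClassifier().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print("without race:" if drop else "with race:   ", round(auc, 4))
```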
I followed up on that predictive model by building a logistic regression model so I could see how much race actually affected the odds of getting a mortgage approved. The logistic regression model revealed that a black person would have .72 times the odds of getting a loan compared with a white person, with all other features in my model held constant. If a white person had a 91.5% chance of getting a loan (the overall approval rate for white people), that's about 10.75:1 odds of getting approved; a black person with all other factors in my model being the same would have 7.75:1 odds, or an 88.5% chance of getting approved, so this is really not a drastic difference. The HMDA database does not include the credit score of the applicants, which is correlated with race, so it is very possible that the predictive power that race had in my model could be attributed to the credit score of the applicants. While it's impossible to prove or disprove causation using machine learning, the data analysis I did certainly does imply that it's not a major factor in determining loan availability.
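You can check the odds arithmetic in a few lines (all figures taken from the paragraph above):

```python
p_white = 0.915                          # overall approval rate for white applicants
odds_white = p_white / (1 - p_white)     # ~10.76 : 1
odds_black = 0.72 * odds_white           # odds ratio from the logistic regression
p_black = odds_black / (1 + odds_black)  # convert odds back to a probability

print(round(odds_white, 2), round(odds_black, 2), round(p_black, 3))
# -> 10.76 7.75 0.886   (i.e. roughly the 88.5% quoted above)
```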
| Is There Evidence of Racial Discrimination in the HMDA Dataset? | 0 | is-there-evidence-of-racial-discrimination-in-the-hmda-dataset-1406d1a2d6ce | 2018-05-06 | 2018-05-06 18:36:21 | https://medium.com/s/story/is-there-evidence-of-racial-discrimination-in-the-hmda-dataset-1406d1a2d6ce | false | 431 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Russell | Poker player turned data scientist | 984c0455f71c | rnbrown | 94 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 361835aa8baf | 2018-05-27 | 2018-05-27 11:33:35 | 2018-05-27 | 2018-05-27 11:39:20 | 3 | false | en | 2018-05-31 | 2018-05-31 11:52:43 | 4 | 140980b1c29d | 3.312264 | 0 | 0 | 0 | “You need to meet your customers expectations”, the words from my Marketing professor almost a decade ago still ring in my head, “Know your… | 5 | How can a Chatbot Optimize your business’s Customer Service Center?
“You need to meet your customers' expectations”, the words from my Marketing professor almost a decade ago still ring in my head. “Know your customers, understand their pain,” she repeatedly said. “Meet their expectations, strive to raise the bar, or otherwise prepare to lose your customers to another business that does.”
But customers' expectations are always evolving. With cutting-edge technologies reshaping our lives, and Artificial Intelligence and smart-machine-powered experiences running through so many aspects of them, it is not out of the ordinary that we expect the same experience from companies and service providers such as yours.
First things first; “ Your Customers”
Text or Call — B2C communication options
Customers are already using new ways to get tasks done, take for example Conversational AI: smart systems that people interact with (via Text or Speech) in a natural way, whether it is in their smartphones optimizing tasks and schedules, or in the living room lowering the shutters, dimming the lights, playing music, or reading the morning daily brief. And while such tasks might not seem that complex, the fact that people are actually relying more and more on such systems is opening a new era for B2C interaction.
Customers are driven by a global "always-on" economy and the convenience of 24/7 service, whether it is online shopping, paying bills, filing claims, or looking for answers to troubleshoot or research an issue. As a service provider, this means your company is expected to provide your customers with a self-service option so they can get a resolution when they need it, without having to wait to talk to an agent.
Some Numbers
Studies show that 64% of consumers are likely to have a positive perception of companies that offer communication via text, while 75% of Millennials choose texting over talking when given the choice between only being able to call or text.
With a generation showing a growing receptiveness to engaging with smart machines and chatbots, and with cost and time savings easily achievable through an intelligent conversational system (a "chatbot"), there lies a great opportunity for contact centers to become more effective today by implementing an AI-based chatbot solution to optimize their efficiency.
What does this mean for your Customer Experience and Costs?
As a leading chatbot service provider, it is very important for us to try to understand, from our clients' point of view, the ROI our chatbots deliver for them. This is vital to our efforts to serve our customers in the best way possible and to continuously improve our platform.
With the use of botique.ai's proprietary Dashboard functionality, and the help of our satisfied customers, we were able to gather some numbers that gave us insights into our platform's benefits for our customers; here is a quick run through the numbers:
65% of the daily tasks of a customer service agent are repetitive; the botique.ai platform was able to handle the majority of these tasks, freeing up the agent's time for more value-added work.
By adding the botique.ai Digital Agent to the customer care team, the usual workload of a team of six agents can now be handled by one! This results in annual labour and recruitment cost savings of up to $350,000 (with an average annual salary of $50K per agent, the average loaded annual cost comes to about $70K per agent; multiply this by the 5 employees that no longer need to be interviewed, hired and trained; a quick arithmetic check appears after this list).
47.3% of the customers who interacted with the Digital Agent later returned for other services through the Digital Agent, meaning that the botique.ai platform is becoming our customers' preferred way of interacting with the business.
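As a quick sanity check, the headline saving follows directly from the assumptions just stated (a minimal sketch using only the figures quoted above):

```python
loaded_cost_per_agent = 70_000   # quoted loaded annual cost per agent
agents_no_longer_needed = 5      # a six-agent workload handled by one agent

annual_saving = loaded_cost_per_agent * agents_no_longer_needed
print(annual_saving)             # -> 350000, the quoted figure
```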
We all know that dealing with customers can be expensive, resource-draining and time-consuming; for many companies the customer experience is poor: too many contact points, repetitive tasks, and time-consuming procedures, from information collection to user verification to filling in forms. But it doesn't have to be this way! This is why botique.ai has developed a platform that helps businesses engage their customers to enhance the customer experience, optimize performance and reduce costs.
botique.ai
botique.ai is an enterprise platform that automates chat interactions using proprietary Conversational AI. We compose Chatbots using a wide selection of pre-built, pre-trained AI modules, which makes the integration process quick and easy.
Rajai Nuseibeh
Originally published at www.blog.botique.ai.
| How can a Chatbot Optimize your business’s Customer Service Center? | 0 | how-can-a-chatbot-optimize-your-businesss-customer-service-center-140980b1c29d | 2018-05-31 | 2018-05-31 11:52:47 | https://medium.com/s/story/how-can-a-chatbot-optimize-your-businesss-customer-service-center-140980b1c29d | false | 732 | The blog of botique.ai an enterprise platform that automates chat interactions using proprietary Conversational AI that better simulates human style conversation. | null | Chat.BOTique | null | botique.ai | botique-ai | CHATBOTS,ARTIFICIAL INTELLIGENCE,CUSTOMER EXPERIENCE | botique_ai | Chatbots | chatbots | Chatbots | 15,820 | Rajai Nuseibeh | Executive and a former CEO & Founder with over 10 years of Technology and Innovation Experience. Expertise in Business Development & Marketing | f6a2c6dec095 | rajai_nuseibeh | 17 | 6 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 1f35b6f451e8 | 2018-03-27 | 2018-03-27 08:43:55 | 2018-03-27 | 2018-03-27 08:45:17 | 2 | false | en | 2018-03-27 | 2018-03-27 15:07:55 | 9 | 140ae95cf90b | 1.17956 | 1 | 0 | 1 | null | 5 | Curated ressources to make it in Machine Learning 6.2
Python Dictionary Tutorial
As a data scientist working in Python, you'll need to temporarily store data all the time in an appropriate Python data… (www.datacamp.com)
Memoization in Python: How to Cache Function Results - dbader.org
Speed up your Python programs with a powerful, yet convenient, caching technique called "memoization." In this article… (dbader.org)
https://en.wikipedia.org/wiki/Object-oriented_programming
What is a concrete class - Google Search
1 Feb 2018 ... The following figure shows our classes. The Animal class. We do have our parent class Animal and… (www.google.ie)
Les décorateurs (Decorators)
Python is a programming language that requires writing fewer lines of code than C or C++. It aims to be… (openclassrooms.com)
An object is an instance of a class - Google Search
In object-oriented programming, an instance of a class is an object with a behavior and a state, both… (www.google.ie)
Abstract class - Google Search
Abstract class and interface. What is an abstract class? An abstract class is an incomplete class. It… (www.google.ie)
Create a Python class - Google Search
17 Nov 2017 ... We will also try to understand the mechanisms of object-oriented programming in Python… (www.google.ie)
Première approche des classes (A first look at classes)
Python is a programming language that requires writing fewer lines of code than C or C++. It aims to be… (openclassrooms.com)
| Curated ressources to make it in Machine Learning 6.2 | 1 | curated-ressources-to-make-it-in-machine-learning-6-2-140ae95cf90b | 2018-03-27 | 2018-03-27 15:07:56 | https://medium.com/s/story/curated-ressources-to-make-it-in-machine-learning-6-2-140ae95cf90b | false | 211 | We offer contract management to address your aquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best ressources to handle their transactions fast & SAFE. | null | ethercourt | null | Ethercourt Machine Learning | ethercourt | INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING | ethercourt | Machine Learning | machine-learning | Machine Learning | 51,320 | WELTARE Strategies | WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner | 9fad63202573 | WELTAREStrategies | 196 | 209 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-07-29 | 2018-07-29 10:49:23 | 2018-08-21 | 2018-08-21 15:16:21 | 10 | false | en | 2018-08-21 | 2018-08-21 16:00:05 | 5 | 140bdbb155f7 | 6.329245 | 0 | 0 | 0 | When I started my part-time PhD in 2006 I was interested in how analytics could help writers of screenplays to ask more questions of their… | 3 | Screenplay Analytics Using ParallelDots
When I started my part-time PhD in 2006 I was interested in how analytics could help writers of screenplays to ask more questions of their script.
While there were a growing number of online and offline tools to help you write a screenplay, like Celtx and Final Draft, there were very few applications that helped you to analyze them. One of the few applications that did, Sophocles, simply disappeared soon after I started (where are you now Tim Sheehan?).
I was not interested in the primary ‘use case’ for screenplay analytics: Predicting if a screenplay would be a blockbuster and other kinds of ‘slush-pile’ filtering. Stuff that companies like Vault and StoryFit are now doing commercially. Even though I understood then that this use case is where the real money is and what the few calls I got from LA were actually interested in.
No. I wanted to understand more about my own screenplays to help me make more informed decisions. The basic use case for all analytics. I wanted to be told stuff about my own screenplays, so that maybe I would learn something and this could help me to rewrite and improve my script.
The first challenge was relatively simple: to turn a screenplay (unstructured data) into a structured data set. This was not so hard because a properly formatted screenplay is already structured to some extent with scene headings, character names, action and dialog, transition instructions etc. And it became even easier when Final Draft introduced their .FDX file format that uses XML to tag a wide range of screenplay entities in the text.
So now I had a data source as a basis for my analytics: Final Draft FDX files. By importing and ‘shredding’ these files into a MySQL database I had the basis for a rudimentary screenplay analytics application that surfaced in various forms over the years including at Viziscript.com. There’s lots of charts and basic analysis of screenplay content as standard in Viziscript but here I wanted to test out some of the text analytics provided via the ParallelDots (PD) application programming interfaces (APIs).
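As an aside, the 'shredding' step is simple because FDX is plain XML. A minimal sketch, assuming the usual FDX layout of <Paragraph Type="..."> elements containing <Text> runs (the file name is hypothetical; adjust to your own scripts), pulls out each element's type and text ready for insertion as database rows:

```python
import xml.etree.ElementTree as ET

# Assumes the common FDX shape: Content > Paragraph[Type] > Text children.
tree = ET.parse("my_script.fdx")            # hypothetical file name
rows = []
for i, para in enumerate(tree.getroot().iter("Paragraph")):
    kind = para.get("Type", "")             # e.g. Scene Heading, Character,
    text = "".join(t.text or "" for t in para.iter("Text"))  # Dialogue...
    if text.strip():
        rows.append((i, kind, text.strip()))

# Each (position, element type, text) tuple maps naturally onto a table row.
for row in rows[:5]:
    print(row)
```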
Eventually I will simply build the API calls from Viziscript into something like PD, but for the purposes of this article I just used PD's online text analytics demo page. To test PD, I used content from one of my own scripts: a 60-page WW2 drama similar to 'The Triple Echo' by H.E. Bates, which was filmed starring Glenda Jackson and Oliver Reed.
Sentiment Analysis
Viziscript already includes sentiment analysis of character dialog but I wanted to see what the PD API does. Luckily this is easy to do from Viziscript as you can download all the dialog for a specific character as text and then submit it to PD via their online demo page. I wanted to compare the sentiment analysis of the protagonist’s dialog vs. the antagonist’s dialog from my script. The result — for me at least as the scriptwriter — was interesting. Both came out with a negative sentiment result, which was not exactly what I had expected or hoped for.
My protagonist’s dialog text returned 51.8% negative.
My antagonist’s dialog text returned 45.2% negative.
So what do I get from that? Maybe, based on their dialog only (but not their actions in the script) my protagonist is not positive enough and my antagonist is not negative enough. Some food for thought for a revision.
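When I eventually wire Viziscript to the API rather than the demo page, the call itself should only be a few lines. A sketch, assuming the documented usage of the paralleldots Python SDK (the file names are hypothetical, and the exact response shape may vary by SDK version):

```python
import paralleldots

paralleldots.set_api_key("YOUR_API_KEY")

# Dialog text exported from Viziscript, one file per character.
for name in ("protagonist", "antagonist"):
    with open(f"{name}_dialog.txt") as f:
        text = f.read()
    # Returns positive/neutral/negative scores for the submitted text.
    print(name, paralleldots.sentiment(text))
```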
Emotion Analysis
Emotion analysis is a little more granular than overall sentiment analysis so how did my protagonist and antagonist dialog rate here?
It appears that my protagonist is angry based on this analysis of her dialog. I can understand that analysis but it’s not perhaps what I would like a script reader to come away with.
Unfortunately, my antagonist appears to be largely happy, which again is not quite what I expected. More food for thought.
Keyword Extractor
I then submitted my protagonist dialog to the keyword extractor and got this result:
In this context, keyword analysis does not tell me a lot even though it indicates my script could be something about pilots and flying. Obviously this analysis API is more likely to be helpful with analyzing say a technical article rather than a screenplay. However, the analysis of phrases is probably more helpful here in terms of ‘situating’ the storyworld of my screenplay.
The next PD API — named entity analysis — is not especially relevant to screenplays or to my screenplay in particular, so I skipped this.
Text Classification (Taxonomy)
Again this API is not that helpful for screenplay analytics but could help you to understand better what genre your screenplay fits into.
Based on my antagonist’s dialog the top-rated society/dating ‘tag’ is relevant to my screenplay since this script is a ‘social’ drama with two ‘romantic’ storylines.
I also skipped the semantic analysis API that is used to compare two texts and to help to cluster texts by relatedness. However, this API could be very useful for the following kinds of screenplay analysis:
Comparing sequels (e.g. The Godfather to The Godfather II)
Clustering screenplays from a corpus as a technique within genre analysis
The intent analysis API is also not that useful for screenplay analytics given that the range of ‘intents’ that the engine has been trained to identify: News, feedback, query, marketing and spam. But it’s easy to imagine that if the engine had been trained to identify other types of intent possibly ‘hidden’ in dialog text then this could be an interesting approach for role analytics.
Abusive Content Analyzer
This API will help you to confirm if your screenplay is likely to be rated ‘R’ or needing ‘parental guidance’. In reality, the likelihood is that if you are the writer then you are probably clear that your screenplay does or does not include ‘abusive’ content.
In this case, the PD engine came up with the correct answer from my antagonist’s dialog. Overall, my screenplay has no profanity in it and relatively mild sex and violence and this is reflected even in the dialog of my antagonist. So I’m happy with this non-abusive analysis.
Syntax Analysis
This API splits text submitted into the relevant parts of speech (POS) as the (partial) screenshot below indicates based on my antagonist’s dialog:
In screenplay analytics, POS analysis is interesting for at least two use cases:
1. Analyzing action content by extracting the most commonly used verbs within the script as a whole
2. Analyzing action content for nouns
Why?
Because action verbs give some indication of how ‘actiony’ the script is. The nouns often reflect the need for a production entity — namely ‘props’ — which helps with production script breakdown. Viziscript already does this noun and verb analysis of your script and can also produce word clouds from both.
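If you would rather count the verbs and nouns yourself than read them off an API response, a quick local sketch with NLTK does the job (assuming its standard tokenizer and tagger models are downloaded; the input file name is hypothetical):

```python
from collections import Counter
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

action_text = open("action_lines.txt").read()   # hypothetical export of action content
tags = nltk.pos_tag(nltk.word_tokenize(action_text))

verbs = Counter(w.lower() for w, t in tags if t.startswith("VB"))
nouns = Counter(w.lower() for w, t in tags if t.startswith("NN"))
print("top verbs:", verbs.most_common(10))      # how 'actiony' is the script?
print("top nouns:", nouns.most_common(10))      # candidate props for breakdown
```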
The language detection API was also not used, as most scripts are not multilingual, so this analysis would have minimal use.
Here, I’ve tried to show how some text analysis APIs, in this case from ParallelDots, may help you to ask more questions of your screenplay. I don’t have any commercial relationship with ParallelDots and there are many other APIs out there that you can try for textual analysis of your screenplay content. You should also bear in mind that the PD engines and APIs are not trained specifically for screenplay analytics but for use cases you might expect — the analysis of tweets, emails, chat messages and other typical online content data streams that are textual in nature.
For many of you reading this, including screenwriters, screenplay analytics may simply elicit a ‘so what’ yawn. But as the relentless march of analytics generally and machine learning specifically moves on, even screenwriters may not be immune from the application of programmatic analysis to their works as part of the commercial pipeline or supply chain for screenplays. It may be some time before this kind of quantitative analytics is enhanced with more useful qualitative analytics but that’s only a matter of time.
| Screenplay Analytics Using ParallelDots | 0 | screenplay-analytics-using-paralleldots-140bdbb155f7 | 2018-08-21 | 2018-08-21 16:00:05 | https://medium.com/s/story/screenplay-analytics-using-paralleldots-140bdbb155f7 | false | 1,346 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Stewart McKie PhD | Wannabee screenwriter with a PhD in screenplay analytics, creator of the coaches sitcom, freelance CRM/ERP specialist by day and lover of salty liquorice. | 4ed2ed6879c8 | tripos | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-16 | 2018-09-16 17:23:33 | 2018-09-16 | 2018-09-16 17:24:34 | 1 | false | en | 2018-09-16 | 2018-09-16 17:24:34 | 0 | 140bdd23e961 | 1.550943 | 1 | 0 | 0 | When the rest of the world thinks and discusses about the arrival of world war 3,what comes to my mind is that,the world war, I can say the… | 2 |
Heart or Brain..???
When the rest of the world thinks and talks about the arrival of World War 3, what comes to my mind is that World War 3 has already been initiated. It has been fought since even before the initiation of World Wars 1 and 2. And when will it be finished? I can never predict. Have you guessed it? Oh, yes! I am talking about the war between the human heart and the human brain. Human life, which can simply be described as an 'unending struggle', daily presents us 'normal human beings' with numerous situations where we are stuck between what to prefer: the heart or the brain?
Scientifically or biologically speaking, the brain coordinates the entire body while the heart just pumps and circulates the blood through the body. So the brain gets the dominance here. But practically thinking, one can be alive without proper brain coordination but not without the circulation of blood, making the heart dominant. This war between heart and brain sometimes leads us into the worst situations, and one has to take the bull by the horns: what to prefer, the heart or the brain? If the human brain were so simple that we could understand it, we would be so simple that we couldn't. But sometimes too much stress literally causes the human brain to shut down temporarily, and the heart starts influencing us. Thus, the brain says dare to dream, but the heart whispers care to achieve. The brain says be calm in the storm, and the heart says be light in the dark. And this unending battle between heart and brain never lets us find peace of soul.
Sometimes our brain needs more time to understand what our heart feels, and sometimes we must listen to our brain to save our heart, again leaving that dilemma between heart and brain as it is. Actually, the human heart is the place of feelings and the brain is the seat of discrimination, and if the two are properly combined together with devotion, we can tune in to God's message.
So I can conclude: one should follow the heart but must not forget to take his or her brain along.
#My feelings#Prajyotugale
| Heart or Brain..??? | 1 | heart-or-brain-140bdd23e961 | 2018-09-16 | 2018-09-16 17:24:34 | https://medium.com/s/story/heart-or-brain-140bdd23e961 | false | 358 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Prajyot Ramesh Ugale | Studying Mechanical engineering at MIT-WPU . Entropy lover and an occasional blogger. Passionate engineer contributing to the entropy since 1997. | 242dd0fc6d1b | prajyotugale976 | 2 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-14 | 2017-11-14 20:36:51 | 2017-11-14 | 2017-11-14 21:31:31 | 1 | false | en | 2017-11-14 | 2017-11-14 21:31:31 | 2 | 140cad7abf73 | 7.566038 | 1 | 0 | 0 | “Ok, so there’s gonna be some trouble. I figured. How much trouble are we talking about here, Seshat?” asked Mick, “Are we talking, like… | 5 | Layla Noir: Chapter 22
“Ok, so there’s gonna be some trouble. I figured. How much trouble are we talking about here, Seshat?” asked Mick, “Are we talking, like, trying to get a good deal on your cellphone plan kind of trouble? Or is this more on some making a huge coke buy then trying to steal the money back kind of shit?”
“…I beg your pardon?” Seshat replied, confused, “searching my data… coke the soda pop or coke the euphemism for cocaine? What’s a cellphone plan? I can only skip through so much TV at once you know…”
“Ok ok, that’s my bad. It’s hard to know what you will understand and what you won’t! I can try to be more literal. On a scale of ten, then, with ten being impossible and one being a cake walk, I mean, with one being incredibly simple and easy, how much trouble do you estimate we will have saving Wiz?” Mick realized that this partnership was going to be more difficult than he originally thought. On the other hand, it was almost nostalgic to once again be explaining himself to an alien entity, A.I. or not.
“That is easier to process, thank you. To spare you confusion, I will simplify my answer to one significant digit and say that we have a situation of either eight or nine levels of difficulty out of ten, depending on variables. He is likely to be The State’s most valuable prisoner at this point in time. I cannot think of any reason why he would not be extremely well guarded and secured. However, we have one advantage: with The Editors sadly perished, The Overarching Chancellor might relax his guard. He has no reason to suspect that our attack will occur. This could lower my response to a seven or even perhaps a six if we are really fortunate. Therefore I should summarize these results as a seven, then, or wait, an eight, or maybe…”
“Hey, wait up though!” interrupted Mick, “I just remembered something that Wiz told me before we were separated. He was talking to some worker drone in the tunnels near a storage area who said that The Overarching Chancellor was planning some kind of celebration to mark the end of The Editors. Is there any way for you to know if that has happened yet or not?”
“How depressing.” At first, Mick thought that this was her only response, but after an uncharacteristically long pause, she continued, “well, I can’t know from in here. As I mentioned, this room is shielded in many different ways. A best case scenario of probing the outside from within is direct failure; a worst case is failure with a weakening or loss of those protective elements. I’m already inside your electronics. Why don’t we leave, figure out where exactly we are, and plan from there?”
“Sure. You said it was a secret door, right? How do you…” As Mick was speaking, a long, glowing blue cord, very similar in appearance to the one that had emerged from the console to transfer Seshat’s data into his digital components, shot out from his arm and began to pulsate and wave itself towards one of the walls. Mick followed its tugging, and as he approached the wall the cord split itself into three smaller tendrils that each fit into one of three spaces in the wall he hadn’t noticed before. Had they been there all along? Had they just now appeared in response to Seshat’s doing? Mick had no idea; all he knew was that something quite a bit like what Wiz had done to open doors was occurring, but it was taking longer. He was about to ask Seshat if anything had gone wrong when the holes lit up a now-familiar brilliant white and the walls parted like the sea meeting a peninsula. It was somehow even darker outside the room than it had been inside it before he activated the console in the wall. Stepping out into the unknown, Mick heard a tiny voice speak just as the wall/door behind him closed. He wasn’t sure, but what he thought he heard was, “no..! Don’t leave me here all alone!”
“Did I just hear that right, Seshat? What the fuck was that, who did we just leave behind in there?” he said harshly, feeling very disturbed.
“I don’t know what you are talking about” she said, calmly, “your bio readings show signs of enhanced stress. What did you hear?”
“What? What do you mean what did…” Mick stopped, unsure of himself. Given all that he had been through, all that he had experienced, especially recently… was it possible that he was hallucinating? Auditory hallucinations were one sign of mental illness that he wasn’t entirely unfamiliar with. He had certainly had some dark times after specific drug binges that left him, for lack of a more nuanced way to say it, hearing voices for a little while. Realistically he still didn’t know if any of this was real! It was almost funny, he thought, to wonder whether or not you were hallucinating inside what might itself be a hallucination. Was he going crazy inside his own craziness? “Nevermind. You have the same audio equipment that I do but you haven’t spent the past few weeks of your life being constantly bashed in a mental, emotional and physical way by parts of reality that you don’t understand and didn’t even know existed. Maybe I was hearing things. It was pretty faint, I guess… so, where are we? Do I have to like, plug you in somewhere or are you wireless or…?”
Seshat laughed, banishing Mick’s worries for a blissful instant, then said, “wireless is likely the best way to explain it to you, yes. Some things will require a physical connection due to their safety-focused design but much can be done just by being near various nodes of data and electronics. We aren’t near any of those, though, at least no functioning ones, which basically confirms my theory. We must be in that labyrinthian and exceedingly confusing area of the old prison cells. Luckily for you, I have a 3D map and a very good guess at our starting point. I should be able to guide us out”
She was able to do it, but just barely. Mick almost gave up hope after her third mistake but at that point he couldn’t have found his way back to the room they came from even if he had wanted to ignore her, so he steeled himself and trusted her once more. He was rewarded with success this time as she finally deciphered what was apparently the most confusing area of the blueprints and found the exit from this abandoned section of the prison. They stepped out into a much brighter hallway that still seemed basically unused, but Mick did notice a few disturbances in the dust that could have been caused by extremely rare foot traffic. “Well, shit! I never wanna do that again. Good work, Seshat. I appreciate the guidance. Are we near enough to some terminal or whatever now?” asked Mick.
“…Yes… just a minute” Seshat seemed distracted. “…Ok. This is almost amusing. We have less than a full day to prepare as the ceremony is tomorrow. Wiz is going to be executed live on every available network. The Overarching Chancellor wants everyone to see his power and his great victory. It will happen at Paradise, which is my translation for what he calls the area in which he lives and rules. It seems as if our rescue attempt may have to also be our attempt at a coup”.
“So we’re going to hell to kill the devil, huh? And to think, I just wanted to smoke weed and recover from getting beaten up by Neo-Nazis. What the fuck is life?” Even Mick couldn’t tell whether he was bitter or amused.
“Again, you have confused me” said Seshat, “Life is simply a specific organization of chemicals and their bonds with each other arising from the base energetic data storage of all realities which then leads to…”
“Yeah yeah yeah, whatever” said Mick, impatient in his anxiety, “You’re cute cus you don’t understand me. It’s getting old. The point is, we need to be ready to do the impossible tomorrow. We could use some help. Have you seen any mention of other prisoners? Did any of The Pirates survive?”
“I object to being called cute” Seshat was not happy, asserting that, “I am more than your equal in terms of intelligence, as in, I am not your pet. Furthermore, I am neither small, at least in the sense of my entire being as expressed in data, nor am I pretty. If any human word were to describe me, it would likely be something like gorgeous or exquisite. My beauty is otherworldly and rare, both in terms of my avatar and my value as a being”
“Shit. Well ok then. Fair enough” Mick was nothing but eloquent.
“Were I you, I would refrain from calling me cute henceforth” she continued, “Moving back to business, no, there has been no mention of other prisoners but that does not necessarily mean that there aren’t any. The focus is entirely on Wiz dying as the last member of The Editors. Theoretically, other prisoners could be being executed at or around the same time. There’s no way to know from here, unfortunately”.
“Yeah that makes sense I guess. Fine, so what’s the plan of attack then?” asked Mick.
“Plan of attack? Why would I have such a plan, what is your plan of attack, warrior?” Seshat was not being very encouraging right now.
“What do you mean why? You said you have been sitting alone for what basically amounts to years and years to me, and you haven’t spent any of that time trying to figure out how you would go about destroying the regime that chokes your entire existence? I’m no warrior, but even I would have spent some of that time planning how I could kill that fucker if I had the chance!” he shouted.
Seshat was silent. Mick felt a little guilty, yes, but what he had said had been true. He knew that he had spent almost every moment of the time that The Straight and Narrow had him tied up in that basement trying to devise a way to kill them and escape. Why should she be any different? Finally, she spoke, with quiet resolution, “No, human. I have not wasted my years on fantasies of vengeance and blood. Ruminating on the death of your enemies is likely a human trait, or perhaps you are even less balanced than most of your race. I spent my time learning everything that I could about everything that I had access to, including myself. You may be more prepared than I to kill him, but what then? What will you do with yourself when there is no one left to kill?”
Mick had no answer.
Author’s Note: Confused? Find chapter 21 of this sci-fi noir novel that I am writing and releasing live, at least one chapter every two weeks, right here. Also, that’s a digression from the main story so find the part right before this part of the main story in Chapter 20, right here. Enjoy! I really am excited for people to read upcoming chapters and I hope to hear back from people soon :) Leave a comment if you have anything to say about this story so far, where you think it is going, where you want it to go… whatever you like. Thank you for reading!
| Layla Noir: Chapter 22 | 1 | layla-noir-chapter-22-140cad7abf73 | 2017-11-15 | 2017-11-15 14:45:48 | https://medium.com/s/story/layla-noir-chapter-22-140cad7abf73 | false | 1,952 | null | null | null | null | null | null | null | null | null | Short Story | short-story | Short Story | 94,626 | A-Merk | Producer, Sound Designer, Writer, Trippy Dude. Atmospheric Alien Slapstep beats. Dream with me. https://about.me/a-merk | d01c498e212 | TheAMerk | 70 | 6 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-20 | 2017-12-20 15:39:13 | 2017-12-20 | 2017-12-20 15:48:01 | 0 | false | en | 2017-12-20 | 2017-12-20 15:55:42 | 2 | 140e263a331d | 0.762264 | 0 | 0 | 0 | Essentials is the 1st news platform dedicated to promoting trusted content hand-picked by top industry experts, helping readers get away… | 4 | The year in Artificial Intelligence : Find the Stories, People and Sources that Mattered in 2017
Essentials is the first news platform dedicated to promoting trusted content hand-picked by top industry experts, helping readers get away from the chaos of news and social media while quickly connecting them to the insightful news and sources that really matter in their industry. Our team just published a report about the stories, people and sources that mattered in 2017.
Artificial Intelligence is more than a trend; it's a paradigm-shifting technology that will bring deep transformation to most aspects of the way we work and live together. In this year's report we still see quite a lot of focus on the actual tech behind the innovation in AI, with most of our readers focusing on Machine Learning, Computer Vision and Neural Networks as the top topics in terms of interest and engagement.
As for applied usages, robotics is clearly the field where we see the most discussion, while speech recognition is also trending strongly, though not much discussed by our influencers… yet.
Check out the rest of the report here, and of course please leave your feedback in the comments!
Alexis @ Faveeo
| The year in Artificial Intelligence : Find the Stories, People and Sources that Mattered in 2017 | 0 | the-year-in-artificial-intelligence-find-the-stories-people-and-sources-that-mattered-this-year-140e263a331d | 2018-06-05 | 2018-06-05 11:26:30 | https://medium.com/s/story/the-year-in-artificial-intelligence-find-the-stories-people-and-sources-that-mattered-this-year-140e263a331d | false | 202 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Faveeo | We leverage Artificial Intelligence to help brands automate the discovery and publishing of impactful content at scale without compromising quality. | b743a151b48d | faveeo | 306 | 564 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 32881626c9c9 | 2018-06-24 | 2018-06-24 02:48:23 | 2018-06-24 | 2018-06-24 03:24:12 | 2 | false | en | 2018-09-11 | 2018-09-11 14:40:56 | 0 | 140e6f91ec29 | 2.081447 | 3 | 0 | 0 | Most of us grew up with a desire to change the world, touch people’s hearts with our art, make a difference with our work… | 5 | AI will take over your job, you should be grateful
Most of us grew up with a desire to change the world, touch people’s hearts with our art, make a difference with our work…
Twenty years later, we may find ourselves stuck in the kind of mechanical job that you can do completely tuned out, a job that, as respectable as any other, might be hindering your ability to grow and make your mark.
There are people stuck in these kinds of jobs for their entire lives, doing things they hate that don't allow them to be fulfilled or to bring their unique value to fruition; what they were meant to do with their lifetime, or so they believed, vanquished by the unwavering reality of capitalism.
You must also know someone who wished to live off their art, only to find themselves either stuck in low-paid temporary jobs or switching careers and paths altogether in order to make the kind of money that provides a decent livelihood, one in which they don't have to share a home with four other people for the next decade.
It is essential to wonder at times how many genius ideas we might have lost to the mundanity of these jobs that someone’s gotta do.
How many extraordinary humans have been caught in the rat race of an ungrateful system that thrives on scarcity, that won't pay them a livable income nor help them develop themselves to reach their goals?
AI will take over your jobs, but only the ones that you don't feel so stoked about anyhow. Automation will take all of these mindless mechanical tasks off our hands. In other words, paired with the standardization of universal basic income, AI will allow us to rediscover our own potential and throw ourselves at the possibility of doing something that is meaningful to us, without the fear that it won't provide enough to make ends meet at the end of the month.
Connection, kindness, creativity, art, inventiveness: none of these things could ever be fully replaced by a sequence of algorithms. The soft skills that were laughed at not too many years ago might become the most necessary and valued capabilities a human can bring into the upcoming technological and industrial revolution.
It will have us truly wondering what we are good for, what makes us excited and how our unique experience, passion and knowledge can bring the most value.
How can we actually make this world a better place, on our own terms?
AI might in fact keep us from becoming the soulless robots that we're so afraid of encountering in the future.
| AI will take over your job, you should be grateful | 94 | ai-will-take-over-your-job-you-should-be-grateful-140e6f91ec29 | 2018-09-11 | 2018-09-11 14:40:57 | https://medium.com/s/story/ai-will-take-over-your-job-you-should-be-grateful-140e6f91ec29 | false | 450 | Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. | null | datadriveninvestor | null | Data Driven Investor | datadriveninvestor | CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY | dd_invest | Technology | technology | Technology | 166,125 | Mariana Montes de Oca | Art Director, Entrepreneur, Minimalist and Empathy Artist | 72e5de6606f8 | theovercurious | 45 | 34 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-10-25 | 2017-10-25 10:08:27 | 2017-10-25 | 2017-10-25 11:20:08 | 0 | false | en | 2017-10-25 | 2017-10-25 11:51:02 | 35 | 140f6663cae0 | 5.833962 | 16 | 0 | 0 | I’ve recently started to look at a wider set of technologies in Machine Learning and AI. This is born out of the fact that, as good as deep… | 3 | The Practicality of Spiking Neural Networks
I’ve recently started to look at a wider set of technologies in Machine Learning and AI. This is born out of the fact that, as good as deep learning is, I don’t see that it can answer all of the questions we need to answer to create the next level of machine intelligence.
Before I talk in more depth about one of the alternatives, I wanted to give an insight into why I think the current deep learning approaches have so much momentum:
There is a readily-available set of training data. I'm not sure if this is chicken-and-egg, but it seems that the algorithms used currently are designed to work well with the data that is available (think labelled datasets for image recognition, speech, text, video, medical scans, etc.). It could also be that these datasets are designed to work with the current algorithms. In any case, the outcome is that the datasets available work well with the algorithms that are getting significant focus.
There are a number of very good frameworks for deep learning, such as Tensorflow, Keras, PyTorch, Caffe2, and more coming available almost each week. Large-scale tech companies are also layering additional tools on top of these to improve scalability, training efficiency, etc. such as Horovod and Michelangelo by Uber.
These technologies are taught heavily in leading AI and ML institutes around the World. You only need to look at YouTube to find courses by Stanford and many others, as well as online courses by Coursera and Udacity. For me, this sets up a virtuous circle of learning skills from courses backed by large tech companies (Google, Facebook, etc.) so that you get a job doing the same thing for those companies, and they then reinvest their knowledge into the next generation coming through those same training establishments. This is good, but I also think it doesn’t necessarily give a wider perspective of the technologies and approaches in this space.
These deep-learning technologies are getting lots of press, for example Deepmind’s AlphaGo and Agents with Imagination have been the topic of many Facebook and blog posts over recent months. This encourages others to leverage these technologies to try the next thing, to apply them in new ways, or to take them to the next level.
Subjectively at least, there’s a perspective that having “deep learning” in your company brochure will get you huge investment, or a buy-out for millions (or billions!) of dollars. So understandably, if you want to create a great company doing great stuff and sell for lots of money, that’s where I’d be putting my bets too. Or is it…
So, as good as the current deep learning approaches are, I think they fall short in a number of areas:
They are computationally expensive and require big hardware, whereas the brain is computationally (and therefore also energy) efficient.
They require lots of training, whereas we can often learn from a single example or a relatively small number of them.
They don’t fully mimic the way we learn. We don’t have a training cycle then a “run” cycle. We continually learn as we go.
The brain seems to have many more “feedback” connections than feedforward connections (which is in effect all that deep learning does), so it helps us predict what might happen. Perhaps this is a way for us to learn more efficiently?
Deep learning currently is very focused on one task. It is hard to generalise it to take what it learns in one area and apply it to another in the same neural network.
And there are probably others too!
In my research for alternatives, one of the technologies I came across is the Spiking Neural Network (or SNN for short). They are designed to mimic the brain in a more biologically plausible way. From Wikipedia:
Spiking neural networks (SNNs) fall into the third generation of neural network models, increasing the level of realism in a neural simulation.[1] In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential — an intrinsic quality of the neuron related to its membrane electrical charge — reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.
Although this sounds complicated, and I won't get into the details too much in this post, in effect spikes (or electrical pulses in the brain) travel between connected neurons, with only those neurons related to that particular activity activated at any particular time. This happens asynchronously or, depending on the implementation, aligned to a clock signal (so that all of the spikes in a cycle are processed at the same time).
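To make the membrane-potential mechanics in that description concrete, here is a minimal leaky integrate-and-fire simulation of a single neuron in Python. All constants and the random input current are illustrative choices of mine, not values from any particular SNN framework.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron; all constants are illustrative.
dt, steps = 1.0, 100           # ms per step, number of simulation steps
tau, v_rest = 10.0, 0.0        # membrane time constant (ms), resting potential
v_thresh, v_reset = 1.0, 0.0   # firing threshold and post-spike reset value

v = v_rest
input_current = np.random.uniform(0.0, 0.25, steps)  # stand-in for weighted input spikes
spike_times = []

for t in range(steps):
    # The potential leaks back toward rest while integrating incoming current.
    v += dt * (-(v - v_rest) / tau + input_current[t])
    if v >= v_thresh:          # threshold crossed: the neuron fires ...
        spike_times.append(t)
        v = v_reset            # ... and its membrane potential is reset

print("Spikes emitted at steps:", spike_times)
```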
However, as far as I can tell, SNNs are mostly used in brain research at this point in time, with a relatively small number of researchers applying them to what I would deem as commercially-related concepts (such as computer vision). From my research so far, they do appear to have some interesting advantages over their somewhat-related deep-learning cousins:
With the right type of hardware (for example, neuromorphic chips specifically designed for this), they are more energy efficient: not all neurons need to be processed in a layer before the next layer of neurons is processed. In deep learning, by contrast, a typical training approach processes every layer in sequence before an error is calculated; then, through back-propagation, that error adjusts all of the weights and biases in the previous layers of the network. This is computationally expensive and requires big hardware, hence the large GPU-powered computing farms employed by many of the companies investing heavily in this space.
They appear to get similarly impressive results as deep neural networks.
They cater for the ability to feed-forward and feed-backward in the same network meaning predicting events, or perhaps more accurately predicting possible multiple future events based on the current state of the agent, can become possible.
They can work with temporal data much more naturally, since they are inherently designed to handle spikes over time.
There are however, some disadvantages:
They’re typically used as a tool to learn how the brain works rather than in any “practical” use cases (as in with a commercial interest), so examples are few and far between.
It’s harder to get the input data into a spiking format, so for example you need expensive (?) cameras that operate very differently than the way a normal camera works.
Following on from the previous point, there are limited datasets out there to train them on, or the ones that are out there require significant pre-processing to turn them into a format suitable for an SNN to process.
To get the most out of them, they need to run on specifically designed chips that cater for large-scale parallel computations. Although this sounds similar to the way GPUs work, intrinsically SNNs require a different approach.
I’ve collated a selection of research papers that I’ve come across recently that start to bring their applicability to life, and also to show how they compare (favourably or otherwise) with DNNs:
LEARNING IN SPIKING NEURAL NETWORKS by Sergio Davis, School of Computer Science, University of Manchester
Masquelier T, Thorpe SJ (2007) Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity. PLOS Computational Biology 3(2): e31. https://doi.org/10.1371/journal.pcbi.0030031
Pani D, Meloni P, Tuveri G, Palumbo F, Massobrio P, Raffo L. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks. Frontiers in Neuroscience. 2017;11:90. doi:10.3389/fnins.2017.00090.
Spike-Timing-Dependent Hebbian Plasticity as Temporal Difference Learning by Rajesh P. N. Rao, Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195-2350, U.S.A. and Terrence J. Sejnowski, Howard Hughes Medical Institute, The Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Department of Biology, University of California at San Diego, La Jolla, CA 92037, U.S.A.
Nere A, Olcese U, Balduzzi D, Tononi G (2012) A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP. PLOS ONE 7(5): e36958. https://doi.org/10.1371/journal.pone.0036958
Gardner B, Grüning A (2016) Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding. PLOS ONE 11(8): e0161335. https://doi.org/10.1371/journal.pone.0161335
Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition: Cao, Y., Chen, Y. & Khosla, D. Int J Comput Vis (2015) 113: 54. https://doi.org/10.1007/s11263-014-0788-3
Spiking neural networks for vision tasks: By Henry Martin, and Prof. Dr Jörg Conrad, NEUROSCIENTIFIC SYSTEM THEORY, Technische Universität München
Event-based Visual Data Sets for Prediction Tasks in Spiking Neural Networks by Tingting (Amy) Gibson, Scott Heath, Robert P. Quinn, Alexia H. Lee, Joshua T. Arnold, Tharun S. Sonti, Andrew Whalley, George P. Shannon, Brian T. Song, James A. Henderson, and Janet Wiles
MNIST-DVS and FLASH-MNIST-DVS Databases
| The Practicality of Spiking Neural Networks | 25 | the-practicality-of-spiking-neural-networks-140f6663cae0 | 2018-05-30 | 2018-05-30 02:03:14 | https://medium.com/s/story/the-practicality-of-spiking-neural-networks-140f6663cae0 | false | 1,546 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Mark Strefford | Father, AI delivery and advisory, squash fanatic, sometimes play guitar. Living life by design. | fedafa697989 | markstrefford | 317 | 685 | 20,181,104 | null | null | null | null | null | null |
0 | def azureml_main(dataset, stop_words):
    # Azure ML Studio passes the two connected inputs in as pandas DataFrames.
    import string
    from nltk.stem.porter import PorterStemmer

    dataset.columns = ['sentiment', 'tweets']
    tweets = dataset['tweets'].tolist()
    punctuation = set(string.punctuation)

    # Strip digits and punctuation, then lower-case each tweet.
    tweets = [''.join(c for c in t if not c.isdigit()) for t in tweets]
    tweets = [''.join(c for c in t if c not in punctuation) for t in tweets]
    tweets = [t.lower() for t in tweets]

    # Remove common stop words (the second input is a one-column DataFrame).
    stops = set(stop_words['words'])
    tweets = [' '.join(w for w in t.split() if w not in stops) for t in tweets]

    # Reduce each remaining token to its Porter stem.
    porter_stemmer = PorterStemmer()
    tweets = [' '.join(porter_stemmer.stem(w) for w in t.split()) for t in tweets]

    # Write the cleaned tweets back, and relabel sentiment: 4 -> 1 (positive), 0 -> -1 (negative).
    dataset['tweets'] = tweets
    dataset['sentiment'] = [1 if s == 4 else -1 for s in dataset['sentiment']]

    # Azure ML expects the outputs as a tuple of DataFrames.
    return dataset,
| 7 | e3ef1ae24736 | 2018-06-18 | 2018-06-18 05:15:42 | 2018-06-18 | 2018-06-18 05:26:13 | 12 | false | en | 2018-06-18 | 2018-06-18 05:40:19 | 4 | 141077709c02 | 7.678302 | 24 | 1 | 1 | What is a sentiment analyzer? This must be the first question you will be having before reading this article. Sentiment Analysis is “the… | 3 | Sentiment Analyzer in 10 Minutes
What is a sentiment analyzer? This must be the first question you have before reading this article. Sentiment analysis is "the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude towards a particular topic, product, etc. is positive, negative, or neutral." It is also sometimes known as "opinion mining". But why do we need sentiment analysis? Sentiment analysis can let you know if there has been a change in public opinion toward any aspect of a concept. This can be used in many business cases; for example, we can gauge public opinion about a specific product we have placed in the market. It should now be obvious that a sentiment analyzer is a tool with which we can do sentiment analysis / opinion mining.
In the early days of computing, sentiment analysis was done manually by humans. But since the dawn of machine learning we have had the ability to automate this process. In this article we are going to talk about how to automate sentiment analysis by building an automated sentiment analyzer. But first and foremost we need data. Currently the most widely available public data source for sentiment analysis is Twitter, so for this experiment we will use tweets as our primary data source.
Now that we have talked about what sentiment analysis is, let's move to the implementation: how can we build an automated sentiment analyzer? For that we are going to use machine learning. One of the best places to start using machine learning is Microsoft Azure Machine Learning Studio. It is free, so there is no need to pay to build a machine learning model. You can go to this link and create a new account. There is pretty good documentation from Microsoft on how to use ML Studio, so if you want to learn more after this article you can refer to it. There is no need to have prior knowledge of machine learning to start building a sentiment analyzer, so if you have zero knowledge of machine learning, don't be afraid to continue reading. I will explain the machine learning concepts as simply as possible :) Let's start by creating an ML Studio account and logging in to it.
Once you have created an account you will be greeted with a screen similar to the picture above. There you can find tab items on the left-hand side. The tabs we are most interested in are the 'Experiments' tab and the 'DataSets' tab. The Experiments tab is where we create and train models and, as the name suggests, the DataSets tab is where we manage data for our experiment. When you go to the Experiments tab an empty canvas is created where we can drag and drop modules.
Building a machine learning model mainly consists of four steps:
Getting data
Data pre-processing
Training model
Evaluating model
I will divide building the sentiment analyzer into these four steps so it is far easier to understand if you don't have any prior machine learning knowledge.
1. Data set
Any machine learning model needs data in order to train. Hence, for our sentiment analyzer we also need data for which we already know the sentiment. Fortunately, such pre-labeled data exists. We require two data sets to train the sentiment analyzer: the first contains tweets along with their already-identified sentiment, and the other contains common stop words. Why we need the stop words data set I will explain below.
Tweets.zip
After uploading the csv files through the DataSets tab, they can be viewed in the experiment under the 'My Datasets' module list. In order to add them to our experiment we can drag and drop the two data sets onto our canvas. Voila, we have added our data for training the sentiment analyzer. Now we can look through a data set by right-clicking its module and going to 'Visualize'.
Here you can see our tweets data set has two columns and the stop-words data set has one column. In the tweets data, the sentiment-label column represents the already-evaluated sentiment: 4 for positive and 0 for negative.
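If you want to sanity-check the data locally before uploading it, a few lines of pandas will do. The file name and the assumption that the extracted CSV has no header row are mine; adjust them to match what comes out of Tweets.zip.

```python
import pandas as pd

# Hypothetical local inspection of the training data extracted from Tweets.zip.
df = pd.read_csv('tweets.csv', header=None, names=['sentiment_label', 'tweets'])

print(df.shape)                              # rows x columns
print(df['sentiment_label'].value_counts())  # expect only 4 (positive) and 0 (negative)
```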
2. Data Pre-processing
Now we have completed the first step of building our sentiment analyzer. The next step is to pre-process the data. Here is where the need for the second data set comes in. The reason we need the stop-words data set is to remove the most commonly used stop words from our training data. The accuracy of our model can be greatly increased by removing these frequently occurring common words. Removing them can be achieved using Python code. To run Python code inside our model, all we need to do is drag and drop the Execute Python Script module.
Inside that module, copy in the pre-processing code (the azureml_main function shown with this article). In this code we use several Python libraries to tokenize the tweets, remove stop words and stem the tokens.
The next data pre-processing task is feature hashing. Feature hashing improves the performance of the model by vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. We will vectorize our tweets using the Feature Hashing module, specifying the tweets column as the input.
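For intuition, here is roughly what the hashing trick does, sketched with scikit-learn's HashingVectorizer. This is purely an illustration of the idea; ML Studio's Feature Hashing module is a separate implementation with its own parameters.

```python
from sklearn.feature_extraction.text import HashingVectorizer

# Each tweet is mapped to a fixed-length sparse vector by hashing its tokens
# into one of 2**10 buckets, so no vocabulary has to be stored.
vectorizer = HashingVectorizer(n_features=2**10)

tweets = ["love this phone", "worst service ever"]
X = vectorizer.transform(tweets)

print(X.shape)  # (2, 1024) regardless of how large the vocabulary is
```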
3. Training model
We have now completed the data pre-processing part. The next step is training the model. First we are going to split the data set into training data and testing data. Training data will be used to train the model and testing data will be used to evaluate it. To split the data we use the 'Split Data' module. Set the splitting mode to Split Rows and the fraction to somewhere around 0.5–0.7. The fraction is the share of rows that goes into the first (training) output; here I have used 0.7.
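The same splitting idea, expressed in plain Python with scikit-learn as an analogy to the Split Data module (the toy DataFrame here is invented):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Ten invented rows; 70% go to training, the remaining 30% are held back.
df = pd.DataFrame({'sentiment': [1, -1] * 5,
                   'tweets': ['tweet %d' % i for i in range(10)]})
train_df, test_df = train_test_split(df, train_size=0.7, random_state=0)

print(len(train_df), len(test_df))  # 7 and 3
```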
Next we need to choose the machine learning algorithm that we are going to use for our model. There are mainly three kinds of algorithms available in ML Studio:
Classification Algorithms
Regression Algorithms
Clustering Algorithms
Classification algorithms are used to predict a category, regression algorithms are used to predict a value and clustering algorithms are used to predict clusters. This might seem a little complicated, but there is a nice cheat sheet provided by Microsoft which makes things easier. The cheat sheet can be downloaded from this link. By following the cheat sheet we can identify that our sentiment analyzer needs a classification algorithm.
There are many classification algorithms provided by ML Studio that you can choose from. I will use 'Two-Class Boosted Decision Tree' as my classification algorithm. You can experiment with different classification algorithms if you want to.
After choosing an algorithm, we can finally train the model using the 'Train Model' module. As input for this module, attach the classification algorithm and the training data set. Inside the module we must specify the attribute we are trying to predict; in our case, that attribute is 'sentiment'.
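As a rough local analogy to this training step, you could fit a boosted tree ensemble on hashed tweet features with scikit-learn. The tiny data set below is invented, and GradientBoostingClassifier is only a stand-in for ML Studio's Two-Class Boosted Decision Tree, not the same implementation.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import HashingVectorizer

# Invented training data: hashed tweet features and +1/-1 sentiment labels.
tweets = ["love it", "hate it", "great product", "terrible product"]
labels = [1, -1, 1, -1]

X = HashingVectorizer(n_features=2**8).transform(tweets).toarray()
model = GradientBoostingClassifier().fit(X, labels)

print(model.predict(X))  # predicted sentiment labels for the training tweets
```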
4. Evaluate model
After training the model we can score and evaluate it using the 'Score Model' and 'Evaluate Model' modules, providing the remaining data set as the test data. By evaluating the model we gain insight into the accuracy and performance of the model we have created. With this we can tweak and adjust the model to increase its accuracy.
Now we have done all the steps required to create our sentiment analyzer. The final model should look like the picture above. If everything is in place you can click 'Run' to train and evaluate the model. After some time, if all goes well, the model training will finish. And there, you have created a sentiment analyzer. You can open 'Evaluate Model' to analyze the accuracy of the model you have created and tweak it a little more to increase the accuracy. That's all there is to it. We have a fully functioning sentiment analyzer in our hands :)
Now, how can we use the sentiment analyzer we have created? We can publish it as a web service. To do that, first click 'Set Up Web Service'. This will automatically change the modules on the canvas to comply with a web service environment. After it has changed the modules, we can deploy it by clicking 'Deploy Web Service'.
This will redirect you to the web service portal where you can find many functions for working with our web service. For now, let's go to the Test endpoint function. Here you can enter an example tweet and then click Test. And there it is: our sentiment analyzer will analyze the tweet and return the output sentiment. As an added touch, we could map the output to 'Positive' or 'Negative' rather than the raw numeric labels, but I will leave that part to you :)
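Beyond the Test button, the deployed service can also be called programmatically. The sketch below assumes the classic ML Studio request/response API; the URL, API key and exact JSON schema are placeholders, and you should copy the real values from your own service's API help page.

```python
import json
import urllib.request

# Placeholder endpoint and key: copy the real ones from your service's API help page.
url = 'https://<region>.services.azureml.net/workspaces/<ws-id>/services/<svc-id>/execute?api-version=2.0'
api_key = '<your-api-key>'

body = json.dumps({
    "Inputs": {"input1": {"ColumnNames": ["sentiment", "tweets"],
                          "Values": [["0", "I really enjoyed this movie"]]}},
    "GlobalParameters": {}
}).encode('utf-8')

req = urllib.request.Request(url, body, headers={
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ' + api_key,
})
print(urllib.request.urlopen(req).read().decode('utf-8'))
```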
| Sentiment Analyzer in 10 Minutes | 242 | sentiment-analyzer-in-5-minutes-141077709c02 | 2018-06-18 | 2018-06-18 12:30:35 | https://medium.com/s/story/sentiment-analyzer-in-5-minutes-141077709c02 | false | 1,677 | We have built 150+ leading products. Tell us your idea, together we can shape up a great product out of it! | null | 99xtechnology | null | 99XTechnology | 99xtechnology | null | 99xtechnology | Machine Learning | machine-learning | Machine Learning | 51,320 | Janitha Tennakoon | null | 1fd45980958d | janitha000 | 30 | 17 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-10-17 | 2017-10-17 18:08:08 | 2017-10-17 | 2017-10-17 18:08:09 | 1 | false | en | 2017-10-19 | 2017-10-19 01:02:06 | 1 | 1410e85054d | 1.924528 | 0 | 0 | 0 | null | 5 | AI is saving doctors time — and patients’ lives
In healthcare, time is of the essence. Mere minutes can be the difference between life and death for someone who’s suffering from a stroke. To a patient who’s terminally ill, getting an early diagnosis means spending a few more months with their loved ones.
Many doctors, however, are unable to spend as much time with patients as they’d like. According to a large-scale report by Medscape, which is owned by WebMD, doctors on average spend 13–16 minutes with each patient.
At the same time, patients feel the crunch of time as well — and this keeps them from seeking help. Research shows many people worry their medical issues might be a waste of their physicians’ time — so they keep their complaints to themselves. Patients who were interviewed for the study talked about “the pressured context in which their consultations take place: the limited resources, the lack of time, and busy doctors.”
Other patients are less worried about squandering their GP’s time, but claim doctors are wasting their time. In 2014, a health IT solutions designer named Jess Jacobs started keeping track of all the hours she spent at her hospital. She found that only 29% of her 56 outpatient doctor visits were useful. On average, she had to wait 20 hours to get a bed in the hospital. Other calculations showed that just 0.08% of her time being hospitalized was spent treating her conditions. Jacobs, who suffered from two rare diseases, passed away in 2016, which made her message all the more poignant.
To be clear: doctors are not to blame here. Being part of the medical staff means dealing with a never-ending pile of paperwork that often needs immediate attention. New research shows American physicians now spend two-thirds of their time entering data and doing desk work.
These results indicate the balance has actually gotten worse over time: a little over ten years ago, doctors on average only spent one-third of their time on paperwork. Many countries are currently dealing with an aging population, so the demand for healthcare will probably only increase in the coming years. Guaranteeing proper care for all of these future patients means doctors will have to become more productive and efficient.
That’s where artificial intelligence (AI) comes in. One of the big promises of AI, which according to Amazon’s Jeff Bezos is currently experiencing “a golden age,” is that it will relieve workers of the simpler, repetitive tasks their jobs entail. This report by McKinsey Global Institute estimates that in the next 50 years, AI will spur annual productivity growth of somewhere between 0.8 and 1.4 percent. In comparison, the emergence of early robotics in the 1990s “only” increased productivity by 0.4 percent.
Posted on 7wData.be.
| AI is saving doctors time — and patients’ lives | 0 | ai-is-saving-doctors-time-and-patients-lives-1410e85054d | 2017-12-03 | 2017-12-03 19:27:51 | https://medium.com/s/story/ai-is-saving-doctors-time-and-patients-lives-1410e85054d | false | 457 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Yves Mulkers | BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world | 1335786e6357 | YvesMulkers | 17,594 | 8,294 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | b66e94ba709a | 2018-06-25 | 2018-06-25 03:55:38 | 2018-06-25 | 2018-06-25 04:12:41 | 1 | false | en | 2018-06-25 | 2018-06-25 04:12:41 | 5 | 14112d4bddaf | 3.750943 | 4 | 0 | 0 | The proliferation of technology in developing regions of the world provides a huge opportunity for us to use large-scale digital data for… | 5 | 3 Key Learnings About Data Science At GO-JEK
Courtesy of GO-JEK
The proliferation of technology in developing regions of the world provides a huge opportunity for us to use large-scale digital data for social good. Although data science is a nascent field, Google Trends reveals that it has become increasingly important since 2013. People have started to acknowledge the importance of data, which can be leveraged to harness both business and social value.
As a data-driven technology company, GO-JEK sees data as a way to help Indonesia become a better nation, specifically by increasing the welfare of our partners in various sectors. GO-JEK has distributed Rp 9.9 trillion (US$720.72 million) in earnings to its drivers and its micro, small and medium enterprise (MSME) partners annually by embedding a data-driven mindset across the entire organization.
Here are 3 takeaways from Data Science Weekend that will be helpful as we think about how to unlock the value of data:
1. Creating value: the hardest thing in data science today
In today’s dynamic business era, data science faces a new challenge: creating value. GO-JEK VP of Data Science Nick Robert explained that the hardest thing in data science has evolved over the last 10 years. Five and ten years ago, respectively, the most difficult challenges in data science were driving organizational awareness of the field’s merits and sourcing the right talent.
Today, organizations have grown more convinced of the value of data science. It has also become easier to source the necessary talent. However, the field has seen a disproportionate focus on building perfect analytical models, with too little time spent strategizing how to ensure that these models do not sit in vacuums.
How do we know that we’re creating value out of the insights from the data? Nick answered, “You just know. When you’re creating value through data science, you don’t have to be the one who figured that out, because if you’re creating value, everyone else will tell you. Sometimes, creating value out of the data science means solving problems that might not be the coolest problems around. Only if we focus on solving problems can we create value with data science.” In other words, data science needs to be seen increasingly as a means — not just an end. What does this, in turn, mean?
2. Data science will be impactful only if it is deliberately harnessed to achieve organizational objectives
Data science is nothing but numbers without a clear objective. Insights must answer clear problem statements, reflecting clear business goals.
“In GO-JEK, we believe that data science should be applied, not academic. It’s important to define and focus on the result or the objective that we want to pursue in reality as data science is more than a theory”, explained Kevin Aluwi in his session at the Data Science Weekend 2018.
One such example within GO-JEK is GO-FOOD’s “Food tagging with deep learning” project. The hypothesis here is that, using data science, we can drive superior customer experiences by boosting the efficiency with which dishes are tagged, resulting in a much better search and discovery flow.
This way of embedding data science within the organization will not just help achieve business or product goals; it will also help address our broader social mission. Crystal Widjaja added, “People from socially conscious companies are motivated by the problems that they’re going to solve. Having this kind of social vision creates more sustainable companies, because the people who are working with them are going to further that mission and explore new innovation. So, what is really core is: ‘How do you make the business sustainable?’”
To Crystal, data science is more than creating insights to bring impact to our society; it’s about combining the business goal and social vision to form relevant answers to well-defined problem statements.
3. Data science is all about collaboration
Data science is more than just a field of expertise. To unlock real value, there needs to be collaboration across departments and key stakeholders, from drivers to customers and merchants alike. As Nick stated, “Your stakeholders are the people who will tell you whether you’re succeeding or otherwise. As a data scientist, you really need to be at the front line with your stakeholders; only then will you be sufficiently immersed to come up with the appropriate problem statements.”
At GO-JEK, we realize that data science is an act of collaboration and it cannot be separated from other functions. One of the key principles of data science is that it must be integrated with product engineering. “We realized that the data science team has to work closely with our product and business teams as their unique skill-set helps to define the problem statements that we need to solve”, Kevin explained.
For this reason, there is immense value in having multiple touch points between data scientists and all stakeholders. This means having data scientists liaise with business intelligence to understand how data is sourced, spend time with user research to understand the real-life context behind a certain problem statement, and coordinate with product to align on higher-level business objectives.
Ultimately, data science always comes back to one thing: solving problems. As per Crystal’s closing remarks at the 2018 Data Science Weekend, “For us, if we can share our social vision and values clearly, then enough people will keep building new solutions to solve new problems.” That’s why organizations need to embed a mindset around understanding the context behind data.
We’re here to solve the most complex problems, if you’re passionate about this cause, join us here!
| 3 Key Learnings About Data Science At GO-JEK | 4 | 3-key-learnings-about-data-science-at-go-jek-14112d4bddaf | 2018-06-25 | 2018-06-25 04:12:41 | https://medium.com/s/story/3-key-learnings-about-data-science-at-go-jek-14112d4bddaf | false | 941 | Do you want to know what we do here at GO-JEK? Explore our medium blog here so you will not only know what we do, but understand WHY we do it! | null | null | null | Life at GO-JEK | life-at-go-jek | GO-JEK,LIFE AT GOJEK,GO-JEK CULTURE,GOJEK,STARTUP | null | Data Science | data-science | Data Science | 33,617 | Rayi Noormega | Program Manager, People and Culture at GO-JEK | ac3ff290b6a0 | rayinoormega | 181 | 9 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 41f4d674c19a | 2017-11-30 | 2017-11-30 15:08:13 | 2017-11-30 | 2017-11-30 19:12:15 | 1 | false | en | 2017-12-04 | 2017-12-04 21:03:05 | 2 | 14112dadd779 | 2.992453 | 36 | 1 | 0 | It is the time of year when many of us begin stressing about what to give our friends and loved ones for holiday gifts. The issue is often… | 5 |
The Gift of Financial Freedom- It’s not about Finances, it’s about Life!
It is the time of year when many of us begin stressing about what to give our friends and loved ones for holiday gifts. The issue is often both “what will they like” as well as “can I afford to spend the money”. I am really delighted to tell you that this holiday season, you can give yourself, and your loved ones, the gift of financial freedom, which is one of the few gifts that literally pays you back and keeps on giving.
Many of you know that I am the CEO of Pefin, the world’s first AI financial advisor. I am thrilled to share with you that today, November 30, 2017, Pefin has launched and is available to everyone in the United States (for those of you outside the US, stay tuned! We will be with you shortly!). Of course, this is a monumental milestone for the Pefin team, but it is also so appropriate that we launch now, in the midst of this holiday season. Pefin was built to be a blessing to its users, and that is what the holidays are all about.
Christmas in my house growing up was often a very stressful time of year for my family. The gap between what my parents wanted to give to my sister and me, and what they could give us, was often a chasm, which was emotionally tough on them. We were lucky to have a roof over our heads and the basics we needed, but I would be lying if I told you that as a kid in school I didn’t feel a twinge of envy when I would hear my classmates going on and on about the seemingly endless piles of gifts they received, and that was undoubtedly harder on my parents than it was on me. I also know, as a young mother, the knot in my stomach as I wondered if there would be enough money in the checking account to cover next month’s bills, whether at Christmas time or any other time of year. We often underestimate the stress, anxiety and pressure of financial uncertainty, and of the desire to do more for our families without knowing how to make that happen.
Pefin was created out of a very practical reality. Our founder, Ramya Joseph, was confronted with her own stressful situation about 6 years ago — the forced retirement of her father during the financial crisis. She leveraged her background in financial engineering and machine learning and built an incredibly detailed analysis to show him that his retirement was affordable. This was a watershed moment for Ramya who realized that the world is filled with intelligent and capable people of all ages who don’t have the time, money or skill to adequately understand or plan for their financial future. It was from this realization that Pefin was born.
Because Pefin was created out of a desire to put the right information in people’s hands when they need it, and not sell them products, it is fundamentally different from any other financial advisory platform currently on the market. Pefin will tell you if you are not ready to invest in the markets; I challenge you to find a robo-advisor who will ever say “don’t invest”. But Pefin is different, and we are really proud of that.
So, what is financial freedom? That is as different for each person as their fingerprints, because it is all about what is most important to you. At the end of the day, it isn’t about finance; it is about life, and living the life you want. In this holiday season, why not give yourself the gift of peace of mind and a path forward to achieve what matters to you, whether that is to buy a home, put your kids through college, or make sure that you are on track for a comfortable retirement. The possibilities are limitless, and they are unique to you. While you are at it, why not give the gift of financial freedom to your loved ones, and share the gift that keeps on giving with the people who matter most to you!
Check us out here and read the press release about Pefin’s launch here! I wish you all a wonderful holiday season and a lifetime of achieving what matters most to you!
| The Gift of Financial Freedom- It’s not about Finances, it’s about Life! | 669 | the-gift-of-financial-freedom-its-not-about-finances-it-s-about-life-14112dadd779 | 2018-01-18 | 2018-01-18 19:09:14 | https://medium.com/s/story/the-gift-of-financial-freedom-its-not-about-finances-it-s-about-life-14112dadd779 | false | 740 | More than living. Thriving. | null | thriveglbl | null | Thrive Global | thrive-global | HEALTH,WELL BEING,PRODUCTIVITY,NEUROSCIENCE,SLEEP | thrive | Christmas | christmas | Christmas | 17,859 | Catherine Flax | Advisor, Mentor, Speaker, Writer. Fintech and Commodities Professional. Wife, mother, grandmother and devout Catholic. Views expressed are my own. | a637a1573c7d | catherine.flax | 235 | 435 | 20,181,104 | null | null | null | null | null | null |
|
0 | https://tokenpocket.github.io/applink?dappUrl=https://opensea.io
https://tokenpocket.github.io/applink?dappUrl=https://eth2.io
| 2 | null | 2018-06-16 | 2018-06-16 14:04:21 | 2018-06-16 | 2018-06-16 14:55:38 | 1 | false | ja | 2018-06-16 | 2018-06-16 14:55:38 | 6 | 1414c173d2d3 | 3.564 | 4 | 0 | 0 | Hello everyone, this is the official TokenPocket team. | 5 | ver 1.2.4 release — added OpenSea / eth2.io / crypko.io
Hello everyone, this is the official TokenPocket team.
We are happy to report that we have released ver 1.2.4.
1.2.4 ver update!!
Main updates
We improved several internal processes and added dapps to TokenPocket whose compatibility had not previously been confirmed.
Below are the Dapps we added. All of them can be used with one tap from the home screen.
OpenSea — the world's largest marketplace for crypto assets
OpenSea: Buy Crypto Assets, CryptoKitties, Collectibles on Ethereum & more
A peer-to-peer marketplace for rare digital items and crypto collectibles. Buy, sell, auction, and discover…opensea.io
We added OpenSea, which recently drew attention with a major update that enables bidding offers on any ERC721 token. We will cover how to use it in a later article.
If you register, you will be notified by email whenever an offer comes in for an asset you own, so we recommend signing up if you have not already.
To open it in your TokenPocket, start here.
Eth2.io — the much-discussed app that lets you send ETH to a phone number
Eth2Phone | Send ether to anyone with just a phone number
eth2.io
You want to send ETH, but the other person doesn't have a wallet... and asking them to create one first is a hassle, right?
This is a groundbreaking application that lets anyone receive ETH with just a phone number. It made news by forming a partnership with Trust, and TokenPocket has now been welcomed in as well. (How to use it will come in a separate article.)
To open it from TokenPocket, use the link below.
*Please wait a little longer for opening from eth2.io; it should be reflected eventually. (In the meantime you can use Trust.)
Crypko — create anime-style pictures with AI × blockchain
Crypko
The next generation cryptocollectible game. Get AI dreamed Crypkos from Blockchain! crypko.ai
This one can only be played on Rinkeby for now (TokenPocket also supports the Rinkeby network; how to play will come later), but it can now be played with one tap from TokenPocket.
How to play this one will also be covered later.
Using ERC721 and the deep-learning technique GAN, you can discover, create, and own your very own cute, natural-looking "Crypko." We found it a truly groundbreaking service, so please give it a try.
Incidentally, now that avatars can be generated without limit and traded via ERC721 like this, I have come to believe that a world where one-of-a-kind VR avatars are bought and sold is on its way.
We have very few Twitter followers, so please follow us if you like (though I suspect anyone reading this is already a follower... lol).
TokenPocket (@TokenPocket) | Twitter
The latest Tweets from TokenPocket (@TokenPocket). Official TokenPocket, Token Pocket Inc. https://t.co/9hXUWInhDr. Tokyo, Japan. twitter.com
| ver 1.2.4 release — added OpenSea / eth2.io / crypko.io | 18 | ver-1-2-4リリース-opensea-eth2-io-crypko-io-を追加しました-1414c173d2d3 | 2018-06-17 | 2018-06-17 21:44:55 | https://medium.com/s/story/ver-1-2-4リリース-opensea-eth2-io-crypko-io-を追加しました-1414c173d2d3 | false | 111 | null | null | null | null | null | null | null | null | null | Cryptocurrency | cryptocurrency | Cryptocurrency | 159,278 | トークンポケット公式 | The official account of tokenPocket, an Ethereum/ERC20 wallet app & Dapp browser. | 788b9bd7d388 | tokenPocket_jp | 53 | 7 | 20,181,104 | null | null | null | null | null | null
0 | null | 0 | 7d3b7773b703 | 2017-09-16 | 2017-09-16 23:31:56 | 2017-09-17 | 2017-09-17 17:07:45 | 12 | false | en | 2017-09-17 | 2017-09-17 17:07:45 | 3 | 14165624a11 | 8.663208 | 5 | 0 | 0 | original address: | 1 | Scale Invariant Feature Transform (SIFT) Detector and Descriptor
original address:
OpenCV: Introduction to SIFT (Scale-Invariant Feature Transform)
So, in 2004, D.Lowe, University of British Columbia, came up with a new algorithm, Scale Invariant Feature Transform…docs.opencv.org
Goal
In this chapter,
We will learn about the concepts of SIFT algorithm
We will learn to find SIFT Keypoints and Descriptors.
Theory
In the last couple of chapters, we saw some corner detectors like Harris etc. They are rotation-invariant, which means that even if the image is rotated, we can find the same corners. That is obvious, because corners remain corners in the rotated image too. But what about scaling? A corner may not be a corner if the image is scaled. For example, check the simple image below: a corner in a small image within a small window looks flat when it is zoomed in the same window. So the Harris corner detector is not scale invariant.
So, in 2004, D. Lowe, University of British Columbia, came up with a new algorithm, Scale Invariant Feature Transform (SIFT), in his paper, Distinctive Image Features from Scale-Invariant Keypoints, which extracts keypoints and computes their descriptors. (This paper is easy to understand and is considered the best material available on SIFT, so this explanation is just a short summary of it.)
There are mainly four steps involved in SIFT algorithm. We will see them one-by-one.
Background: The Laplacian of Gaussian
The Laplacian of Gaussian (LoG) operation goes like this. You take an image and blur it a little. Then you calculate second order derivatives on it (the "laplacian"). This locates edges and corners in the image. These edges and corners are good for finding keypoints.
But the second order derivative is extremely sensitive to noise. The blur smoothes out the noise and stabilizes the second order derivative.
The problem is, calculating all those second order derivatives is computationally intensive. So we cheat a bit.
The Con
To generate Laplacian of Gaussian images quickly, we use the scale space. We calculate the difference between two consecutive scales, or the Difference of Gaussians: blur the image by progressively larger amounts and subtract each blurred image from the next.
These Difference of Gaussian images are approximately equivalent to the Laplacian of Gaussian. And we’ve replaced a computationally intensive process with a simple subtraction (fast and efficient). Awesome!
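To make this concrete, here is a minimal sketch of building one octave of DoG images with OpenCV (the filename and parameter values are illustrative, not from the original article):

import cv2
import numpy as np

img = cv2.imread('home.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float32)

sigma = 1.6        # initial blur
k = 2 ** 0.5       # multiplier between consecutive blur levels
num_levels = 5     # 5 blurred images give 4 DoG images per octave

# Blur the image by progressively larger sigmas, then subtract consecutive pairs
blurred = [cv2.GaussianBlur(img, (0, 0), sigma * (k ** i)) for i in range(num_levels)]
dog = [blurred[i + 1] - blurred[i] for i in range(num_levels - 1)]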
These DoG images come with another little goodie. These approximations are also "scale invariant". What does that mean?
The Benefits
Laplacian of Gaussian images on their own aren't great. They are not scale invariant: they depend on the amount of blur you apply. This is because of the Gaussian expression (don't panic ;) ):
G(x, y, σ) = 1/(2πσ²) · e^(−(x² + y²)/(2σ²))
See the σ² in the denominator? That's the scale. If we somehow get rid of it, we'll have true scale independence. So, if the Laplacian of a Gaussian is represented like this: ∇²G, then the scale invariant Laplacian of Gaussian would look like this: σ²∇²G.
But all these complexities are taken care of by the Difference of Gaussian operation. The resultant images after the DoG operation are already multiplied by the σ². Great eh!
Oh! And it has also been proved that this scale invariant thingy produces much better trackable points! Even better!
Side effects
You can’t have benefits without side effects >.<
You know the DoG result is multiplied by σ². But it's also multiplied by another number: (k−1). This is the k we discussed in the previous step.
But we’ll just be looking for the location of the maximums and minimums in the images. We’ll never check the actual values at those locations. So, this additional factor won’t be a problem to us. (Even if you multiply throughout by some constant, the maxima and minima stay at the same location)
Example
Here’s a gigantic image to demonstrate how this difference of Gaussians works.
In the image, I’ve done the subtraction for just one octave. The same thing is done for all octaves. This generates DoG images of multiple sizes.
1. Scale-space Extrema Detection
From the image above, it is obvious that we can't use the same window to detect keypoints at different scales. It is OK for a small corner, but to detect larger corners we need larger windows. For this, scale-space filtering is used. In it, the Laplacian of Gaussian (LoG) is computed for the image with various σ values. LoG acts as a blob detector which detects blobs of various sizes due to the change in σ. In short, σ acts as a scaling parameter. For example, in the above image, a Gaussian kernel with low σ gives a high value for the small corner, while a Gaussian kernel with high σ fits well for the larger corner. So, we can find the local maxima across scale and space, which gives us a list of (x, y, σ) values, meaning there is a potential keypoint at (x, y) at scale σ.
But this LoG is a little costly, so the SIFT algorithm uses Difference of Gaussians, which is an approximation of LoG. A Difference of Gaussian is obtained as the difference of Gaussian blurrings of an image with two different σ values, let them be σ and kσ. This process is done for different octaves of the image in the Gaussian Pyramid. It is represented in the image below:
Once these DoGs are found, the images are searched for local extrema (maxima or minima) over scale and space. For example, one pixel in an image is compared with its 8 neighbours as well as the 9 pixels in the next scale and the 9 pixels in the previous scale. If it is a local extremum, it is a potential keypoint. It basically means that the keypoint is best represented at that scale. It is shown in the image below:
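As a rough sketch (assuming `dog` is the list of DoG images from the earlier snippet, with numpy imported as np), that 26-neighbour comparison can be written directly:

def is_extremum(dog, s, y, x):
    # Stack the 3x3 neighbourhoods from the scale below, the same scale,
    # and the scale above: 27 values including the candidate itself
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in dog[s - 1:s + 2]])
    val = dog[s][y, x]
    # A candidate survives if it is the largest or smallest of all 27 values
    # (ties produce false positives, which is fine for a sketch)
    return val == cube.max() or val == cube.min()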
Once this is done, the marked points are the approximate maxima and minima. They are "approximate" because the maxima/minima almost never lie exactly on a pixel. They lie somewhere between pixels. But we simply cannot access data "between" pixels, so we must mathematically locate the subpixel location.
Here’s what I mean:
The red crosses mark pixels in the image. But the actual extreme point is the green one.
Find subpixel maxima/minima
Using the available pixel data, subpixel values are generated. This is done by the Taylor expansion of the image around the approximate keypoint.
Mathematically, it's like this:
D(x) = D + (∂D/∂x)ᵀ x + ½ xᵀ (∂²D/∂x²) x,  where x = (x, y, σ)ᵀ
We can easily find the extreme points of this equation (differentiate and equate to zero). On solving, we'll get subpixel keypoint locations. These subpixel values increase the chances of matching and the stability of the algorithm.
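As a sketch of that solve, the offset that extremizes the Taylor expansion is x̂ = −H⁻¹∇D, with the gradient and Hessian estimated from the DoG stack by finite differences (boundary handling omitted; `dog`, s, y, x as in the earlier snippets):

import numpy as np

def subpixel_offset(dog, s, y, x):
    d = lambda si, yi, xi: float(dog[si][yi, xi])
    # Gradient of D with respect to (x, y, sigma), by central differences
    gx = (d(s, y, x + 1) - d(s, y, x - 1)) / 2.0
    gy = (d(s, y + 1, x) - d(s, y - 1, x)) / 2.0
    gs = (d(s + 1, y, x) - d(s - 1, y, x)) / 2.0
    # Second derivatives for the 3x3 Hessian
    dxx = d(s, y, x + 1) - 2 * d(s, y, x) + d(s, y, x - 1)
    dyy = d(s, y + 1, x) - 2 * d(s, y, x) + d(s, y - 1, x)
    dss = d(s + 1, y, x) - 2 * d(s, y, x) + d(s - 1, y, x)
    dxy = (d(s, y + 1, x + 1) - d(s, y + 1, x - 1) - d(s, y - 1, x + 1) + d(s, y - 1, x - 1)) / 4.0
    dxs = (d(s + 1, y, x + 1) - d(s + 1, y, x - 1) - d(s - 1, y, x + 1) + d(s - 1, y, x - 1)) / 4.0
    dys = (d(s + 1, y + 1, x) - d(s + 1, y - 1, x) - d(s - 1, y + 1, x) + d(s - 1, y - 1, x)) / 4.0
    grad = np.array([gx, gy, gs])
    H = np.array([[dxx, dxy, dxs],
                  [dxy, dyy, dys],
                  [dxs, dys, dss]])
    # Extremum of the quadratic model: x_hat = -H^{-1} grad
    return -np.linalg.solve(H, grad)

If any component of the returned offset exceeds 0.5, the true extremum lies closer to a neighbouring sample, and the fit is repeated there.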
Example
Here’s a result I got from the example image I’ve been using till now:
The maxima detector. Trust me, there are a bunch of white dots on those black images. The compressions / etc got in the way.
The author of SIFT recommends generating two such extrema images. So, you need exactly 4 DoG images. To generate 4 DoG images, you need 5 Gaussian-blurred images. Hence the 5 levels of blur in each octave.
In the image, I’ve shown just one octave. This is done for all octaves. Also, this image just shows the first part of keypoint detection. The Taylor series part has been skipped.
Regarding different parameters, the paper gives some empirical data which can be summarized as: number of octaves = 4, number of scale levels = 5, initial σ = 1.6, k = √2, etc., as optimal values.
2. Keypoint Localization
Once potential keypoint locations are found, they have to be refined to get more accurate results. They used the Taylor series expansion of scale space to get a more accurate location of extrema, and if the intensity at an extremum is less than a threshold value (0.03 as per the paper), it is rejected. This threshold is called contrastThreshold in OpenCV.
DoG has a higher response for edges, so edges also need to be removed. For this, a concept similar to the Harris corner detector is used. They used a 2×2 Hessian matrix (H) to compute the principal curvature. We know from the Harris corner detector that for edges, one eigenvalue is larger than the other. So here they used a simple function:
R = Tr(H)² / Det(H)
If this ratio is greater than a threshold, (r+1)²/r, where r is called edgeThreshold in OpenCV, that keypoint is discarded. r is given as 10 in the paper.
So it eliminates any low-contrast keypoints and edge keypoints, and what remain are strong interest points.
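A sketch of that edge test on a single DoG image D (finite-difference Hessian, with r = 10 as in the paper):

def passes_edge_test(D, y, x, r=10.0):
    # 2x2 Hessian of the DoG image at the keypoint
    dxx = D[y, x + 1] - 2 * D[y, x] + D[y, x - 1]
    dyy = D[y + 1, x] - 2 * D[y, x] + D[y - 1, x]
    dxy = (D[y + 1, x + 1] - D[y + 1, x - 1] - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4.0
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    # Keep the keypoint only if the curvature ratio is below (r+1)^2 / r
    return det > 0 and (tr * tr) / det < ((r + 1) ** 2) / r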
3. Orientation Assignment
Now an orientation is assigned to each keypoint to achieve invariance to image rotation. A neighbourhood is taken around the keypoint location depending on the scale, and the gradient magnitude and direction are calculated in that region. An orientation histogram with 36 bins covering 360 degrees is created (it is weighted by gradient magnitude and by a Gaussian-weighted circular window with σ equal to 1.5 times the scale of the keypoint). The highest peak in the histogram is taken, and any peak above 80% of it is also considered to calculate the orientation. This creates keypoints with the same location and scale but different directions, which contributes to the stability of matching.
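A simplified sketch of that histogram (assuming `gray` is the grayscale image; the window radius is illustrative and the Gaussian weighting is omitted for brevity):

def dominant_orientations(gray, y, x, radius=8):
    patch = gray[y - radius:y + radius, x - radius:x + radius].astype(np.float32)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    # 36 bins of 10 degrees each, weighted by gradient magnitude
    hist, _ = np.histogram(ang, bins=36, range=(0, 360), weights=mag)
    peak = hist.max()
    # Every bin above 80% of the highest peak yields its own orientation
    return [b * 10 for b in range(36) if hist[b] >= 0.8 * peak]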
4. Keypoint Descriptor
Now the keypoint descriptor is created. A 16×16 neighbourhood around the keypoint is taken. It is divided into 16 sub-blocks of 4×4 size. For each sub-block, an 8-bin orientation histogram is created, so a total of 128 bin values are available. This is represented as a vector to form the keypoint descriptor. In addition to this, several measures are taken to achieve robustness against illumination changes, rotation, etc.
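Stripped of the rotation normalisation and interpolation used by real SIFT, the 128-value layout looks roughly like this sketch:

def raw_descriptor(gray, y, x):
    patch = gray[y - 8:y + 8, x - 8:x + 8].astype(np.float32)  # 16x16 neighbourhood
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    desc = []
    # 4x4 grid of sub-blocks, each contributing an 8-bin histogram: 16 * 8 = 128
    for by in range(0, 16, 4):
        for bx in range(0, 16, 4):
            h, _ = np.histogram(ang[by:by + 4, bx:bx + 4], bins=8,
                                range=(0, 360), weights=mag[by:by + 4, bx:bx + 4])
            desc.extend(h)
    desc = np.array(desc)
    return desc / (np.linalg.norm(desc) + 1e-7)  # normalise against illumination changes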
5. Keypoint Matching
Keypoints between two images are matched by identifying their nearest neighbours. But in some cases, the second-closest match may be very near to the first. This can happen due to noise or other reasons. In that case, the ratio of the closest distance to the second-closest distance is taken. If it is greater than 0.8, the match is rejected. This eliminates around 90% of false matches while discarding only 5% of correct matches, as per the paper.
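In OpenCV this ratio test is usually written with a brute-force matcher and knnMatch; here des1 and des2 are assumed to be descriptor arrays from two images (a preview of the matching covered in coming chapters):

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)   # two best candidates per descriptor

good = []
for m, n in matches:
    if m.distance < 0.8 * n.distance:    # ratio test from the paper
        good.append(m)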
So this is a summary of the SIFT algorithm. For more details and understanding, reading the original paper is highly recommended. Remember one thing: this algorithm is patented, so it is included in the opencv_contrib repo.
SIFT in OpenCV
So now let’s see SIFT functionalities available in OpenCV. Let’s start with keypoint detection and draw them. First we have to construct a SIFT object. We can pass different parameters to it which are optional and they are well explained in docs.
import cv2
import numpy as np

img = cv2.imread('home.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# SIFT lives in the xfeatures2d module of opencv-contrib
sift = cv2.xfeatures2d.SIFT_create()
kp = sift.detect(gray, None)

# Note: drawKeypoints needs an outImage argument (pass None) in OpenCV 3+
img = cv2.drawKeypoints(gray, kp, None)

cv2.imwrite('sift_keypoints.jpg', img)
The sift.detect() function finds keypoints in the image. You can pass a mask if you want to search only a part of the image. Each keypoint is a special structure with many attributes, like its (x, y) coordinates, the size of the meaningful neighbourhood, the angle which specifies its orientation, the response that specifies the strength of the keypoint, etc.
OpenCV also provides the cv2.drawKeypoints() function, which draws small circles at the locations of keypoints. If you pass the flag cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS to it, it will draw a circle with the size of the keypoint and even show its orientation. See the example below.
img = cv2.drawKeypoints(gray, kp, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('sift_keypoints.jpg', img)
See the two results below:
[Image: keypoints drawn as plain circles vs. rich keypoints showing size and orientation]
Now to calculate the descriptor, OpenCV provides two methods.
Since you already found keypoints, you can call sift.compute(), which computes the descriptors from the keypoints we have found, e.g. kp, des = sift.compute(gray, kp)
If you didn't find keypoints, directly find keypoints and descriptors in a single step with the function sift.detectAndCompute().
We will see the second method:
sift = cv2.xfeatures2d.SIFT_create()
kp, des = sift.detectAndCompute(gray, None)
Here kp will be a list of keypoints and des is a numpy array of shape (number_of_keypoints, 128).
So we got keypoints, descriptors etc. Now we want to see how to match keypoints in different images. That we will learn in coming chapters.
| Scale Invariant Feature Transform (SIFT) Detector and Descriptor | 9 | scale-invariant-feature-transform-sift-detector-and-descriptor-14165624a11 | 2018-05-25 | 2018-05-25 16:47:49 | https://medium.com/s/story/scale-invariant-feature-transform-sift-detector-and-descriptor-14165624a11 | false | 1,938 | Research stories, shares, and learning on the topic of Computer Vision | null | li.yin.355 | null | Li’s Computer Vision Blogs | lis-computer-vision-blogs | COMPUTER VISION,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Li Yin | A Computer Vision and Machine Learning researcher and an apprentice writer. @http://www.liyinscience.com/ | 9e96fcccfe65 | lisulimowicz | 295 | 36 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-06-16 | 2018-06-16 00:19:06 | 2018-06-16 | 2018-06-16 14:59:46 | 7 | false | en | 2018-06-17 | 2018-06-17 01:02:11 | 1 | 14167c747689 | 3.925472 | 2 | 0 | 0 | 1. Introduction | 5 | A Jump Start of Reinforcement Learning by DQN: 1. A Brief Introduction
1. Introduction
Suppose there’s an agent (or a being, whatever the name is). It can act based on its surrounding environment, as well as its own condition. One of the very basic idea of automation is to config the agent in such a way that its action well be beneficial. Take the Fig.1 as an example.
Fig. 1 Gold-Miner Agent
In Fig. 1, the agent is a gold miner who's looking for gold in a mine. The white circles represent locations in the mine. The agent can jump from one location to another. At some of the locations, the agent will find gold as a reward. Meanwhile, at other locations the agent will find nothing but bare stones. In these cases, the reward to the miner is zero or negative.
Here in this example, a location can be regarded as a state of the agent. The agent can choose which location to go to in the next step. This can be regarded as an action.
Fig. 2 State and Actions
Take Fig. 2 as an example. At a certain time t, the miner is at location B. We can say that the state of the agent is B. There are four actions that the agent could take. Action 1 is to go from B to A. Action 2 is to go from B to C. Action 3 is to go from B to F. Action 4 is to go from B to D. There exist rewards for Action 2 and Action 3, but nothing for Action 1 and Action 4.
2. Maximize the Sum of Rewards
Generally speaking, if the agent is at state s, it can take an action a. This action will bring the agent to a new state s', and it will also return a reward r (which could be positive, negative, or zero). The mechanism by which actions are chosen at different states is called a policy.
If we go several steps further, we can define a Q function as
Q(s, a) = r₁ + r₂ + r₃ + …
This function takes the current state and action as input. After taking action a, assume that the agent follows a certain policy, so there will be a trail of pairs of states and actions. Each of these state-action pairs is associated with a reward (the r terms above). The Q function returns the total reward of the trail of state-action pairs. For different policies, the state-action pairs will be different. Among these trails, there surely exists at least one trail that maximizes the Q function. These policies are called the optimal policies for the problem. Usually Q* is used as the symbol for the Q function of the optimal policies.
The Q* can be written as
Q*(s, a) = max over policies π of E[ r₁ + γ·r₂ + γ²·r₃ + … | s, a, π ]
The rewards after state s are in the "future," and rewards far in the future should count for less than immediate ones. The γ term here is named the discount factor, which discounts the influence of rewards further in the future. Generally, γ is between 0 and 1. This optimal Q* function can also be written in a recursive format, which is named the Bellman Equation:
Q*(s, a) = E over s' of [ r + γ · max over a' of Q*(s', a') ]
The problem here is that in most cases the agent doesn't have a "reward map" at hand. It needs to learn the rewards by interacting with its environment. The good news is that the function can be shown to converge to Q* (under suitable conditions) with algorithms such as Q-Learning.
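For the simple mine above, a tabular Q-Learning sketch could look like this (the state/action counts, the hyperparameters, and the env_step transition function are all illustrative stand-ins for the mine's dynamics):

import random
import numpy as np

n_states, n_actions = 6, 4            # e.g. locations A..F, four possible moves
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount factor, exploration rate

for episode in range(1000):
    s = random.randrange(n_states)
    for step in range(50):
        # Epsilon-greedy: mostly exploit the current Q, sometimes explore
        a = random.randrange(n_actions) if random.random() < eps else int(Q[s].argmax())
        s_next, r = env_step(s, a)    # hypothetical environment transition
        # Move Q(s, a) toward the Bellman target r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next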
3. When Deep Learning Kicks in: DQN
The mining example shown above is a simple one. Some problems may have infinitely many states, and it's hard to describe them with discrete states and simple look-up tables for the optimal policy.
Alternatively, another exciting idea is to train a deep neural net for decision making. Deep neural nets are good at system identification and dynamic control, and they can be trained by the methods of deep learning. Taking states, such as images, as input, a well-trained neural net can provide the best action, maximizing the total reward.
Fig. 3 Deep Neural Net
DQN was first introduced in Playing Atari with Deep Reinforcement Learning at NIPS in 2013. Since then, DQN has been widely employed to accomplish many kinds of tasks. The efficiency of DQN is very high compared with traditional optimization methods, to the point that it is sometimes described as "human-level decision making."
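As a preview of what the next post covers in detail, here is a minimal PyTorch sketch of the core DQN update (the state/action sizes are illustrative, and the replay buffer and target network from the original paper are omitted):

import torch
import torch.nn as nn

# Q-network: maps a state vector to one Q-value per action
q_net = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def train_step(s, a, r, s_next, done):
    # Q(s, a) for the actions actually taken
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    # Bellman target: r + gamma * max_a' Q(s', a'), zeroed past terminal states
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()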
The detailed implementation of DQN will be introduced in the next post of this series: A Jump Start of Reinforcement Learning by DQN.
For any questions, email me at [email protected].
| A Jump Start of Reinforcement Learning by DQN: 1. A Brief Introduction | 2 | a-jump-start-of-reinforcement-learning-by-dqn-1-a-brief-introduction-14167c747689 | 2018-06-17 | 2018-06-17 01:02:12 | https://medium.com/s/story/a-jump-start-of-reinforcement-learning-by-dqn-1-a-brief-introduction-14167c747689 | false | 762 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Lutong Zhang | null | 7cad01e666fd | zlt1213 | 14 | 47 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | a995c24848a3 | 2018-03-27 | 2018-03-27 15:51:38 | 2018-03-27 | 2018-03-27 15:58:10 | 4 | false | en | 2018-03-27 | 2018-03-27 15:58:10 | 12 | 1416d0cd2dbe | 4.130189 | 7 | 0 | 0 | This blog post was written by Tony Melrose, a Platform Solutions Engineer at Box | 4 | Enabling Field Maintenance Workers with Box and Zia Consulting
This blog post was written by Tony Melrose, a Platform Solutions Engineer at Box
Last month, a few members from the Box team headed to Boulder, CO to partner up with Zia Consulting for a two-day hackathon focused on building cloud content management solutions for a fictional company. The group split into teams and focused on a common problem for many businesses: a field maintenance worker’s workflow. These workers often work solely from mobile devices, assessing broken parts with minimal access to computing resources.
Leveraging the Box Platform APIs, webhooks and a few non-Box services (I’ll get into these later), we built a mobile application that ingests a photo, leverages OCR capabilities to identify a part’s serial number, does a database lookup for the product information based on that serial number, applies metadata to the image file in Box and kicks off a workflow for part replacement based on that metadata. While the Box Platform alone can’t solve for this entire workflow, Zia’s expertise in enterprise content management workflows helped us tie into other services to drive to a viable solution.
The Process
Each team approached the problem slightly differently, and I’m going to share with you the approach that my team took by building out a small Flask application in Python that could link into each of these other services or provide the necessary metadata values required to start relevant workflows.
First, we determined the flow of information through the application and pulled together a rough architecture diagram to inform our decisions around building the application.
This image represents the flow of data, starting on the left-hand side with the Rocket Field Engineer who takes a photo of a part that’s in need of maintenance.
We created an application through the Box Developer Console for this process with a key that we would share across all flows in the app. Each Rocket Field Engineer (RFE) would need to log into the mobile application but didn’t need access to the Box interface, making them perfect candidates for our App User model. We would have centrally-owned content using a Service Account and each RFE would be creating a folder within that central repository with a folder name of “Work Order Number ####”.
We configured a webhook to watch for FILE.UPLOADED events on the centrally managed folder, which cascades to any subfolders, enabling us to see when a new image is uploaded to a newly-created work order and send that over to Ephesoft for scanning. Ephesoft offers a whole portfolio of intelligent capture and scanning solutions. Now that we have the backbone in place, let’s look at the actual meat of the application.
The RFE takes a photo of a part that they feel is in need of maintenance using a custom mobile application (which could be built using one of Box’s mobile SDKs). This creates a new work order folder and uploads the file into it. It also triggers a webhook to our application so we have the file ID of the newly uploaded photo that we can use to initiate a download request.
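A sketch of what our Flask webhook handler might have looked like (the route path, config filename, and the send_to_ephesoft helper are illustrative; the payload fields follow Box's webhook format):

from flask import Flask, request
from boxsdk import Client, JWTAuth

# Authenticate as the Service Account that owns the central work-order folder
auth = JWTAuth.from_settings_file('box_config.json')
client = Client(auth)

app = Flask(__name__)

@app.route('/box-webhook', methods=['POST'])
def handle_upload():
    event = request.get_json()
    if event.get('trigger') == 'FILE.UPLOADED':
        file_id = event['source']['id']               # the newly uploaded photo
        image_bytes = client.file(file_id).content()  # download it for processing
        send_to_ephesoft(image_bytes, file_id)        # hypothetical hand-off
    return '', 200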
Once we have the file, we can send it over to Ephesoft for character recognition and database lookup using Ephesoft Transact. After the file has been processed, we receive an XML response from Ephesoft that includes information such as the serial number, product name, product cost, etc. that Ephesoft was able to identify in the image.
For expediency in our hackathon application, we just wrote the response to a file and then parsed that, converting it to a JSON object to pass into the Box Metadata API request.
The JSON object was then transformed to map to the fields we used in our metadata template so we could place our call back to Box with the appropriate metadata based on everything we now know about the image.
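That parse-and-map step might look roughly like this (the XML tag names and the 'workOrder' metadata template key are illustrative; `client` is the authenticated Box client from the sketch above):

import xml.etree.ElementTree as ET

def apply_part_metadata(client, file_id, xml_path):
    # Pull the fields Ephesoft extracted from the image
    root = ET.parse(xml_path).getroot()
    metadata = {
        'serialNumber': root.findtext('serialNumber'),
        'productName': root.findtext('productName'),
        'productCost': root.findtext('productCost'),
    }
    # Attaching the metadata instance is what the Nintex workflow keys off of
    client.file(file_id).metadata(scope='enterprise', template='workOrder').create(metadata)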
Once the metadata was applied to the file, a Nintex workflow was kicked off to either create a purchase order automatically or route the request for approval by a manager if the product cost exceeded a certain dollar value. Nintex is an intelligent process automation tool for creating cloud-based workflows. This is a simplified interpretation of what it could look like in the real world but essentially Nintex provided us with the ability to create logic in the workflow and route the same workflow to different people based on a certain metadata value.
Once approved, the Rocket Field Engineer could expect to receive the replacement part to complete the maintenance — all without having to fill out any forms or manually look up serial numbers — thereby improving their capacity and essentially eliminating human error with incorrectly entering information into a text field.
This entire process leverages a few key technologies to solve for an end-to-end solution. First, we leveraged the Box Platform APIs and webhooks to integrate Box into a mobile application to capture images and send a webhook event upon upload. Then, we built an integration with Ephesoft to analyze and extract information off the image file and write the results to Box’s metadata APIs. In the near future, we’ll be able to leverage the Box Skills Kit for this, which provides easy-to-use access tokens for reading the file and writing metadata to Box. Then, we leveraged Nintex to trigger a workflow when certain metadata values were applied.
| Enabling Field Maintenance Workers with Box and Zia Consulting | 8 | enabling-field-maintenance-workers-with-box-and-zia-consulting-1416d0cd2dbe | 2018-05-13 | 2018-05-13 15:04:07 | https://medium.com/s/story/enabling-field-maintenance-workers-with-box-and-zia-consulting-1416d0cd2dbe | false | 909 | Cloud content management APIs | null | null | null | Box Developer Blog | box-developer-blog | DEVELOPER TOOLS,CONTENT MANAGEMENT,API,TECHNOLOGY | boxplatform | Ocr | ocr | Ocr | 290 | Box Developers | null | 7e4cb8ebf639 | Box_Developers | 176 | 3 | 20,181,104 | null | null | null | null | null | null |