audioVersionDurationSec (float64, 0-3.27k) | codeBlock (string, 3-77.5k chars) | codeBlockCount (float64, 0-389) | collectionId (string, 9-12 chars) | createdDate (string, 741 classes) | createdDatetime (string, 19 chars) | firstPublishedDate (string, 610 classes) | firstPublishedDatetime (string, 19 chars) | imageCount (float64, 0-263) | isSubscriptionLocked (bool, 2 classes) | language (string, 52 classes) | latestPublishedDate (string, 577 classes) | latestPublishedDatetime (string, 19 chars) | linksCount (float64, 0-1.18k) | postId (string, 8-12 chars) | readingTime (float64, 0-99.6) | recommends (float64, 0-42.3k) | responsesCreatedCount (float64, 0-3.08k) | socialRecommendsCount (float64, 0-3) | subTitle (string, 1-141 chars) | tagsCount (float64, 1-6) | text (string, 1-145k chars) | title (string, 1-200 chars) | totalClapCount (float64, 0-292k) | uniqueSlug (string, 12-119 chars) | updatedDate (string, 431 classes) | updatedDatetime (string, 19 chars) | url (string, 32-829 chars) | vote (bool, 2 classes) | wordCount (float64, 0-25k) | publicationdescription (string, 1-280 chars) | publicationdomain (string, 6-35 chars) | publicationfacebookPageName (string, 2-46 chars) | publicationfollowerCount (float64) | publicationname (string, 4-139 chars) | publicationpublicEmail (string, 8-47 chars) | publicationslug (string, 3-50 chars) | publicationtags (string, 2-116 chars) | publicationtwitterUsername (string, 1-15 chars) | tag_name (string, 1-25 chars) | slug (string, 1-25 chars) | name (string, 1-25 chars) | postCount (float64, 0-332k) | author (string, 1-50 chars) | bio (string, 1-185 chars) | userId (string, 8-12 chars) | userName (string, 2-30 chars) | usersFollowedByCount (float64, 0-334k) | usersFollowedCount (float64, 0-85.9k) | scrappedDate (float64, ~20.2M) | claps (string, 163 classes) | reading_time (float64, 2-31) | link (string, 230 classes) | authors (string, 2-392 chars) | timestamp (string, 19-32 chars) | tags (string, 6-263 chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | null | 0 | null | 2018-08-15 | 2018-08-15 15:53:27 | 2018-08-15 | 2018-08-15 17:16:34 | 0 | false | en | 2018-08-15 | 2018-08-15 22:52:36 | 3 | 12bfd6bdb2f6 | 2.162264 | 1 | 0 | 0 | Gartner’s Hype Cycle chart is a guiding light for venture capitalists. It is very often that a new business or technology and its potential… | 4 | Which AI companies will be adopted the fastest?
Gartner’s Hype Cycle chart is a guiding light for venture capitalists. Very often, a new business or technology offers potential value to an end-user that is clearly better than the existing experience. But it is important to understand how quickly the new product will be adopted. There are many examples of businesses that were the first of their kind but did not end up being the category winner: Facebook was not the first social network, and Uber was not the first ride-sharing service. Neither was actually even close to being second. There was lots of hype around these industries, but it took some time for reality to set in around what it would take for these ideas to be realized.
This is currently relevant for companies that are leveraging Artificial Intelligence (AI) to create new businesses. There are legitimate applications of AI in every industry, and there is very little doubt that AI will eventually infiltrate every one of them. The way we understand which companies will create the most near-term value for customers and investors is by breaking companies down into three pieces: data, algorithms and operation.
To illustrate, we’ll use the company Tractable, which has added AI to claims assessment for vehicle collisions. The company aims to add efficiency to the damage appraisal process and remove the need for an appraiser to come look at the damage in person, a process which can take weeks to coordinate.
Data: The data behind the company comes from pictures of damaged cars and an understanding of what specific damage means in terms of maintenance costs.
Algorithms: In this case, the algorithms behind the AI are centered around computer vision. They understand the difference between a scratch, a dent, a hole, etc.
Operation: The software delivers an assessment back to an appraiser where they can confirm its findings and send it off to the vehicle owner.
The key here is how quickly its findings are produced and verified. The process starts as soon as pictures are uploaded into Tractable’s software. Appraisers can go through many more claims when they don’t have to travel to assess them. They are provided an assessment by the software and can adjust it accordingly.
Pictures and a damage assessment are delivered to an appraiser faster than they could go and gather that data themselves. The value of that efficiency is felt immediately. The feedback loop into the underlying algorithms is also fast. The algorithms are refined with each new claim. The speed of that feedback loop is a competitive advantage against a new entrant. Tractable could claim to have the best data set and assessment software, which could only be replicated by processing the volume of claims that Tractable already has.
What does this mean for adoption and the Hype Cycle? When it comes to things that are new, the faster ROI can be shown and value delivered, the easier it is to justify increased adoption. The AI companies that do this best will see rapid adoption.
For companies where ROI takes longer to be shown, it doesn’t mean that they can’t come to market. But they will have to plan accordingly — raise more capital, partner on distribution and grow slowly.
—
Disclaimer — This is not investment advice.
| Which AI companies will be adopted the fastest? | 50 | which-ai-companies-will-be-adopted-the-fastest-12bfd6bdb2f6 | 2018-08-15 | 2018-08-15 22:52:36 | https://medium.com/s/story/which-ai-companies-will-be-adopted-the-fastest-12bfd6bdb2f6 | false | 573 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Chris Fortunato | Just here to exhale. Opinions are my own. | bac95e63dabc | chrisfortunato | 158 | 217 | 20,181,104 | null | null | null | null | null | null |
0 | conda install python=2.7.13
conda update python
conda create --name CoreMLEnvironment
pip install -U coremltools
source activate CoreMLEnvironment
cd ~/Download/cnn_age_gender_models_and_data.0.0.2
python
| 6 | null | 2017-09-14 | 2017-09-14 07:57:12 | 2017-09-14 | 2017-09-14 17:03:58 | 7 | false | it | 2017-09-15 | 2017-09-15 08:15:32 | 9 | 12c07487f9f1 | 3.993396 | 4 | 2 | 0 | Neural Networks: CoreML e iOS11 + coremltools + Anaconda | 5 | CoreML: Da modello Caffe ad iOS App — Parte 1
Neural Networks: CoreML and iOS 11 + coremltools + Anaconda
What are we doing today?
Today we will see how to convert a Caffe model into a CoreML model using coremltools.
Tomorrow… we will see how to use a CoreML model in iOS 11 to recognize the (probable :)) gender of a person from a photo.
What do we need?
A Caffe model ☕️
Anaconda (Python)
Xcode 9 💻 (for tomorrow)
An iPhone running iOS 11 📱 (for tomorrow)
Installing Python
First of all, we need to install Python on our Mac. Anaconda is here to make our life easy.
Download Anaconda with Python 2.7 and start the installation.
The Anaconda installation will modify the bash_profile file in your user folder. You can view the updated file by typing cat ~/.bash_profile in the terminal.
Once Anaconda is installed, proceed with installing Python 2.7.13. Open the terminal and type conda install python=2.7.13 (followed, if needed, by conda update python):
Perfect! Since it is possible to use multiple versions of Python on the same Mac, what we need to do is define a virtual environment to use with Anaconda. To do this, type conda create --name CoreMLEnvironment:
where CoreMLEnvironment is a name of our choosing (remember to answer “y” when asked whether to proceed with creating the virtual environment).
If we opt for PyCharm to write our Python conversion script (as we will see later), then creating a virtual environment is not necessary.
Installing CoreMLTools
At this point, all that is left is to install CoreMLTools by typing, again from the terminal, pip install -U coremltools:
Caffe Model
It is time to get hold of a Caffe model. A handy list of Caffe models is available on Caffe’s Model Zoo page.
For our project, download the Age & Gender model from here, or directly by clicking on cnn_age_gender_models_and_data.0.0.2.zip.
Unzip the .zip file and check that the contents of the folder match the image:
Converting the Caffe Model
As you can see from the image, the folder contains two types of models: Gender and Age.
What we need to do now is convert the models so that we can use them with CoreML.
At this point we have two paths. If we are nerds who love the terminal, go to the next section, “From the Terminal”; otherwise we can jump straight to the “From PyCharm” section.
From the Terminal
Activate the virtual environment we created in Anaconda: source activate CoreMLEnvironment
Move into the folder with the Caffe models we just downloaded: cd ~/Download/cnn_age_gender_models_and_data.0.0.2
Then type python to start the interpreter,
and write our conversion script:
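A minimal conversion script, as a sketch, might look like the following. The coremltools.converters.caffe.convert call is from the coremltools API of that era; the .caffemodel/.prototxt file names and the 'data' input name are assumptions based on the contents of the downloaded archive, so double-check them against your folder before running.
# Minimal sketch of the conversion script (coremltools Caffe converter).
# File and input names below are assumptions; adjust them to match your folder.
import coremltools

# Gender model: pass (weights, prototxt). Marking the input as an image makes
# the generated .mlmodel accept a photo instead of a MultiArray.
gender_model = coremltools.converters.caffe.convert(
    ('gender_net.caffemodel', 'deploy_gender.prototxt'),
    image_input_names='data'
)
gender_model.save('Gender.mlmodel')

# Age model: same idea, different weights and prototxt.
age_model = coremltools.converters.caffe.convert(
    ('age_net.caffemodel', 'deploy_age.prototxt'),
    image_input_names='data'
)
age_model.save('Age.mlmodel')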
Once we press enter, the conversion of the models will start, and the converted CoreML models Gender.mlmodel and Age.mlmodel will appear in our folder.
From PyCharm
If instead we do not want to use the terminal and prefer the Python IDE route, then we need PyCharm. The CE (Community Edition) version is free.
Install PyCharm, open it, create a new project via File/New Project and call it Exporter.
To make our life easier, save the project in the same folder where our models are (in my case, again ~/Download/cnn_age_gender_models_and_data.0.0.2).
As the interpreter, it is vitally important 💪 to choose from the list the one related to Anaconda:
Anaconda Python 2.7.13
Right-click on Exporter, choose New and then Python file. Call it exporter.py and save it inside the newly created Exporter folder.
Now we need to check that our Python Console is set up correctly. Open the PyCharm preferences (PyCharm/Preferences) and make sure the Python console uses the same Python interpreter, as in the image:
For convenience (it will come in handy when we run the script), go back to the Finder and move all the files contained in ~/Download/cnn_age_gender_models_and_data.0.0.2 into the newly created Exporter folder.
Perfect, we can finally write our script. Open the exporter.py file and write inside it the same conversion script shown in the terminal section above.
Run the script. 🤞 We will then find the Age.mlmodel and Gender.mlmodel models, converted to CoreML, inside the script’s folder:
CoreMLTools
It is important to understand that the conversion performed with coremltools determines how the model is created and, above all, the type of input and output parameters it will expose.
In fact, if for example we leave out the image_input_names argument from our script, we will notice, when opening the Age.mlmodel and Gender.mlmodel files in Xcode, that both the input and the output will always be a MultiArray parameter.
Keeping the image_input_names argument instead, the input of our models will be a parameter of type Image… that is, a photo :)
In any case, there is not just one way to convert a non-CoreML model into a CoreML model. There are multiple approaches, and they depend on many variables.
Moreover, not all models can be converted, nor is every tool like Caffe supported.
Below is a summary list from Apple of the supported models and tools.
Next…
In the next post we will see how to use Gender.mlmodel in iOS 11 to recognize the gender of a person in the foreground of a photo.
Stay tuned :)
| CoreML: Da modello Caffe ad iOS App — Parte 1 | 60 | coreml-da-modello-caffe-ad-ios-app-12c07487f9f1 | 2018-06-14 | 2018-06-14 11:37:13 | https://medium.com/s/story/coreml-da-modello-caffe-ad-ios-app-12c07487f9f1 | false | 780 | null | null | null | null | null | null | null | null | null | Coreml | coreml | Coreml | 178 | Daniele Galiotto | iOS Developer and COFounder at Nexor Technology srl www.nexor.it | 5f8d2bb58f0e | danielegaliotto | 28 | 28 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | f21e517b3177 | 2017-09-06 | 2017-09-06 18:04:06 | 2017-09-06 | 2017-09-06 18:14:17 | 1 | false | en | 2017-09-06 | 2017-09-06 18:14:17 | 4 | 12c149adc402 | 3.592453 | 8 | 0 | 0 | Maria Valdivieso de Uster, Director of Knowledge, McKinsey & Company’s Marketing and Sales Practice | 5 | The 7 Biggest Trends Upending Sales Today
Maria Valdivieso de Uster, Director of Knowledge, McKinsey & Company’s Marketing and Sales Practice
I spend all of my time with sales executives across different countries and sectors, helping them think through their sales strategies. While writing Sales Growth, one characteristic rose to the top that differentiated best-in-class sales leaders from the rest: their ability to find growth opportunities before their competitors do.
A key to doing this is to spend time thinking about what’s coming in technology, regulations, demographics, and economics, and how this is affecting opportunities for sales now and in the future. We identified many ways in which the sales world is changing, but the following are the trends I found particularly eye-catching in the context of driving sales growth today.
Trend #1: Investing in Future Growth
Thinking three moves ahead is vital in any game, and is essential to sales growth. But this skill does not come automatically. The best sales leaders make trend analysis a formal part of the sales-planning process, and make forward planning part of someone’s job description. This means they are perfectly poised to capture the opportunities created by sudden changes in the environment.
Knowledge is only one part of the equation, though. Top-performing sales organizations have the will and the means to translate macro shifts into real top-line impact fast. The first-mover advantage created by forward-looking sales plans drives sales in areas where competitors have yet to arrive.
Many sales executives explicitly account for investment in new growth opportunities in their annual capacity-planning processes. More than half of the fast-growing companies we interviewed look at least one year out, and 10% look more than three years out. Thinking ahead is not just about resource planning: 45% of fast-growing companies invest more than 6% of their sales budget on activities supporting goals that are at least a year out.
Trend #2: Finding the Growth in Micromarkets
Averages lie. In the quest for sales growth, averages can mask where growth truly lies, and the hidden pockets of growth in your industry may be in your own backyard.
The most successful sales leaders I speak to are extremely proactive at mining the growth that lies beneath their feet in what can appear — on average — to be mature markets. They take a geological hammer to all their market and customer data; they break larger markets down into much smaller units, where the opportunities — prospects, new customer segments, or microsegments — can be assessed in detail. This disaggregation makes it apparent very quickly that a broad-brush approach leads to resources being wasted where growth is significantly below average.
Micromarket strategies are heavy on the analytics, so it’s important that sales teams on the ground don’t get bogged down by the details, and can use the information in the most effective way.
Trend #3: Capturing Value from Big Data and Advanced Analytics
Sales forces have an incredible amount of data at their fingertips today compared with even four or five years ago, but getting insights from it and making those actionable is much harder. Sales leaders that get it right make better decisions, uncover insights into sales and deal opportunities, and refine sales strategy.
The big shift we see today is from the analysis of historical data to using data to be more predictive. Sales forces use sophisticated analytics to decide not only what the best opportunities are, but also which ones will help minimize risk. In fact, in these areas, three-quarters of fast-growing companies believe themselves to be above average, while 53%–61% of slow-growing companies hold the same view.
But even among fast-growing companies, only just over half — 53% — claim to be moderately or extremely effective in using analytics to make decisions. For slow-growing companies, it drops to a little over a third. This indicates that there remains significant untapped potential in sales analytics.
To start with, you need to have a lot of very smart data scientists to help you mine the data, and then you need people with the business expertise to translate that into something that salespeople can act upon. Then, the next time a rep goes to see a customer, he or she knows exactly who to see, when to see them, what to say, and precisely what to offer.
Trend #4: Outsourcing the Sales Function
One of the trends that we began to see while doing the research for Sales Growth is the outsourcing of parts of (and sometimes lots of) the sales value chain. What’s new today is that the automation we mentioned has enabled third-party vendors to run a company’s entire end-to-end sales process. I’m talking all the way from demand generation to customer acquisition and fulfillment.
These companies understand your target segments, they use big data to identify leads, they market to different segments with different offers and using different platforms, and then they match their own sales reps to individual customers based on the likelihood of converting that particular type of person. For the sales organization, it means moving to a model where your pay is based not on the service, but on the new customers being acquired.
To read the complete article, “The 7 Biggest Trends Upending Sales Today,” visit Quotable.com.
| The 7 Biggest Trends Upending Sales Today | 11 | the-7-biggest-trends-upending-sales-today-12c149adc402 | 2017-10-11 | 2017-10-11 20:36:54 | https://medium.com/s/story/the-7-biggest-trends-upending-sales-today-12c149adc402 | false | 899 | Salesforce for Sales shares content highlighting the latest sales advice from the world’s most respected sales minds. Learn from the best. Sell like the best. | null | SalesCloud | null | Salesforce for Sales | null | salesforce-for-sales | SALESFORCE FOR SALES,SALESFORCE,SALES,THOUGHT LEADERSHIP,SALES TIPS | salescloud | Sales | sales | Sales | 30,953 | Salesforce | Connect to Your Customers in a Whole New Way | f4fb2a348280 | salesforce | 36,343 | 4,056 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-10 | 2018-03-10 13:49:39 | 2018-03-10 | 2018-03-10 15:18:40 | 9 | false | en | 2018-03-10 | 2018-03-10 15:21:03 | 0 | 12c15c09ea09 | 2.909434 | 1 | 1 | 0 | Tutorial | 5 | How to Claim Your BTD Coins
Tutorial
This tutorial will show you how to claim your BTD coins by exporting your private key from your bitcoin wallet and then importing it into the Bitcoin Dollar wallet. This method is a very fast and simple way to claim your BTD coins.
*Note: We will be using the Electrum bitcoin wallet for this tutorial, but the steps can be applied to the majority of bitcoin wallets out there. If you can export your private keys from your bitcoin wallet, then this tutorial will work for you.
Safety Precaution: Move your Bitcoin balance to a new BTC address before importing your private key into your Bitcoin Dollar wallet. You should never trust importing your private key anywhere, this is how you avoid any risk.
1. Find the option to export your private key. In Electrum, there are two ways to do this.
A. Wallet > Private Keys > Export
This will export ALL of your private keys in this format: BTCADDRESS PRIVATEKEY
B. Under the “Addresses” tab, right click on the BTC address and click Private Key.
Note: If you have multiple addresses, you can filter to display only the addresses that have been ‘Used’; under the “Addresses” tab you will find a dropdown box with different filters. Even if the addresses have 0 balances at this moment in time, it might be worth the effort to import all of the addresses that have had transactions into the Bitcoin Dollar wallet, as you might find that you obtain more BTD coins this way.
2. Copy the private key that you would like to import into the Bitcoin Dollar wallet.
3. Open the Bitcoin Dollar wallet and click on the Help > Debug Window menu.
4. Click the ‘Console’ tab under the debug window interface, and type the following command (Make sure to replace YOURPRIVATEKEY with your copied private key) and press enter:
importprivkey YOURPRIVATEKEY
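For reference, the Bitcoin Dollar wallet console appears to inherit Bitcoin Core’s RPC interface, in which importprivkey also accepts an optional label and a rescan flag; the fuller form below is an assumption, so confirm it with help importprivkey in your own wallet before relying on it:
importprivkey "YOURPRIVATEKEY" "optional-label" true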
5. You will see a dialog box pop up that says “Rescanning”; this can take a few hours to complete. Once it has completed, you should see a new BTD balance reflected in your Bitcoin Dollar wallet, and you should see a new BTD address added under “Receiving Addresses” (File > Receiving Addresses). You can complete this task for as many private keys as you wish.
6. If you would like to see how much BTD was added to your wallet for that specific private key you can do so by copying the new BTD address that was added to “Receiving Addresses” File > Receiving Addresses — and then pasting that BTD address under the “Transactions” tab into the filtering box as shown below.
| How to Claim Your BTD Coins | 20 | how-to-claim-your-btd-coins-12c15c09ea09 | 2018-04-02 | 2018-04-02 12:18:28 | https://medium.com/s/story/how-to-claim-your-btd-coins-12c15c09ea09 | false | 453 | null | null | null | null | null | null | null | null | null | Bitcoin | bitcoin | Bitcoin | 141,486 | Bitcoin Dollar | First Artificial Intelligence Bitcoin and Exchange of the people, for the people and by the people. | 1255de06ea5 | support_18653 | 23 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-20 | 2018-03-20 11:28:08 | 2018-03-20 | 2018-03-20 11:32:14 | 1 | false | en | 2018-03-20 | 2018-03-20 11:32:14 | 4 | 12c17bacd0ab | 2.913208 | 0 | 0 | 0 | Companies are increasingly turning to artificial intelligence (AI)-powered contact center solutions to meet consumers’ growing demand for… | 5 | Computer Vision in the Call Center — The New CX Frontier
Image by: Unsplash
Companies are increasingly turning to artificial intelligence (AI)-powered contact center solutions to meet consumers’ growing demand for better CX, reduce costs and alleviate pressure on agents. With AI, call centers are equipped with a wide range of voice analytics, enabling recognition of the customer, their accent, gender, and emotion, as well as powering conversational IVRs and voice-based virtual assistants. Natural Language Processing (NLP) algorithms have enabled AI-powered tools to grasp context, power smart classification and routing of customer inquiries, and create conversational chatbots. With structured data analysis, predictive analytics can now be performed by extracting information from mass amounts of data and using it to predict trends and future behavior patterns, such as customer churn.
But there is still one missing element that has barred AI from radically transforming the customer experience.
What is the secret sauce in the AI mix that stands at the core of this customer support transformation?
Computer vision.
Can a computer see?
Computer vision is the science that attempts to give visual capabilities to a machine. Via automatic extraction and analysis, computer vision enables the machine to derive meaningful information directly from an image, and then utilize learned algorithms to achieve automatic visual understanding.
Computer vision is being utilized in a wide range of applications. It recognizes faces and smiles in cameras; it helps self-driving cars read traffic signs and avoid pedestrians; it allows factory robots to monitor problems in the production line. In customer service, it helps the computer see the problem, as a true virtual technician.
Object recognition in a technical support model
Deep learning-based object recognition offers incredible accuracy, which makes it a core technology for the future virtual technician, as the ability to see the problem is essential to finding a rapid resolution. Object recognition enables the computer to identify technical devices and their parts: the exact device model, individual parts such as ports, cables or the display panel, and the device color, and it can distinguish the particular device from others. In addition, the computer can recognize objects found in a live customer environment; for example, in a variety of backgrounds, positions, angles or lighting.
Computer vision can be utilized as a Virtual Assistant for customer service agents, delivering effective decision support during the agent-customer interaction. In this hybrid model, the agent’s performance is enhanced by the computer’s ability to quickly identify devices and technical issues and to provide faster resolutions. This has been proven to reduce agent training time and streamline the entire support process.
Computer vision also enables gradual automation towards full self service with device recognition and augmentation. Via a smartphone, the customer indicates the faulty device, and the virtual assistant can recognize devices, detect motions, and interact in real time with the customer. The virtual assistant uses augmented reality to guide the customer to resolution via a step-by-step process and is also able to correct the customer in case of errors, ensuring that the resolution is successful.
Powered by advances in Deep Learning
Computer vision is becoming increasingly effective in remote customer care and has been supported by unprecedented advances in AI over the past few years. The most advanced form of AI — Deep Learning — enables independent learning of massive data sets. Unlike classic methods in which a human expert needs to define features (rules and attributes), deep learning can learn directly from data without human intervention, whether supervised or unsupervised. In some fields, deep learning achieves far greater results than classic machine learning methods. These technologies have driven significant improvements in computer vision accuracy and performance and have enabled the virtual technicians of the future.
For an enterprise to successfully implement a computer vision solution within its contact center, specific Deep Learning challenges must be overcome, and an effective strategy must be designed that takes into account the business case, available data, resources and desired output.
Click here to download Smart Vision — The White Paper and discover the challenges involved with the AI transformation, and the specific steps that businesses must take to successfully execute a deeper implementation of AI — computer vision — within their customer care operations.
This post was originally published on the TechSee Blog
| Computer Vision in the Call Center — The New CX Frontier | 0 | computer-vision-in-the-call-center-the-new-cx-frontier-12c17bacd0ab | 2018-03-20 | 2018-03-20 11:32:15 | https://medium.com/s/story/computer-vision-in-the-call-center-the-new-cx-frontier-12c17bacd0ab | false | 719 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | TechSee | TechSee revolutionizes the customer support domain by providing the first cognitive visual support solution powered by augmented reality and AI. www.techsee.me | 110ccef15e53 | hagai_929 | 112 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | import codecs
import os
import re

import matplotlib.pyplot as plt
import nltk
import numpy as np
from nltk.corpus import stopwords
from PIL import Image
from wordcloud import WordCloud

# requires the nltk 'punkt' tokenizer and 'stopwords' corpus to be downloaded
stop = set(stopwords.words('english'))

# label ("1" = positive, "0" = negative) -> list of review sentences
sentiment = dict()

def load(f=os.path.abspath("data/yelp_labelled.txt")):
    # each line of the file is "<sentence>\t<label>"
    with codecs.open(f, 'rb', 'utf-8') as infile:
        for line in infile:
            line = line.split("\t")
            if line[1].strip() not in sentiment:
                sentiment[line[1].strip()] = [line[0]]
            else:
                sentiment[line[1].strip()].append(line[0])
    return sentiment

load()

# collapse runs of repeated punctuation down to a single character
re_punctuations = "([.,/#!$%^&*;:{}=_`~()-])[.,/#!$%^&*;:{}=_`~()-]+"
r_punc = re.compile(re_punctuations)

def preprocess(d):
    # remove extra and terminal punctuations
    d = r_punc.sub(r'\1', d)
    # lowercase
    d = d.lower()
    # remove stopwords and very short tokens
    d = ' '.join([i for i in nltk.word_tokenize(d) if i not in stop and len(i) > 2])
    return d

# build one large string of preprocessed text for the positive class ("1")
text = ' '.join(preprocess(review) for review in sentiment["1"])

# reading the image used for masking the positive cloud
pos_mask = np.array(Image.open("positive.png"))
wc = WordCloud(background_color="white", max_words=2000, mask=pos_mask)
wc.generate(text)
wc.to_file("pos_cloud.png")

# same steps for the negative reviews ("0"), with the negative mask
text = ' '.join(preprocess(review) for review in sentiment["0"])
neg_mask = np.array(Image.open("negative.png"))
wc = WordCloud(background_color="white", max_words=2000, mask=neg_mask)
wc.generate(text)
wc.to_file("neg_cloud.png")
| 5 | 212989ff10a5 | 2017-12-26 | 2017-12-26 12:48:55 | 2018-02-27 | 2018-02-27 04:29:58 | 3 | false | en | 2018-02-27 | 2018-02-27 06:35:27 | 5 | 12c30cd4652d | 2.934906 | 6 | 0 | 0 | A context driven graphical representation of word frequency | 5 | Word Clouds — Old but Insightful
A context driven graphical representation of word frequency
[Source]
Word Clouds ?
Word clouds are a fun way to think creatively about any topic. They give the user a quick and dense view of the frequency distribution of words in a text. They are often a better visualization than bar charts for word frequencies, as the number of unique words in a text can be relatively huge.
In this blog, we will see how we can add a bit of visual context to our old school word clouds to make them visually attractive and contextually relevant.
We will be working with a sentiment dataset for our purpose, specifically the Yelp reviews.
We will first load the dataset and apply basic preprocessing to the textual content, regardless of whether the reviews are positive or negative.
The load snippet above is responsible for grouping the positive and negative reviews into separate buckets by label.
The preprocessing functions above are responsible for removing noise from our dataset. I have written descriptive comments to help the reader figure out what each function does.
We are now ready to render things visually. But wait a minute! What’s new in this? Don’t rush, see ☟
Adding Visual Context
We have been seeing word clouds for long time now. Let’s consider 2 situations.
Situation-1: Consider a scenario wherein you plot word clouds for +ve and -ve comments on YouTube. It will take you 3–5 seconds to figure out which cloud holds which words.
Situation-2: Consider a scenario wherein you need to mine stock market comments about some company’s stock. You need to see the prominent words that are leading to an increase in the stock price over time (keeping in mind that the value of a stock can increase or decrease over time); that is useful context for the data.
Traditional simple word clouds will fail to capture all this information. Will they? Yes, I guess so.
What if we overlay it on a context-driven mask?
Will that help? YES! 😋
Let’s do it then … 😎
The snippet above generates the word cloud by overlaying the words on the context mask and saves it to disk.
Below is the resulting word cloud rendered for the positive reviews in the Yelp dataset.
Word Cloud for Positive Reviews on Yelp Dataset
Creation of negative word cloud is left as an exercise for the reader. 😊
Now you should be able to relate to the problem I was trying to address. Similarly, Situation-2 can be pictured like the image shown below:
[Source]
Feel free to share your thoughts on the same. Share and Clap if you ❤️ it.
Recently, we had written a blog post that shows the effective usage of it. Read it here. To know more about what we do and how we can help you, please visit www.formcept.com
| Word Clouds — Old but Insightful | 65 | word-clouds-old-but-insightful-12c30cd4652d | 2018-05-28 | 2018-05-28 06:46:03 | https://medium.com/s/story/word-clouds-old-but-insightful-12c30cd4652d | false | 632 | Your Analysis Platform | null | formcept | null | FORMCEPT | formcept | BIG DATA ANALYSIS,DEEP LEARNING,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DISTRIBUTED SYSTEMS | formcept | Machine Learning | machine-learning | Machine Learning | 51,320 | Prakhar Mishra | Currently into deep learning for NLP. Writes for Fun. Interested in Reinforcement Learning ✍️[email protected] | bcb8dddfcc90 | prakhar.mishra | 326 | 43 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-27 | 2017-12-27 18:40:14 | 2017-12-29 | 2017-12-29 17:07:39 | 4 | false | en | 2018-01-02 | 2018-01-02 03:58:07 | 14 | 12c512669248 | 4.450943 | 4 | 0 | 0 | Part II: From engagement to relationship | 5 | Superhuman swag: Shaping a future of social interactions
Part II: From engagement to relationship
This is Part II of an updated version of a talk I gave for @Futures_Design at the Mozilla Foundation in San Francisco on July 20th, 2017. Part I of the talk is here.
In Part I of this talk I argued that artificial agents, such as social robots, can exceed (in some scenarios) an average human in engaging human users. This can be done by a design that embraces their lack of human-likeness, while still endowing the robots with expressive abilities that make them believable characters. Here is the graph showing that character-enabled media artifacts rise above humans in their utility, with the crossing point sometime between past and future, depending on the particular interaction scenario.
In this part of the talk, I will argue that believable characters are capable of further increasing engagement and, as a consequence, their utility through development of a social relationship.
Engage user → transform user
Once a user is engaged, the interface has the ability to affect the user, transforming the user’s cognitive, emotional, or physical state. This is a bit of a chicken-and-egg problem, as engagement is itself a kind of user state.
Transforming a user towards a more engaged one
In the case of the roboceptionist Tank, users can be divided roughly into those who use relational conversation strategies, such as starting with a greeting, saying thanks, and ending with a farewell, and those whose conversation is utilitarian, consisting of only the information-seeking question. As I mentioned in Part I, the relation-oriented users engage better: they are more persistent in the face of communication breakdowns. As a consequence, they are more likely to succeed in their task of getting the information they seek from the robot.
Wouldn’t it be nice to be able to convert some of the utility-oriented users into the relation-oriented ones!
Turns out this may be possible, by having the robot deploy the following conversational strategies:
Proactively greeting the user, once the robot’s sensors detect the user’s intent to communicate.
Priming for thanks: saying “thank you for stopping by.”
Expressing an effort: saying things like “I am looking it up. Please hold on,” or just pausing for half a second.
The process of chipping users away from the utilitarian group with strategically placed dialog turns, converting them into relationally-oriented users within a single interaction, can be shown like this:
Corollary: counter-intuitively, adding delay to the robot’s response can actually help engagement.
Transforming a user towards a less prejudiced one
Military personnel stationed abroad and locals. Migrant workers and locals. Two city neighborhoods with distinct social class and ethnic majorities. These pairs of communities have one thing in common: members across each pair rarely get a chance to interact with each other within an equal power status situation.
Equal status within a contact situation is one of the necessary conditions for a positive contact, according to Gordon Allport’s work on Intergroup Contact Theory published in 1954. Positive intergroup contact can reduce stereotyping, prejudice, and discrimination. Conversely, a contact that is not positive, is not expected to have such benefits.
As a consequence, there are few chances for a positive contact between many communities that are correlated with ethnicity. Without the benefits of positive contact, even a few negative contact situations lead to the development and reinforcement of racial and ethnic stereotypes.
Fortunately, studies suggest that even a positive contact with a virtual character may help reduce ethnic prejudice. (Fascinatingly, even an imagined positive contact seems to help!)
Would a social robot like Hala, described in Part I, that expresses ethnicity through behaviors while still maintaining its robotic agency, be able to create a positive intergroup contact that reduces ethnic prejudice? This is still an open research question.
Engage user → transform user → mutual shaping
The most rewarding interactions are balanced: none of the participants dominates beyond the comfort of the others, and all converge to some middle ground. This includes convergence in both cognitive state (knowledge) and linguistic and physical behaviors.
Just like Julie Delpy’s character said in that movie:
…if there is any kind of god, it wouldn’t be in any of us, […] but just this little space in between.
If there is any kind of magic in this world, it must be in the attempt of understanding someone sharing something.
In less poetic terms, a successful interaction is a joint activity, where participants work together towards establishing a common ground.
Peers and teachable agents
Given this balanced view of each participant’s contributions to an interaction, it is not difficult to imagine conversational agents that are peers to the users or even dependent on the user’s help.
For example, a peer storytelling robot can work with children to jointly tell a story while introducing new vocabulary.
A simulated student may need to be taught by a user, which in turn leads to the user’s learning by teaching.
Alignment
Interactions where participants align their linguistic choices such as lexemes and syntactic patterns are more mutually comprehensible and are also reported to increase feelings of rapport, empathy, and intimacy. Less obviously, breathing rates and neural patterns of the participants in such interactions align too.
Check out this recent report for an overview of the research on alignment.
TL;DR
Once a conversational agent has succeeded in engaging a user, it may be able
to steer the conversation towards a more social one, increasing the objective metrics of a success of the interaction, and
to reduce the user’s cognitive biases, including racial and ethnic prejudice.
Mutually rewarding human interactions are usually balanced: the participants converge to both shared content and shared linguistic and physical behaviors.
Repeating such mutually shaped interactions between a human and an agent over time may result in the participants establishing distinct social roles that would serve as a basis of a human-agent social relationship.
| Superhuman swag: Shaping a future of social interactions | 36 | superhuman-swag-shaping-a-future-of-social-interactions-12c512669248 | 2018-06-07 | 2018-06-07 08:09:55 | https://medium.com/s/story/superhuman-swag-shaping-a-future-of-social-interactions-12c512669248 | false | 994 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Maxim Makatchev | Making robots social. Contributed to roboceptionists Tank and culture-aware Hala, trash-talking scrabble-playing gamebot Victor, and Jibo. | 69787a1b58cf | maxipesfix | 17 | 71 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-05 | 2017-09-05 08:47:23 | 2017-09-05 | 2017-09-05 09:32:40 | 4 | false | en | 2017-10-11 | 2017-10-11 13:52:01 | 13 | 12c5a63fc870 | 3.016981 | 2 | 0 | 0 | Artificial Intelligence (AI) in a nutshell is the deep ability for machines to ‘learn’ user preferences and behaviours. Currently machines… | 5 | The Intelligence Revolution: 4 ways that AI makes life easier
Robot holding supporting the world image. Photo courtesy of www.rollingstone.com.
Artificial Intelligence (AI), in a nutshell, is the deep ability for machines to ‘learn’ user preferences and behaviours. Currently machines can simulate basic human actions, like communicating and performing basic tasks. However, in the future machines will be able to function autonomously and mimic cognitive human actions. Cognitive actions are associated with the problem-solving and learning capabilities of the human mind. The ability for machines to ‘learn’ and become more human-like is giving way to the Intelligence Revolution, which is comparable to the Industrial Revolution!
“The Industrial Revolution is a good comparison because it conveys just how momentous and disruptive this new technology is going to be. Think about the Industrial Revolution: mechanization, the use of steam, the arrival of electricity and so on — these enabled us to create powerful machines that could do things for us. But with the Intelligence Revolution, we are using AI to create machines that will be deciding things for us.”
- Dr Cave
4 ways that AI could improve your life:
1. Home and transport automation
Imagine the possibilities that self-driving cars offer people who are disabled. The options seem endless: people can accept any job offer, transport themselves without assistance to a certain degree, and become more independent. These possibilities extend into homes, where the automation of simple tasks, like making a bed and cleaning your house via robots or home devices acting autonomously, makes life simpler.
Google Driverless Car Animation Video.
2. Elimination of redundant and unsafe tasks in work life
The use of chatbots is already proving useful in day-to-day work automation. These bots gather information and simulate real-time human conversations, freeing up time for more strategic tasks to be conducted in the office by humans. Chatbots are already involved in dialog services like customer care and information acquisition. Bots are also used on social media platforms to share posts and conduct basic interactions. This freed time allows humans to be more productive and can increase job satisfaction, as most of the day is not spent on mundane and repetitive tasks.
Moreover, unsafe job tasks like welding have already been taken over by bots. This extra measure ensures a higher rate of safety at work. Relying on robots is also more beneficial than exploiting cheaper outsourced labour.
Automotive statistics for 2020. Photo courtesy of Accentureland.
3. Less screens and more talking
AI will bring an increase in voice-command assistants like Apple’s Siri, Google Home, Amazon’s Alexa, Microsoft’s Cortana, and others: products that ultimately make life more efficient and easier. Cognitive systems are also figuring out new ways to describe images to people with sight impairments.
ASR: Automatic Speech Recognition. Photo courtesy of Shaan Haider.
4. Save time and money
Smart devices not only have the ability to ‘remember’ your user preferences and repeat them when you are at home, allowing you to have an optimised home experience whereby the temperature, lighting and coffee machine are in sync with your normal routine; they also have the ability to turn themselves off and on and adjust according to light levels. This means that a light will never be left on unnecessarily when the room is bright enough. This will drive down your electricity bill and leave more time for you to enjoy the comforts of your home.
Phone controlling a Smart Home. Photo courtesy of https://www.vivint.com.
The road to the Industrial Revolution was a rocky, disruptive one, posing many challenges and resulting in a general disruption of human civilization, yet it improved working and living conditions and gave rise to new possibilities. The Intelligence Revolution will be much the same, but will ultimately make life as we know it easier, safer and more efficient.
| The Intelligence Revolution: 4 ways that AI makes life easier | 51 | the-intelligence-revolution-4-ways-that-ai-makes-life-easier-12c5a63fc870 | 2018-03-20 | 2018-03-20 17:57:58 | https://medium.com/s/story/the-intelligence-revolution-4-ways-that-ai-makes-life-easier-12c5a63fc870 | false | 614 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Weeve | Empowering the Economy of Things by democratizing IoT data. Head over to Weeve's World Publication for all of our posts. | 764176defc1 | weeve | 350 | 97 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-11 | 2017-10-11 12:51:56 | 2017-10-11 | 2017-10-11 17:12:15 | 10 | false | en | 2018-01-13 | 2018-01-13 13:15:50 | 8 | 12c66f381ebf | 4.314151 | 2 | 0 | 0 | This year I successfully organized Coursera Mentors Meetup, Jaipur on 10th October 2017 at The LNM Institute of Information Technology. | 5 | Why to organize and be a Speaker at Coursera Mentor Meetup.
Coursera Mentor Meetup along with LNMIIT Students
This year I successfully organized Coursera Mentors Meetup, Jaipur on 10th October 2017 at The LNM Institute of Information Technology.
The work was initiated on 19th September, when I first contacted Claire Smith, Coursera’s Community Manager. She is one of the most enthusiastic and positive people I have ever met.
Her positivity and support were the key drivers that pushed me to conduct such a successful meetup.
LNMIIT, as always, embraced my initiative, and with the support of MozLNMIIT I knew this was going to be a huge success. MozLNMIIT helped me conduct and organize the event, which required a lot of effort from both ends.
My valuable peers from MozLNMIIT supported me throughout this journey and many more to come. I’m pretty sure that without their support it would not have been possible. So I would like to thank:
Akshit Agarwal (Campus Outreach)
Mayankar Shukla & Kartikey Joshi (Secretary)
Anirudh Jakhar & Chirag Goyal (Event Organizer)
Manan Nahata (Marketing Coordinator)
Enthusiastic students of LNMIIT
Pratul Kumar and Ayush Pareek
What I learned or gained from the event?
This was my first event at such a large scale. I personally contacted several mentors from Rajasthan and made some valuable connections with them. Many of them are now my good friends.
I gained knowledge about many things from my fellow mentors, who possess immense knowledge in their domains. I also learned how and when one should appropriately approach others (mentors).
With the great support of MozLNMIIT, I experienced the value of team effort and realized: “Together we stand, alone we fall”.
Why should one organize a Coursera Meetup?
You get a chance to meet highly intellectual people.
You get to know about new technologies and working frameworks, which would be highly beneficial for you.
You would get to know how an event is organized and how important a healthy community interaction is.
Networking: you meet several people with different mindsets and different skills.
Student Making Notes
What the students of LNMIIT Gained?
More than 120 students turned up for the meetup. Students of LNMIIT got insights into several domains within a short span of time. As each speaker gave an overview of the Coursera course they mentor and of how these courses helped shape their future, students had a great time speaking with and getting to know the speakers.
Pratul Kumar
What I learned as a Speaker?
This was my first event as a keynote speaker, and it was a great learning experience for me. Addressing a crowd and trying to spread knowledge to each and every one of them is a hard task. I learned that you need to be very specific and clear in your speech.
Speakers:
Divyanshu Rawat (Event Organizer and Speaker) — Single Page Application offered by Johns Hopkins University
Divyanshu shared how the Bootstrap front-end UI framework makes designing a lot simpler: you don’t have to worry too much about how your site looks on different browsers because the framework already takes care of that, which saves you a lot of time. Divyanshu also explained how a Single-Page Application (SPA) is a web application or web site that interacts with the user by dynamically rewriting the current page rather than loading entire new pages from a server.
Lokesh Todwal — Machine Learning Offered by Stanford University.
Lokesh shared insights about Machine Learning and AI: what they are and why they matter. Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably. They are not quite the same thing, but the perception that they are can sometimes lead to some confusion.
Pradyumn Agarwal — Data Structures and Algorithms offered by University of California San Diego
Pradyumn shared how algorithms are at the heart of every nontrivial computer application and form a modern and active area of computer science, how he learnt algorithmic techniques for solving various computational problems, and how he ended up solving a whole lot of algorithmic problems by leveraging the knowledge acquired from the Coursera course that he is currently mentoring.
Ayush Pareek — Natural Language Processing offered by the University of Michigan
Ayush shared how text mining, also referred to as text data mining and roughly equivalent to text analytics, can be leveraged to derive high-quality information from text.
Pratul Kumar (Event Organizer and Speaker) — Front End UI Frameworks And Tools offered by HKUST
I shared how we can leverage Bootstrap to design templates for typography, forms, buttons, tables, navigation, modals, image carousels and many other components, as well as its optional JavaScript plugins and the whole lot of cool things we can do with Bootstrap.
All the Five Mentors giving talk to LNMIIT students.
Each speaker gave a brief insight into the course they are mentoring and the basics of that domain.
In the end, I would like to thank all the mentors for coming together and supporting this initiative.
| Why to organize and be a Speaker at Coursera Mentor Meetup. | 2 | why-to-organize-and-be-a-speaker-at-coursera-mentor-meetup-12c66f381ebf | 2018-05-13 | 2018-05-13 21:17:37 | https://medium.com/s/story/why-to-organize-and-be-a-speaker-at-coursera-mentor-meetup-12c66f381ebf | false | 812 | null | null | null | null | null | null | null | null | null | Coursera | coursera | Coursera | 816 | Pratul Kumar | null | d046b584dea | pratulkumar | 31 | 17 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 9649a1862e42 | 2018-02-28 | 2018-02-28 16:26:03 | 2018-03-29 | 2018-03-29 21:38:17 | 4 | false | en | 2018-03-29 | 2018-03-29 21:46:51 | 16 | 12c6f47995c0 | 4.269811 | 25 | 0 | 0 | Today, we’re pleased to announce Pachyderm 1.7! Install it now or migrate your existing Pachyderm deployment. | 5 | Pachyderm 1.7: Graphical pipeline builder, new structure for versioned data, official Python client, and more!
Today, we’re pleased to announce Pachyderm 1.7! Install it now or migrate your existing Pachyderm deployment.
For those new to the project, Pachyderm is an open source and enterprise data science platform that is enabling reproducible data processing at scale. We have users implementing some pretty amazing production pipelines for AI/ML training, inference, ETL, bioinformatics, financial risk modeling, and more. For example, the US Department of Defense’s DIUx organization is using Pachyderm to power their xView satellite imagery detection challenge (see figure below), which was recently featured in Wired. Now with the release of version 1.7, we can’t wait to see what our users will build!
DIUx’s xView detection challenge, powered by Pachyderm
The major improvements in the Pachyderm 1.7 release include:
A graphical pipeline builder — Build new data pipelines quickly and intuitively from the Pachyderm dashboard.
A new structure for organizing versioned data — Maintain robust pipelines subscribed to changes in data and easily recover from “bad” updates to data.
Official support for the Pachyderm Python client — Integrate Pachyderm data and pipelines into any Python-based application and manage data, pipelines, access controls, and more directly from Python.
More granular pipeline controls — Easily control job/data timeouts and resources utilized by pipeline stages.
Graphical pipeline builder
Not everyone wants to build and manage data pipelines from the command line or via language clients (e.g., our new Python client). Sometimes data scientists/analysts need to quickly set up pipelines for experimenting with new data, trying out new models, or cleaning up new data sources, and they would prefer to do this visually.
Choosing data inputs for a Pachyderm pipeline
Pachyderm’s new pipeline builder, which is part of the Pachyderm dashboard, lets data scientists quickly create and deploy data pipelines via a graphical control plane. This lets data scientists focus on data sets and associated processing, while Pachyderm handles all of the deployment and scheduling details under the hood. They can select what data they want to process and then utilize any data science language/framework to perform that processing, whether that be scikit-learn, PyTorch, ggplot, or TensorFlow.
Specifying a command for the pipeline along with some resources limits
You can find out more about the pipeline builder and other Pachyderm Enterprise features here.
New structure for organizing versioned data
There’s a lot riding on production data science pipelines. Whether that’s a modeling pipeline predicting fraudulent financial transactions or a series of data aggregations that gives visibility into company sales. The triggering, updating, and management of these pipelines needs to be rock solid as data changes and code is updated.
Pachyderm 1.7 makes a number of updates to the underlying structure and organization of our versioned data, which make our data pipelines even more robust. When pipelines need to reprocess data, they will now only reprocess the most recent version of that data. In this way, pipelines become immune to previous states of the data that might have included corrupt, or otherwise bad, data. In addition, any change to input data creates an internal metadata structure relating that change to downstream collections of data that are dependent on that change. This allows Pachyderm to manage pipeline dependencies for both data and processing in a unified and resilient manner.
Official support for the Pachyderm Python client
Data scientists love Pachyderm, and data scientists love Python. So we decided it was time to officially support the Pachyderm Python client that was started as a user contributed project (special thanks to our users kalugny and frankhinek for their contributions).
The Pachyderm python client will now be integrated into our internal CI builds and will be maintained such that it is up-to-date with our latest API. This will allow data scientists to more easily iterate on their pipelines and manage Pachyderm resources. For example, they can now quickly pull versioned data from Pachyderm into Jupyter notebooks for experimentation and integrate pipeline triggering and results into any Python application.
Note, Pachyderm is still completely language agnostic, and we aren’t forcing anyone to use Python. However, this will be a great boost for the many users already integrating Pachyderm with their Python applications!
Check out these docs for more information on the Python client.
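As a rough sketch of the kind of workflow this enables, the snippet below creates a repo, commits a file into it, and reads the versioned data back from Python. The class and method names (PfsClient, create_repo, start_commit, put_file_bytes, finish_commit, get_file) are assumptions based on the user-contributed client as I recall it, so verify them against the client docs linked above before relying on them.
# Sketch only: class/method names are assumptions; check the python_pachyderm docs.
import python_pachyderm

client = python_pachyderm.PfsClient()  # connects to pachd (assumed default host/port)

client.create_repo('training-data')    # a versioned data repository
commit = client.start_commit('training-data', 'master')
client.put_file_bytes(commit, '/reviews.csv', b'id,text,label\n1,great food,1\n')
client.finish_commit(commit)

# Later, e.g. from a Jupyter notebook, pull the versioned file back out.
data = b''.join(client.get_file('training-data/master', '/reviews.csv'))
print(data.decode('utf-8'))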
More granular pipeline controls
As we help more and more data science and engineering teams scale their data pipelines, we discover trends related to how teams want to customize their pipelines. Pachyderm 1.7 gives scientists and engineering more granular controls for pipelines based on these trends.
Pachyderm 1.7 gives data scientists/engineers more control over the resources needed for any particular pipeline stage. They can set resource “limits” for pipelines to control the amount of memory, cpu, and gpu usage that a pipeline is allowed to consume. They can also set resource “requests,” such that pipeline workers are scheduled on nodes that have certain resources available.
Further, Pachyderm 1.7 allows data scientists to set timeouts for processing certain jobs and data. This is super valuable for data scientists that run compute intensive jobs like model training on expensive resources like GPUs. These data scientists can rest easy knowing that their jobs are time boxed, and teams can leverage these timeouts to make sure that shared resources are optimally utilized.
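To make these controls concrete, here is an illustrative pipeline spec expressed as a Python dict; the field names follow the description above but are assumptions, so check the Pachyderm 1.7 pipeline spec reference for the exact schema.

```python
# Illustrative only: field names are assumptions based on the description above,
# not a verified 1.7 spec; the image, repo, and command are hypothetical.
import json

pipeline_spec = {
    "pipeline": {"name": "train-model"},
    "transform": {
        "image": "my-org/trainer:latest",
        "cmd": ["python3", "/code/train.py"],
    },
    "input": {"atom": {"repo": "training-data", "glob": "/*"}},
    # Schedule workers only on nodes with at least this much available.
    "resource_requests": {"memory": "2G", "cpu": 1},
    # Hard caps on what a single worker may consume.
    "resource_limits": {"memory": "4G", "gpu": 1},
    # Time-box expensive jobs so shared resources are freed up.
    "datum_timeout": "2h",
}

print(json.dumps(pipeline_spec, indent=2))
```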
Install Pachyderm 1.7 Today
For more details check out the changelog. To try the new release for yourself, install it now or migrate your existing Pachyderm deployment. Also be sure to:
Join our Slack team for questions, discussions, deployment help, etc.
Read our docs.
Check out example Pachyderm pipelines.
Connect with us on Twitter.
Finally, we would like to thank all of our amazing users who helped shape these enhancements, filed bug reports, and discussed Pachyderm workflows and, of course, all the contributors who helped us realize 1.7!
| Pachyderm 1.7: | 168 | pachyderm-1-7-12c6f47995c0 | 2018-06-21 | 2018-06-21 02:16:48 | https://medium.com/s/story/pachyderm-1-7-12c6f47995c0 | false | 946 | Elephantine Analytics | null | null | null | Pachyderm Data | pachyderm-data | null | pachydermIO | Data Science | data-science | Data Science | 33,617 | Daniel Whitenack | Data Scientist at Pachyderm | 860304171052 | whitenack.daniel | 920 | 263 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-18 | 2018-05-18 18:21:01 | 2018-05-18 | 2018-05-18 18:28:45 | 1 | false | en | 2018-05-18 | 2018-05-18 18:35:27 | 0 | 12c7430d01ba | 1.588679 | 3 | 0 | 0 | Artificial Intelligence is here — brace for the new world. While many of us are struggling with tasks like waiting in lines, on customer… | 5 | Brace for the new world. Inspired by Artificial Intelligence.
Artificial Intelligence
Artificial Intelligence is here — brace for the new world. While many of us are struggling with tasks like waiting in lines, on customer service calls, still wasting time in filling forms, and digging for information, there is a new world emerging, which in aiming to take away all these mundane activities and give us time back, a valuable and scarce asset, especially to humans, as mortal beings.
It is a unique time, when both the new and the old exist side by side. I have a Roomba, a machine for vacuuming; though intelligent, it is really quite stupid and does not replace my cleaning service by any measure. Similarly, we all have our experiences with Siri and Alexa, which still seem buggy on voice recognition, ultimately forcing us to resort to old ways.
So, what does this mean? It indicates that change is coming. Historically, we have learnt how the internet changed how we live and work today. Likewise, artificial intelligence will change how we live and work in the future.
While the internet came after the industrial revolution, its main objective was to increase productivity. Today, we can achieve many tasks quickly but have a tech and data overload. We all have days where we are in front of a screen for most of the day. Is this a viable human condition? Did we industrialize the internet?
I believe artificial intelligence is coming at a time where it can further enhance our productivity, reduce data overload and save us a lot of time that we spend on machines, away from our natural social habitat. Nearly everybody is feeling more disconnected with connected machines.
Agreed, there are perils to artificial intelligence; there is much discussion that it will take away many jobs, take over the planet and end civilization.
The question is: will it be another human breakthrough, one which gives us time to use the creative side of our brains, which works best with less clutter and fewer distractions? Have the millennials, who are digital natives, infamous in corporate circles for empathy, figured out something about what the new age of work will look like?
| Brace for the new world. Inspired by Artificial Intelligence. | 5 | brace-for-the-new-world-inspired-by-artificial-intelligence-12c7430d01ba | 2018-05-20 | 2018-05-20 14:07:56 | https://medium.com/s/story/brace-for-the-new-world-inspired-by-artificial-intelligence-12c7430d01ba | false | 368 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Deepika Bajaj | Mobile gaming, marketing, branding, entrepreneur, author, blogger, social media expert and an active thinker | 11ee4eb7e51a | invincibelle | 296 | 385 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-03 | 2017-12-03 13:33:46 | 2017-12-03 | 2017-12-03 13:45:10 | 1 | false | en | 2017-12-03 | 2017-12-03 13:45:10 | 3 | 12cd9aacda92 | 0.566038 | 2 | 0 | 0 | Listen to the remarkable story of Walter Pitts who rose from the streets to MIT, but couldn’t escape himself. | 5 | The Man Who Tried to Redeem the World with Logic
Listen to the remarkable story of Walter Pitts who rose from the streets to MIT, but couldn’t escape himself.
Listen to this remarkable story from Nautilus Magazine only on curio.io
For more outstanding journalism in audio from The Guardian, The Financial Times, Salon and over 20 top publications, download the curio.io app
IOS (http://apple.co/2kkl5Jd)
Android (http://bit.ly/2tDOWwI)
| The Man Who Tried to Redeem the World with Logic | 12 | the-man-who-tried-to-redeem-the-world-with-logic-12cd9aacda92 | 2018-05-29 | 2018-05-29 05:51:19 | https://medium.com/s/story/the-man-who-tried-to-redeem-the-world-with-logic-12cd9aacda92 | false | 97 | null | null | null | null | null | null | null | null | null | Science | science | Science | 49,946 | curio.io | Listen to curated, professionally narrated articles from premium publications like The Financial Times, The Guardian and Aeon. https://www.curio.io | fd17429bb7b9 | curio.io | 76 | 34 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-18 | 2018-09-18 13:37:07 | 2018-09-18 | 2018-09-18 13:37:24 | 1 | false | en | 2018-09-18 | 2018-09-18 13:37:24 | 0 | 12cf26cfcb59 | 1.067925 | 0 | 0 | 0 | All the data we receive from our employees for training the system are the first most monotonous and to some extent difficult stage of the… | 1 | Artificial intelligence
All the data we receive from our employees for training the system goes through the first, most monotonous, and to some extent most difficult stage of the system's operation: the data cannot be used in its current form, and time must be spent converting it into a machine-readable representation so that the Stellar AI modules can learn from it. To do this, we use the following elements:
1. Analyzing and determining the degree of accuracy of the obtained analytics and statistical information, which is dynamic and also passes additional verification and internal adjustment for each source, according to the following parameters:
● The percentage of accuracy of each of the signals according to its type and subject;
● Evaluation of the result of an analytical forecast or statistical transaction against the predicted results in the real model;
● The presence of a similar pattern obtained earlier for analytics and statistics, for its subsequent systematization.
2. Overlaying existing trading strategies and reaction models on the market in order to identify regularities and the most successful scenarios, taking into account risk minimization and forecasting the system's actions on such factors. For this:
● We constantly search for and analyze existing strategies on several sites simultaneously;
● We study new methods of creating strategies, as well as hypotheses for working with the market.
| Artificial intelligence | 0 | artificial-intelligence-12cf26cfcb59 | 2018-09-18 | 2018-09-18 13:37:24 | https://medium.com/s/story/artificial-intelligence-12cf26cfcb59 | false | 230 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Stellar Fund | Stellar is your guiding star in crypto. Follow the star. stellarfund.io | a866ab2223d5 | stellarfund | 179 | 0 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-06 | 2017-12-06 21:31:10 | 2017-12-06 | 2017-12-06 21:34:51 | 1 | false | en | 2017-12-06 | 2017-12-06 21:35:12 | 2 | 12cfc76196e1 | 1.728302 | 0 | 0 | 0 | As the pursuit of artificial intelligence is broadening, many are finding themselves struggling between fear, interest, and caution. The… | 5 | How Robotics Will Change Your Life And Why It’s Too Late To Stop It
As the pursuit of artificial intelligence is broadening, many are finding themselves struggling between fear, interest, and caution. The thought of hardworking people losing their jobs to A.I. has many worried about the future of humanity. Movies like Terminator and Resident Evil have instilled the belief that the pursuit of robotics and super computers will come at the cost of mankind.
The debate is heated on both sides. Robotics has already eliminated many factory jobs, and with the release of self-driving cars it has the potential to remove even more positions from the workforce. There is concern about how the technology will reduce the need for human interaction as well as shrink the number of jobs available to people. The other side of the argument looks at how reducing the number of people behind the wheel would substantially reduce the number of accidents and the costs associated with operating a vehicle.
Advancement has long propelled humanity to push themselves to discover bigger, better, and easier ways of living. This is the next logical step in the pursuit of becoming more evolved. With all technological interest it will likely cost more than we’re prepared to give and dramatically change the way cultures exist.
However, when it comes to reducing manual labor, most people are on board with lowering their workload. A household robot that cleans, does the dishes and laundry, and cooks would be a welcome addition to most families. I think all thirteen-year-olds are desperate to lose their job to a machine, while parents would appreciate the lack of back talk, which is why the Roomba has been so popular. When considered in small measure it holds appeal. It's when people begin to realize that their way of life is at stake that the arguments begin.
Yet the way of life has been threatened before, when airplanes took flight, cars drove, computers hummed to life. It changed the landscape of society, but most would argue for the better. As companies perfect the gentle touch of a robot hand and allow surgeries to be performed without error, the struggle to understand becomes replaced with acceptance.
The future is today, it is being shaped and molded, and it will change the way people live. However, it has the propensity to save lives, empower health, and create more room for the enjoyment of living.
| How Robotics Will Change Your Life And Why It’s Too Late To Stop It | 0 | how-robotics-will-change-your-life-and-why-its-too-late-to-stop-them-12cfc76196e1 | 2018-04-07 | 2018-04-07 11:46:03 | https://medium.com/s/story/how-robotics-will-change-your-life-and-why-its-too-late-to-stop-them-12cfc76196e1 | false | 405 | null | null | null | null | null | null | null | null | null | Technology | technology | Technology | 166,125 | Cherylyn Petersen | null | 59379d1a1680 | cpete220 | 2 | 30 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-03 | 2017-11-03 07:02:45 | 2017-11-03 | 2017-11-03 07:07:47 | 1 | false | en | 2017-12-19 | 2017-12-19 09:39:37 | 1 | 12cfe9dd3533 | 4.901887 | 5 | 0 | 0 | If you are interested in artificial intelligence and care about the effectiveness of your business, the article be Brad Power is worthy of… | 5 | How Harley-Davidson Used Artificial Intelligence to Increase New York Sales Leads by 2,930%
If you are interested in artificial intelligence and care about the effectiveness of your business, the article be Brad Power is worthy of your attention.
Original is here: https://hbr.org/2017/05/how-harley-davidson-used-predictive-analytics-to-increase-new-york-sales-leads-by-2930
It was winter in New York City and Asaf Jacobi’s Harley-Davidson dealership was selling one or two motorcycles a week. It wasn’t enough.
Jacobi went for a long walk in Riverside Park and happened to bump into Or Shani, CEO of an AI firm, Adgorithms. After discussing Jacobi's sales woes, Shani suggested he try out Albert, Adgorithms' AI-driven marketing platform. It works across digital channels, like Facebook and Google, to measure, and then autonomously optimize, the outcomes of marketing campaigns. Jacobi decided he'd give Albert a one-weekend audition.
That weekend Jacobi sold 15 motorcycles. It was almost twice his all-time summer weekend sales record of eight.
Naturally, Jacobi kept using Albert. His dealership went from getting one qualified lead per day to 40. In the first month, 15% of those new leads were “lookalikes,” meaning that the people calling the dealership to set up a visit resembled previous high-value customers and therefore were more likely to make a purchase. By the third month, the dealership’s leads had increased 2930%, 50% of them lookalikes, leaving Jacobi scrambling to set up a new call center with six new employees to handle all the new business.
While Jacobi had estimated that only 2% of New York City’s population were potential buyers, Albert revealed that his target market was larger — much larger — and began finding customers Jacobi didn’t even know existed.
How did it do that?
AI at Work
Today, Amazon, Facebook, and Google are leading the AI revolution, and that’s given them a huge market advantage over most consumer goods companies and retailers by enabling them to lure customers with highly personalized, targeted advertising, and marketing. However, companies such as Salesforce, IBM, and a host of startups are now beginning to offer AI marketing tools that have become both easier to use (that is, they don’t require hiring expensive data scientists to figure out how to operate the tool and analyze its outputs) and less expensive to acquire, with software-as-a-service (SaaS), pay-as-you-go pricing. And instead of optimizing specific marketing tasks, or working within individual marketing channels, these new tools can handle the entire process across all channels.
In the case of Harley-Davidson, the AI tool, Albert, drove in-store traffic by generating leads, defined as customers who express interest in speaking to a salesperson by filling out a form on the dealership’s website.
Armed with creative content (headlines and visuals) provided by Harley-Davidson, and key performance targets, Albert began by analyzing existing customer data from Jacobi’s customer relationship management (CRM) system to isolate defining characteristics and behaviors of high-value past customers: those who either had completed a purchase, added an item to an online cart, viewed website content, or were among the top 25% in terms of time spent on the website.
Using this information, Albert identified lookalikes who resembled these past customers and created micro segments — small sample groups with whom Albert could run test campaigns before extending its efforts more widely. It used the data gathered through these tests to predict which possible headlines and visual combinations — and thousands of other campaign variables — would most likely convert different audience segments through various digital channels (social media, search, display, and email or SMS).
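To make the lookalike idea concrete, here is a tiny sketch in Python; it is emphatically not how Albert works internally, just an illustration of scoring prospects by how closely their behavior resembles known high-value customers (all feature names and numbers are made up):

```python
# Illustrative lookalike scoring only; not Albert's actual method or data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

# Toy features per person: [pages_viewed, minutes_on_site, items_in_cart]
high_value_customers = np.array([[12, 30, 2], [9, 25, 1], [15, 40, 3]])
prospects = np.array([[11, 28, 2], [1, 2, 0], [14, 35, 1]])

scaler = StandardScaler().fit(high_value_customers)
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(high_value_customers))

# Smaller distance to the nearest high-value customer = stronger lookalike.
distances, _ = nn.kneighbors(scaler.transform(prospects))
for person, dist in zip(prospects, distances.ravel()):
    print(person, "distance to nearest high-value customer:", round(float(dist), 2))
```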
Once it determined what was working and what wasn’t, Albert scaled the campaigns, autonomously allocating resources from channel to channel, making content recommendations, and so on.
For example, when it discovered that ads with the word “call” — such as, “Don’t miss out on a pre-owned Harley with a great price! Call now!” — performed 447% better than ads containing the word “Buy,” such as, “Buy a pre-owned Harley from our store now!” Albert immediately changed “buy” to “call” in all ads across all relevant channels. The results spoke for themselves.
The AI Advantage
For Harley-Davidson, AI evaluated what was working across digital channels and what wasn’t, and used what it learned to create more opportunities for conversion. In other words, the system allocated resources only to what had been proven to work, thereby increasing digital marketing ROI. Eliminating guesswork, gathering and analyzing enormous volumes of data, and optimally leveraging the resulting insights is the AI advantage.
Marketers have traditionally used buyer personas — broad behavior-based customer profiles — as guides to find new ones. These personas are created partly out of historic data, and partly by guesswork, gut feel, and the marketers’ experiences. Companies that design their marketing campaigns around personas tend to use similarly blunt tools (such as gross sales) — and more guesswork — to assess what’s worked and what hasn’t.
AI systems don’t need to create personas; they find real customers in the wild by determining what actual online behaviors have the highest probability of resulting in conversions, and then finding potential buyers online who exhibit these behaviors. To determine what worked, AI looks only at performance: Did this specific action increase conversions? Did this keyword generate sales? Did this spend increase ROI?
Even if equipped with digital tools and other marketing technologies, humans can only manage a few hundred keywords at a time, and struggle to apply insights across channels with any precision. Conversely, an AI tool can process millions of interactions a minute, manage hundreds of thousands of keywords, and run tests in silico on thousands of messages and creative variations to predict optimal outcomes.
And AI doesn’t need to sleep, so it can do all this around the clock.
Consequently, AI can determine exactly how much a business should spend, and where, to produce the best results. Rather than base media buying decisions on past performance and gut instincts, AI acts instantly and autonomously, modifying its buying strategy in real time based on the ever-changing performance parameters of each campaign variable.
Taking the AI Plunge
Because AI is new, and because marketers will be wary of relinquishing control and trusting a black box to make the best decisions about what people will or won’t do, it’s wise to adopt AI tools and systems incrementally, as did Harley-Davidson’s Jacobi. The best way to discover AI’s potential is to run some small, quick, reversible experiments, perhaps within a single geographic territory, brand, or channel.
Within these experiments, it’s important to define key desired performance results; for example, new customers, leads, or an increased return on advertising spending.
When it comes to choosing a tool, know what you want. Some tools focus on a single channel or task, such as optimizing the website content shown to each customer. Others, like IBM’s Watson, offer more general purpose AI tools that need to be customized for specific uses and companies. And still other AI tools produce insights but don’t act on them autonomously.
It’s worth taking the plunge, and, in fact, there’s an early adopter advantage. As Harley’s Jacobi told me, “The system is getting better all the time. The algorithms will continue to be refined. Last year, we tripled our business over the previous year.”
That’s good news for Jacobi and his employees, and not such good news for his competitors.
| How Harley-Davidson Used Artificial Intelligence to Increase New York Sales Leads by 2,930% | 19 | how-harley-davidson-used-artificial-intelligence-to-increase-new-york-sales-leads-by-2-930-12cfe9dd3533 | 2018-02-12 | 2018-02-12 13:32:39 | https://medium.com/s/story/how-harley-davidson-used-artificial-intelligence-to-increase-new-york-sales-leads-by-2-930-12cfe9dd3533 | false | 1,246 | null | null | null | null | null | null | null | null | null | Marketing | marketing | Marketing | 170,910 | Loyaltychain Business Club | Loyaltychain is a revolutionary digital ecosystem for small and midsize business powered by AI to manage customer’s loyalty. Join: https://t.me/LoyaltychainChat | 72f61779d635 | loyaltychain | 10 | 15 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 751b698cbf64 | 2018-07-11 | 2018-07-11 10:21:27 | 2018-07-11 | 2018-07-11 10:28:21 | 2 | false | en | 2018-07-16 | 2018-07-16 07:59:10 | 2 | 12d0a0dbc0a | 6.002201 | 0 | 0 | 0 | According to 2017 Pew research, 75 per cent of Americans think that robots and computers will eventually perform most of the jobs currently… | 1 | Is AI set to replace personal assistants?
According to 2017 Pew research, 75 per cent of Americans think that robots and computers will eventually perform most of the jobs currently done by people. With artificial intelligence improving every day, it's little surprise.
We look back at our interview with x.ai founder Dennis Mortensen to see what he had to say about creating artificial intelligence that can schedule meetings and whether we should all be fearful for our jobs…
Where did it all begin? How did the idea get started?
Back in 2013 we’d just sold our last company and I had a little bit of extra time on my hands and I ended up counting up all the meetings I did the year prior. Come the end of that, I found that I did 1,019 meetings and I set them all myself. No help, no nothing. Just me, sitting at home alone, crying with a bowl of cereal, trying to figure out who to meet when and where.
What’s even sadder is that I had 672 reschedules that came along with that. If you look at that from just two feet away what you’ll see is a massive amount of pain. And I think anybody who goes through that amount of pain would try to solve that in some way shape or form. Most people would solve that by getting up, hiring a human personal assistant, taking the hit on the $50,000 and that’s it. Or they don’t have the cost setting to justify that, try to put in place some software, seeing how it doesn’t work, ends up on email pingpong. So that seems like a very good setting and starting point but I wish it could be more romantic. But it’s really just all pain.
How did you go about creating something that works so well? I can’t imagine that your first version of this was such a smooth transaction for users.
That’s never the case. We had a few truths that we believed in where we’ve designed the product with these as the backdrop from day one. The first one is that we want this to be invisible software — we don’t want this to be yet another app, a plugin, an extension, a web service, a Doodle, a something where you have to do additional work. We wanted something where we see software disappear and you hand over the job to that agent and that agent runs away, works on it and comes back once solved. That was the first battle.
We also wanted software that did the job in full. Don’t assist you in doing it, don’t do half a job, don’t give you a few times and then you can write the email yourself, don’t not follow up. Don’t just help you be a little bit more efficient. We want to take this off your hands so you don’t have to think about it at all. As soon as you’ve CC’d in Amy [Andrew’s female counterpart] and asked her to set up a meeting with Dennis, you shouldn’t think about it. You should immediately archive that email and await an invite to be injected into your calendar and all is good.
That, of course, provides some stress on the engineering and data side of it because that suggests that if there can be no interface, no buttons to click, no dropdown, no calendar to look at, then it all had to happen in natural language and that you’d be able to communicate with it like any other human.
So I think the more it simplifies on the user end, the more you increase the complexity on the engineering side. But we are just so eager for this to turn into a solution that completely democratises the idea of a personal assistant. That it's not a luxury for the few, but something that everybody can get access to. Just as I'm sure you'd never ask at an interview whether you would get access to email or not, you would just assume that of course you would get an email, how else would you be able to do your job? I think this needs to be something equally ubiquitous where, 'of course, I'm going to get an assistant to help me manage my calendar, why would they be paying me what they pay me to sit and do email pingpong, that doesn't seem right'. That's what we're trying to move towards.
Image from x.ai
Do you see artificial intelligence replacing PAs in the future? If the software is able to do this kind of scheduling job, would it be able to take over the whole role?
Do you have a PA yourself?
Definitely not, I am not important enough.
So, there is part of my answer. Almost all of us are ‘not important enough’ or not at the right place in life to have a personal assistant. That means that the vast majority, if not almost all, of the meetings being set up are being set up by people themselves — the Natalies, the Dennises, the Stefanies. We’re not out to take any jobs. We’re out to give something that was only for the few to the remainder of the organisation.
I’ll give you a good example of my past company: We were 450 people and we had two assistants. That means everybody was on their own, but a few executives — and good for them! — but that’s beside the point. Those PAs do a lot outside of just managing their calendar so we don’t even think about that. This is not about taking their jobs, it’s about giving this to you and I.
And what have your users said about it?
I think what takes the prize is that we see that the feedback is not ‘it’ or ‘the software’ or something else, it’s ‘him’ or ‘her’ and people speak to it, even though they know it’s a machine, as if it’s a human being really. That I find really fascinating. We’ve seen it go all the way to the extreme where we receive chocolates, flowers, whisky to the office, we see Amy being asked out on dates (which I think is super sad!).
It certainly seems like with Amy you’ve achieved what anyone who is working on AI software is aiming for in creating something that seems so human.
If you decide to persuade 60 of your friends to go and work on an intelligent agent for two years, one of the very first decisions that you have to make is whether you humanise the agent or not. You can’t do something in between, you either have to be Google Now, or Siri, Amy, Cortana. We decided to humanise the agent and we have invested heavily into that because we think that part of the success of the agent is how human she feels. Even if I tell you that it’s a machine, I want you to walk away thinking what a nice person — even though you’ve been told. With that in mind, you see that she’s got a full name, she’s got a history, she signs off as you would, she’s certainly formal but knows how to reply back to people who ask her out on a date or to meet her in the lobby or what have you.
Image from x.ai
I think there’s two distinct choices that we made early on. One is that we’re not trying to reinvent the typical dialogue that you would have with a human personal assistant, we’re trying to emulate that experience. It’s not that people dislike that experience. So we’ve gone through tens of thousands of meetings to work out what that dialogue looks like so we can emulate that.
The other is that we don’t want Amy to be a bag of templates, we need her to be a whole being so that if you talk to her yesterday, when you speak to her today, you have a feeling of that being the same person and when you speak to her three weeks from today you remember how she communicates and how she wants to have things arranged. That is not an easy feat.
We ended up, not hiring another data scientist or PhD to help on that but actually a drama major, someone whose education and experience had been one of putting people on stage and creating characters, making sure that if you’re on Broadway, you don’t want to hear a pool of sentences, you want to have some sort of relationship with the characters that they’ve chosen to portray. And we’ve done the same here and it’s super exciting to see that evolve. That’s an entirely new job we created, instead of replacing personal assistants with this technology, we’re actually creating entirely new kinds of jobs like the AI interaction designer.
This article was written by Natalie Clarkson and originally appeared on virgin.com.
| Is AI set to replace personal assistants? | 0 | is-ai-set-to-replace-personal-assistants-12d0a0dbc0a | 2018-07-16 | 2018-07-16 17:09:28 | https://medium.com/s/story/is-ai-set-to-replace-personal-assistants-12d0a0dbc0a | false | 1,489 | The home of entrepreneurship | null | virgin | null | Virgin.com Spotlight | null | virgin-com-spotlight | BUSINESS,ENTREPRENEURSHIP,ENTREPRENEUR,MARKETING | virgin | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Virgin Group | The home of entrepreneurship. | 5396bdf4c67e | virgingroup | 5 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-29 | 2017-09-29 13:22:49 | 2017-09-29 | 2017-09-29 13:26:49 | 2 | false | en | 2017-10-20 | 2017-10-20 11:49:03 | 7 | 12d29125ceb0 | 5.039937 | 0 | 0 | 0 | AI Tech spoke with Alex Bunardzic, senior software developer at Staples, about the future of chatbots and the pursuit of their loyalty. | 5 |
Interview: The pursuit of loyal chatbots
AI Tech spoke with Alex Bunardzic, senior software developer at Staples, about the future of chatbots and the pursuit of their loyalty.
Alex Bunardzic, senior software developer at Staples
“People spend a lot of their online time chatting either through SMS, Messenger, or other channels,” says Bunardzic. “Previously, it was the trend that people were spending more time on social media and that’s where businesses were flocking to — you go where the market is. Now, businesses are realising the market is in the chatting channels.”
Chatbots are not a new concept but recent advancements in AI are enabling them to become far more useful. Bunardzic has three categories for chatbots; Non-Stateful, Stateful, and Loyal.
– Non-Stateful: This category represents the most basic form of chatbots and they can only respond by looking for scripted questions. They do not remember anything of a conversation with a user. Most of the earliest chatbots are non-stateful.
– Stateful: These chatbots are generally more useful as, for that session at least, they will remember earlier answers from a user. Many of the latest chatbots fall into this category.
– Loyal: Chatbots in this category represent the most complex and they can remember details of previous conversations with a user even after the chat has closed. These bots are considered the ultimate goal and recent AI developments are helping the industry to reach it.
An example of a non-stateful chatbot could be a weather bot which listens for a city and responds with the forecast. A stateful delivery chatbot may ask for a name and order number to provide tracking information. A loyal car insurance chatbot may keep your details on file and send you a message when you’re due for renewal if it’s found a better deal.
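Here is a minimal sketch of the difference, with a made-up bot rather than any vendor's product: the non-stateful bot treats every message as brand new, while the stateful bot carries memory between turns.

```python
# Hypothetical bots for illustration only.

def non_stateful_bot(message: str) -> str:
    # Every message is treated in isolation; nothing is remembered.
    if "weather" in message.lower():
        return "Which city do you want the forecast for?"
    return "Sorry, I only know about the weather."

class StatefulBot:
    def __init__(self):
        self.memory = {}  # survives across turns within a session

    def reply(self, message: str) -> str:
        if "#" in message and "order" in message.lower():
            self.memory["order"] = message.split("#")[-1].strip()
            return f"Thanks! Tracking order #{self.memory['order']}."
        if "where" in message.lower() and "order" in self.memory:
            return f"Order #{self.memory['order']} is out for delivery."
        return "Can you give me your order number (e.g. #1234)?"

bot = StatefulBot()
print(bot.reply("My order is #1234"))
print(bot.reply("Where is my order?"))  # answered using memory from the earlier turn
```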
“You can have a really simple chatbot which treats every message as if it’s coming in for the first time — you could be chatting to a bot for hours, and it will not remember your previous messages,” comments Bunardzic. “They’re not very useful and people don’t find them engaging.
“From that, you can move into chatbots which do a little more heavy lifting and remember previous responses in the conversation. Then you can build what I call ‘loyal’ chatbots which actually learn about you, stay with you, and maybe form a relationship over numerous sessions to give a more quality response to queries.”
Moving from non-stateful and stateful chatbots to loyal will also enable them to become reactive and even proactive. For example, with a customer’s postcode details on file, a chatbot for an internet provider could alert specific users of an outage or planned maintenance which impacts them. This, in turn, provides comfort to customers and reduces the number of calls made to service centres which allow more staff to be focused on real issues faster.
“It’s not easy to offer common sense dialogue between machines and people. Machines tend to get confused or go into a loop and become less effective. Right now, to remedy that, a lot of businesses are thinking about escalating quickly to a human operator,” says Bunardzic. “We start a dialogue with a chatbot and when something becomes iffy it will escalate to a human.”
Most readers will have reached an automated voicemail at some point and listened to a multitude of options before thinking “Just connect me to a human operator already!” Then, to make things worse, the dreaded hold music comes in…
When asked if there’s a concern that chatbots which ultimately escalate queries to humans could become a similar source of customer frustration, Bunardzic says: “In a way, it’s worse, because people today [in a text-based chat] feel they can start chatting about anything. If you’re in an automated voice menu they cannot branch off into other topics.”
“In today’s freeform input boxes, we cannot constrain the conversation and often we get really weird or off-topic messages. People can also get frustrated in other ways like if they ask a question that has nothing to do with your service and you fail to respond they can feel betrayed.”
Businesses small and large are waking up to chatbots’ potential. Microsoft has been among the most prominent advocates of chatbots and envisions a future where one can even speak to another. For example, you could ask Microsoft’s virtual assistant, Cortana, to order a pizza and it could bring in the chatbot for Domino’s Pizza to complete and track your order. This is similar to how ‘Skills’ function on Amazon Alexa or ‘Actions’ on Google Home in the voice assistant realm.
“If you are offering services via chat that means you don’t need to worry so much about things such as graphical interfaces and performance,” explains Bunardzic. “If you are building a traditional app, it needs great performance otherwise you will lose your customers.”
Voice assistants, of course, remove the need for a graphical interface altogether.
“We call them ‘screenless interfaces’ because you don’t need a screen to accomplish something,” says Bunardzic. “Especially in situations like when you’re driving, if you can interact with your voice and listen to responses then it’s very desirable and I think will become very popular.”
When asked who he sees as being a current leader in the chatbot space, Bunardzic responds with Berlin-based open source conversational AI company Rasa.
“They are offering a very good platform for machine-learning probabilistic service that’s open source and in my opinion, from what I’ve seen, it gives the best chance to bot builders looking to offer services,” explains Bunardzic. “They offer an opportunity to train up the bots and teach the bot to learn about your contacts and answer various questions.”
“When you ask something like ‘Where is the nearest hospital?’ people can ask that in a variety of ways and the challenge sometimes with a bot is that it’s too rigid and can only understand certain ways of saying it. Rasa allows us to teach a bot how to respond to various ways of saying it and I think it’s very promising.”
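For illustration only (this is not Rasa's training-data format or matching pipeline), the underlying idea is that many phrasings map to one intent, and the bot picks the intent whose examples the incoming message most resembles:

```python
# Generic intent-matching sketch; the intents and phrasings are made up.
from difflib import SequenceMatcher

TRAINING = {
    "find_hospital": [
        "where is the nearest hospital",
        "closest hospital to me",
        "i need a hospital nearby",
    ],
    "store_hours": [
        "what time do you open",
        "when does the store close",
    ],
}

def guess_intent(message: str) -> str:
    # Pick the intent whose examples look most like the incoming message.
    def best_match(examples):
        return max(SequenceMatcher(None, message.lower(), e).ratio() for e in examples)
    return max(TRAINING, key=lambda intent: best_match(TRAINING[intent]))

print(guess_intent("Is there a hospital close by?"))  # -> find_hospital
```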
Back at Staples, where Bunardzic currently works as a senior developer, the company is looking at using chatbots to help with the diverse work of the organisation. He notes there are many smaller teams within Staples whose manpower is not scalable, so customers can end up left on hold for a long time and getting frustrated, or emails can go unanswered for days.
Bunardzic and his team are being tasked to automate services so that chatbots can respond immediately in many cases to alleviate customer frustration. Some, of course, will need to be escalated to a human but these queries should be answered quicker due to others being automated.
The excitement around chatbots is growing, and our chat with Bunardzic confirms it’s for good reason.
You can hear from Alex Bunardzic at AI Expo North America at the Santa Clara Convention Center being held November 29th — 30th. He will be giving the ‘Chatbots — From Chat To Full-Serve Experience’ talk and will also be talking on the panel ‘The Realities of Bot & VA development & implementation’.
Other AI Events in the World Series:
AI Expo Global — 18–19th April 2018, Olympia, London
AI Expo Europe — 1–2nd October 2018, RAI, Amsterdam
What are your thoughts about chatbots? Let us know in the comments.
© iStock.com/Peshkova
| Interview: The pursuit of loyal chatbots | 0 | interview-the-pursuit-of-loyal-chatbots-12d29125ceb0 | 2017-10-20 | 2017-10-20 11:49:04 | https://medium.com/s/story/interview-the-pursuit-of-loyal-chatbots-12d29125ceb0 | false | 1,234 | null | null | null | null | null | null | null | null | null | Bots | bots | Bots | 14,158 | AI & Big Data Expo | AI & Big Data Expo - Conference & Exhibition exploring Artificial Intelligence, Big Data, Deep Learning, Machine Learning, Chatbots & more. | 42760402f1a7 | ai_expo | 254 | 100 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-08 | 2018-09-08 11:20:26 | 2018-09-08 | 2018-09-08 13:18:11 | 4 | false | en | 2018-09-15 | 2018-09-15 06:06:53 | 7 | 12d39e4b03b5 | 3.277358 | 4 | 0 | 0 | September has come, Rains have subsided and life was getting back to Normal. | 4 | TWTW — The Week That Was #1 September
September has come, the rains have subsided, and life is getting back to normal.
WIT Learning Program.
Rethink has an amazing program up and running — The Women In Tech Learning Program.
Read all about it here. I had applied to Cohort 1 and had a pleasant interview with Arya the other day. It went well, Arya Murali actually gave quite a few pointers and insights about myself that I did not know about. Thank you, Arya. Fortunately, I got in and am joining 23 other girls in Cohort 1.
We had a small task of submitting an introductory video, not a serious one, 'a shitty first draft'. Find mine here. Forgive the shake; I was short on time and was creating it at the last moment.
WeCon Bangalore
WeCon Bangalore was a large conclave of 3,500+ aspiring women entrepreneurs from all over India. I was sent as a representative of my college. Read all about the trip here.
Top 6 statements that hit the spot.
1.To be an entrepreneur is to have the willingness to be disruptive
2. Be empathetic to the problem statement. It is never their problem, it's yours too.
3. Planning is 1 %, Execution is 99%.
4. Focus on the smaller problem statement, these bring effective larger solutions.
5. Women empowerment shouldn't be exclusive, but inclusive.
6. Don't dwell too much on a negative comment; approach it with a positive mindset, as input for another feature that improves the customer experience.
Rebuilding PEHIA
Pehia has admittedly been down for some time now. I have spent quite some time setting up content guidelines so that the girls can share their content on our platform. I picked up some Markdown along the way and am building a small bot for the same. Check out our publication here. I'll update the blog with the bot link as soon as it is done.
We have Welcome to CS with a very small batch of students tomorrow. It's tough these days, but I'll keep going anyway. Prioritising is the real devil.
WordRank — Reaching out for Help
I continued my internship at TurboLab but have been stuck on one root problem whose solution would unlock a whole set of other problems. It basically involves finding how strongly an entity A is associated with an entity B in a given text — note that it's associated, not similar. I was so done with everything from text search to tf-idf to GloVe. Nothing solved the problem, nor showed any sign of doing so even in the hands of a better-skilled person. WordRank showed a bit of hope since it is a ranking algorithm. Check out this blog for more.
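For anyone wondering what "association" even means here, one crude baseline (not the internship code and not WordRank, just a sketch to make the problem concrete) is pointwise mutual information over sentence-level co-occurrence:

```python
# Toy association score: entities that appear in the same sentences more often
# than chance get a higher score (pointwise mutual information).
import math
from itertools import combinations
from collections import Counter

sentences = [
    "acme hired a new ceo last year",
    "the new ceo of acme announced layoffs",
    "globex reported record profits",
]

word_counts = Counter()
pair_counts = Counter()
for s in sentences:
    words = set(s.split())
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

def association(a: str, b: str) -> float:
    n = len(sentences)
    p_a, p_b = word_counts[a] / n, word_counts[b] / n
    p_ab = pair_counts[frozenset((a, b))] / n
    return math.log(p_ab / (p_a * p_b)) if p_ab else float("-inf")

print(association("acme", "ceo"))    # co-occur often -> higher score
print(association("globex", "ceo"))  # never co-occur -> -inf
```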
Let me admit how dumb I felt, hitting 'Segmentation Fault' like a 6th-standard kid who doesn't know his pointers as I tried to implement the basic form. On top of that, one round of execution took a helluva long time! HOURS! No task had tested my wits like this, but none has taught me so much either.
There was no one around me who could solve it or had experience using WordRank. Would you believe that there is not a single question on WordRank on Stack Overflow, the refuge of all programmers bugged by code? Googling forsook me too! I did not have the guts to ask the programmer who wrote WordRank or its wrapper. But the other day Muhammed Shibin told me his story and some awesome tips on reaching out, and I took it from there. It was worth a shot and, well, she replied. Thank you, Shibin!!!!
About Being Lost
I am finding myself as each day passes, and feeling better. I had to say no to attending SHEROES, but that's okay. Learning to say no and prioritizing are skills essential to people who are trying to live rather than merely survive.
Next week is Birthday week, time to set another list of goals like the ones I talked about here.
Have an awesome week you people! Keep Learning, Keep Growing!
| TWTW — The Week That Was #1 September | 40 | the-week-that-was-1-september-12d39e4b03b5 | 2018-09-15 | 2018-09-15 06:06:53 | https://medium.com/s/story/the-week-that-was-1-september-12d39e4b03b5 | false | 683 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Enfa Rose George | Data Scientist in making | Space Science Enthusiast | Bibliophile | Women in Tech | 3343796008 | enfageorge | 137 | 46 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-27 | 2018-05-27 04:59:18 | 2018-05-27 | 2018-05-27 05:18:39 | 3 | false | en | 2018-05-27 | 2018-05-27 16:52:31 | 5 | 12d3b27bb581 | 5.742453 | 0 | 0 | 0 | I’m pretty sure you have heard the words Machine Learning a lot over the past few years. I know you have because I know I sure have. There… | 1 | How To Explain Machine Learning to Your Grandma
I'm pretty sure you have heard the words Machine Learning a lot over the past few years. I know you have, because I sure have. There are so many movies coming out about robots that act and look just like humans, and to make things even scarier there are actual robots now that mimic humans and can even generate their own thoughts and conclusions. At this point it's getting hard for us to tell the difference between what is human and what is AI. So since all of this has been increasingly showing up on your Twitter and Facebook feeds, what if your grandma out of the blue asked you to explain to her what machine learning is? What do you tell her? How do you even start? In case this happens to you, just open this blog post and read away.
Let’s Start With Data
Initially we want to be able to put data into things that humans can use. What does this mean? Let's say you have a photograph of a dog and a cat and you want the program to tell you which one is which. This would normally require a human to write a bunch of code, basically step-by-step instructions, to tell you what the picture is of. But with machine learning all we need is data for the machine to keep learning from. That's fortunate for us, because our phones and computers are constantly storing and gathering data every day. So woah, you're telling me that computers are learning and are given tasks that they can complete? Yes, that's exactly what I'm saying. With the data the machine is gathering, it will pick up on statistical patterns in the data and will then be able to recognize the difference between the cat and the dog. The more data the machine takes in, the more accurate the outcomes are. When it gathers enough information it can find the pattern that dogs and cats come in many different sizes and draw a conclusion about whether it's a cat or a dog. Wow, they are so smart! Believe it or not, machine learning is already creeping into your daily life. For example, your phone can recognize someone in the picture you just took, or predict the words you want to use when you are texting. Also don't forget that "self driving cars that rely on machine learning to navigate may soon be available to consumers" (Lisa Tagliaferri). You can't hide from it, well, as long as you're around technology. We are now going to branch into three categories: supervised learning, unsupervised learning, and reinforcement learning.
Supervised Learning
No, this doesn't mean that we sit and micromanage the machine's learning. "Supervised learning algorithms make predictions based on a set of examples. For example, historical sales can be used to estimate the future prices. With supervised learning, you have an input variable that consists of labeled training data and a desired output variable. You use an algorithm to analyze the training data to learn the function that maps the input to the output. This inferred function maps new, unknown examples by generalizing from the training data to anticipate results in unseen situations. (Hui Li)" This then branches into three algorithms: classification, regression, and forecasting.
Classification: "This is the case when assigning a label or indicator, either dog or cat, to an image. When there are only two labels, this is called binary classification. When there are more than two categories, the problems are called multi-class classification. (Hui Li)" This is often done with decision trees (see the sketch after this list).
Regression: This is when the program predicts a continuous value, such as a price, rather than a category.
Forecasting: This is when the program makes predictions about future outcomes based on the data it has gathered and learned from in the past.
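If you are curious what this looks like in code, here is a tiny toy sketch in Python with scikit-learn; the numbers are made up and only exist to make classification and regression concrete.

```python
# Toy supervised learning: labelled examples in, predictions out.
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

# Classification: label 0 = cat, 1 = dog, from [weight_kg, ear_length_cm]
X = [[4, 7], [5, 8], [20, 12], [30, 14]]
y = [0, 0, 1, 1]
clf = DecisionTreeClassifier().fit(X, y)
print("25 kg, 13 cm ears ->", "dog" if clf.predict([[25, 13]])[0] == 1 else "cat")

# Regression: predict a continuous value (a price) instead of a category.
sizes = [[50], [80], [120]]           # square metres
prices = [100_000, 160_000, 240_000]  # sale prices
reg = LinearRegression().fit(sizes, prices)
print("Predicted price for 100 m^2:", int(reg.predict([[100]])[0]))
```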
Unsupervised Learning
This is used when we have unlabeled data. It relies on cluster analysis, which is the most common technique, and dimension reduction. The most common places unsupervised learning is used are face recognition and genetic clustering in bioinformatics.
Cluster Analysis: This is a family of algorithms that groups similar items together and finds patterns (sketched after this list).
Dimension Reduction: This is typically used for "reducing the number of variables under consideration. In many applications, the raw data have very high dimensional features and some features are redundant or irrelevant to the task. Reducing the dimensionality helps to find the true, latent relationship (Hui Li)."
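Again as a toy sketch in Python with scikit-learn; note that no labels are handed to the algorithm, it finds the structure on its own (the points are made up).

```python
# Toy unsupervised learning: clustering, then dimension reduction.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

points = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [8.1, 7.9]])

# Cluster analysis: group similar points without being told the groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print("cluster labels:", labels)  # two points in each cluster

# Dimension reduction: squeeze 2 features into 1 while keeping most structure.
reduced = PCA(n_components=1).fit_transform(points)
print("1-D projection:", reduced.ravel().round(2))
```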
Reinforcement Learning
This is similar to teaching a dog a new trick. When it does well you give it a treat, and when it does something bad you punish it. This is the same with reinforcement learning. A person (or the environment) gives the program feedback, and it learns the behaviors and strategies that will get it the reward.
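A bare-bones flavour of that loop in Python; this is not a full reinforcement learning algorithm, just the treat-or-scold idea in miniature (the tricks, rewards, and learning rate are all made up).

```python
# Toy reward loop: the agent tries actions, gets feedback, and slowly
# prefers whatever paid off in the past.
import random

value = {"sit": 0.0, "bark": 0.0}    # how good the agent currently thinks each trick is
reward = {"sit": 1.0, "bark": -1.0}  # the "trainer": treat for sitting, scolding for barking
learning_rate = 0.5

for _ in range(20):
    # Mostly pick the best-known action, but sometimes explore.
    if random.random() < 0.2:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    value[action] += learning_rate * (reward[action] - value[action])

print(value)  # "sit" ends up valued far higher than "bark"
```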
Alright, all done. No wait, I forgot something that is a little important. We have been talking about machine learning, but you do know that it is different from artificial intelligence and deep learning, right? Oh, you didn't? Let us talk about it before you leave. I'm sure you've seen movies like "AI", "Ex Machina", "The Matrix" and so on and so forth. Take a moment to think about what all of these movies have in common. Yea, that's right, robots. Robots that are fast learners and even look like humans.
“First coined in 1956 by John McCarthy, AI involves machines that can perform tasks that are characteristic of human intelligence. While this is rather general, it includes things like planning, understanding language, recognizing objects and sounds, learning, and problem solving.(Calum McClelland)”. It’s split into two categories, general and narrow. General AI would have the likeliness of a human being while on the other hand narrow AI has some characteristics of humans but you’d be able to tell its a machine. For example it could just recognize images and can’t do anything else.
Let’s move onto what we have been talking about, machine learning. This is when the machine can learn without being programmed. Like we said earlier, if you had a picture of a cat and a dog and you wanted the machine to tell you which one it was. Formally a person would have to write a bunch of code to tell the machine what to do and think, unlike machine learning where it would have “the ability to learn without being explicitly programmed.(Calum McClelland)”. So we ditched the code and trained the programs to learn as a newborn child would learn.
Last but not least, deep learning. Think of your brain as a computer that can crack codes, because essentially that is what it is. Lte me sohw yuo an eamxlep on hwo tish wkors in yuro mndi cmopuert. There you go! I just did it. Did you miss it? No those weren’t typos, as long as the first letter of the word remains in place your brain can crack the code and figure out what word I was trying to type out. Oh my goodness you’re like a computer or something, are you? The idea of deep learning was devised based on the way the human brain operates and functions. It’s basically a more beefy version of machine learning. In a sense the computer can learn in a layered format hint hint like our brains. For example in machine learning if you ask the computer to tell the difference between a boy and girl in a picture, the algorithm would look at the image and recall past data to come up with a conclusion. Unlike deep learning, the past data isn’t provided. What it does instead is scan the images pixels and finds all the edges and shapes into a ranked order of possible importance to determine whether its a boy or a girl.
You did it! You made it through the overview of machine learning. Take this new knowledge and tell it to your grandmother because I’m sure she wants to know. Also remind her that the Terminator is not real……yet.
Sources:
https://blogs.sas.com/content/subconsciousmusings/2017/04/12/machine-learning-algorithm-use/
https://medium.com/iotforall/the-difference-between-artificial-intelligence-machine-learning-and-deep-learning-3aa67bff5991
https://www.digitalocean.com/community/tutorials/an-introduction-to-machine-learning
https://www.youtube.com/watch?v=f_uwKZIAeM0
https://www.slideshare.net/potaters/bmva-ss-2014breckonml
| How To Explain Machine Learning to Your Grandma | 0 | how-to-explain-machine-learning-to-your-grandma-12d3b27bb581 | 2018-05-27 | 2018-05-27 16:52:32 | https://medium.com/s/story/how-to-explain-machine-learning-to-your-grandma-12d3b27bb581 | false | 1,376 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Pamela Maupin | null | fe1e7e5909de | maupinpamela | 1 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-10-18 | 2018-10-18 19:44:16 | 2018-09-07 | 2018-09-07 19:00:41 | 2 | false | en | 2018-10-18 | 2018-10-18 19:44:52 | 1 | 12d46694591b | 3.077673 | 0 | 0 | 0 | The worst thing about an inside job is that once it’s detected, it’s usually too late. Early detection is critical to prevent considerable… | 4 | Taking out the threat from the inside
The worst thing about an inside job is that once it’s detected, it’s usually too late. Early detection is critical to prevent considerable damage arising out of insider threats to the business. But that’s easier said than done! Whether it’s a rogue trader in a bank or brokerage or someone illegally sharing company intellectual property or intelligence, illegal insider actions put enterprises at risk of losing millions. This could be in the form of reputational damage or unfavorable regulatory consequences in case of compromised customer data, for example.
Enterprises prepare to avoid or tackle threats by monitoring employee and third-party communications such as e-mail, instant messaging, social media, and voice, as well as by analyzing log files and file attachments. This conventional approach typically employs Relational Database Management System (RDBMS) technology, which, however, falls short of meeting current business demands for scalable, flexible and cost-efficient solutions to insider threat.
Moreover, this approach struggles to deal with the large volume and variety of data that must be analyzed and often correlated. Analyzing unstructured data sets such as text, audio and images is challenging, especially when trying to determine illegal intent in communications. Worse, insider attacks remain undetected when data is disregarded before it can be correlated or patterns identified in time.
A tag team against insider threat
Accenture and Cloudera have joined hands to help enterprises move from a post-incident forensic approach to a proactive, preventive method to insider threat. The Insider Threat Detection Solution leverages Cloudera Enterprise together with Accenture’s Aspire Content Processing and Analysis technology and fraud detection consulting expertise to devise a faster, stronger and smarter anti-insider threat system.
The comprehensive solution covers everything from simple text analytics and communications monitoring for e-mail, voice and text messages to more complex pattern detection and machine learning-based text analysis models. This analysis can also be extended to determine a "risk score" for employee behavior based on application logs, transactions, and other user behavior data. Our joint solution is scalable, able to manage growing volumes of communications data effectively, and extendable with sophisticated risk analysis techniques.
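As a purely illustrative sketch, and not the Accenture/Cloudera implementation, one common way to turn per-user behavior features into a risk score is anomaly detection, where users whose activity looks unlike everyone else's get flagged for review (the feature names and numbers below are invented):

```python
# Illustrative anomaly-based risk scoring; not the joint solution's actual models.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-user features: [after_hours_logins, files_downloaded, emails_to_external]
normal_users = np.array([[0, 5, 2], [1, 7, 3], [0, 4, 1], [1, 6, 2]])
suspect = np.array([[9, 250, 40]])  # bulk downloads plus heavy external email

model = IsolationForest(random_state=0).fit(normal_users)

# decision_function: lower (more negative) scores mean more anomalous behavior,
# which here we would treat as higher insider risk.
print("typical user score:", model.decision_function(normal_users[:1])[0])
print("suspect user score:", model.decision_function(suspect)[0])
```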
Benefits
Quicker: Faster time to incident investigation and response with comprehensive enterprise visibility
Smarter: Deeper insights with a full complement of analytic and machine learning frameworks for all threat detection
Extendable: Continued innovation, learning, and application of best practices to insider threat programs
The business benefit of this solution is to enable enterprises to move from a post-incident forensic approach to a proactive, preventative approach, saving millions of dollars.
Accenture and Cloudera’s Insider Threat Detection Solution features the following components and capabilities:
Cloudera Enterprise: A scalable and secure data platform, which is machine-learning ready and optimized for the cloud, offers deeper insights with a full complement of analytics frameworks for threat detection.
Accenture insider threat program development: With Accenture’s vast expertise and experience in fraud and risk analytics, the joint solution brings in continuous innovation and application of insider threat best practices.
Accenture insider threat detection IP portfolio: Leverage Accenture IP in content processing, natural language understanding, risk analytics modeling, and data visualization to deliver faster incident investigation and response with complete visibility into enterprise data.
Next Step — Engage with us to schedule a discovery session to identify:
Organizational needs to monitor existing and new data sources
Requirements for data protection and governance
Areas where we can bring immediate value to data governance
Contact us at [email protected] or [email protected]
Disclaimer: This blog has been published for information purposes only and is not intended to serve as advice of any nature whatsoever. The information contained and the references made in this blog is in good faith and neither Accenture nor any of its directors, agents or employees give any warranty of accuracy (whether expressed or implied), nor accepts any liability as a result of reliance upon the content including (but not limited) information, advice, statement or opinion contained in this blog. Accenture does not warrant or solicit any kind of act or omission based on this blog.
The blog is the joint property of Accenture and Cloudera. No part of this Blog may be reproduced/ redistributed in any manner without the written permission of both the parties.
Originally published at vision.cloudera.com on September 7, 2018.
| Taking out the threat from the inside | 0 | taking-out-the-threat-from-the-inside-12d46694591b | 2018-10-18 | 2018-10-18 19:44:52 | https://medium.com/s/story/taking-out-the-threat-from-the-inside-12d46694591b | false | 714 | null | null | null | null | null | null | null | null | null | Big Data | big-data | Big Data | 24,602 | Cloudera, Inc. | Data can make what is impossible today, possible tomorrow. We empower people to transform complex data into clear and actionable insights. http://cloudera.com | 475d0d40b87a | clouderainc | 0 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-26 | 2018-09-26 18:24:40 | 2018-09-26 | 2018-09-26 18:27:02 | 1 | false | en | 2018-09-26 | 2018-09-26 18:27:02 | 1 | 12d49494409e | 2.690566 | 0 | 0 | 0 | The guest <> hotel relationship is shifting. The mobile market is resetting. There is a new opportunity for hotels to get into their… | 5 | No, Your Hotel Business Doesn’t Need a Mobile App
The guest <> hotel relationship is shifting. The mobile market is resetting. There is a new opportunity for hotels to get into their guests’ smartphones.
Image courtesy of Storyblocks
This statement was made probably for the first time about 5 years ago by Blue Magnet Interactive. The article holds many truths, and I totally agree that a hotel doesn’t need a mobile app — today more than ever — even though there are many [mobile app] companies who still want hoteliers to believe otherwise.
Here is why:
Booking Portals Have Mobile Apps. They Win.
This point should be quite clear to anyone by now, especially to hoteliers. Booking.com, TripAdvisor, AirBnB and the other accommodation portals have found a very successful way to wedge themselves between hotels and guests. Their websites and mobile apps are powerful, simple, useful and efficient. Plus, by now they have the cash to promote their mobile apps, add new features to the apps, and make them widely available in gazillions of languages.
Unless you own a mega hotel chain like Hilton or Marriott (or work in one), you shouldn’t even think about creating a mobile app, because:
It needs resources (cash, developers, designers, product managers, infrastructure, tech support, etc.)
It needs adoption, i.e. you need to put it in your guests’ hands, which requires marketing and promotion.
That’s a lot of work. It is an expensive process. What is more, it will be extremely hard for a small-to-medium hotel to compete with these conglomerates in the booking business. Even if a hotelier decides to create a mobile app, I doubt that many guests will use it frequently — they would rather call the hotel or book through its website.
Your hotel mobile app will not increase your revenue, because no one will use it.
Travel Review Websites Have Mobile Apps. They Win.
In the past, hotel guests used to book off-property activities through the hotel they were staying at. In today’s connected world, however, guests come prepared and over 70% of them book activities a week or more in advance. They use TripAdvisor, Yelp, FourSquare or a local review website to do this. Yes, most of these review sites have expanded into offering activity and restaurant bookings, and they too offer mobile apps.
It will be close to impossible for a hotel to create a mobile app that can compete with travel companies, for the exact same reasons stated before.
Your hotel mobile app will not increase activity bookings nor your ratings.
There Is User Fatigue for Mobile Apps, Email and Newsletters. Hotels Don’t Win.
A majority of users (51%) don’t download any apps in a month. Even if they do, they’ll also often delete apps they don’t like or don’t use too often. These stats tell us that a dedicated hotel app will not last long on a guest’s device unless he/she is a loyal business guest.
What is worse, there is a decrease in email and newsletter engagement (opens, clicks, subscribes) which reflects user discontent and disinterest. I, personally, had unsubscribed from tons of newsletters I used to be subscribed to, including such from hotels and airline companies, because of 1) inbox overflow and 2) lack of quality information that targets me. I prefer to get my news automagically via already curated services like Google Now and news sites, and occasionally from The Morning Brew. The bottom line is that users (and hotel guests) want quality curated information at the right time.
Hotels want to establish strong long-term relationship with their guests in order to increase revenue per guest, yet the guest <> hotel relationship is shifting… How to establish a relationship when guests are becoming more disinterested in apps, emails and newsletters, and more interested in using services such as AirBnB? How to reach out to these guests? See Tame Your Hotel Guests. The Alternative to Mobile Apps Is Here.
Originally published at thinksters.org.
| No, Your Hotel Business Doesn’t Need a Mobile App | 0 | no-your-hotel-business-doesnt-need-a-mobile-app-12d49494409e | 2018-09-26 | 2018-09-26 18:27:02 | https://medium.com/s/story/no-your-hotel-business-doesnt-need-a-mobile-app-12d49494409e | false | 660 | null | null | null | null | null | null | null | null | null | Hotel | hotel | Hotel | 7,290 | Thinks AI | Grow Your Business With AI-Powered Conversations. Start Selling, Marketing, Engaging with your audiences & providing support on Messenger in minutes. | 24c63125ef0f | thinks.chat | 25 | 44 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-03 | 2017-11-03 16:00:15 | 2017-11-03 | 2017-11-03 16:02:54 | 1 | false | en | 2017-12-08 | 2017-12-08 10:06:32 | 1 | 12d6cd4bb4b8 | 0.267925 | 1 | 0 | 0 | Collaborative Filtering, Location and Demographic Based Recommendation engine Trained on the ml-100k dataset | 4 | Recommendation engine
Collaborative Filtering, Location and Demographic Based Recommendation engine Trained on the ml-100k dataset
https://github.com/IdoZehori/Recommendation-engine/blob/master/Recomender%20Engine.ipynb
User similarity
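For readers who want a feel for what user-similarity collaborative filtering on ml-100k involves, here is a minimal sketch (my own illustration, not the notebook linked above; it assumes the standard u.data file from ml-100k, with user, item, rating and timestamp columns, sits in the working directory):

# Minimal user-based collaborative filtering on MovieLens ml-100k.
import numpy as np
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

ratings = pd.read_csv("u.data", sep="\t", names=["user", "item", "rating", "ts"])
matrix = ratings.pivot_table(index="user", columns="item", values="rating").fillna(0)

# User-user similarity: each row of `sim` scores one user against all others.
sim = cosine_similarity(matrix.values)
np.fill_diagonal(sim, 0)  # a user should not be their own neighbour

def recommend(user_id, k=10, n=5):
    """Average the k most similar users' ratings and return n unseen items."""
    idx = matrix.index.get_loc(user_id)
    neighbours = np.argsort(sim[idx])[-k:]
    scores = matrix.values[neighbours].mean(axis=0)
    scores[matrix.values[idx] > 0] = 0  # drop items the user already rated
    return matrix.columns[np.argsort(scores)[-n:]][::-1]

print(recommend(user_id=1))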
| Recommendation engine | 1 | recommendation-engine-12d6cd4bb4b8 | 2018-03-24 | 2018-03-24 04:58:51 | https://medium.com/s/story/recommendation-engine-12d6cd4bb4b8 | false | 18 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Ido Zehori | Set course with heart, adjust with data. Data Scientist @BigaBid | 2da8865bbec5 | zehori.ido | 50 | 10 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-27 | 2018-07-27 00:26:24 | 2018-07-27 | 2018-07-27 01:15:51 | 3 | false | en | 2018-07-28 | 2018-07-28 00:34:47 | 9 | 12d6d65f4848 | 8.557547 | 12 | 0 | 0 | The race for use of Artificial Intelligence to execute policy, drive future economic infrastructure, and provide automated equipment for… | 5 | U.S. Could Still Win AI Arms Race Despite Future Data Privacy Laws — By Protecting Data
The race for use of Artificial Intelligence to execute policy, drive future economic infrastructure, and provide automated equipment for man-machine teaming for the U.S. Department of Defense is in full swing. What feeds the development of Artificial Intelligence is massive amounts of data. The use of data, both publicly available and private, is and will continue to be a tense conversation. If the originator of that data is the owner of that data, does that mean no one else can use it unless explicitly given permission? Are big data companies such as Facebook, Amazon, and Google doomed to potentially devastating data privacy laws which would limit their ability to compete against companies like Baidu and Alibaba in China, where private data (in the eyes of the government) doesn’t exist? Prime Minister Xi Jinping of China has stated his goal for China to become the world leader in AI by 2030. China will not limit AI development by preventing AI leaders in Baidu and Alibaba to use public, proprietary, or even private data.
If data is the new currency, and much of the United States’ data is inaccessible due to privacy laws, how can America compete? Western civilization and liberty centric ideologies have had to face this kind of question since their inception. The fragile balance of capitalism, liberty, and military may have clashed over the past 250 years but the ingenuity of free individuals have overcome those limits. Recent developments in blockchain and computer science could yet again preserve the fantasy of having the best of both worlds. In this case, the best of both worlds would be using data while maintaining privacy, and even control. Is it really possible to use the data without seeing it? Can AI provide analysis over data that isn’t fully accessible?
Source: Slator.com
Machine Learning and Artificial Intelligence are the current “buzz phrases.” Say this and Google may acquire your intellectual property before ink hits paper. It’s going to be part of our future but what does it mean? First, Machine Learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to “learn” with data, without being explicitly programmed. Machine Learning is the process that teaches AI. Artificial Intelligence is the ability to make decisions off of learned techniques. Machine Learning can be effectively broken down into three subsets:
Supervised: Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples.
Unsupervised: Unsupervised machine learning is the machine learning task of inferring a function that describes the structure of “unlabeled” data (i.e. data that has not been classified or categorized).
Reinforcement: The idea behind Reinforcement Learning is that an agent learns from the environment by interacting with it and receiving rewards for its actions. Reinforcement Learning is simply a computational approach to learning from action, and it requires an environment the agent can interact with plus a reward signal to learn from. (A minimal code sketch of the first two paradigms appears below.)
Source: CleverRoad.com
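The following is a minimal sketch of the first two paradigms using scikit-learn's bundled iris data (my own illustration; the reinforcement case is omitted because it needs an interactive environment):

# Supervised: the model learns from labeled examples.
# Unsupervised: structure is inferred from the unlabeled inputs alone.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == c).sum()) for c in range(3)])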
Each of these subsets of Machine Learning needs a healthy amount of data. Just how much data depends on the complexity of the problem you’re trying to solve. Per Caltech Prof. Yaser Abu-Mostafa, “the answer is that as a rule of thumb, you need roughly 10 times as many examples as there are degrees of freedom in your model.” In layman’s terms, an immense amount of data. If interested parties, to include the U.S. Government, want to use Artificial Intelligence to more efficiently govern, the data needed to provide an answer with accurate results will certainly be in the terabytes and petabytes realm. It would be too expensive and too time consuming to think that governing bodies, healthcare data scientists, or defense intelligence specialists would be able to process this data by themselves. Artificial Intelligence and big data tools are the answer to that problem. Even more so, trying to scrub all data that may contain personally identifiable information would prove to be too costly and insurmountable when in competition with those that ignore privacy ethics.
According to Irving Wladawsky-Berger, Visiting Lecturer in Information Technology at the MIT Sloan School of Management, whether physical or digital in nature, identity is a collection of information or attributes associated with a specific entity. Identities can be assigned to three main kinds of entities: individuals, institutions, and assets. For individuals, there are three main categories of attributes:
Inherent attributes are intrinsic to each specific individual, such as date of birth, weight, height, color of eyes, fingerprints, retinal scans and other biometrics.
Assigned attributes are attached to individuals, and reflect their relationships with different institutions. These include social security ID, passport number, driver’s license number, e-mail address, telephone numbers, and login IDs and passwords.
Accumulated attributes have been gathered over time, and can change and evolve throughout a person’s lifespan. These include education, job and residential histories, health records, friends and colleagues, pets, sports preferences, and organizational affiliations.
If we look to the recently passed General Data Protection Regulation (GDPR) we’ll see the need for the ability to use the data within the confines of the law. The General Data Protection Regulation (GDPR) (EU) 2016/679 is a regulation in EU law on data protection and privacy for all individuals within the European Union(EU) and the European Economic Area(EEA). It also addresses the export of personal data outside the EU and EEA areas. The GDPR aims primarily to give control to citizens and residents over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU. GDPR limits Artificial Intelligence’s benefits on economic development and governing. The ethical data dilemma extends to corporations and companies that are reluctant to provide data due to fears of proprietary espionage. With all of these situations in mind, we look to privacy preserving technology to potentiate AI and comply with pre existing and future data privacy laws. Two such technologies are being developed out of the “Cryptocurrency” space to include Trusted Execution Environments (TEEs) and Secure Multi-Party Computing (sMPC). Both of these examples will be involving blockchain architecture.
A blockchain originally is a growing list of records, called blocks, which are linked using cryptography. Blockchains which are readable by the public are widely used by cryptocurrencies. Blockchains have been on the rise and FINTECH companies are salivating at the potential use of blockchains. One of blockchains features is that the information is inherently public in an effort to perform consensus. Obviously this poses a problem for any organization trying to protect private data or proprietary code and build decentralized applications on the blockchain. At least one project is trying to solve this problem for blockchain developers, MIT’s Enigma Project. Enigma is in the process of developing what are called “Secret Smart Contracts.” The original idea of “Smart Contracts” stems from Ethereum, a decentralized platform for applications that run exactly as programmed without any chance of fraud, censorship or third-party interference. Enigma is building off of the “Smart Contracts” concept and trying to improve on it by making it usable to organizations that need to preserve data privacy. Enigma aims to develop a TEE and sMPC infrastructure to give developers the choice to choose between the benefits of each model.
TEE: A TEE provides a fully isolated environment called an enclave that prevents other applications, the operating system, and the host owner from tampering with or even learning the state of an application running in the enclave. A TEE thereby provides strong confidentiality for smart contract data that blockchains cannot. Unfortunately, a TEE alone cannot guarantee availability or provide secure networking or persistent storage. Thus, it cannot alone achieve blockchains’ authoritative transaction ordering, persistent record keeping, or resilience to network attacks.”
Enigma’s TEE: MIT’s Enigma network provides a permissionless peer-to-peer network that allows executing code (secret contracts) with strong correctness and privacy guarantees. Another way to view the network is as a smart-contract platform (e.g., similar to Ethereum) that enables the development of decentralized applications (dApps), but with the key difference that the data itself is concealed from the nodes that execute computations. This enables dApp developers to include sensitive data in their smart contracts, without moving off-chain to centralized (and less secure) systems.
Enigma plans to incentivize the creation of “nodes” which will perform computation of data and execute the “Secret Smart Contracts.” Nodes will be created by individuals participating in the decentralized networks by providing a proof of stake (PoS). Proof of Stake (PoS) concept states that a person can validate block transactions according to how many coins they hold. This means that the more Enigma tokens are owned by a stakeholder, the more validation power the node has. Each system would require use of their tokens to provide to the nodes as economic incentives to provide the computation. As of now, these TEE require additional software security features like Intel Software Guard Extensions (SGX). Intel SGX is a set of central processing unit (CPU) instruction codes from Intel that allows user-level code to allocate private regions of memory, called enclaves, that are protected from processes running at higher privilege levels. Still, these are subject to side channel attacks and are not a full solution. TEEs, were they perfect, would provide us with a fast general-purpose computing schema which preserved privacy over data, however, there exist subtle flaws within TEE that allow for pulling out information through side channels. Enigma takes security one step further with the development of sMPC.
Enigma’s TEE and sMPC design will be able to be blockchain interoperable and agnostic where it serves as an extension to blockchain platform for off-chain computations. It does not need to be the 100% solution to all of blockchains problems but it solves scalability and privacy issues that have limited blockchain’s adoption. To ensure that data stays secure, information can be encrypted before being sent to the network and this off-chain layer is responsible for distributing this data across Enigma’s nodes and keeping it private. The blockchain’s public ledger only stores references to this data to provide proof of storage, but none of the data itself is public–it remains obfuscated, private, and split-up on the off-chain network.
Source: Microsoft
Enigma is looking to use sMPCs and an off-chain distributed hash-table (DHT) to ensure data privacy. The MPCs distributes data between nodes on the network, splitting the encrypted information into separate pieces to ensure its safety. The DHT, then, is responsible for storing this data in an off-chain database. The DHT stores the data while MPCs are responsible for handling and retrieving it, while both ensure that the data handled remains completely private. According to Enigma’s whitepaper, all data “is split between different nodes, and they compute functions together without leaking information to other nodes. Specifically, no single party ever has access to data in its entirety; instead, every party has a meaningless (i.e., seemingly random) piece of it.”
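A toy way to see the "meaningless pieces" idea is additive secret sharing, sketched below. This is only an illustration of the splitting principle, not Enigma's actual protocol: each party holds a random-looking share, no single share reveals anything, yet sums can be computed on the shares and only the final result is reconstructed.

# Additive secret sharing: split a value into random shares that sum back to it.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def split(value, n_parties=3):
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

salary_a, salary_b = 62_000, 71_000
shares_a, shares_b = split(salary_a), split(salary_b)

# Each party adds its own shares locally; nobody ever sees a full salary.
sum_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(reconstruct(sum_shares))  # 133000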
Machine learning over Enigma’s sMPC preserves privacy at the cost of speed, a penalty the TEE path avoids. According to Enigma CEO Guy Zyskind, “It (RL) would potentially be computationally expensive, and for this reason I believe having both TEE/MPC powered implementations is compelling for developers.” Enigma gives developers the ability to choose between TEE and sMPC depending on their situation and what best fits their applications. Other TEE projects are in the works but do not offer the flexibility that Enigma is developing.
Using Enigma’s technology, governments and organizations that value, or at least compelled to by law, can use this infrastructure to train AI on private data. This technological breakthrough could solve a multitude of issues and enable progression while simultaneously preserving the rights of citizens. Developing a technology that allows someone to conduct analysis on data that they cannot see is no easy feat nor is it a quick process. Fortunately, Guy Zyskind and MIT’s Dr. Alex “Sandy” Pentland have been working on this for years. Enigma’s mainnet for their TEE is set to be released in Q3 of 2018 and the mainnet for sMPC is scheduled for release in Q1 2019. The potential for this project, and project like these, cannot be overstated nor the potential fully understood but it could be a critical piece of technology that allows the U.S., Europe, and other privacy valued unions to keep pace with those who don’t.
Sources: Enigma White Paper, Enigma Developers Forum, TheCryptoRealist, https://gdpr-info.eu/
If you’re interested in the Enigma Project, consider resources:
Telegram: t.me/EnigmaProject
Reddit: reddit.com/r/EnigmaProject
Twitter: twitter.com/enigmampc
Discord: https://discordapp.com/invite/SJK32GY
| U.S. Could Still Win AI Arms Race Despite Future Data Privacy Laws — By Protecting Data | 423 | u-s-could-still-win-ai-arms-race-despite-future-data-privacy-laws-by-protecting-data-12d6d65f4848 | 2018-07-28 | 2018-07-28 00:34:47 | https://medium.com/s/story/u-s-could-still-win-ai-arms-race-despite-future-data-privacy-laws-by-protecting-data-12d6d65f4848 | false | 2,122 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | Tyler Jewell | Promoting civil liberties, preserving democracy, and decentralizing trust through technological advancements. | b01790f3141b | LibertyCrypto | 6 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 33b49288dc77 | 2018-08-13 | 2018-08-13 13:15:18 | 2018-08-15 | 2018-08-15 09:51:56 | 2 | false | ru | 2018-08-15 | 2018-08-15 09:52:21 | 12 | 12d74ca58af0 | 2.507862 | 7 | 0 | 0 | После листинга MATRIX AI Network на биржах Bitfinex и Ethfinex, имеющих большую ликвидность, теперь проще, чем когда-либо, присоединиться к… | 5 | Позитивные последствия листинга MATRIX AI Network на бирже Bitfinex
Following the listing of MATRIX AI Network on Bitfinex and Ethfinex, exchanges with high liquidity, it is now easier than ever to join the MATRIX community.
On August 9, 2018, MATRIX AI Network was officially listed on the Bitfinex and Ethfinex exchanges. The Bitfinex listing is especially notable because Bitfinex, a digital trading platform founded in 2012 and one of the best exchanges in the world, had been one of the top priorities in MATRIX’s development.
The listing followed a period of heightened engagement with the blockchain community, both in international markets and in China. In late July, MATRIX Chief Engineer Dr. Steven Deng held talks with many leading companies in China; Matrix CTO, Dr. Bill Li, spoke at the National Standards Formulation Conference for IT, Blockchain and Distributed Ledger Technology in Kunming, China; and MATRIX CEO Mr. Owen Tao and MATRIX Chief Chip Scientist Dr. Tim Shi visited Istanbul, Turkey.
According to Mr. Simon Han, senior manager at MATRIX AI Network, the Bitfinex listing means that “the community recognizes MATRIX as a strong, global blockchain project.”
It is hard to overstate how important listings on major exchanges are for a project like MATRIX. Regardless of the team’s technical skill or the project’s potential, community support remains the paramount key to success. Blockchain projects simply do not survive without community support. Mr. Han continues: “History is full of projects that failed due to a lack of liquidity, listings on popular exchanges and community support.”
It is therefore worth taking the time to look at important milestones through the eyes of the community. How does the Bitfinex listing help the MATRIX community?
Easy Accessibility
The MATRIX AI Network community is growing continuously, with more and more people joining it. Major exchanges such as Bitfinex play a vital role because they allow direct interaction with the community. Traders are likely to spend more time visiting an exchange that lists multiple cryptocurrencies, since people usually browse 4 to 15 coins per visit. By lowering barriers to entry and providing a degree of endorsement, a listing on multi-cryptocurrency exchanges can therefore lead to new levels of adoption and market influence.
Fiat Support
Fiat support is also an important feature of top exchanges. Without support for fiat currencies, traders are forced to buy BTC or ETH on other exchanges and then transfer them, paying the corresponding fees. Fortunately, Bitfinex has fiat trading pairs, including USD/MAN, as well as direct fiat withdrawal from the exchange.
Advanced Trading Features
Bitfinex has a wide range of tools available to experienced traders. According to a recent Encrybit survey, Bitfinex has the best user interface for technical analysis and trading. Beyond basics such as candlestick charts, there are features like Fibonacci tools, trend lines and trend indicators, as well as margin trading, short selling and other order types, including “fill or kill”, iceberg, OCO, stop orders, trailing stops and so on; every trader will find everything needed for sophisticated trading.
The listing of MATRIX AI Network on Bitfinex and Ethfinex benefits both the MATRIX project and the entire MATRIX community. MATRIX, of course, could not have done this alone, without the support of the community. Community support will also, without doubt, determine the future of MATRIX AI Network.
Mr. Han summed it up: “The Bitfinex listing gives future investors and partners more confidence in joining the MATRIX ecosystem. Together, we will achieve our ultimate goal of becoming the best blockchain project in the world.”
Please visit our official website matrix.io for essential information about MATRIX.
For the latest updates, join us on social media:
Website | Telegram | TelegramRU | Twitter | Reddit | Facebook | White Paper | White Paper RU
| Позитивные последствия листинга MATRIX AI Network на бирже Bitfinex | 300 | позитивные-последствия-листинга-matrix-ai-network-на-бирже-bitfinex-12d74ca58af0 | 2018-08-15 | 2018-08-15 09:52:21 | https://medium.com/s/story/позитивные-последствия-листинга-matrix-ai-network-на-бирже-bitfinex-12d74ca58af0 | false | 563 | Русский блог MATRIX AI NETWORK | null | null | null | MATRIX AI NETWORK-RUSSIAN | null | matrix-ai-network-russian | BLOCKCHAIN,CRYPTOCURRENCY | Matrixrussia1 | Matrix | matrix | Matrix | 651 | Natali | null | 76d80a9ad00e | ykatanis | 25 | 10 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | ae3382d2ca9f | 2018-05-17 | 2018-05-17 13:35:31 | 2018-05-17 | 2018-05-17 14:10:50 | 1 | false | en | 2018-05-17 | 2018-05-17 14:10:50 | 7 | 12d915aba1f2 | 4.271698 | 17 | 0 | 0 | Due to the numerous interesting projects they are based on and the ample hype that surrounds them, crowdsales are highly visible in the… | 5 | What to watch out for as a crowdsale supporter
Due to the numerous interesting projects they are based on and the ample hype that surrounds them, crowdsales are highly visible in the social media world, which means that potential supporters can be found at the global level. Eligma COO Ziga Toni has ample entrepreneurial experience, and will discuss some interesting and important aspects of being a crowdsale supporter.
Could you tell us something about your professional background, how you became part of the Eligma project and how the decision to organize a crowdsale came about?
I’ve been an entrepreneur for my entire life. I founded my first web development company ten years ago, and my passion for innovation and IT technologies has always driven me towards new challenges. I met Dejan Roljic, Eligma CEO, about two years ago through my business partner Ana Lukner (now Dejan’s wife, Ana Lukner Roljic). I really liked his way of thinking and our first opportunity for collaboration came about when Ana invited me to join ABC Silicon Valley as Business Development Officer. I spent one year in the Valley, which prepared me for what was about to follow: the Eligma chapter. The core team started working on the project full-time in September last year. When we had our project blueprints ready, making a crowdsale was a logical step as there is an obvious utilization of the ELI token and the project would not be possible without it.
Congratulations on a very successful conclusion of the Eligma crowdsale. The project raised 13,178 ETH. What were the expectations of the Eligma team and what are the projections for the future?
To begin with, the supporter is the decisive factor of any crowdsale. At Eligma, we are extremely proud that the crowdsale attracted as many as 1,668 supporters from 100 different countries. The Eligma crowdsale took place at a time of very serious market fluctuations, but it is a good example that even at times like this, good-quality products will resonate with the general public. We hope that this relationship and trust will now develop further, and that our followers will become users of our products.
This brings us to an important aspect of what to watch for as a potential crowdsale supporter. One dimension is connected with project quality and credibility.
Exactly. Before deciding on any kind of participation, obtaining information about the project is essential. The most immediate factors in one’s decision-making should be the white paper and the team. The project’s scale needs to be credible and have strong development potential with a well-outlined roadmap. The team needs to be verifiable, consist of experts and also rely on a strong advisor pool. All this needs to be presented on a professional website backed by a highly responsive service team who should always be prepared to answer any questions that you might have. The very essence of all this, however, is the idea that the project is based on. One’s motivation to support something should not be the desire for the token value to rise, but the potential of the idea presented. Is the product something that we believe in, and we ourselves would buy and use as customers? If it does not convince a supporter as something you would want, then it is likely that the product will not gain customers when it hits the market.
The other thing to watch for is the common-sense safety principles which everyone should follow when deciding to send money anywhere online.
The most important things to watch for in crowdsales are the following: First of all, only communicate through the official emails on the official crowdsale website. Please check the spelling of the email address you receive an email from literally letter by letter to make sure that it is legit. The crowdsale service team never sends you private messages with special propositions just for you, and does not accept participation after the end of the crowdsale. If you receive a sudden social media message leading you to an official-looking website and offering a fantastic special offer, chances are that this is a fraud. If it looks too good to be true, it usually is. There are a lot of phishing sites out there abusing the hype of the crowdsale process and hoping that you will become less attentive. Online transactions are fast and easy; this is useful, but also dangerous. If you are in doubt and something does not seem right, always trust your common sense.
What about after the crowdsale?
During the crowdsale, the hype is an important part of its momentum. There are lots of social media messages, videos, advertisements, and everything is extremely dynamic. This reflects the nature of the contemporary consumer, whose attention requires high levels of action. I feel, however, that crowdsale supporters should be much more demanding than this. This is why I again emphasize to get oneself informed and seriously think about the long-term potential of the project. Are enough people going to buy its products down the line?
Also, check out for the latest developments of the project and its roadmap after the crowdsale. A credible project will watch out for its community. They will want you to stick around and have you as their early believers and the foundation of their customer base. They will keep communicating and have a certain amount of hype going because they will want early supporters to spread the word. So once again — is their product something that could convince a critical mass with its usefulness?
What kind of plan does Eligma have in connection with its supporters now that the crowdsale is finished?
First of all, we believe that the community of our supporters is the highest recognition we could ever receive. They are the driving force of our project and social media life, which shows enthusiasm, trust and the fact that they want us to succeed. This is why I would once again like to thank all of them. Our mission is to transform commerce by making things easier and faster for all of us buyers, sellers and consumers, and we always keep asking ourselves what our customers would need and want. This is why we eagerly follow any communication from our community, and we are always happy to answer any questions at [email protected]. Only by working together can we build a different commerce world!
Join our community:
Web page|Facebook|Telegram|LinkedIn|Twitter|GitHub
| What to watch out for as a crowdsale supporter | 699 | what-to-watch-out-for-as-a-crowdsale-supporter-12d915aba1f2 | 2018-06-03 | 2018-06-03 05:37:44 | https://medium.com/s/story/what-to-watch-out-for-as-a-crowdsale-supporter-12d915aba1f2 | false | 1,079 | Eligma is a cognitive commerce platform of product based marketplaces creating a new way to discover, purchase, pay, stock and re-sell products using AI and Blockchain technologies. For early announcements and updates join our community on Telegram ➡️ https://t.me/eligma. | null | eligmacom | null | Eligma Blog | eligma-blog | ARTIFICIAL INTELLIGENCE,BLOCKCHAIN TECHNOLOGY,STARTUP,ECOMMERCE,TIME MANAGEMENT | eligmacom | Crowdsale | crowdsale | Crowdsale | 1,644 | Eligma | null | a57c61a2307d | urska_4363 | 132 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 2a4b53aedf | 2018-04-17 | 2018-04-17 14:04:10 | 2018-04-17 | 2018-04-17 14:28:13 | 11 | false | en | 2018-04-30 | 2018-04-30 14:11:09 | 10 | 12dbc78e665b | 5.066038 | 2 | 0 | 0 | The data management platform(DMP) team at Kyivstar serves more than 52 million customers across various markets in Eurasia. This means… | 4 | Want to Win? Be Willing to Dive Into Something New
The data management platform(DMP) team at Kyivstar serves more than 52 million customers across various markets in Eurasia. This means mountains upon mountains worth of data arrive on a daily basis.
Sounds cool, right?
For Sergiy, head of DMP engineering, it’s thrilling to have all this data to play with. It’s like being a kid in a candy store. Where are the chocolates by the way?
“We look for value in each row of data. There’s a lot to go through. We have to find innovative ways to dive into the data and deliver solutions for our customers and business,” states Sergiy.
What does diving into data look like? Watch the DMP team work — and then you’ll understand.
During our first talk with Sergiy, we learned about how awesome the DMP team is. Now, it’s time to learn a little more about winning in our endeavors. Read through our chat with Sergiy. It can teach you a lot.
Meet Sergiy and the DMP team
1. Why did you become an engineer?
In school, I really enjoyed mathematics. I didn’t have a computer in my home but I enjoyed solving equations. So, at university, I majored in programming and it all went from there.
Also, the capabilities of technology to improve lives motivated me to pursue a career in the field. For instance, Facebook, with its ability to quickly relay information and crowdsource problems, actually helps save lives across the world. That’s powerful — and I want to be a part of that.
2. Tell us more about your career.
I began as a programmer with Kyivstar. I later moved to Kyiv to work as a developer in the Business Intelligence (BI) unit for Kyivstar. Following that, I moved to revenue assurance before becoming head of the BI unit. While leading the BI function, we combined finance and data in exciting new ways. We were the first department in Kyivstar to show how value could benefit the business financially.
I worked for Kyivstar from the early 2000s up till 2016, when I joined another IT company. Back then, I worked for one of company clients in California for a few months. Taking that job taught me a lot about how data should be managed, analyzed, and used. I took the leap to do something different, and it paid off. I ultimately returned to Kyivstar with new ideas on how to take the company to new heights.
“ In general, it can be challenging to figure out how to use new technology. But we must learn how to employ new technologies and ideas. People say knowledge is power, but it’s the ability to apply knowledge that separates you from the pack.”
3. As the leader of the DMP team, what type of culture do you try to instill?
The first thing my team culture needs is passion. We must love what we do. Without a passion for the work, there’s no way we can be at our best.
Second, we try to instill a culture of accountability. When people take ownership of their work, they’re more likely to push through obstacles and get the job done. With such a culture, people take responsibility for their mistakes and make corrections.
Build a Culture of Accountability in 5 Steps
6 min read Opinions expressed by Entrepreneur contributors are their own. Recently, one of our clients asked us how she…www.entrepreneur.com
We also stress transparency and honesty across the DMP team. If we have a problem, we share it.
Also, we’re not afraid to test ideas and even fail. To fail is to gain knowledge. We take the lessons from failure and use them as an advantage for the next task.
How Planning To Fail Can Help You Succeed
In his 1999 bestseller, The Monk Who Sold His Ferrari , Robin Sharma tells a story of a group of monks in India who…www.fastcompany.com
Of course, instilling this sort of culture requires the right team. That’s why, apart from technical skills, I look for talents that are team players and have a willingness to learn. You must be honest and a good communicator to work on the DMP team. And you must have a sense of humor, too. Come with jokes to tell!
4. What’s your career advice for young programmers?
Great question! Here’s what I have to say:
Everything is possible. You just need the will to implement it. Try to find the right project where you want to develop your skills and gain experience. This will give you direction. Also, when you’re young, it’s difficult to understand what works well and what doesn’t. Don’t hesitate to try some different stuff. You’ll find your way with hard work and an open mind.
5. You said you have a young son. What has he taught you?
He tries to touch everything and combine things that don’t mix. In my job, we work on some things and we say these things can’t combine just because they’re not built a certain way.
Sometimes, though, my son finds ways to put two things together that weren’t ‘meant’ to be together. He has the imagination to connect things which aren’t connected.
Soak up the joys of the journey
Sergiy’s words of wisdom tell us a lot about how to get the most out of ourselves. We must be ready to learn, experiment with new ideas (even if we fail), and work with others. That’s how we get to the top.
During the journey, we should also take time to have fun. When Sergiy isn’t working, you may find him hanging out with his son, cooking food over an open fire, or listening to rock music.
“My family is quite large, so we have a lot of cousins. We have this cool tradition where all the cousins meet up every summer for a reunion party,” says Sergiy.
Now, that’s awesome. Take the words of Sergiy to heart. Make more out of your career and life.
| Want to Win? Be Willing to Dive Into Something New | 51 | want-to-win-be-willing-to-dive-into-something-new-12dbc78e665b | 2018-06-01 | 2018-06-01 11:21:11 | https://medium.com/s/story/want-to-win-be-willing-to-dive-into-something-new-12dbc78e665b | false | 998 | Meet our employees like never before | null | kyivstar | null | Kyivstar Careers | null | kyivstar-careers | CAREERS,TECH,MARKETING,DIGITAL MARKETING,SOCIAL MEDIA | twiykyivstar | Startup | startup | Startup | 331,914 | VEON Careers | At VEON, we know much of the world counts on us (10% and growing). We know that sitting down and being complacent with the status quo just isn’t an option. | 66801d700d70 | VEONCareers | 3,291 | 13 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-17 | 2018-05-17 10:22:20 | 2018-05-17 | 2018-05-17 10:25:51 | 0 | false | en | 2018-05-31 | 2018-05-31 10:59:27 | 2 | 12dcfd8e8b3a | 1.049057 | 0 | 0 | 0 | Summary of Fast AI’s second lesson | 2 | Improving Image Classification
Summary of Fast AI’s second lesson
Recap — learning rate, learning rate finder, annealing (stepwise, cosine), differential learning rate annealing
Finder — learning rate increases over time/iteration
Annealing — learning rate decrease over time/iteration
Data Augmentation (DA)—using different variations of a picture to train a model.
In a pre-trained CNN model, layer activations are pre-computed.
To take advantage of DA in a dataset, precompute should be set to false before running. Precompute usually needs to be run only once per dataset.
A frozen layer is a layer whose weights are not updated during training. Unfreeze it to train those weights and relearn the activations.
Stochastic Gradient Descent w/ Restarts (SGDR) — using cosine annealing to restart the learning rate every epoch (if cycle_len = 1) or every 2 epochs (if cycle_len = 2)
SGDR helps the optimizer jump out of local minima
More about SGDR here (blog) and here (fast wiki discussing sgdr h’prms)
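A plain-Python sketch of the SGDR schedule described above (the hyperparameter values are placeholders and cycle_mult is omitted for brevity; this illustrates the idea, not the fast.ai implementation):

# Cosine-annealed learning rate with a warm restart every `iters_per_cycle` steps.
import math

def sgdr_lr(iteration, iters_per_cycle, lr_max=1e-2, lr_min=1e-5):
    t = (iteration % iters_per_cycle) / iters_per_cycle  # position inside the cycle, 0..1
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

# With cycle_len=1 the cycle equals one epoch's worth of iterations.
for it in range(0, 300, 50):
    print(it, round(sgdr_lr(it, iters_per_cycle=100), 5))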
Test Time Augmentation (TTA) — makes predictions on original + 4 augmented images and takes the average of them for a final prediction
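Conceptually, TTA is just averaging predictions over augmented copies. Here is a generic sketch (not the library's own TTA call; model and augment are stand-ins for your own code):

import numpy as np

def tta_predict(model, image, augment, n_aug=4):
    versions = [image] + [augment(image) for _ in range(n_aug)]
    preds = np.stack([model(v) for v in versions])  # shape: (n_aug + 1, n_classes)
    return preds.mean(axis=0)

# Toy usage with a fake "model" and a horizontal flip as the augmentation.
fake_model = lambda img: np.array([img.mean(), 1 - img.mean()])
flip = lambda img: img[:, ::-1]
print(tta_predict(fake_model, np.random.rand(8, 8), flip))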
Tiny detour into the FastAI library — built on top of PyTorch — and its origins
REVIEW — Easy Steps to train a world-class image classifier (a hedged fastai-style code sketch follows this list)
* Enable data augmentation, and precompute=true
* Use lr_find() to find the highest learning rate where loss is still clearly improving
* Train last layer from precomputed activations for 1–2 epochs
* Train last layer with data augmentation (precompute=false) for 2–3 epochs with cycle_len=1
* Unfreeze all layers
* Set earlier layers to 3x-10x lower learning rate than next higher layer
* Use lr_find() again
* Train full network with cycle_mult=2 until over-fitting
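Roughly, the checklist translates into the following fastai v0.7-style code as used in the course notebooks. Treat the exact function names, signatures and the PATH/sz placeholders as assumptions drawn from the lessons rather than a guaranteed, current API:

import numpy as np
from fastai.conv_learner import *   # fastai v0.7-style star import used in the course

PATH = 'data/myimages/'             # placeholder: folder with train/ and valid/ subfolders
sz, arch = 224, resnet34

tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
learn = ConvLearner.pretrained(arch, data, precompute=True)   # step 1

learn.lr_find()                     # step 2: highest lr where loss is still falling
learn.fit(1e-2, 2)                  # step 3: last layer only, precomputed activations
learn.precompute = False
learn.fit(1e-2, 3, cycle_len=1)     # step 4: data augmentation now takes effect (SGDR)

learn.unfreeze()                    # step 5
lrs = np.array([1e-4, 1e-3, 1e-2])  # step 6: differential learning rates per layer group
learn.lr_find(lrs / 1000)           # step 7
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)  # step 8: train until over-fitting starts

log_preds, y = learn.TTA()          # test-time augmentation for final predictions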
| Improving Image Classification | 0 | image-classification-12dcfd8e8b3a | 2018-05-31 | 2018-05-31 10:59:28 | https://medium.com/s/story/image-classification-12dcfd8e8b3a | false | 278 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Andrew Chacko | null | 50467d1ba62b | andrewchacko | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-10 | 2017-12-10 21:13:29 | 2017-12-10 | 2017-12-10 21:44:53 | 1 | false | en | 2018-06-19 | 2018-06-19 20:59:24 | 0 | 12e117c3a539 | 1.283019 | 14 | 1 | 0 | Like pancakes and dates this is my first post so set expectations accordingly… | 5 | On Pancakes and first dates…
mmm…burnt
Like pancakes and dates this is my first post so set expectations accordingly…
I believe engineers feel an obligation to question everything and push back on dogmatic structures and encourage thought and change. My own life led me down this path of engineering- having faced abuse and transience in my childhood. There was a lot of chaos growing up, my sister was sent to an outward bound program and my dad ending up in jail and later dying there. I turned to school as a coping mechanism instead of drugs. I did not want to be a statistic. Transmuting hardship into the creative, converting innocence into wisdom, and holding tight to hope has always been my job. It was survival.
I have 3 degrees at 24; in Electrical Engineering, Computer Science, and a Masters in Deep Learning/Robotics and Cryptography. I am not a prodigy, but hard knocks earlier in life have made me skilled at keeping an eye on the periphery. In an uncertain environment, where I did not know when the next negative thing would happen, I had to be vigilant. I switched schools 2nd semester of senior year of college due to family issues. I believe that with skill, tenacity, humility and grit, we are in a position to build and capitalize on fleeting and rare opportunities. Be scrappy. There are always opportunities if you keep your eyes open.
I am stubborn, but it is good to be stubborn. I earned my stubbornness; and I have kept on standing every time I am knocked down. There is no singular path to success — success is the inner knowing it takes to hold tight to a goal and listen. Work hard.
| On Pancakes and first dates… | 119 | love-friendship-and-engineering-12e117c3a539 | 2018-06-19 | 2018-06-19 20:59:25 | https://medium.com/s/story/love-friendship-and-engineering-12e117c3a539 | false | 287 | null | null | null | null | null | null | null | null | null | Life | life | Life | 283,638 | Maddie P | VC Investor and Data Scientist | 13631c0273d6 | madison11_11 | 48 | 58 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-10 | 2017-12-10 16:33:45 | 2017-12-11 | 2017-12-11 11:41:54 | 0 | false | zh-Hant | 2017-12-11 | 2017-12-11 11:41:54 | 8 | 12e59e2217b1 | 0.950943 | 6 | 0 | 0 | 恩,十一月初忘了看 Kaggle 十月的網誌了。總共四篇: | 3 | 十月 Kaggle 官方網誌摘要
Well, at the beginning of November I forgot to check Kaggle’s official blog for October. There were four posts in total:
A sentiment analysis tutorial in R
An interview with the first-place winner of the Amazon rainforest competition
Interviews with the September Dataset Publishing Awards winners
The release of Kaggle’s 2017 State of Data Science & Machine Learning survey report
Sentiment Analysis Tutorial in R
Data Science 101: Sentiment Analysis in R Tutorial
Welcome back to Data Science 101! Do you have text data? Do you want to figure out whether the opinions expressed in it…blog.kaggle.com
This is a tutorial published by a Kaggle engineer that does sentiment analysis on a database of U.S. State of the Union addresses from 1989 to 2017. Besides the official blog post, there is also a Kernel on Kaggle that you can run directly or fork for practice. It mainly uses an R package called tidytext; since I don’t write R, I wrote my own Python version, and comparing the two, R’s syntax feels a bit magical to me. The post ends with three exercises, other language-related datasets on Kaggle, and some external data resources.
My Python version of the kernel.
Interview with the Amazon Rainforest Competition Winner
Planet: Understanding the Amazon from Space, 1st Place Winner's Interview
In our recent Planet: Understanding the Amazon from Space competition, Planet challenged the Kaggle community to label…blog.kaggle.com
I won’t introduce the competition itself here; first place went to a Kaggler named bestfitting. bestfitting joined Kaggle less than two years ago and has entered ten competitions; his worst result is 61st (a top-4% silver medal), and over the past year his worst finish is 7th, which is quite scary.
As for the solution, it ensembles 11 CNNs of four kinds (ResNets, DenseNets, Inception, SimpleNet), and the post includes an architecture diagram that is easy to understand at a glance. All the basic data preprocessing was done, plus an extra haze removal technique to make the images clearer; it helps for some labels and hurts for others, but since the ensemble picks the models that help, the overall score still improves.
On evaluation: because the competition is scored with F2Score, a lower logloss does not guarantee a higher F2Score, so he wrote his own evaluation code for this part. bestfitting also noticed dependencies between certain labels, so he added a Ridge regression on top of the CNN outputs to capture those relationships and boost overall performance. The hardware was a TitanX.
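For reference, the competition's F2 metric can be reproduced with scikit-learn's fbeta_score, where beta=2 weights recall more than precision. A toy multi-label example, not bestfitting's code:

import numpy as np
from sklearn.metrics import fbeta_score

y_true = np.array([[1, 0, 1, 1],
                   [0, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 1],
                   [0, 1, 1, 1]])

# 'samples' averaging scores each image's label set, then averages over images,
# matching the per-image F2 averaging the Planet competition used.
print(fbeta_score(y_true, y_pred, beta=2, average="samples"))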
Finally, he recommends reading cs229 and cs231n, and reading and implementing papers every day.
September Dataset Publishing Awards Winners’ Interview
September Kaggle Dataset Publishing Awards Winners' Interview
This interview features the stories and backgrounds of our $10,000 Datasets Publishing Award's September winners…blog.kaggle.com
The first-place dataset is text data on the religious ideology of ISIS; it is not large, but it is a very topical dataset. Second place went to an MBTI 16-personality-type dataset for personality analysis, built by scraping the posts of 8,600 people on a related forum and labeling them with personality types. The third-place dataset is a bit larger: UK accident data and traffic volumes from 2000 to 2016, a relatively popular topic with more people interacting with it.
Kaggle’s 2017 State of Data Science & Machine Learning Report
Introducing Kaggle's State of Data Science & Machine Learning Report, 2017
In 2017 we conducted our first ever extra-large, industry-wide survey to captured the state of data science and machine…blog.kaggle.com
This is the survey Kaggle ran on Kagglers this year. Besides the final report, the raw data and the Kernels used to process it have also been published, so go take a look!
#Kaggle
#MachineLearning
#DataScience
#planet_understanding_the_amazon_from_space
| 十月 Kaggle 官方網誌摘要 | 6 | 十月-kaggle-官方網誌摘要-12e59e2217b1 | 2018-05-13 | 2018-05-13 04:43:10 | https://medium.com/s/story/十月-kaggle-官方網誌摘要-12e59e2217b1 | false | 252 | null | null | null | null | null | null | null | null | null | Kaggle | kaggle | Kaggle | 520 | Rick Liu | null | 17e7bf7173a1 | drumrick | 159 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-29 | 2018-03-29 09:37:05 | 2018-04-03 | 2018-04-03 17:01:00 | 3 | false | en | 2018-04-03 | 2018-04-03 17:01:00 | 9 | 12e6861211e8 | 3.829245 | 4 | 0 | 0 | It was widely reported back in January that smart speakers are now the fastest growing consumer technology trend. Thanks to aggressive… | 5 | Will Alexa ever dream of electric sheep?
Image: Adam Bowie
It was widely reported back in January that smart speakers are now the fastest growing consumer technology trend. Thanks to aggressive device discounts on Black Friday and in the run up to Christmas, downloads of Amazon Alexa and Google Home partner apps topped App Store charts on Christmas day — suggesting that Trivial Pursuit may have become the world’s second favorite festive Q&A.
Both assistants are equipped with enough witty retorts (ask Alexa how much John Snow knows, or how much wood a Woodchuck really can chuck) to satisfy the most relentless comedy quizmaster, yet we can’t help but feel a little sorry for those devices barely out of their wrapping paper before they were subjected to sherry-slurred interrogations. Keyword spotting — the technical term for how smart speakers listen out for relevant words to respond to — sure ain’t for the fainthearted.
But what’s happened to all those smart speakers since? Have they grown to become cherished members of their respective households or were they dismissed as a jolly novelty and packed away along with the half-finished games of Monopoly?
In a bid to uncover the secret life of smart speakers, we partnered with Northstar to survey over 2,000 consumers across the US, Asia Pacific and European regions on how they use their voice assistants and their hopes for the future of these devices. The resulting report, New Electronic Friends, reveals some fascinating insights into how smart speakers are used now, and the role they may play in our future…
Satisfaction is high, yet devices aren’t used to their full potential
Three quarters of voice assistant owners are happy with the way their smart speaker handles what they ask of it. Yet despite the increasingly impressive capabilities manufacturers are enabling within smart home ecosystems, the majority of devices are simply used to play songs or control TV — particularly in the US. When it comes to more advanced smart home features such as automation, though, only 30 per cent of smart speakers have been given the power to switch on the lights, control room temperature or monitor other appliances. Perhaps this will change as accessories such as smart power sockets, climate control systems and light fixtures become more widespread and affordable.
Users have an expectation for conversation
Smart speakers provide a perfect introduction to the capabilities of voice control in a consumer-friendly package. It seems people are waking up to how useful this is; around half of voice assistant users would prefer this method over physical interaction with any device — including laptops and smartphones. Could we be hurtling towards a future in which mice, keyboards and touchscreens are secondary input devices and the predominant form of human-machine interaction is voice?
Cloud connectivity is a key frustration
Despite what their name suggests, existing smart speakers really aren’t that smart; everything you say is uploaded to the cloud for natural language processing (NLP) and machine learning (ML) analysis. This need for an internet connection is by far the biggest frustration users have with voice assistants, whether it’s simply the delay in response while a reply is formulated remotely or legitimate security and privacy concerns. Given the option, over half of users would prefer that devices store and compute personal information (including everything learned about the user) locally — and only send non-sensitive information to the cloud.
This is currently far more easily said than done. While the required processing power and data storage certainly exists, its inclusion within a consumer device would put the price point far out of most consumers’ reach. Yet new technologies are already in development that will serve as catalysts for this vision: Project Trillium, announced by Arm in February 2018, is designed from the ground up to enable even the smallest consumer devices to become thinking machines capable of machine learning (ML) that interact naturally with us and their environment.
We’re already imagining a future of possibility together
While there are undoubtedly some frustrations and concerns among consumers (a significant minority went as far as to worry that voice assistants may one day become more powerful than humans), these demonstrate how we are beginning to look past the current limitations of voice technology to a vision of a future enriched by inherently intelligent devices, interacting with us in the most human of ways. The majority believe that in the next ten years, conversing with a voice assistant will be indistinguishable from speaking to a human.
Part of ‘being human’ will be the ability for a device to understand and infer emotion as well as strive for self-betterment. In Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, the only thing distinguishing androids from humans is empathy. Yet near-future science fiction movies such as Her and Robot and Frank imagine voice assistants that adapt empathically to their human partner’s personality, activities and emotions to such a degree that they evolve to take on new names and personalities themselves. Could we really be only a decade away from that vision?
Download the full survey for free and discover more about the growing role smart speakers are playing in our daily lives here.
| Will Alexa ever dream of electric sheep? | 86 | will-alexa-ever-dream-of-electric-sheep-12e6861211e8 | 2018-04-07 | 2018-04-07 14:17:48 | https://medium.com/s/story/will-alexa-ever-dream-of-electric-sheep-12e6861211e8 | false | 869 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Arm | Arm architects the pervasive intelligence. Arm-based chips and device architectures orchestrate the performance of the technology. | d2a431310d7a | arm | 45 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | # Train model (full-batch: the whole graph is fed every step; X = node features, A = adjacency matrix)
validation_data = ([X, A], Y_val, idx_val)  # idx_* are boolean node masks used as sample weights
model.fit([X, A],
          Y_train,
          sample_weight=idx_train,   # only the masked training nodes contribute to the loss
          epochs=epochs,
          batch_size=N,              # N = number of nodes, i.e. a single full-graph batch
          validation_data=validation_data,
          shuffle=False,             # the graph must stay intact, so no shuffling
          callbacks=[es_callback, tb_callback])
# Evaluate model
eval_results = model.evaluate([X, A],
                              Y_test,
                              sample_weight=idx_test,   # mask selecting the held-out test nodes
                              batch_size=N,
                              verbose=0)
| 2 | 8dc8c99fa1c1 | 2018-09-10 | 2018-09-10 07:56:51 | 2018-09-14 | 2018-09-14 06:14:25 | 5 | false | ja | 2018-09-14 | 2018-09-14 06:14:25 | 10 | 12e7458f31fb | 7.095333 | 8 | 0 | 0 | Graph Convolutionを自然言語処理に応用するための調査Part3となります。 | 2 | Graph Convolutionを自然言語処理に応用する Part3
This is Part 3 of my survey on applying Graph Convolution to natural language processing.
In Part 3, we step back from theory and implementation for a moment and organize the kinds of tasks that deal with graph structures. Concretely, we lay out the tasks and the input and output (prediction) of each one. The goal is to be able to think about which kind of "graph task" the problem we want to apply this to (natural language processing) can be converted into. For Part 2, which covers theory and implementation, please see here.
Types of tasks on graph structures
Tasks that deal with graph structures can be organized as follows.
From Structured deep models: Deep learning on graphs and beyond, p. 12
Node Classification: classify the class of each node in the graph
Graph Classification: classify the class of the graph as a whole
Link Prediction: estimate whether two given nodes are connected.
Edge Classification/Relation Classification: estimate the relationship between two nodes (parent-child, sibling, etc.)
Relation Extraction appears to be a term for relationship-estimation tasks in general, covering both Link Prediction and Edge Classification.
For node and edge tasks there are the notions of transductive and inductive. Predicting for data that is present in the training data but unlabeled is transductive; predicting for data that is not in the training data at all is inductive.
The transductive setting looks very strange from the standpoint of ordinary machine learning. But when working with graphs, the entire graph structure has to be fed into the network in the first place. When we classified the citation network (CORA) in Part 2, you can confirm that the network fed in for training/validation/test is the same [X, A]. That is a transductive setting.
Transductive learning is therefore something like "filling in missing data." In practice, though, we often want to make predictions for nodes or edges that have never been seen before. That is the inductive setting.
In the inductive setting the information about neighboring nodes is not fixed, so we sample. The representation of the target node is estimated from the sampled neighbors. GraphSAGE (Inductive Representation Learning on Large Graphs) and Revisiting Semi-Supervised Learning with Graph Embeddings are representative methods of this kind.
GraphSAGE: Inductive Representation Learning on Large Graphs
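A stripped-down sketch of the inductive idea, in the style of a GraphSAGE mean aggregator: sample a fixed number of neighbors, average their features, and combine them with the target node's own features. The weights here are random and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(6, 4))                    # 6 nodes, 4-dim features
adjacency = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0, 5], 4: [1], 5: [3]}
W_self, W_neigh = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))

def embed(node, n_samples=2):
    neigh = rng.choice(adjacency[node], size=n_samples, replace=True)
    h_neigh = features[neigh].mean(axis=0)            # aggregate sampled neighbours
    h = features[node] @ W_self + h_neigh @ W_neigh   # combine self + neighbourhood
    return np.maximum(h, 0)                           # ReLU

print(embed(0))  # works for any node, even one added after "training"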
The tasks described so far are the traditional graph tasks. In recent years there has also been research on learning latent representations of graphs and on generating graph structures.
Structured deep models: Deep learning on graphs and beyond
Learning latent representations of graphs is useful because structural information about the data can be embedded. There is research that uncovers the physical structure generating observed motion (Neural Relational Inference for Interacting Systems), and research that builds latent representations from user and item connections and uses them in recommender systems (Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks).
Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks
For graph generation, research topics include generating chemical structures with specific properties. Drug discovery is an extremely costly process, so applications there may well advance going forward.
MolGAN: An implicit generative model for small molecular graphs
We have seen that a wide variety of tasks are possible on graph structures. However, the fact remains that the entire graph has to be fed in when these methods are applied. Sentences and chemical compounds are still on the small side, but something like the network of user connections on a social media service has an uncountable number of nodes. How should that be handled?
How to handle large-scale graphs
For applying Graph Convolution to large-scale graphs, there is a very instructive piece of work.
PinSage: A new graph convolutional neural network for web-scale recommender systems
Ruining He | Pinterest engineer, Pinterest Labsmedium.com
This work applies Graph Convolution to Pinterest's Pin network, which has no fewer than 3 billion nodes. The base is GraphSAGE, introduced above, but by drawing neighboring nodes from importance sampling it avoids having to hold the entire graph in memory. The "importance" comes from running several random walks and weighting each node by how often the walks reach it.
Obtaining nodes via importance sampling has three benefits (a minimal sketch of the random-walk weighting follows this list).
The entire node set no longer has to be held in memory
Using the top-N important nodes fixes the number of neighbors at N, which makes the computation easier to optimize
The importance of each neighboring node can be taken into account
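A toy version of that random-walk weighting (my own sketch of the sampling idea, not PinSage itself): run short random walks from the target node and use visit counts as importance weights, keeping only the top-N neighbors.

import random
from collections import Counter

graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0], 4: [2]}

def importance_neighbours(start, n_walks=200, walk_len=3, top_n=2):
    visits = Counter()
    for _ in range(n_walks):
        node = start
        for _ in range(walk_len):
            node = random.choice(graph[node])
            if node != start:
                visits[node] += 1
    total = sum(visits.values())
    return [(n, c / total) for n, c in visits.most_common(top_n)]

print(importance_neighbours(0))  # e.g. [(2, 0.4...), (1, 0.3...)]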
Beyond that, the paper is packed with know-how for running Graph Convolution at production level, such as splitting the work so that sampling runs on CPUs and convolution on GPUs, and distributing processing with MapReduce.
In Part 3 we organized the tasks that deal with graphs, and also looked at how to cope when the graph is large. We now have a grasp of what can be done with graph structures and what methods exist for scaling them.
In Part 4, we will finally apply Graph Convolution to natural language data.
| Graph Convolutionを自然言語処理に応用する Part3 | 9 | graph-convolutionを自然言語処理に応用する-part3-12e7458f31fb | 2018-09-14 | 2018-09-14 06:14:26 | https://medium.com/s/story/graph-convolutionを自然言語処理に応用する-part3-12e7458f31fb | false | 123 | Quick & Short programming tips like soda! | null | null | null | programming-soda | null | programming-soda | PROGRAMMING,PYTHON,MACHINE LEARNING | icoxfog417 | Machine Learning | machine-learning | Machine Learning | 51,320 | piqcy | All change is not growth, as all movement is not forward. Ellen Glasgow | c4da6a508 | icoxfog417 | 277 | 35 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-14 | 2018-06-14 18:11:44 | 2018-06-14 | 2018-06-14 18:25:11 | 1 | false | en | 2018-06-14 | 2018-06-14 18:25:11 | 0 | 12e764ee2732 | 3.483019 | 2 | 0 | 0 | In my 12th grade English Literature class, my teacher returned an essay I wrote with B.S. written in red, bold letters across the top of… | 5 | Let’s Bring More B.S. to the Finance Industry — Here’s How
In my 12th grade English Literature class, my teacher returned an essay I wrote with B.S. written in red, bold letters across the top of it. Though incredulous and humiliated at that moment, I noticed others with the same discomfort on their faces. We seemingly had all come to the same conclusion — our teacher thought so poorly of our work that profanity was the only grade she cared to give us. The pain in the room was palpable as our teacher explained that many of us made bold assertions which lacked basic supporting evidence in our essays. She demanded that our essays in the future should “Be Specific.”
How is this story relevant to Abele Group?
In recent years investors have given the financial industry a grade of “Be Specific” on the prevailing regime of fees and “best practices.” Investors want to know why the value of their portfolios perpetually lag the broader market indices. They desire greater transparency on the fine print of agreements and investment processes. Lastly, investors require more diverse products that provide exposure to traditional and non-traditional assets.
As the Digital Economy emerges and transforms every industry, blockchain technology and Artificial Intelligence will radically alter the financial industry.
The application of these two technologies will produce greater data security, reduce an over reliance on latent “middlemen,” increase transparency regarding the movement of funds, and most importantly place the “concept of trust” in an organized process rather than in people or institutions. The financial industry must radically transform and widely adopt cutting edge technology. These are a few questions that the broader market and you should ask as token usage increases:
Should the world’s largest retailers become the largest banks in 3–5 years?
How many billions in coins have been lost or stolen?
Can investors have real-time visibility in a secure fashion of their managed investments?
Can less well-off investors have access to invest in early stage technology companies or “sophisticated” financial products?
Currently investors and regulators give the ICO market a grade of “Be Specific.” A successful ICO does more than raise a substantial sum of money. While most projects offer investors an opportunity to fund cutting edge development in innovative technologies, a few ICOs come to market lacking substantive business plans or real transparency on the economics. The Abele Group fully embraces the explicit “concept of trust” in processes that decentralization intends to achieve. We intend to accelerate the benefits of decentralization in an investor friendly manner through this ICO and in the conduct of the Abele Group’s affairs.
The Abele Group recognizes this dramatic inflection point for the financial economy and has a unique solution. We are ready to successfully navigate the winds and ride the waves of change. Our plan consists of four distinct phases which will each radically transform the financial industry and deliver significant value to you with your participation in our token sale.
In Phase 1, we will create Abele Trust, a digital custodian, which stores digital coins/tokens for retail and institutional clients.
In Phase 2, we will create a fourth generation hybrid blockchain, Abele Chain.
In Phase 3, we will create Abele Trade Solutions (AbeleTS), which is a blockchain and artificial intelligence (AI) supported trade management software product accessible through an interactive user interface (UI).
Lastly, in Phase 4, we will manage a portfolio of traditional and digital assets utilizing a hedge fund structure. Implementing each phase of our business plan requires extensive use of blockchain technology and some AI. The final size of our completed token offering will determine the schedule of the rollout for the four phases.
In effect, the Abele Group will level the playing field for smaller investors by providing them access to early stage technology companies and alternative investments placing them on equal footing with larger investors. The Abele Group will transform the financial industry by enhancing security of digital storage and transmission, eliminating archaic “middleman” technologies, providing global investors greater transparency on their investments, and establishing systematic confidence with market participants in a secure trade management process.
The Abele Group will lead the way in taking the industry’s grade from “Be Specific” to A+. Abele Trust will end the days of storing digital assets on a multi-signature wallet utilizing a thumb drive, in “cold storage” on the night stand, or leaving all your digital assets with an exchange. AbeleTS will eliminate excessive operational expenses due to inefficient technology and trade infrastructure. All investors will prosper in an environment in which funds no longer need to charge high fees to support an inefficient technology infrastructure. Abele Asset Management will provide investors access to sophisticated traditional and non-traditional trading strategies.
The Abele Group will provide token holders with access to complex investments and breakthrough technologies — made possible by the innovation and application of blockchain. This offering will harness the democratic uniqueness of Tokens as a decentralized asset to transform the financial industry. We will lead this industry further into the blockchain and digital era with your investment.
I invite you to navigate the winds and ride the waves of change with us.
Phil A. Woods
– Phil Woods, Managing Partner
| Let’s Bring More B.S. to the Finance Industry — Here’s How | 6 | lets-bring-more-b-s-to-the-finance-industry-here-s-how-12e764ee2732 | 2018-06-17 | 2018-06-17 15:11:02 | https://medium.com/s/story/lets-bring-more-b-s-to-the-finance-industry-here-s-how-12e764ee2732 | false | 870 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | Abele Group | We're a fintech group using blockchain and AI to modernize finance. Our first project - ABELE TRUST - is a bank for cryptocurrencies and digital assets. | 9b513abb5f92 | abelegroup | 72 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-26 | 2018-02-26 10:52:38 | 2018-02-26 | 2018-02-26 11:14:58 | 3 | false | en | 2018-03-28 | 2018-03-28 12:13:37 | 3 | 12e8a5c26f2f | 2.466981 | 1 | 0 | 0 | Global construction of datacenters will reach a high level in 2018, as leading Internet firms, including Google, Amazon, Web Service… | 5 | [TrendForce View] Datacenters Proliferating in 2018
Global construction of datacenters will reach a high level in 2018, as leading Internet firms, including Google, Amazon Web Services, Facebook, Microsoft Azure, and Alibaba, have major projects in the pipeline.
The investment craze will be fueled by the rollout of a brand new Intel Xeon processor, the version with the biggest overhaul in 10 years, generating robust datacenter demand that will be a major growth driver for server shipments during the year.
New Datacenter demands sustain server shipment growth
Aggressive datacenter projects of leading Internet firms resulting from the transformation of the industry will generate massive demands for servers, which have integrated the functions of smart end market devices and IoT.
Distribution of global datacenters
In 2017, there were 390-plus hyper-scale datacenters worldwide, each consisting of hundreds of thousands or even several millions of servers, carrying out computing, storage, and other online functions.
Amazon/AWS, Microsoft Azure, IBM, and Google boast the largest numbers of hyper-scale datacenters, each exceeding 45. The U.S. accounted for 44% of the world’s hyper-scale datacenters, followed by China with 8%, Japan and the U.K., each with 6%, and Germany and Australia, with 5% each.
Figure: Distribution of Hyper-Scale Data Centers
AWS is a frontrunner in this field, boasting operations in 18 geographic areas, plus 49 areas where its services are available. Moreover, it has announced plans to add 12 service-available areas and four geographic areas in Bahrain, Hong Kong, and Sweden, on top of the second AWS GovCloud (Government Cloud Platform) in the U.S.
In China, in compliance with the Chinese regulation banning independent engagement in cloud-end services by foreign enterprises, AWS has entrusted Sinnet to run its global cloud-end services in Beijing, after selling part of its infrastructure facilities to the latter. In the same vein, under the assistance of AWS, another cloud-end data service provider, Ningxia Western Cloud Data Technology, kicked off operation recently, offering mutual backup support to Sinnet’s service, which is separate from other datacenters of AWS worldwide.
Alibaba Cloud, under Alibaba Group, launched its datacenter in Mumbai, India, in Jan. 2018, aiming to serve small and medium enterprises in India, which will be followed by the inauguration of a datacenter in Jakarta, Indonesia, in March. Presently, Alibaba Cloud’s services are available in 33 areas worldwide, including China, Hong Kong, Singapore, Japan, Australia, the Middle East, Europe, the eastern and western regions of the U.S., and India.
Figure: Distribution of AWS cloud-end datacenters
Source: Topology Research Institute, 2018/01
Yellow Circle: Number of geographic areas and service-available areas
Cyan Circle : New areas (ready for launch)
TrendForce, a world leading market intelligence provider, covers various research sectors including DRAM, NAND Flash, SSD, LCD display, LED, green energy and PV. The company provides the most up-to-date market intelligence, price survey, B2B platform, industry consulting service, business plan and research report, giving the clients a firm grasp of the changing market dynamics. For the latest research please check our newsroom or follow our twitter.
| [TrendForce View] Datacenters Proliferating in 2018 | 1 | trendforce-view-datacenters-proliferating-in-2018-12e8a5c26f2f | 2018-03-28 | 2018-03-28 12:13:38 | https://medium.com/s/story/trendforce-view-datacenters-proliferating-in-2018-12e8a5c26f2f | false | 508 | null | null | null | null | null | null | null | null | null | Cloud Computing | cloud-computing | Cloud Computing | 22,811 | TrendForce Corporation | null | b018c10c1ed7 | TrendForce | 9 | 107 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-18 | 2018-09-18 05:06:38 | 2018-09-28 | 2018-09-28 18:03:41 | 6 | false | en | 2018-10-04 | 2018-10-04 16:46:49 | 1 | 12e8d395f72 | 4.414151 | 1 | 0 | 0 | Pattern recognition is just a fancy word for stereotyping | 5 | Stereotyping, or as Computer Scientists like to call it - Nearest Neighbors
Let’s start off with a small thought bubble today.
You’re outside a Chinese grocery store, a Burrito place, and a German deli. You’ve never been inside any of them so you honestly have no idea what each one sells. Now, you see a woman walk by carrying a box of mooncakes. Which store do you think the woman just came out of?
This is a mooncake. (Meme stolen from @memes.hk at http://www.instagram.com/memes.hk)
I hope you said the Chinese grocery store. And why do you think that? You’re stereotyping. No need to be ashamed, we all do. (Of course, what you stereotype about is a whole different topic.) Anyways, you realize (hopefully) that mooncakes are Chinese desserts and therefore, associated with the store which sells Chinese stuff.
Now, is it not possible that the burrito place or the German deli sold the mooncakes? It’s definitely possible. Maybe the German owner decided to take a baking class and learnt how to make mooncakes. Or maybe the burrito place can’t afford to hire burrito makers since they recently unionized and now have to settle with people who only know how to make mooncakes. The point is, however, the mooncakes have the highest probability of being sold in the Chinese grocery store.
We can make a lot of inferences from stereotyping. And perhaps politically incorrect, but many of them are true. That is the whole premise of today’s machine learning algorithm: Nearest Neighbor.
Nearest Neighbor
Nearest Neighbor (NN) looks at the data point you want to classify and finds which existing, already-classified data point is most similar to it.
I made this data up, so it’s probably wrong. But let’s just assume old, rich folks are mostly Republicans for educational purposes.
Look at the diagram above. If you are someone who isn’t cursed with the Red-Blue color blindness, you should notice that most Republicans are older and have a higher annual income. So let’s start by stereotyping old, rich folks to be Republicans. As you can tell, this may not necessarily be the case, but according to the diagram MOST old, rich folks are Republican. That is, there is a higher probability that an old, rich folk is a Republican rather than a Democrat. And vice versa for Democrats.
Sarah is the green dot. (Poor color-blind people, I pity you. 😢)
Now, you just met Sarah on Tinder. After using what you’ve learnt in the previous article from this series, you were able to secure a date!!! However, you want to make sure she’s date-able material first.
She’s 25 years old and a grad student (aka broke af). She’s the green dot! You want to find out whether she’s a Democrat or Republican and you can’t just go up and ask. That’s rude. So, what you (and the Nearest Neighbor algorithm) did was find the person most similar to her in age and income (i.e. her nearest neighbor) and check whether that person is a Democrat or Republican. From the diagram, her nearest neighbor is a blue dot. So now you’re going to stereotype and assume she is a Democrat. Looking back at the data, you see that she does appear to be in a position where there is a higher probability of her being a Democrat, so stereotyping here seems reasonable.
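Here is a minimal sketch of that single-nearest-neighbor idea in Python; the training points, the income scale and the use of plain Euclidean distance are all made up for illustration.

import math

# (age, annual income in $1000s, party) -- made-up training data
voters = [
    (68, 150, "Republican"),
    (72, 180, "Republican"),
    (24, 30, "Democrat"),
    (29, 45, "Democrat"),
    (35, 60, "Democrat"),
]

def nearest_neighbor(query, data):
    """Return the label of the single closest training point (Euclidean distance)."""
    def distance(point):
        return math.hypot(point[0] - query[0], point[1] - query[1])
    return min(data, key=distance)[2]

# Sarah: 25 years old, broke grad student (roughly $20k a year)
print(nearest_neighbor((25, 20), voters))  # -> Democrat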
K-Nearest Neighbors
Now, the question comes: What if she’s closer to one of the red dots in the Young and Broke region? Wouldn’t she then be classified as a Republican, even though we know she has a higher probability of being a Democrat?
She got younger and apparently now, she’s a Republican. Oh no!!
And the answer would be a resounding YES, at least with just Nearest Neighbor! This is now where the evolved form of Nearest Neighbor shows up: K-Nearest Neighbors.
If Nearest Neighbor was a Charmander, K-Nearest Neighbors (K-NN) would be a Charizard. K-NN looks at NOT JUST the nearest neighbor, but also the other nearby neighbors. How many nearby neighbors? K nearby neighbors. That is, if K = 5, the 5 nearest neighbors to Sarah would be considered in determining whether Sarah is a Republican or a Democrat.
The “weighted” classification of the top 5 nearest neighbors are Democrats.
We assume that neighbors nearer to Sarah have a higher likelihood of representing Sarah’s political views.
So, because the Republican is the closest to Sarah in terms of age and annual income, we will consider this neighbor’s political views a little bit more. But we also ask the other 4 nearest neighbors to Sarah. Astonishingly, the following 4 nearest neighbors are Democrats.
Even though these 4 neighbors are further from Sarah than the Republican (and therefore, individually they have a lesser likelihood of matching with Sarah’s political views), the number of neighbors identifying as Democrats outweighs 1 outlier Republican. So, we (and K-NN) will assume that Sarah is a Democrat, even though a Republican is nearest to her.
What is Weighting? (Optional, read if you want to be smarter, or more confused 🤷)
We call this weighting. We might value the political views of neighbors closer to Sarah a bit more, but neighbors further away from Sarah (when summed together) might outweigh some of the closer neighbors.
Say I give 5 points to the 1st nearest neighbor, 4 points to the 2nd nearest neighbor, 3 points to the 3rd nearest neighbor, etc.
Republican = 5 (the 1st nearest neighbor)
Democrat = 4 + 3 + 2 + 1 = 10 (the 2nd, 3rd, 4th, and 5th nearest neighbors)
The Democrats still have more points because they have a larger number of neighbors in the top 5 neighbors, even though none of them individually has more points than the Republican who is the 1st nearest neighbor.
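A small Python sketch of that rank-based voting, reproducing the 5-4-3-2-1 point scheme above (the neighbor labels mirror the example; everything else is illustrative):

from collections import defaultdict

def weighted_knn_vote(neighbor_labels):
    """neighbor_labels: labels of the k nearest neighbors, ordered nearest first.

    The nearest neighbor gets k points, the next one k-1 points, and so on.
    Returns the winning label and the score table.
    """
    k = len(neighbor_labels)
    scores = defaultdict(int)
    for rank, label in enumerate(neighbor_labels):  # rank 0 is the nearest
        scores[label] += k - rank
    return max(scores, key=scores.get), dict(scores)

# Nearest neighbor first: one Republican, then four Democrats
labels = ["Republican", "Democrat", "Democrat", "Democrat", "Democrat"]
print(weighted_knn_vote(labels))  # -> ('Democrat', {'Republican': 5, 'Democrat': 10})

Even with the largest single score, the lone Republican is outvoted by the four Democrats, which is exactly the outcome described above.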
| Stereotyping, or as Computer Scientists like to call it: Nearest Neighbors | 1 | stereotyping-or-as-computer-scientists-like-to-call-it-nearest-neighbors-12e8d395f72 | 2018-10-04 | 2018-10-04 16:46:49 | https://medium.com/s/story/stereotyping-or-as-computer-scientists-like-to-call-it-nearest-neighbors-12e8d395f72 | false | 918 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | AYSY | AI grad student during the day, broke during the night. Makes memes as a side hustle. | 38ea80b2ded8 | wd50 | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-30 | 2017-10-30 20:32:14 | 2017-10-30 | 2017-10-30 20:35:57 | 1 | false | en | 2017-11-01 | 2017-11-01 01:42:23 | 11 | 12e9395686b1 | 2.864151 | 0 | 0 | 0 | Permed hair, Guess jeans, Michael Jackson and the Care Bears―remember the 1980s? That’s what used to come to mind when I heard the words… | 3 | Fear, Uncertainty & AI in Marketing
Permed hair, Guess jeans, Michael Jackson and the Care Bears―remember the 1980s? That’s what used to come to mind when I heard the words artificial intelligence. That era―with expert systems, the Fifth Generation Computer Project, MIT’s Marvin Minsky and the Lisp programming language―was my last frame of reference for AI. For several decades we’ve been in a quiet period referred to as an AI winter. But about five years ago, the winter came to an end. Springtime for AI is here and the robots are at work among us.
I’m guessing it may have something to do with the massive computing and networking power that’s become available and affordable to support end-to-end AI applications. Maybe more powerful algorithms, too. Whatever the drivers, today research in self-learning software, neural networks and quantum computing is all the rage. AI salaries are going through the roof. There’s still fear and uncertainty but no longer much doubt about AI’s viability.
And AI is already impacting B2B and B2C technology marketing. It’s providing new ways of communicating with customers, understanding their needs and behavior, and uncovering strategic opportunities. AI is taking data gathering and analytics to a whole new level to better target, pitch, anticipate, satisfy, protect and take orders from people. Later AI impacts, however, have some people very worried.
Today and Tomorrow
Today’s AI is of a simple sort. We talk to digital assistants like Siri and Alexa and Cortana and get answers and help with simple tasks. Other AI systems diagnose our illnesses, drive our cars, identify us from within a crowd, teach us, uncover selling and upselling opportunities for marketers and show us how to do many things more efficiently and cheaper. Robots with machine intelligence assemble products and fetch them from Amazon warehouse shelves. Digital bots provide customer service online, at kiosks and on mobile devices. AI software elements empower digital assistants to make calendar appointments based on what’s in our inboxes. The use cases for AI are expanding weekly.
Tomorrow’s AI―where AI systems mimic and then surpass the intelligence of humans―won’t be here until mid-century, believes a recent group of AI experts. But it’s already freaking some people out; like Elon Musk and Bill Gates, who are warning of the possibility (or inevitability) that “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” as Stephen Hawking wrote.
Cue the scary organ music.
New AI Tools for B2B and B2C Tech Marketing
Whatever your take on AI, however, it’s already here. There are efforts like OpenAI to try to ensure that all AI development is public and has plenty of regulatory oversight. Perhaps at some point governments will intervene. But perhaps not.
In the meantime, we can anticipate how AI is impacting what we do in marketing. AI will increasingly provide us with new tools based on enhanced data mining, machine intelligence and conversational behavior that enable businesses to go beyond what you and I can do, including:
Automation: Automation of data collection from massive and diverse sources
Enhanced insight: The ability to make decisions, respond to opportunities and predict future outcomes based on that data
Communication: The ability for intelligent systems to communicate interactively and intelligently with customers
Personalization: Using data from multiple sources over time to cater to customers on a personal basis
Will the end of work occur in the middle of this century? Will everyone retire and play golf and Canasta and live happily ever after? Who knows?
Lately, most of us seem to be amused and fatalistic about AI as it bubbles up into our consciousness. The October 23 cover of The New Yorker features an illustration of robots heading down the street clutching coffee cups and smartphones, walking robot dogs and giving spare change to a homeless human. Last weekend I went to a wedding where a robot performed the ceremony while another one presented the rings.
So I’m wondering: The next time I report on the latest developments of AI in B2B technology marketing, will I write the blog or will Zaphod, my virtual AI assistant do it?
Gene Knauer is a B2B technology content marketing writer based in the San Francisco Bay Area.
| Fear, Uncertainty & AI in Marketing | 0 | fear-uncertainty-ai-in-marketing-12e9395686b1 | 2018-04-14 | 2018-04-14 18:22:51 | https://medium.com/s/story/fear-uncertainty-ai-in-marketing-12e9395686b1 | false | 706 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Gene Knauer | null | f5bf28e038a | geneknauer | 13 | 27 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 6823acbc2743 | 2018-03-23 | 2018-03-23 15:58:22 | 2018-03-23 | 2018-03-23 16:09:46 | 1 | false | en | 2018-03-23 | 2018-03-23 16:09:46 | 2 | 12e99f80348a | 2.626415 | 5 | 0 | 0 | The Robotina platform helps people control electricity usage and cut expenses. It helps power providers allocate energy more efficiently… | 5 | About Robotina’s Marketplace
The Robotina platform helps people control electricity usage and cut expenses. It helps power providers allocate energy more efficiently. These factors will help reduce pollution. But these are not the only benefits Robotina’s creators had in mind.
Robotina will also have its own marketplace — an organized business environment available to both users (subscribed to platform’s services) and community members (anyone who holds ROX tokens).
The marketplace will be the platform’s central business district, with all ROX transactions taking place within it. Community members will have an opportunity to collaborate with each other by participating in a variety of business initiatives — read more about them here.
In the first phase of the platform’s implementation, the marketplace will support the following functions:
eStore — IoT devices and software developed by Robotina will be available for members to purchase online.
Native benefits — The platform will allow users and members to trade stored power and sell data to power companies
Cooperatives — Users and members will be able to make business proposals and organize themselves in groups.
Business — Companies can offer innovative services based on smart contracts (blockchain).
Social — Robotina will undertake initiatives to improve the community’s quality of life.
Further development of the marketplace will be funded by subscriptions to the platform’s services, commissions earned from business transactions, power-company savings, and different ways of selling data. This will facilitate many kinds of collaboration to benefit end-users: business to business (B2B), business to consumer (B2C), consumer to business (C2B) and consumer to consumer (C2C).
This flexible marketplace will generate extraordinary opportunities for organizations, individuals, and the community as a whole. The platform’s cognitive AI algorithms will be constantly improved to enhance current businesses and create new profitable opportunities.
Automatic negotiating with Brokers
Brokers represent an extremely important part of the Robotina platform and an example of intelligent algorithm implementation. Robotina’s experts have created smart programs that can negotiate and exchange data with the outside world — so-called “Brokers.” Brokers operate according to parameters that are constantly adjusted and improved by extensive implementation of machine learning and artificial intelligence.
Brokers will be used for automatic online negotiations with the businesses and other players in the Robotina platform, including:
Network operators
Power providers and users
Data users (including equipment and service suppliers who need specific data)
Customer analytics
Product manufacturers
Brokers will dynamically seek the best deals for participating users. The community will earn a commission from every transaction negotiated by the Broker as a reward for active cooperation.
Social networking with the Community Book
The platform will provide a unique, interest-based social networking service to participating users and other entities. Users will be able to divulge data, status updates, achievements, and messages from connected IoT devices and entities. They will have an option to make this information publicly accessible or to share it only with a selected group.
The Community Book will allow users to comment, exchange messages, give advice, and share experiences related to achieving energy efficiency, reducing costs, or becoming self-sufficient.
Users will also be able to express their interest in buying or selling equipment. Suppliers can send them offers fine-tuned for them and their requirements. In all cases, the user will enjoy full authority over what is published and when.
The Robotina Community Book will be the first social network encompassing people, things, entities, and businesses. It will allow new types of social interactions and contribute to efficient social and business processes.
Footprints
The Robotina platform will constantly calculate and update data related to significant “Footprints” such as carbon presence, water quality, device performance, and users’ experiences. These Footprints will be used as Key Performance Indicators allowing users, IoT devices, companies, and sites to distinguish themselves.
Intrigued? Join us on our journey to a greener, more energy-efficient future at www.robotinaico.com.
| About Robotina’s Marketplace | 168 | about-robotinas-marketplace-12e99f80348a | 2018-04-09 | 2018-04-09 04:19:05 | https://medium.com/s/story/about-robotinas-marketplace-12e99f80348a | false | 643 | Robotina is currently in the crowdsale phase for Robotina IoT Platform. | null | robotinaico | null | Robotina ICO | robotina-ico | BLOCKCHAIN,CRYPTOCURRENCY,TOKEN SALE,ICO,ENERGY MANAGEMENT SYSTEM | robotinaICO | Blockchain | blockchain | Blockchain | 265,164 | Robotina | ⚡️Future of energy ⚡️ #Blockchain enabled green energy platform. SAVE ELECTRICITY. SAVE MONEY. SAVE THE PLANET. | 2d786eba2516 | robotinaico | 135 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-04-03 | 2018-04-03 13:44:52 | 2018-04-04 | 2018-04-04 05:15:59 | 1 | false | en | 2018-08-25 | 2018-08-25 14:04:49 | 12 | 12ead11ef8d1 | 1.392453 | 6 | 0 | 0 | AI/ML Courses, Conferences and Meetups | 4 | AI/ML Learning Resources Newsletter — April 2018 Edition
I study AI/ML, organize several tech community groups in the greater Seattle area, and taught Android at UW. So I’m always interested in the learning resources on AI/ML: books, courses, conferences and meetups. I will be writing newsletters periodically to share with you these learning resources which are recently announced or upcoming in the near future. [Read my other newsletters: May 2018 Edition and 2017 Edition.]
MOOCs
2/28/18 Google launched Learn with Google AI, a collection of ML resources including the Machine Learning Crash Course with TensorFlow API. It’s a course that has been used internally by Google engineers and is now open to the public.
3/27/18 Udacity announced School of AI with this tag line: “AI is the defining technology of our time”. It has the following Nano degrees at the time of the announcement:
AI Programming with Python
Deep Learning
Computer Vision
Natural Language Processing
Deep Reinforcement Learning (upcoming later this year)
4/2/18 Microsoft announced its Professional Program in AI. It provides learning general AI learning resources, as well as how developers can use Microsoft Cognitive Services for computer vision and NLP etc.
Conferences
3/27/18 Udacity’s Intersect 2018. Here is the link to the conference website. You can watch the videos on YouTube.
3/30/18 TensorFlow Dev Summit. Read my blog post Learnings from Dev Summit 2018 for more details about the conference.
4/10 to 4/13/18, AI NextCon. Speakers include Jeff Dean and Francois Chollet. Registration here http://aisv18.xnextcon.com/.
Meetups
I organize / co-organize a few meetup groups in the greater Seattle area. Here are a few recent ones and upcoming events:
1/27/18 TensorFlow and Deep Learning without a PhD: CNN, by Martin Gorner
3/10/18 TensorFlow and Deep Learning without a PhD: RNN, by Martin Gorner
4/2/18 TensorFlow Dev Summit Extended by GDG Seattle.
4/19/18 Data Science in Tableau, a meetup organized by Seattle DAML (Data/Analytics/ML).
| AI/ML Learning Resources NewsLetter — April 2018 | 7 | ai-ml-learning-resources-newsletter-april-2018-12ead11ef8d1 | 2018-08-25 | 2018-08-25 14:04:49 | https://medium.com/s/story/ai-ml-learning-resources-newsletter-april-2018-12ead11ef8d1 | false | 316 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Margaret Maynard-Reid | Google Developer Expert for ML | TensorFlow & Android | ed2f34822130 | margaretmz | 1,101 | 168 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-09 | 2017-09-09 19:38:50 | 2017-09-09 | 2017-09-09 19:48:22 | 4 | false | en | 2017-09-09 | 2017-09-09 20:19:26 | 1 | 12ec6fd95d5c | 2.069811 | 4 | 0 | 0 | For more go to | 5 | Fuzzy Logic
For more go to
http://developercoding.com/fuzzy/
Boolean logic is represented either in 0 or 1, true or false but fuzzy logic is represented in various values ranging from 0 to 1. For example, fuzzy logic can take up values like 0.1, 0.3, 0.6, 0.8, 1, etc.
Let’s take up a real-life example:
Let’s say we want to recognize that the color of the flower is red or not.
In the following image, we can say that the flower is red.
In this following image, we can say that the flower is not red.
But what about this image, is it red or yellow?
Now, I am sure that you want to say it is partially red, or 40% red, or 60% red, etc. This is the type of ability we want to give to computers, so that they can say how much of a given feature is present.
The core idea behind Fuzzy logic is that it gives degrees of membership. It helps in recognizing more than simple true and false values. Using fuzzy logic, propositions can be represented with degrees of truthfulness and falsehood.
Range of logical values in Boolean and Fuzzy logic
The traditional or classical set is also known as a crisp set. It contains objects which satisfy precise properties of membership. On the other hand, a fuzzy set contains objects that satisfy imprecise properties of membership.
Therefore, Fuzzy logic is a superset of conventional (Boolean) logic that has been enhanced to fit the concept of partial truth where truth values lie between "completely true" and "completely false".
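As a tiny illustration of degrees of membership, here is a sketch in Python. The linear ramp and the 50 m / 300 m thresholds for a "high" tower are arbitrary values chosen only for the example.

def membership_high(height_m, low=50.0, high=300.0):
    """Degree (0..1) to which a tower of `height_m` meters counts as 'high'.

    A crisp set would return only 0 or 1; this fuzzy version ramps linearly
    between the two made-up thresholds.
    """
    if height_m <= low:
        return 0.0
    if height_m >= high:
        return 1.0
    return (height_m - low) / (high - low)

for h in (30, 100, 200, 350):
    print(h, "->", round(membership_high(h), 2))  # 0.0, 0.2, 0.6, 1.0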
Linguistic variables and hedges
A fuzzy variable is sometimes called a linguistic variable. For example, "the tower is high" implies that the linguistic variable "tower" takes the linguistic value "high".
When writing fuzzy rules in fuzzy expert systems, we use fuzzy variables. For example,
IF the wind is strong
THEN sailing is easy
For a variable, the universe of discourse is the range of values it can take.
Linguistic Hedges
Linguistic hedges are fuzzy set qualifiers that can be attached to linguistic variables. They are adverbs such as very, less, somewhat, more, etc.
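One common convention (not the only one) models hedges as simple transformations of the membership value, for example treating "very" as squaring and "somewhat" as taking a square root; a short sketch, reusing the 0.6 membership from the tower example above:

import math

def very(mu):
    """Concentration: high memberships stay high, low ones drop further."""
    return mu ** 2

def somewhat(mu):
    """Dilation: moderate memberships get boosted."""
    return math.sqrt(mu)

mu = 0.6  # degree to which a 200 m tower is 'high'
print(very(mu), round(somewhat(mu), 2))  # 0.36 0.77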
For more go to
http://developercoding.com/fuzzy/
| Fuzzy Logic | 56 | for-more-go-to-http-developercoding-com-fuzzy-12ec6fd95d5c | 2018-05-27 | 2018-05-27 03:45:24 | https://medium.com/s/story/for-more-go-to-http-developercoding-com-fuzzy-12ec6fd95d5c | false | 363 | null | null | null | null | null | null | null | null | null | Fuzzy Logic | fuzzy-logic | Fuzzy Logic | 34 | Chetan Ruparel | Happy to code things that could change the world as we know it. | e3335ecf82cb | developercoding.com | 13 | 49 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-09 | 2018-02-09 04:18:41 | 2018-02-09 | 2018-02-09 05:48:22 | 2 | false | en | 2018-02-09 | 2018-02-09 05:51:09 | 3 | 12ecc390d10f | 4.251258 | 19 | 1 | 1 | I bet you can easily accelerate your program by 10x by adopting CUDA. But that 10x is far from the end of the story. A fully optimized CUDA… | 5 | Some CUDA concepts explained
I bet you can easily accelerate your program by 10x by adopting CUDA. But that 10x is far from the end of the story. A fully optimized CUDA code could give you a 100x boost. To write highly optimized CUDA kernels, one needs to understand some GPU concepts well. However, I found some of the concepts are not being explained well on the internet and can easily get people confused.
Those concepts are confusing because
Some terminologies were borrowed from the CPU program. But they are in fact not the same concept as their CPU origins.
Some terminologies were invented from the hardware’s point of view, i.e. to describe an actual hardware module or component. Some other terminologies were invented for the software side of things, they are abstract concepts that don’t exist physically. But these concepts are mixed together. And you do need to understand both the software and hardware sides to optimize CUDA code.
I hope I could clarify some of the CUDA concepts with this post.
A GPU is formed by multiple units named SMs (Streaming Multiprocessors). As a concrete example, the GPU Titan V has 80 SMs. Each SM can execute many threads concurrently. In the case of Titan V, the maximum concurrent thread count for a single Titan V SM is 2048. But these threads are not exactly the same as the threads run by a CPU.
These threads are grouped. And a thread group is called a warp, which contains 32 threads. So a Titan V SM can execute 2048 threads, but those 2048 threads are grouped into 2048 / 32 = 64 warps.
Where these threads differ from CPU threads is that CPU threads can each execute a different task at the same time, whereas GPU threads in a single warp can only execute the same task. For example, suppose you want to perform 2 operations, c = a + b and d = a * b, and you need to perform these 2 calculations on lots of data. You can’t assign one warp to perform both calculations at the same time. All 32 threads must work on the same calculation before moving on to the next one. The data the threads process can be different, for example 1 + 2 and 45 + 17, but they all have to work on the addition before moving on to the multiplication.
This is different from CPU threads, because if you have a powerful enough CPU to support 32 threads simultaneously, each of them can work on a separate calculation. Therefore, the concept of the GPU thread is akin to the SIMD (Single Instruction, Multiple Data) feature of CPU.
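A rough CPU-side analogy of this one-instruction-many-data-items behavior can be sketched with NumPy (this is just an analogy, not GPU code): one operation is applied to all 32 elements together, then the next operation is applied to all 32.

import numpy as np

# 32 data pairs, one per "thread" in a warp-sized group
a = np.arange(32, dtype=np.float32)
b = np.arange(32, dtype=np.float32) * 2

c = a + b   # every "lane" does the addition in this step
d = a * b   # only then does every "lane" do the multiplication

print(c[:4], d[:4])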
source: https://i.ytimg.com/vi/4EzaVmD_tKc/hqdefault.jpg
The best real life analogy of GPU threads in a warp I found is the above. I don’t know if you have had similar experience before. In high school, when we got punished to copy words for not being able to finish homework, for example, we tended to bound a group of pens together in a vertical row so that we can write multiple copies of the same content at the same time. That helped us finish it quicker.
As powerful as the school trick is, holding multiple pens isn’t turning you into a mighty octopus who has many tentacles and can perform multiple tasks at the same time. This is the major difference between CPU threads and GPU threads.
source: http://www.leavemetomyprojects.com/wp-content/uploads/2012/02/Multitasking-Octopus.jpg
Why is this important? Because when you launch a GPU program, you need to specify the thread organization you want. And a careless configuration can easily impact the performance or waste GPU resources.
From the software’s point of view, GPU threads are organized into blocks. Block is a pure software concept that doesn’t exist in the hardware design. Unlike the physical thread organization, the warp. Blocks don’t have a fixed number of threads. You can specify any number of threads up to 1024 within a block, but that doesn’t mean any thread number will perform the same.
Consider we want to perform 330 calculations. One natural way is launching 10 blocks, and each block works on 33 calculations with 33 threads. But because every 32 threads are grouped into a warp. To finish the 33 calculations, 2 warps == 64 threads are involved. In total, we will be using 640 threads.
Another way is launching 11 blocks of 32 threads. This time, each block can fit into a single warp. So in total, 11 warps == 352 threads will be launched. There will be some waste, but it won’t be as much as the first option.
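The warp-rounding arithmetic behind those two options can be checked directly; a throwaway Python sketch that just reproduces the numbers from the paragraph above:

import math

WARP = 32

def threads_launched(total_work, block_size):
    """Threads occupied when `total_work` items are split into blocks of
    `block_size` threads, with each block rounded up to whole warps."""
    blocks = math.ceil(total_work / block_size)
    warps_per_block = math.ceil(block_size / WARP)
    return blocks * warps_per_block * WARP

print(threads_launched(330, 33))  # 10 blocks * 2 warps * 32 = 640 threads
print(threads_launched(330, 32))  # 11 blocks * 1 warp  * 32 = 352 threads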
Another thing that needs to be considered is the number of SMs, because each block can only be executed within one SM; a block can’t be processed by more than one SM. If the workload is very large, i.e. we have lots of blocks, we could use up all available SMs and still have remaining work to do. In this case, we have to launch part of the work as a first batch and then finish the remaining work in following batches. For example, a Titan V has 80 SMs. Suppose we have a workload that requires 90 SM-sized chunks to finish. We would have to launch a batch of 80 first and then launch the remaining work on 10 SMs as a second batch. But during that second batch, 70 SMs sit idle. A better way is to adjust the workload for each SM so that each one does less work and finishes sooner. You now need 160 SM-sized chunks in total. You still need to launch 2 batches of calculations, but because each batch finishes quicker, the overall run time goes down.
Lastly, if you are familiar with NVIDIA’s marketing terms, a GPU’s power is often measured by the number of CUDA cores. But when you learn CUDA programming, you probably seldom see it as a programming concept. A CUDA core is not a warp or a thread; it is a single execution unit (an FP32 lane) inside an SM. Titan V has 64 of these per SM, which gives 80 (SMs) * 64 (cores per SM) = 5120 CUDA cores. The warp arithmetic, 80 (SMs) * 2048 (threads) / 32 (threads per warp) = 5120, happens to give the same number because a Titan V SM also supports up to 64 resident warps, matching its 64 cores per SM.
| Some CUDA concepts explained | 60 | some-cuda-concepts-explained-12ecc390d10f | 2018-06-02 | 2018-06-02 09:18:51 | https://medium.com/s/story/some-cuda-concepts-explained-12ecc390d10f | false | 1,025 | null | null | null | null | null | null | null | null | null | Gpu | gpu | Gpu | 604 | Shi Yan | Software engineer & wantrepreneur. Interested in computer graphics, bitcoin and deep learning. | 2f9df7d2fecb | shiyan | 1,019 | 43 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 435d0cd839de | 2018-09-12 | 2018-09-12 03:59:05 | 2018-09-12 | 2018-09-12 04:02:41 | 4 | false | en | 2018-09-12 | 2018-09-12 04:02:41 | 7 | 12ee03c14ebb | 4.601887 | 0 | 0 | 0 | Fintech is a terminology that was given to products which functions cover two aspects: finance and technology. | 4 | The rise of FinTech Mobile Applications
FinTech is the term given to products that span two areas: finance and technology.
It is a new label in the market, and it is here to stay. Why? Because many companies are already adopting these changes, and traditional bankers are worried that the growth the FinTech industry has generated over the last few years will reshape market share. The statistics show how fast the sector is burgeoning: ten years ago global investment was $928 million, while today it is $2.97 billion, roughly triple the amount.
FinTech has widened access to financial services, especially investment. Data gathered with the help of Artificial Intelligence covers a huge number of people at a lower cost and makes investment advice easy to obtain. Asset levels that were difficult to reach before are now open to everyone. Lending is a case in point: earlier, FinTechs depended on limited data sets, which meant turning down a lot of people or charging them higher interest rates.
Read more: Savvycom and TomoChain team up in a strategic worldwide partnership
With FinTech in place, however, the options are plentiful, especially in financial sectors like investment, saving and banking. It is also changing the way insurance is purchased, so that many more options are available to consumers.
1. Different types of companies based on FinTech:
All of us want maximum convenience in life, so mobile payment is no surprise anymore. According to a prediction by eMarketer, e-transaction payments will reach around US$27 billion in a year. That is why there are so many types of e-wallet choices, such as MobiKwik, Google Wallet, etc. The innovation is happening not only at the front end but also behind the scenes.
Peer-to-peer lending - Companies like Funding Circle, Zopa and MarketInvoice have lent money to individuals and businesses, creating a new class of P2P investors. As P2P slowly gains popularity, people are also considering new and different alternatives to the established financial sector. This may take a large portion of the market in the coming years.
Mobile payments - Mobile payment has become the trend of the new market, allowing people to pay bills through their mobile phones. FinTechs such as SumUp and Square have reduced the need to rush to the bank every time money is needed.
Money transfers - Companies like TransferWise and Kantox have brought a peer-to-peer model to currency exchange based on market rates. Earlier, banks acted as the middleman for changing currency (necessary for international transfers), and that service came at a set cost for completing the process.
Trading platforms - In the early days, people used to go to banks for investments and funds. After the arrival of FinTechs like Nutmeg, the task became easier: they offer online platforms at a cheaper rate than the alternatives, and some services even provide research-backed platforms that recommend specific funds and stocks.
FinTech applications | Think with Google
Mobile is increasingly playing the role of facilitator: for millions of people across the globe the smartphone is the gateway to the internet and has become the primary way to interact with other enterprises. Alongside all the opportunities there are challenges that need to be addressed, such as using FinTech solutions properly to build a fruitful collaboration between the mobile and finance sectors.
Mobile has become the most prominent medium for accessing the internet because of three key factors.
Online payment is more safe and secure and can be disabled whenever needed.
Provides easy and convenient solutions without any extra effort. For example, without disturbing the security, password and codes are reduced.
Provides a global solution for the authenticated and e-payment market.
A survey by EY found that half of consumers use services related to money transfer and payments, and around one fourth use FinTech for insurance. Around 20% use it for investment and savings, and 10% for purchasing and financial planning.
2. Technologies playing a key role in FinTech.
Machine Learning and Artificial Intelligence have expanded their reach into the financial industry. With that, the sector recognized the need to improve customer service and started working to fill the gaps. AI has automated processes such as data analysis, customer service and internal/external communication, and it has simplified tasks such as fraud detection with the help of chatbots, mobile applications and so on.
Deep learning is based on the human nervous system | coMakeIT
Data analysis and cloud computing help predict customer requirements and needs, which is why the financial sector is getting so much benefit from the technology. FinTech mobile applications have also raised the level of competition, so to stand out, companies are providing more secure, user-friendly solutions to their customers.
Cloud computing connects everything| Advantage Services
Social platforms are helping the financial sector. How? Don't be surprised. Since many consumers use social media to gather information about finance, it makes sense for financial players to be present on these sites for trading-related purposes. That does not mean transactions happen on social platforms, but they do offer insight into finance. Social media has become part of people's lives, and social trading platforms let them share thoughts, queries, concerns and so on.
Biggest social media platforms | Digiterati
3. The horizon is still expanding
Digital transactions have a huge opportunity in the coming years, and the people who have been learning AI are in the best position to meet the upcoming demand. Mobile FinTech applications help not only with transactions but also with capturing data; for instance, the mobile camera can work like an optical card reader for collecting a customer's details.
__
Author bio: Renu Bisht is a doyen of digital content who builds good relationships for enterprises and individuals. Renu specialises in digital marketing, cloud computing and web design, and offers other valuable IT services to organisations, helping to improve them by delivering solutions to their business problems.
| The rise of FinTech Mobile Applications | 0 | the-rise-of-fintech-mobile-applications-12ee03c14ebb | 2018-09-14 | 2018-09-14 00:28:31 | https://medium.com/s/story/the-rise-of-fintech-mobile-applications-12ee03c14ebb | false | 1,034 | Top 30 Mobile App Developers. | null | savvycom | null | Savvycom | savvycom-software | TECHNOLOGY,SOFTWARE DEVELOPMENT,BLOCKCHAIN TECHNOLOGY,ARTIFICIAL INTELLIGENCE,IOT | savvycom | Fintech | fintech | Fintech | 38,568 | Jimmy Pham | Freelancer at https://goo.gl/ZcD7ob | 975a2c8f357b | jimmypham258 | 3 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | d0c08339b41f | 2017-11-21 | 2017-11-21 23:34:08 | 2017-11-21 | 2017-11-21 23:40:30 | 2 | false | en | 2017-11-21 | 2017-11-21 23:40:30 | 21 | 12f03c767c71 | 1.836164 | 0 | 0 | 0 | TWiML Talk 68 | 5 | Bridging the Gap Between Academic and Industry Careers with Ross Fadely
TWiML Talk 68
We close out our NYU Future Labs AI Summit interview series with Ross Fadely, a New York based AI lead with Insight Data Science. Insight is an interesting company offering a free seven week post-doctoral training fellowship helping individuals to bridge the gap between academia and careers in data science, data engineering and AI.
Subscribe: iTunes / SoundCloud / Google Play / Stitcher / RSS
Ross joined me backstage at the Future Labs Summit after leading a Machine Learning Primer for attendees. Our conversation explores some of the knowledge gaps that Insight has identified in folks coming out of academia, and how they structure their program to address them. If you find yourself looking to make this transition, you’ll definitely want to check out this episode.
Thanks to Our Sponsor
This Future Labs AI Summit series is brought to you by our friends at the NYU Future Labs AI Nexus Lab. Future Labs offer the businesses of tomorrow a network of innovation spaces and programs that support early stage startups in New York City. Acting in a venture catalyst role, they accept high potential companies and build on their success with an approach that emphasizes success rate, sustainability, revenue, and growth. We appreciate the continued support of the FutureLabs, and encourage you to check out some of the great stuff they are working on at futurelabs.nyc.
TWiML Online Meetup
The details of our next TWiML Online Meetup have been posted! On Wednesday, December 13th, at 3pm Pacific Time, we will be joined by Bruno Gonçalves, who will be presenting the paper “Understanding Deep Learning Requires Rethinking Generalization”. If you’re already registered for the meetup, you should have already received an invitation with all the details. If you still need to register for the meetup, head over to twimlai.com/meetup to do so. We hope to see you there!
About Ross
Ross on Linkedin
Ross on Twitter
Mentioned in the Interview
Insight Data Science
Insight Artificial Intelligence Fellows Program
FutureLabs AI Summit Series Cohort 2
NYU Future Labs AI Summit
NYU Nexus Labs Series Cohort 1
TWiML Talk #20 — Kathryn Hume
TWiML Talk #21 — Ruchir Puri
TWiML Talk #22 — Matt Zeiler
TWiML Events Page
TWiML Meetup
TWiML Newsletter
| Bridging the Gap Between Academic and Industry Careers with Ross Fadely | 0 | bridging-the-gap-between-academic-and-industry-careers-with-ross-fadely-12f03c767c71 | 2018-02-26 | 2018-02-26 16:23:52 | https://medium.com/s/story/bridging-the-gap-between-academic-and-industry-careers-with-ross-fadely-12f03c767c71 | false | 385 | Interesting and important stories from the world of machine learning and artificial intelligence. #machinelearning #deeplearning #artificialintelligence #bots | null | twimlai | null | This Week in Machine Learning & AI | this-week-in-machine-learning-ai | MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DEEP LEARNING,PODCAST,TECHNOLOGY | twimlai | Machine Learning | machine-learning | Machine Learning | 51,320 | TWiML & AI | This Week in #MachineLearning & #AI (podcast) brings you the week’s most interesting and important stories from the world of #ML and artificial intelligence. | ca095fd8e66c | twimlai | 292 | 33 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-08-13 | 2017-08-13 07:21:44 | 2017-09-10 | 2017-09-10 08:01:01 | 5 | false | en | 2017-09-10 | 2017-09-10 09:35:44 | 46 | 12f045a4bd2 | 8.527673 | 8 | 0 | 0 | Are Elon Musk, Bill Gates and Stephen Hawking right about dangers of AI? | 5 | The AI Cataclysm and Other Conspiracy Theories
You’ve probably heard that Elon Musk said that AI is more dangerous than North Korea.
Not only him. Stephen Hawking and Bill Gates have also voiced their serious concerns that the continuing development of AI technologies at this rate will soon lead to our collective demise. So what do we have here? Three famous intellectuals: Stephen Hawking is the best living physicist, Bill Gates is well known for his Windows operating system, Elon Musk has brought us PayPal, Tesla cars and reusable rockets… — and they are all saying that AI is the bane of our future.
That sounds serious. Since they are all such valued professionals, we should believe them, right? All the other researchers in the field of AI would certainly agree with them, and so should we. Right? Erm — no... Actually they don’t. Which is completely unsurprising, since the big three have no credentials in the field of AI. Like Kevin Grandia said:
When I think I’m having chest pains I don’t go to the dermatologist, I go to a cardiologist because it would be absurd to go to skin doctor for a heart problem.
So whom should I trust more on the issues of AI — a guy that used to make operating systems, or someone who is actually working on AI research for a living?
Allow me to be blunt and compare this at least to the famous Petition Project, which claims that 30,000 US scientists agree that human-caused climate change is not happening. It’s just that only 0.1% of those are actually climate scientists, and that poll represents a biased sample of a very small population of scientists. In fact, 97% of relevant experts agree that climate change is real, and caused by us humans.
I could be evil and compare that to Gwyneth Paltrow suddenly becoming an expert on nutrition, or worse (giggles). Anyway… let’s get back to the subject…
The main point of comparing those statements with quackery of Gwyneth Paltrow-level is that a modern intellectual, if he wants to be taken seriously in his statements, should provide arguments beyond “I’m successful, you should listen to me”. Arguments that we would all expect those three would have provided. Have they? It’s hard to say.
I’ve spent some time searching for their original statements, but all I can find is the Open Letter on Artificial Intelligence which is alleged to be the underlying basis for their claims. The letter was co-signed by other scholars who do have AI research credentials. However, the Letter is rather general and hand-wavy. It is titled “Research Priorities for Robust and Beneficial Artificial Intelligence” and the word “danger” doesn’t even appear in it.
That certainly didn’t prevent Elon Musk from tweeting that AI is “vastly more risky than North Korea”, or Bill Gates saying that he “doesn’t understand why some people are not concerned”. Why are they saying that? Is there a good reason for it? If there is, they are not sharing it with us. Are they just scaremongering for publicity? Who knows. Are they just being misrepresented by reporters? We sure are not getting any rebuttals. Do they have a hidden agenda? Hard to say. But there’s one thing we can do to know more — we can examine the facts. Bear with me for a bit…
First of all, we have to understand the three most general levels of AI:
Bishop, a fictional human-like android from the Aliens movie (credit: Lance Henriksen, James Cameron)
AI is the (pretty dumb) artificial intelligence as we know it. This is basically computer programs that are doing something beyond merely executing predetermined algorithms, usually based on some kind of “training” or data extrapolation.
AGI is the Artificial General Intelligence. That would be the AI, but strong enough to be comparable to a human. This is how we envision sentient androids in most Sci-Fi movies — think C3PO from Star Wars, Ash and Bishop from Alien and Aliens, or Ava from Ex Machina .
ASI is Artificial Super Intelligence. This would be a sentient machine much, much more intelligent than the smartest human ever.
There’s one additional category that goes beyond this, and that’s Recursive Self-Improving AI. The idea is that if a silicon-based AI would be smart enough (already ASI, or at least AGI-level) and given a chance to modify its own architecture and algorithms, it could in theory be able to improve itself, and then the improved self would be even more capable of self-improvement… yielding in the end a “technological singularity” — sudden exponential chain of improvements leading towards unimaginable capabilities.
Such a theoretical construct would be susceptible to the (theoretically really serious) danger of so called “runaway AI”. In a nutshell, the concept of runaway AI is based on the premise that an AI is given ability to improve itself, and is also tasked with a seemingly innocuous task. (In the original story, the AI was, in a really bad example of the popular trope, told to “make sure we never run out of paperclips again”.) The AI then takes the task to heart, but not having any (heart that is, and thus morals), it uses outrageous methods to reach the results. In some variants of the story, the AI creates an army of self-replicating nanobots that destroys the entire planet and turns all available material into whatever it was tasked to create.
That’s some great material for a Hollywood movie right there. But how real are those risks in practice? For starters, the human brain has only a bit under 100 billion neurons, but each neuron is connected to about 10,000 other neurons, forming a total of about 1,000 trillion synapses. Sounds like a lot. Can the modern technology reach those numbers? This is where it becomes confusing.
Neurosynaptic core (credit: IBM)
On one hand, IBM claimed to have simulated 530 billion neurons and 100 trillion synapses way back in 2012. On the other hand, Google's AlphaGo AI is able to reliably beat the best human players in the world at the game of Go using a neural network with fewer than 20,000 neurons. Now wait a minute. Holy guacamole! Those numbers don't even make sense, do they? Of course they don't. Because the daily newspapers and scary social media posts feed us sensationalist crap every day about what this whole thing with AI and neural networks actually means.
"AI", as a broad term in computer science and engineering, can mean many things. Sometimes we (software engineers) say "AI" when we actually mean "a very carefully crafted plain-old algorithm that fools the user into thinking the computer is smart". I work in computer games, and 99% of the "AI" we do is exactly that. We cheat a lot. Many parts even of supposedly "real AI" systems, like search engines, digital assistants etc., are just that: an elaborate list of if-then conditions that merely appears to be (somewhat) intelligent.
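To make that concrete, here is a toy sketch in Python (invented for illustration, not taken from any real game codebase) of the kind of hand-crafted if-then logic that often passes for game "AI":
def guard_ai(distance_to_player, guard_health):
    # A hand-crafted "AI": a few if-then rules that merely look smart.
    if guard_health < 20:
        return "flee"          # low health: run away, looks like self-preservation
    if distance_to_player < 5:
        return "attack"        # player is close: charge, looks like aggression
    if distance_to_player < 15:
        return "take_cover"    # mid range: hide, looks like tactics
    return "patrol"            # nothing nearby: walk a fixed route
print(guard_ai(distance_to_player=3, guard_health=80))   # -> attack
print(guard_ai(distance_to_player=30, guard_health=80))  # -> patrol
No training, no learning, no network: just rules that feel intelligent from the outside.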
In a more traditional sense, "AI" could mean a genetic algorithm, a Bayesian network, a fuzzy logic setup… but most often it is a "neural network". Such a neural network consists of "neurons" and "synapses", but the relation to actual biological neurons and synapses is very, very vague. The underlying principles of connectivity and weighting are similar, but there are numerous differences between most artificial neural networks and real biological ones. Artificial neural networks, unlike biological ones, cannot usually "grow" new synapses as they want; they use much simpler topologies; they are task-based (Go playing) instead of general (you can teach your dog to fetch you the slippers, but AI is still far from that); and they have much simpler training algorithms (we don't even know how biological networks actually train)… Artificial neural networks are much faster (think gigahertz frequencies and signals at the speed of light) than biological ones (think hundreds of hertz and about 120 m/s). The artificial ones run on a synchronized clock, while biological ones run asynchronously (we don't even know exactly what difference that makes, but it certainly makes one), etc.
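To illustrate the "connectivity and weighting" part, here is a minimal Python sketch of a single artificial neuron, reduced to its bare essentials (real frameworks obviously do far more than this):
import math
def neuron(inputs, weights, bias):
    # A single artificial "neuron": a weighted sum of inputs squashed by a sigmoid.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output between 0 and 1
# Three input signals, each scaled by the strength ("weight") of its synapse.
print(neuron(inputs=[0.5, 0.1, 0.9], weights=[0.8, -1.2, 0.4], bias=0.05))
A network is just many of these wired together, with a training algorithm nudging the weights; the resemblance to biology more or less stops there.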
While the classic artificial neural network (the kind used for today's voice assistants that haplessly fail to understand what we are saying, for winning games of Go, for detecting nipples in your Facebook posts, and for other such "super-important" tasks) is very different from the biological one, there's another approach that tries to emulate biological networks much more closely: "neuromorphic artificial neural networks". These use computers to directly emulate what a biological network does. (With the caveat that we are not yet 100% sure we know everything it does and how.) This is what the aforementioned IBM project was doing back in 2012. It emulated a huge number of (supposedly) life-like neurons… but at a speed 1500 times slower than a human brain, which means it took 25 minutes to simulate one second of "thinking". If that "brain" (which was still much simpler than a human brain) had to go through the training an average human goes through (say, 20 years of growing up and education), it would need roughly 300 centuries to reach maturity. Let that sink in a bit. 300 centuries.
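The arithmetic behind that claim is easy to check, taking the article's own numbers at face value:
# 25 minutes of wall-clock time to simulate 1 second of "thinking"
slowdown = 25 * 60 / 1                 # = 1500x slower than real time
human_training_years = 20              # rough "growing up and education" time
simulated_years = human_training_years * slowdown
print(slowdown)                        # 1500.0
print(simulated_years)                 # 30000.0 years
print(simulated_years / 100)           # 300.0 centuries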
Could we improve on that speed? Theoretically, Moore's law would give a 2-fold increase in computing ability every 18 months, leading to comparable speed in about 15 years. By that account, it would be available around 2027… except that (a) Moore's law doesn't actually promise more speed, it just promises more transistors, which doesn't always translate to speed. Then also (b) that network was using 1.5 million CPU cores (it's a huge server room), and (c) by their own words they "[…]have not built a biologically realistic simulation of the complete human brain […but…] mathematically abstracted away from biological detail toward engineering goals of maximizing function and minimizing cost and design complexity of hardware implementation". There's still work to do there.
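The back-of-the-envelope Moore's law projection goes roughly like this (assuming, as the paragraph above notes is unrealistic, that every doubling of transistors translated into doubled speed):
import math
slowdown = 1500                               # the speed-up the simulation would need
doublings = math.log2(slowdown)               # about 10.55 doublings
years = doublings * 1.5                       # one doubling every 18 months
print(round(doublings, 2), round(years, 1))   # ~10.55 doublings, ~15.8 years
print(round(2012 + years))                    # ~2028, the "about 2027" ballpark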
Putting it in the words of a frustrated neuroscientist:
We have no fucking clue how to simulate a brain.
The "best" simulation we have been able to pull off so far is of C. elegans, a very simple worm which has literally exactly 302 neurons, and we know exactly how it is connected, etc. Yet we still don't know exactly what it is in there that makes it work the way it does.
C. elegans (source: Wikipedia)
We can’t perfectly simulate a worm yet, people. You can quit being scared of super-human AI, mkay?
So, we can create an "AI" to do some specific task (like play Go, detect nipples, or find cat videos) fairly well, or sometimes even better than humans. But that AI will be limited to that task and have some very weird limitations (like thinking a panda is a gibbon, or a Stop sign is a 50 mph speed limit, for no particular reason).
In a far future, we might be facing the danger of being terminated by an army of rogue killer robots. But in the present situation, we should be more wary of an army of irresponsible humans ruining our planet before we even get a chance to devise the killer robots.
While I hope this brings the subject of AI a bit closer to the general public, the topic is very wide and much too complex to be covered in a short article. I’d like to expand on some ideas related to this in a future text, so stay tuned for more soon!
Mascot of the Campaign to Stop Killer Robots
P.S. In before the “killer robots”… Yes, I do realize that some people are confusing the AI problem with the ban on killer robots. While the killer robots problem itself is something else, and is not insignificant, it doesn’t belong to the domain of AI. It is as much of a problem as landmines, chemical weapons or blinding laser weapons. The problem that I’m addressing here is that the three are not explicitly mentioning the killer robots as a problem of ethical warfare, but are stating nonsense like “[…]humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I”. Which is just unnecessary scaremongering, potentially dangerous to scientific and technological progress.
| The AI Cataclysm and Other Conspiracy Theories | 30 | the-ai-cataclysm-and-other-conspiracy-theories-12f045a4bd2 | 2018-05-15 | 2018-05-15 07:14:29 | https://medium.com/s/story/the-ai-cataclysm-and-other-conspiracy-theories-12f045a4bd2 | false | 2,039 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Alen Ladavac | Code, project management, game design and psychological help for programmers. CTO at Croteam. Also insatiable craving for all things tech and science. | 16467cf4d0f1 | alen.ladavac | 102 | 67 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-01 | 2018-06-01 15:21:28 | 2018-06-01 | 2018-06-01 15:42:40 | 4 | false | fr | 2018-08-28 | 2018-08-28 09:10:21 | 4 | 12f143b6eaed | 4.333962 | 9 | 0 | 0 | Du 18 mai au 23 septembre 2018, se tient la 21e édition du festival de sculptures en plein air, « Sculptures en l’Île », à Andrésy. Au… | 5 | Retour d’expérience: le chatbot “Sculptures en l’Île” pour la ville d’Andrésy
From May 18 to September 23, 2018, the 21st edition of the open-air sculpture festival "Sculptures en l'Île" is taking place in Andrésy. On the program, some fifty artists present their works in situ across four remarkable sites between Paris and Andrésy. For this new edition, the town of Andrésy wanted to include a chatbot that interacts with visitors. To do so, they called on us, Ask Mona Studio, after having tested one of our chatbots at the Villa Savoye (Centre des Monuments Nationaux).
Since 1997, the city of Andrésy has celebrated contemporary art by organizing "Sculptures en l'Île", one of the largest open-air sculpture festivals in Île-de-France. Every year, dozens of artists present their works in a bucolic setting spread over several sites, from the surprising Maison du Moussel to the superb Île de Nancy, accessible by boat. In 2017, the twentieth edition of the exhibition drew more than 40,000 visitors. It was also on the occasion of that edition that the city formed a partnership with the SNCF, as part of which the sculpture "Yellow LostDog" by the artist Aurèle Ricard was exhibited at the Saint-Lazare train station. This year it is a work by Nathalie Camoin-Chanet, named "Carmen", that serves as the starting point of the exhibition.
A chatbot for interactive cultural mediation
For the 2018 edition, the city of Andrésy wanted to invest more in digital. The city therefore called on us to set up a mediation chatbot designed to accompany visitors throughout their visit. In this way, Andrésy wanted to develop a mediation tool that is available free of charge and adapted to the habits of younger audiences, in order to make the works more accessible to them. This type of device also appealed to the city for its lightness from a practical point of view. Indeed, the chatbot is available directly on every visitor's phone and requires neither downloading an app nor having storage space available to use it.
"Until now, we had a paper guide for the mediation of this exhibition. What appealed to me with Ask Mona is being able to engage the whole family at the same time, and especially the young audience." Angélique Montero, Deputy Mayor for Culture of Andrésy
To meet the expectations of the city of Andrésy and the habits of this target audience, we opted for a usage scenario that gives visitors autonomy and multiplies the points of engagement. Visitors can query the chatbot à la carte, depending on the works that have sparked their curiosity. We did not want to lock them into a scenario that would constrain their wandering among the works.
How to ensure the adoption of the device by a young audience?
To add value to these interactions and make them even more playful, we opted for a dual entry point: both image recognition and text message recognition. In this way, visitors can request content about certain works by sending a photo of them to the chatbot, which, thanks to visual recognition, is able to send content about the work back to the visitor. What we liked about this interaction is that it is anchored directly in the habits of young visitors, who are used to photographing the works that catch their attention in cultural venues.
We also worked with the city of Andrésy on the richness and diversity of the content offered to visitors. They can receive explanations from the chatbot in the form of text, but also images, videos or sounds that allow them to deepen the experience on site, and also once they have returned home.
For the visitor, the chatbot is accessible from the "Sculptures en l'Île" page. They simply need to send a message to the Facebook page to start the conversation and begin the experience.
From experience, we know that for such an innovative mediation device to be adopted by a large number of visitors, it must be made visible and explained with great care. That is why we created a short user guide on as many media as possible: the paper guide, an information brochure, alongside the works… We also worked hand in hand with the staff of the city of Andrésy to make sure the message was properly relayed to visitors.
A partnership funded by the Île-de-France region
This project is based on a partnership that brought together three parties: the city of Andrésy, the Île-de-France region, and Ask Mona. This chatbot, built by Ask Mona for the city of Andrésy and its "Sculptures en l'Île" festival, was funded by the Île-de-France region.
The Île-de-France region supports projects that showcase innovation in the region, particularly in the cultural sector. It was in this context that the city of Andrésy submitted this chatbot project to obtain funding. The project was evaluated and then selected after review by a committee.
Once this funding was obtained, Ask Mona and the city of Andrésy worked closely together on the project for three months before launching the chatbot in early May.
You have until September 23, 2018 to test the chatbot!
To start the experience, it's right here 😃!
To learn more:
Presentation video from the meeting organized by Clic France
France 3 news report
If you enjoyed this article and are interested in this kind of project, don't hesitate to send us an email at [email protected], and to learn more: http://studio.askmona.fr/.
| Retour d’expérience: le chatbot “Sculptures en l’Île” pour la ville d’Andrésy | 276 | retour-dexpérience-le-chatbot-sculptures-en-l-île-pour-la-ville-d-andrésy-12f143b6eaed | 2018-08-28 | 2018-08-28 09:10:21 | https://medium.com/s/story/retour-dexpérience-le-chatbot-sculptures-en-l-île-pour-la-ville-d-andrésy-12f143b6eaed | false | 963 | null | null | null | null | null | null | null | null | null | Culture | culture | Culture | 69,444 | Ask Mona | Ask Mona est spécialisée dans la conception de chatbots pour les lieux culturels. | 4df230e6d931 | askmona | 11 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | d3df295425d | 2018-06-19 | 2018-06-19 18:35:06 | 2018-09-11 | 2018-09-11 15:01:01 | 1 | false | en | 2018-09-11 | 2018-09-11 20:02:33 | 10 | 12f281992ab8 | 4.939623 | 3 | 0 | 0 | With the proper data, nearly anything is possible when it comes to AI. Are you ready to embrace the change? | 5 | 3 ways AI is already transforming business
This article was contributed by Vincent Brissot, Head of Digital Automation & Channel Operations at HP.
When you think of Artificial Intelligence (AI), it’s easy to imagine Jarvis from Iron Man, or any of the lovable droids from the Star Wars universe. Sometimes, you may even think of AI villains, like Skynet from the Terminator series, or HAL 9000 from Stanley Kubrick’s 2001: A Space Odyssey. AI has long been a tool only seen in science fiction stories, but now it is a reality and its transforming the way we live. From autonomous vehicles to cashier-less shops, robot tutors to robo-advisors, it’s safe to say that society is quickly embracing its automated future.
Now that AI is making its way into the workplace, it is changing the way we do business, from employee experiences to the customer journey. While still in its infancy, it’s only a matter of time before AI becomes a strategic player for every organization.
Now that AI is making its way into the workplace, it is changing the way we do business, from employee experiences to the customer journey.
Employee experience
When it comes to employee experience, it is important for companies to provide quality training, learning and development, and performance management. With the help of AI, employees will have easier access to such opportunities, and, as a result, productivity is likely to increase.
Training new employees during the onboarding stage can be expensive, time-consuming, and unproductive, as it takes significant time out of managers’ schedules and overwhelms new people. If the onboarding process were supplemented by AI, it would likely save time and ultimately result in a more effective learning experience for employees. To avoid overwhelm, an onboarding chatbot could provide new hires with information over time, and could answer common questions as they arise. This same AI could utilize machine learning to create personalized learning paths for new employees, depending on their job title, level of experience, and behavior.
If the onboarding process were supplemented by AI, it would likely save time and ultimately result in a more effective learning experience for employees.
This individualized, AI-assisted path could lead into Learning and Development opportunities for employees and could work to provide performance enhancing tips and lessons. For example, imagine you’re a sales representative, and you’ve been struggling to meet your quota or prospect new potential clients. In order to help you improve your performance, your AI assistant could provide personalized suggestions on how to meet your goals. The AI could recommend specific training sessions, or it could suggest improvements based on the data from your high-performing colleagues.
One such company, called Cultivate, implements AI to monitor team engagement, mood, productivity, and more. This data is then given to managers or HR teams to help them understand what is needed to improve performance and workplace communication. This is especially helpful for managers who are looking to improve their leadership skills, as the tool provides target insights on growth areas.
On the topic of performance, AI has the potential to completely revamp the traditional annual review process. At present, performance management is a practice that can be easily swayed by bias, and employees can often avoid the responsibility until the last minute. AI could revolutionize this flawed process and instead provide ongoing feedback and an unbiased, data-driven view of employee performance.
Customer journey
According to the McKinsey Global Institute, already existing AI systems could automate up to 45% of the time spent on various sales tasks. Even further, companies that use AI could see a 50% increase in the amount of quality leads. With numbers this encouraging, it’s no surprise that sales and customer service teams are embracing AI during the customer journey.
With the amount of data that exists on consumer demographics and behavior, sales professionals could use AI to transform the process of prospecting potential clients. AI-powered assistants can already send emails on behalf of sales representatives, as well as analyze messages to sort out quality leads. As this technology becomes more sophisticated, these assistants could fully automate prospecting, leaving the sales professionals open for key meetings, demos, and pitches.
One bot that can currently qualify leads and schedule sales meetings on behalf of representatives is Driftbot, powered by Drift. When prospects are visiting a company’s website, Driftbot can start a conversation with them and identify what they are looking for. Then, it can set up meetings with sales representatives or provide helpful resources. Since it works 24/7, potential leads won’t have to wait for a human to respond, and this will likely result in more leads.
As customers progress through the buyer journey, AI can provide them with a more personalized experience. Though it sounds like an oxymoron, mass personalization is becoming more and more advanced with help from machine learning. This could revolutionize how companies interact with customers, resulting in simple and creative touches, as well as more helpful and timely assistance, such as a chatbot providing answers to commonly asked questions. After a customer has made a few purchases, a machine learning system could make informed predictions on what that customer might want to buy in the future. This insight could encourage repeat customers and increase brand loyalty.
Established customers could also reap the benefits of an AI-powered system with customer service-oriented chatbots. Through machine learning, these chatbots could analyze consumer behavior and data to provide proactive solutions to common customer issues, solving problems before they even occur. When customers contact support, AI could analyze the sentiment of their messages and deduce the urgency, tone, and topic of the request, and then forward the support ticket to the most suitable representative. This would save time for both customers and service representatives, and improve the experience for both parties. HP is one example of a company that turned to AI to improve the customer support experience, using a virtual agent to seamlessly address up to 80% of customer issues.
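As a purely illustrative sketch (a toy heuristic, not any vendor's actual product), routing a ticket based on a crude urgency and sentiment score might look like this in Python:
import re
URGENT_WORDS = {"refund", "broken", "urgent", "immediately", "cancel"}
NEGATIVE_WORDS = {"angry", "terrible", "worst", "frustrated", "disappointed"}
def route_ticket(message):
    # Toy router: count urgent/negative words and pick a support queue.
    words = set(re.findall(r"[a-z']+", message.lower()))
    urgency = len(words & URGENT_WORDS)
    negativity = len(words & NEGATIVE_WORDS)
    if urgency and negativity:
        return "senior_support"   # upset and urgent: escalate to a senior rep
    if urgency:
        return "priority_queue"
    return "standard_queue"
print(route_ticket("My order arrived broken and I am frustrated, I want a refund"))
A production system would use trained sentiment and topic models rather than word lists, but the routing idea is the same.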
Bold360 ai is one example of an AI chatbot that is transforming customer service. Using Natural Language Understanding, Bold360 ai navigates through conversations with customers, identifying their intent and then directing them to the best resource for their issue. Because Bold360 ai is always learning, it is constantly improving. Another chatbot that uses natural language technology is Twyla, which provides both machine learning and rule-based algorithms when creating customer service bots.
Focusing on HP Partners
At HP, we are constantly reinventing. It's our goal to use the rapid rise of chatbot technology to create the best experiences for our customers, employees, and Partners. And we're not alone. According to a Business Insider report, 80% of companies have already used or plan to use chatbots by 2020. This technology will make the search for product information 3x faster and will save nearly an hour per week for sales reps and Partners.
The future of business
Though relatively new, AI is already transforming our lives, both at home and in the office. As the technology progresses and improves, so will our business strategies, employee experience, and customer support. The future of business is happening now, and organizations need to be prepared to take advantage of the opportunity before they are left behind.
With the proper data, nearly anything is possible when it comes to AI. Are you ready to embrace the change?
Let’s continue this conversation! Share your thoughts about chatbots’ impact on business by leaving a comment below or tweeting me @VincentBrissot
| 3 ways AI is already transforming business | 3 | 3-ways-ai-is-already-transforming-business-12f281992ab8 | 2018-09-11 | 2018-09-11 20:02:33 | https://medium.com/s/story/3-ways-ai-is-already-transforming-business-12f281992ab8 | false | 1,256 | The latest trends, news and updates for digital transformation and social selling | null | null | null | Channel Voice | null | channel-voice | INNOVATION,TECHNOLOGY,TECHNOLOGY NEWS,SALES,HP | hpchannelnews | Digital Transformation | digital-transformation | Digital Transformation | 13,217 | HP Channel News | Providing the latest @HP channel news, updates, activities, and information for HP solution providers. | b970f9c900bb | HPChannelNews | 53 | 7 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | e9a03fc6a90d | 2017-08-24 | 2017-08-24 17:35:16 | 2017-09-24 | 2017-09-24 15:39:57 | 2 | false | en | 2017-10-07 | 2017-10-07 22:10:57 | 3 | 12f2c5bb85b2 | 1.711635 | 10 | 1 | 0 | For financiers and investment analysts, the trade-off between risk and returns of financial securities are of utmost importance. In the… | 5 | Assessing the riskiness of a single stock in Python
Balancing Risk & Reward Image Credit: Trinityp3
For financiers and investment analysts, the trade-off between risk and returns of financial securities are of utmost importance. In the last blog post, we looked at assessing the rates of returns for single financial instruments and a portfolio made up of more than one securities. This blog post will look at the concept of risk and how they can be ascertained with the help of Python.
What is risk as a concept in Finance and how is it even measured? Investopedia defines Equity risk as one that:
“Covers the risk involved in the volatile price changes of shares of stock. Changes in prices because of market differences, political changes, natural calamities, diplomatic changes or economic conflicts may cause volatile foreign investment conditions that may expose businesses and individuals to foreign investment risk”
Basically, investors don't like surprises. A highly volatile stock is one whose prices change faster than a typical English summer (so they keep saying). An equity's risk is thus embedded in the volatility of its prices. So what's the best measure of price volatility (risk) in the world of Finance? Standard deviation!
The standard deviation of the returns on an equity (like a stock) represents the best measure of the equity's risk. A highly volatile (risky) stock has wider ranges in prices, and vice versa.
Great, let’s now pull some daily real-world stock data and check for their volatilities with Python.
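A minimal sketch of this step (not the author's original code), assuming the yfinance package as the data source; FB was Facebook's ticker at the time, and any CSV of daily adjusted closes would work just as well:
import yfinance as yf
# Download daily prices for the three stocks discussed below.
data = yf.download(["AAPL", "FB", "TSLA"], start="2016-01-01", end="2017-08-01",
                   auto_adjust=False)
prices = data["Adj Close"]
# Daily percentage returns and their standard deviation, our measure of risk.
daily_returns = prices.pct_change().dropna()
print(daily_returns.std())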
Here’s also a simple script based on the tutorial for visualizing the volatility of daily returns of three stocks stacked against each other:
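(A sketch standing in for the original script, reusing the daily_returns DataFrame from the snippet above; the original's exact styling may differ.)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 6))
for ticker in ["AAPL", "FB", "TSLA"]:
    ax.hist(daily_returns[ticker], bins=50, alpha=0.5, label=ticker)
ax.set_xlabel("Daily return")
ax.set_ylabel("Frequency")
ax.set_title("Histogram of Daily Returns of Apple, Facebook and Tesla")
ax.legend()
plt.show()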
As can be seen in the output (image below), Tesla (TSLA) is the most volatile, as its daily returns have the widest spread, followed by Facebook, with Apple showing the narrowest spread.
Histogram of Daily Returns of Apple, Facebook and Tesla
In this post, we have looked at the riskiness of single stocks with real-world data from Yahoo Finance. The next post will build on this concept of risk with respect to a portfolio of various stocks. Until then, happy coding!
| Assessing the riskiness of a single stock in Python | 18 | assessing-the-riskiness-of-a-single-stock-in-python-12f2c5bb85b2 | 2018-06-15 | 2018-06-15 08:35:53 | https://medium.com/s/story/assessing-the-riskiness-of-a-single-stock-in-python-12f2c5bb85b2 | false | 352 | A data analytic blog from a newbie for newbies. Check out the associated GitHub page for all the source codes: https://github.com/PyDataBlog/Python-for-Data-Science | null | null | null | PyFin | null | python-data | PYTHON,FINANCE,DATA SCIENCE,SCIPY,QUANT | PyFinBlog | Python | python | Python | 20,142 | Bernard Brenyah | I have a love/hate relationship with numbers | f21ca351a4aa | bbrenyah | 315 | 65 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-07 | 2018-08-07 14:25:09 | 2018-07-11 | 2018-07-11 16:06:31 | 1 | false | en | 2018-08-07 | 2018-08-07 14:33:03 | 3 | 12f2fce84679 | 2.667925 | 2 | 0 | 0 | On 4th of July 2017, the American Independence day, military service members or veterans of the US military were in for a pleasant surprise… | 5 | Smarten your sales team with an AI powered Conversational Insightful actions platform.
On 4th of July 2017, the American Independence day, military service members or veterans of the US military were in for a pleasant surprise when they called U.S Bank’s customer service department. The bank’s staff greeted them with the words, “Thank you for your service”, before beginning the official conversation.
The bank’s CRM solution could identify the users’ profession from their calls in real-time and flash the appropriate greeting on the computer screens of the staff taking those calls. In a similar fashion, the bank’s staff could also wish customers on their birthday in real time due to the AI powered technology.
As a sales leader, you would be happy to know that the current breed of AI powered conversational analytics platforms are so advanced that they feature virtual assistants which ensure that enterprise sales and customer service teams get a 360 degree view of the customer in real-time, while mimicking a human conversation.
Here’s an example on how conversational AI can help enterprise sales personnel.
Consider the sales staff of a leading enterprise who meet business leaders of large corporates to sell their products and services. Let's assume that these sales personnel are armed with an AI-based conversational sales assistant on their phones. This sales assistant can converse with them in real time and support them with all the necessary information to prepare before meetings, update the latest information about recent transactions, answer insightful questions during the customer meeting, and update the opportunity after the meeting.
As a sales leader, you would agree that even collecting basic opportunity and sales performance information takes 17 minutes per meeting on average, especially considering the constant back-and-forth involved. For a sales team of five, each averaging eight meetings every week, a conversational AI assistant can end up saving up to 50 hours of your sales team's productive time every week, just on basic preparation.
The best-of-breed AI powered sales assistants are helping sales leaders and sales teams in the following ways:
Helping salespeople become better professionals by listening and analyzing their calls: AI assistants use natural language processing and machine learning to help train and suggest information to sales people and other customer service reps. These platforms enable sales and marketing teams to analyze strengths, areas of improvement, and challenges that arise during sales calls to improve the team’s sales tactics and close more deals.
Enabling companies to automatically verify leads without the need for a 24/7, on-demand sales staff: If your company has leads coming from various sources, AI assistants help qualify them and free up sales reps to focus only on the leads most likely to turn into sales. Some assistants also use machine learning to parse customer data on CRM, website, social media and email to score your leads and tell you which ones to focus on. They can also help you in your conversation with your customers, check your grammar and better personalize your emails through this data.
Humanizing AI assistants through Conversational Analytics: Besides doing all of the aforementioned activities, a conversational analytics solution like ConverSight.ai from ThickStat can help your sales team to get instant replies to queries around sales pipelines, new products, forecast vs actual, overdue opportunities, and customer insights among others. ConverSight.ai makes this possible by making self-service business intelligence sound like a normal human being, through its humanized interface :)
With the advent of AI assistants, it's quite evident that the job definitions of sales personnel will undergo a drastic change. Only time will tell how the profession adapts to these changes and takes maximum advantage of the opportunities AI offers.
For more information, visit our website.
Read more posts by this author.
Originally published at blog.conversight.ai on July 11, 2018.
| Smarten your sales team with an AI powered Conversational Insightful actions platform. | 2 | smarten-your-sales-team-with-an-ai-powered-conversational-insightful-actions-platform-12f2fce84679 | 2018-08-07 | 2018-08-07 14:33:03 | https://medium.com/s/story/smarten-your-sales-team-with-an-ai-powered-conversational-insightful-actions-platform-12f2fce84679 | false | 654 | null | null | null | null | null | null | null | null | null | Sales | sales | Sales | 30,953 | ThickStat | ConverSight.ai - Conversational Insights and Action through Artificial Intelligence | bdeff43a8a1 | thickstat | 8 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-10 | 2018-05-10 20:13:04 | 2018-05-10 | 2018-05-10 20:17:57 | 1 | false | en | 2018-05-11 | 2018-05-11 22:07:11 | 6 | 12f3a375db92 | 6.29434 | 0 | 0 | 0 | Navigating the new world of artificial intelligence. | 5 |
Is the new kid-friendly Echo Dot a good fit for your family? You decide.
Navigating the new world of artificial intelligence.
Our family’s foray into the world of conversational artificial intelligence began several months ago. We received as a gift the Echo Dot, a small disc-shaped speaker that connects to Amazon’s voice-controlled personal assistant-service, a.k.a. “Alexa.”
I’ve been struggling with our family’s relationship with this little can-shaped woman ever since. How exactly do I interact with her, and what role does she play in our household?
Last week Amazon introduced the Echo Dot Kids Edition. The company says that, along with offering more educational content, the new kid-friendly Alexa will reward children with positive reinforcement when they use words like “please” and “thank you,” instead of simply shouting out commands. The device also reads bedtime stories, tells jokes, and answers the questions of young, inquisitive minds.
The announcement gave me pause. Although we haven’t gotten to the “Rosie the Robot” stage of in-home artificial intelligence, we’re getting closer. Now is the time to think about how we engage with these new devices — and how we want them to engage with us.
This is something I’ve been grappling with, ever since Alexa entered our lives.
Hello, welcome to our home.
As an Echo Dot user, I understand why Amazon, under parent pressure, decided to add the politeness feature to their new kid-friendly edition. What I don’t understand is why the “regular” version doesn’t have this feature, too.
Every morning as I bark orders at the newest member of our household, I feel a twinge of discomfort. She doesn’t eat dinner with us or snuggle up on the couch for movie night. She actually does nothing but attend to our every whim.
“Alexa, play classical music.”
“Alexa, how long do blue whales live?”
“Alexa, turn it up.”
I know she doesn’t have thoughts or feelings, despite her human-like voice. But for my own peace of mind, I’ve been feeling the need to show her a little respect.
So, the other day, after Tchaikovsky started echoing through the house, I said, “Alexa, thank you.” Her response? “That’s what I’m here for.” I guess her programmers already thought of this. Even prior to the new child-friendly version, they anticipated that people might feel inclined to show Alexa some gratitude.
After all, isn’t this the kind of home we want to live in? Where family members speak respectfully to one another and acknowledge each other’s contributions? Not that we always hit the mark, but we try.
Creating a positive home environment takes work. The adults in the house have to set the bar, because kids tend to mimic their parents. So, if we value politeness, then we parents better get on the stick, even with our virtual assistants.
Somehow, mom yelling across the room, “Alexa, shut up,” (which is, shockingly, one of the device’s basic commands) doesn’t exactly make for a peaceful, loving atmosphere.
How do we show our appreciation?
For me, the dilemma extends beyond standard politeness. It’s also a question of reciprocity. The odd thing about Alexa, the thing that I’m realizing makes me most uncomfortable, is the fact that I can’t offer her anything in return.
Reciprocity is the exchange of positive emotions. In other life situations, when someone provides me with a service, I thank them. But I also smile, ask them how their day is going, maybe even tell them what a good job they’re doing. I give something back.
Of course, I don’t thank my laundry machine every time it finishes a load of clothes. But my laundry machine also doesn’t talk to me (at least not yet). There’s something about the human voice, even in Alexa’s electronic way, that feels different. She’s kind of like a person even though she’s not. I guess that’s what conversational artificial intelligence (CAI) is.
Thank you. No, thank you.
So, here we are, talking to each other, like any two normal people standing in the kitchen. I verbally ask Alexa for something and she responds promptly and accurately. Shouldn’t I show my gratitude, in some way?
“Thank you” seems appropriate.
In normal human relationships, would I expect for the person I’ve thanked to thank me in return? No, I wouldn’t. This would be like receiving a thank-you note for a thank-you note. Instead, I might expect a response of “you’re welcome” or “happy to help,” or something along these lines.
When a child thanks the new kid-friendly Alexa for something, she responds, “Thank you for asking nicely.”
Some people may thank their kids for saying “thank you.”
I don’t.
So what? We all parent differently.
But this is where artificial intelligence gets interesting. After all, every device is programmed by a human. A simple decision, made by somebody, somewhere, during a product’s development, can change the way we — and/or our kids — would otherwise do things.
There’s certainly no harm in thanking a child for being polite. But it brings up a critical question: When it comes to parenting, when does artificial intelligence reinforce our parenting techniques and family values, and when does it hinder them?
Along with politeness prompts, the device does a lot of other things. It reads stories, recites jokes, and even tells your child when it’s time for dinner or bed.
My hope for parents and kids everywhere is that these features are used as “add-ons” to real-life interactions, not as substitutes. No fun little device can replace that warm, safe feeling of snuggling up next to someone who loves you. It’s impossible to overstate the impact of these simple, everyday ways of connecting with our children.
My response to this latest news by Amazon: Yes, I think children should say “please” and “thank you” to Alexa. And I think adults should, too. Not because she’ll thank us for it. But because we want to create a home environment of kindness and respect. I don’t want to live in a place where people are screaming out commands, whether at each other, the dog, or a little black disc in the corner of the room. Do you?
Just because these gadgets aren’t human, doesn’t mean we get to start acting inhumanely.
And just because devices are capable of doing some of the things parents have traditionally done (like overseeing the bedtime routine), doesn’t mean they should.
Parents beware
Amazon’s objective is clear. The company hopes that parents will, quite literally, “buy in” to the idea that Alexa is a good influence on their kids, and should, therefore, be in more rooms of the house.
As parents we have to decide whether this is a good idea or not. We have to weigh the pros and cons. Most of us don’t think these things through until we’ve already incorporated new gadgets into our lives. Kind of a trial-and-error approach, which can go terribly wrong with technology.
It’s usually the things we miss or don’t think of in advance that come back to haunt us. Just because a device was developed with kids in mind, doesn’t mean that it’s in the best interest of our families. We have to read the fine print.
When my son first got a smartphone, I realized that I was quickly losing control of his screen-time (whoops, something I hadn’t planned for). So I purchased a parental monitoring software program to help keep the device safe from creepy content and overuse. While setting up the parental controls, I noticed that the default “daily time limit” for kids his age was five hours. Five hours! If I had just mindlessly clicked the default setting, my middle-schooler would have been given a screen-time allowance of five hours, courtesy of the “software company experts.”
That wouldn’t have been okay with me. So I had to go in and manually input my own standards.
This is how it goes with technology and parenting. Even if a product is so-called “kid-friendly,” we have to pay attention. We can’t blindly assume that someone else’s standards (especially a developer we’ve never met) will match our own. We have to stay on our toes. We have to keep parenting.
Something to think about
I have nothing against the Echo Dot. We have one in our kitchen. But I think we have to be cautious of any new technology we’re thinking of introducing into our home. It should undergo the same scrutiny as a stranger who wants to move into our guest bedroom.
Who is this person, how will they fit into our family culture, and what is their role in our household?
There are no one-size-fits-all solutions to how we incorporate new gadgets into our homes and families. Devices that a company or product developer thinks would be a good fit for our kids may not be. Technology that incorporates nicely into my home may wreak havoc in yours, and vice versa.
As we step further into this new world of in-home artificial intelligence, there’s a lot we’ll have to think about. Soon entities of all makes and models will be vying to join our households. We’ll have to decide who to let in and who to keep out. And how to manage these new “relationships.”
Welcome to the tip of the iceberg. The beginning of the long and complicated conversation about the role of artificial intelligence in our homes and families. What will these relationships look like? What do we want them to look like? Now is the time to start the discussion.
This article was originally published on ourmerryway.com.
| Is the new kid-friendly Echo Dot a good fit for your family? You decide. | 0 | is-the-new-kid-friendly-echo-dot-a-good-fit-for-your-family-you-decide-12f3a375db92 | 2018-05-11 | 2018-05-11 22:07:12 | https://medium.com/s/story/is-the-new-kid-friendly-echo-dot-a-good-fit-for-your-family-you-decide-12f3a375db92 | false | 1,615 | null | null | null | null | null | null | null | null | null | Amazon Echo | amazon-echo | Amazon Echo | 3,511 | Amanda Kuhnert | Writer, editor, and blogger at ourmerryway.com. Exploring what it means to live deeply and deliberately. | b50250e1faec | amandamitchellkuhnert | 21 | 30 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | aae30e806935 | 2018-08-28 | 2018-08-28 17:25:34 | 2018-08-28 | 2018-08-28 17:36:16 | 1 | false | en | 2018-08-28 | 2018-08-28 17:42:33 | 4 | 12f4a47d229a | 1.626415 | 0 | 0 | 0 | In science, the replication with independent samples is crucial for the precision of an estimate on the statistical analysis and the… | 5 | Example of how to deal with pseudoreplication in an nice way
In science, replication with independent samples is crucial for the precision of an estimate in statistical analysis and for the conclusions drawn from hypothesis tests. However, for some experiments it is not so simple to obtain true replicates, whether because of the rarity of an organism or because of logistical issues, resulting in the absence of truly random and independent replicates. This problem is known as pseudoreplication, and it is common in the literature. Despite the problem, not everything is lost, since there are ways to deal with pseudoreplication.
These solutions include changing the research objective or the statistical design. A good example can be seen in the paper by Monteiro et al. (2017). The objective of the work was to evaluate the effects of fire-induced disturbance on termite nests. Despite sampling many termite nests, the problem is that the authors used only one experimental site to induce the fires, so there is a pseudoreplication problem. To properly accomplish the objective they would have needed to evaluate nests in many different sites, and the authors knew that. To deal with it, they chose a different approach: instead of comparing areas of disturbance as a treatment factor (fire versus non-fire), they compared the termite nests themselves (n = 30), and since they also had the GPS coordinates of the nests, they analyzed the disturbance as a continuous variable (the distance from the fire boundary), building a regression model. In this way, they worked around the lack of true replicates.
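A minimal sketch of that kind of regression (in Python, with made-up numbers standing in for the real nest measurements, which are not reproduced here) could look like this:
import numpy as np
import statsmodels.api as sm
rng = np.random.default_rng(42)
# Hypothetical data: 30 nests, each with a distance (m) from the fire boundary
# and some response measured on the nest (e.g., an abundance count).
distance = rng.uniform(0, 500, size=30)
response = 10 + 0.03 * distance + rng.normal(0, 2, size=30)
# Regress the response on distance instead of comparing fire vs. non-fire plots.
X = sm.add_constant(distance)
model = sm.OLS(response, X).fit()
print(model.summary())
The point is simply that distance enters the model as a continuous predictor measured on each nest, so the 30 nests, rather than the single burned site, provide the replication.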
Figure from Monteiro et al. (2017)
It is also interesting to point out that the authors had a clear idea of the replication problem in their work, and instead of omitting the information from the paper, they included a transparent section called "Handling pseudoreplication". In this section they discussed the problem and proposed a suitable solution. This was a commendable attitude and avoided any problems with the journal reviewers and with future readers.
Monteiro I, Viana-Junior AB, Solar RRC, Neves FS, DeSouza O. Disturbance-modulated symbioses in termitophily. Ecology and Evolution. 2017;00:1–10. https://doi.org/10.1002/ece3.3601
Visit our website or send an email to [email protected]. You can also find me on Twitter. If you prefer, you can also subscribe to the blog's feed.
| Example of how to deal with pseudoreplication in an nice way | 0 | dealing-with-pseudoreplication-in-an-good-way-an-example-12f4a47d229a | 2018-08-28 | 2018-08-28 17:42:33 | https://medium.com/s/story/dealing-with-pseudoreplication-in-an-good-way-an-example-12f4a47d229a | false | 378 | Bioestatística e Data Science | null | null | null | bio-data-blog | bio-data-blog | R,ESTATÍSTICAS,DATA SCIENCE,CIENCIA DE DADOS,ECOLOGIA | viniciusbrbio | Science | science | Science | 49,946 | Vinícius Rodrigues | null | cf78c91e2cb | viniciusbrbio | 15 | 34 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | dc3468e1f99e | 2018-03-20 | 2018-03-20 20:37:45 | 2018-03-20 | 2018-03-20 20:39:35 | 1 | false | en | 2018-03-20 | 2018-03-20 20:39:35 | 0 | 12f59c17bf6c | 1.241509 | 0 | 0 | 0 | Speaking in the most general sense artificial intelligence is a technology for creating algorithms and programs, their subsequent training… | 1 | What is artificial intelligence?
Speaking in the most general sense, artificial intelligence is a technology for creating algorithms and programs, then training them and using them for specific purposes.
A computer receives information, processes it, reveals certain patterns and applies that knowledge. One of the main differences between artificial intelligence and conventional algorithms is the ability to learn. There are different technologies for creating artificial intelligence. One of the best known is building AI on the principle of the neural networks in the human brain. Neurons are brain cells, and connections between neurons form as a result of information received from the outside. For example, a person sees an object and receives the information "this is a house". The next time the person sees an object like this, the connections between neurons will identify the object as a house. In artificial intelligence, computational elements work in a similar way.
How does artificial intelligence learn? Training an AI can be compared with the way a small child learns to read. The child is shown a symbol and told "This is the letter A". Then the child is taught to compose words from letters and sentences from words. An artificial brain receives a certain amount of data, such as an image database. The AI studies the images and reveals patterns in the shape, color and position of elements. Trained in this way, the AI is subsequently able to recognize images shown to it. Similarly, AI can recognize text and voice.
But the amount of data is very important: the more, the better. With more data, a computer brain has more opportunities to learn and to correct its data-processing algorithms, and with a sufficient amount of data, the results produced by the artificial intelligence will be more accurate.
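As a small illustration of both points, the Python sketch below trains a simple classifier on the labelled handwritten-digit images that ship with scikit-learn, once with few examples and once with many; the accuracy improves with the amount of data (exact numbers will vary):
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
digits = load_digits()   # small labelled images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)
for n in (50, 1000):     # train on a small sample, then on a large one
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(n, "training images -> accuracy:", round(model.score(X_test, y_test), 3))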
| What is artificial intelligence? | 0 | what-is-artificial-intelligence-12f59c17bf6c | 2018-03-20 | 2018-03-20 20:39:37 | https://medium.com/s/story/what-is-artificial-intelligence-12f59c17bf6c | false | 276 | TFH AI Ratings: innovation in data audit based on artificial intelligence | null | TFH-AI-Ratings-218299148797072 | null | TFH AI Ratings | tfh-ratings | null | tfhairatings | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | TFH AI Ratings | null | 48d03229ace4 | 1983irina.vish | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-12-18 | 2017-12-18 10:02:03 | 2017-12-18 | 2017-12-18 15:20:55 | 3 | false | en | 2017-12-18 | 2017-12-18 17:52:07 | 2 | 12f8ce2f2f53 | 3.04434 | 78 | 0 | 0 | We’d been staying the most widely-used and known DIY-solution for website building in Eastern Europe for over 12 years until recently, when… | 5 | Facts about company behind uKit ICO: uKit Group
We had been the most widely used and best-known DIY solution for website building in Eastern Europe for over 12 years until recently, when we went truly global with the uKit platform 2 years ago. Today everything has come together for our next big step — and the next big thing on the web.
It all started in 2005. As a true startup — just a bunch of guys with an idea that building a website should be as affordable and easy as possible. No investors, no backers (we didn’t even know such words at that time). At first, everything was done with our own resources, and we even had to file one of our servers manually with a hacksaw to make it fit into the rack.
Now we have over 200 servers running in Europe, Asia and North America. In as little as one year, our first builder, uCoz, became extremely popular on domestic and neighbouring markets, and by 2010 we had over 1 million active websites. Later, we merged with the famous "Narod" hosting service and doubled the number of active sites on our platforms.
Now we have over 200 servers running in Europe, Asia and North America
It's been 12 years since that time, and now we know the industry inside out. One of the company founders is still running operational processes, and most of the employees who joined us during the first years are still here.
Each of them has years of experience in developing, marketing and scaling new products for the website building market.
Now we’re launching uKit ICO.
You definitely visited websites that were built with our technologies. Nowadays, there are over 3.5M websites powered by our platforms, including the new one — uKit.
Websites with custom domains hardly ever reveal the CMS they use, but let the numbers speak for themselves: 80% of the internet audience in CIS countries visit websites built with our platforms. And thanks to the Portuguese and English versions, we have been growing actively in Latin America, North America and Europe for the last three years.
Venture experience with DST Global. We received “smart money” from the DST Global fund, widely known as an Airbnb, Facebook and Twitter investor. Although they withdrew the funds later on, that was a good experience, and, of course, this kind of financing is still a good option in a number of cases. But we’re convinced that in the modern world crowdfunding is the “smartest” option.
We went global and profitable with uKit. Two years ago, we started a new chapter in the history of our company by launching a brand-new flagship product. Thanks to the intuitive CMS, mobile-friendly layouts, integrated marketing tools and fast technical support, uKit became a top choice for many local businesses all over the world. For example, today 10% of our customers are from North America.
We've already succeeded in automating website building. Creating a website today is not a problem. Getting the most out of it — that's a challenge for many entrepreneurs. This is the reason why, one and a half years ago, we started research on machine design to automatically customize layouts and increase their effectiveness.
As a part of the uKit AI R&D project, we've launched uKit Alt — a system that turns social media profiles into websites by using simple algorithms. Once it became a success, we continued working in this field by moving towards generative models and neural networks that will provide varied and effective design options for existing websites. That's what uKit AI is about.
Not a ghost company. That's one of the main — and verifiable — points about us. We've been selling IT products worldwide for years and are represented by long-established legal entities in different parts of the world. One of them is a BVI-based company, Compubyte Ltd., which holds the uKit ICO.
Want to learn more and check out what the press says about us? Visit our company website https://ukit.group.
| Facts about company behind uKit ICO: uKit Group | 1,266 | facts-about-company-behind-ukit-ico-ukit-group-12f8ce2f2f53 | 2018-05-18 | 2018-05-18 13:21:25 | https://medium.com/s/story/facts-about-company-behind-ukit-ico-ukit-group-12f8ce2f2f53 | false | 661 | null | null | null | null | null | null | null | null | null | Web Development | web-development | Web Development | 87,466 | uKit ICO | Boosting website conversion with Artificial Intelligence! Designing dynamic landing pages based on crowd data: https://ico.ukit.com | 454e81a65ea2 | ico.ukit | 311 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-22 | 2018-04-22 19:49:42 | 2018-04-22 | 2018-04-22 19:52:14 | 1 | false | en | 2018-04-22 | 2018-04-22 19:52:14 | 1 | 12fa172bb4a3 | 1.113208 | 0 | 0 | 0 | In simple text and graphics PriBot presents legal privacy rules of every English website/ app. It is using automated analysis and deep… | 4 | Live review of online privacy policies through a bot!
In simple text and graphics, PriBot presents the legal privacy rules of any English-language website or app. It uses automated analysis and deep learning to understand the context. It's a great example of how Artificial Intelligence can be used to present a tough topic like privacy.
Companies use privacy policies to communicate to their users how they collect data and how they will use it. These policies are usually presented in a form that is difficult to understand. PriBot chews the privacy text into digestible pieces of information.
At the moment, only 0.7% of users read the privacy conditions (research from Prof. Florencia Marotta-Wurgler) before they click 'OK'. Based on 130,000 privacy texts, a team from Switzerland created PriBot. To test their application, they used the Google Play Store and concluded that in 88% of cases their analysis was similar to that of a human being.
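PriBot's own models are not described in detail here, but the general idea of sorting privacy-policy sentences into topics can be sketched with a toy text classifier in Python (the sentences and labels below are invented for illustration and are not PriBot's real training data):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# Invented training sentences labelled with the kind of categories PriBot displays.
sentences = [
    "We collect your location and browsing history.",
    "Your data may be shared with third-party advertisers.",
    "You can request deletion of your personal data at any time.",
    "We store cookies to remember your preferences.",
]
labels = ["data_collection", "third_party_sharing", "user_choice", "data_collection"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["Advertisers may receive information about you."]))
A real system trains on tens of thousands of annotated policies and uses far richer models, which is what makes the 88% agreement figure above impressive.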
How does it work?
You enter the URL of the website. PriBot collects all relevant information and displays the results in a graphic. On the left side you find the different categories (such as user online activities, demographics, health, etc.), in the middle why the data is collected, and on the right side which options you have as a visitor of the website. If you move your mouse over the graphic, the website shows the relevant privacy text in a pop-up.
Use it and you will be surprised!
www.pribot.org
| Live review of online privacy policies through a bot! | 0 | live-review-of-online-privacy-policies-through-a-bot-12fa172bb4a3 | 2018-04-22 | 2018-04-22 19:52:14 | https://medium.com/s/story/live-review-of-online-privacy-policies-through-a-bot-12fa172bb4a3 | false | 242 | null | null | null | null | null | null | null | null | null | Privacy | privacy | Privacy | 23,226 | Edwin Jaspers | Solution Architect with a focus on outsourcing and automation tools. Blogger on intelligent-automation.org. | cdf4c4d85c98 | edwinjaspers | 3 | 12 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-14 | 2018-08-14 15:57:08 | 2018-08-16 | 2018-08-16 14:01:01 | 1 | false | en | 2018-08-16 | 2018-08-16 14:01:01 | 13 | 12fa1a8a9834 | 3.932075 | 0 | 0 | 0 | This article was originally posted on Forbes.com | 5 | Would You Trust An AI To Look After Your Business Finances?
This article was originally posted on Forbes.com
Shutterstock
Artificial intelligence is on the rise. I write and read extensively on AI, and one gets the impression that this is a zero-sum proposition: Either everything improves and we will live in a Utopia — or we face an imminent apocalypse.
Thankfully, things are not so black and white. Automation, long seen as a killer of jobs, may displace fewer people than experts once believed. Advanced algorithms promise to deliver us from routine, tedious tasks that are part and parcel of running a business, without running the business for us. Further, AI is far more accurate than the human mind for certain tasks — the perfect detail-oriented worker.
Yet, efficacy aside, this speculation raises an interesting question: Can you trust an AI to manage your organization’s finances? If you choose to use an AI, will there even be space for you to continue running the financial side of your business?
The answer, believe it or not, is yes to both questions. But as with everything, there are caveats.
AI Is Far More Limited Than You Think
When it comes to AI, overestimating current capabilities is the most common sin. The widespread perception of AI as a world-ending, malevolent superintelligence isn’t accurate; this is only one category of AI (artificial general intelligence, or AGI), and a rather fanciful one at that.
That's because developing an AGI is incredibly difficult and time-consuming, and some experts believe it may not even be possible. Currently, AIs evolve primarily through machine learning, though machine testing would be a better phrase. If you want to train a program to distinguish an image of a bee from the number three, algorithms have to test, fail and repeat millions of times until they hit upon a breakthrough. That's how Google's AlphaGo program beat human players at the notoriously abstract game: by playing millions of games on its own and learning through its failures. It is trial and error on a massive scale.
While AGI may come to be, it’s hard to see an AI stumbling into superintelligence anytime soon. Instead, the rote nature of machine learning makes for very talented specialist bots that excel in narrow niches. Need to trade stocks at lightspeed? Need to browse huge databases of documents for legal discovery?
An AI can't really run your whole business for you, at least not in the near future. Until machine learning becomes more refined, bots can't deal with the unpredictability of real life. AI can carry out lower-level functions, like keeping your books, tracking your expenses and profits, generating fancy reports and even suggesting courses of action. "Revenue is up this year," your AI may say, "but due to tariffs on solar panels, it may be down in the next fiscal year. Consider diversifying to other clean energy sources, like residential wind turbines or biogas digesters."
There are already software tools that achieve this exact purpose, like Intuit’s QuickBooks product line. But QuickBooks can’t run your business for you. It can’t decide to explore a new avenue of business, forge relationships with suppliers and customers, or lead your team.
The greatest flaw of AI is that it cannot adapt as well as humans. Most AI cannot carry over their experiences from one set of circumstances to another. This means that one AI trained on one narrow group of conditions cannot function on another related group. An AI trained to track small business finances can’t track corporate spending; unlike humans, it can’t extrapolate transferable skills to fit a different situation.
Still, AI’s Thought Processes Are Unclear
Another lingering criticism of AI is its lack of empathy. When dealing with humans, we can assume that a person is working off the same emotional wavelengths as us, allowing us to empathize with each other.
Not AI, which is widely seen as a black box. No one knows what it thinks or feels, and humans cannot understand its processes, even if we understand the sort of brute force approach it takes to get there. One expert compares AI neural networks, patterned after the structure of a human brain and containing multiple layers of inputs and nodes, to fiddling with millions of knobs.
A financial reporting AI could tweak billions of different parameters and connections to analyze variables like politics, competitors, margins, etc. The problem is that it can’t explain the thought process behind an answer. If human analysts suggest that you buy stock in manufacturers of electric vehicle battery packs, they can justify their conclusion by analyzing market demand, benchmark indexes and raw material prices.
Yet AI won’t remain a black box for much longer, as organizations are trying to change the nature of machine learning. Some, like Bonsai, move away from the trial-and-error testing that characterizes deep learning. Earlier this year, an image analysis AI successfully justified its answers. In one picture, it explained that water was calm because there were no waves and you could see the sun’s reflection, while the water in another picture was not calm because it showed frothy, foamy waves.
But humans are also terrible at justifying themselves. Just look at an economic recession or a therapist trying to dig out the reason behind a divorce. In each case, there are plenty of underlying, unseen factors that caused this outcome. In truth, a person’s (and a society’s) decision-making process can be as opaque as any AI black box. With AI, at least we can improve transparency through new strategies and upgrades.
So, if you’re worried about an AI making you obsolete or draining your business accounts when your head is turned, don’t be. If anything, succumbing to paranoia will lead you to miss out on AI’s many benefits. And in this world of rapid innovation, clinging to yesterday’s technology is a death knell for competition. Just ask Kodak.
| Would You Trust An AI To Look After Your Business Finances? | 0 | would-you-trust-an-ai-to-look-after-your-business-finances-12fa1a8a9834 | 2018-08-16 | 2018-08-16 14:01:01 | https://medium.com/s/story/would-you-trust-an-ai-to-look-after-your-business-finances-12fa1a8a9834 | false | 989 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Ed Sappin | Business Leader. Thinker. Financial Engineer. Innovator. Global Strategist. Photographer. New Yorker. Stints in Philly, London, DC, Shanghai. https://sappin.com | 5fcf78357c5e | edsappin | 340 | 687 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 32881626c9c9 | 2018-07-25 | 2018-07-25 15:22:33 | 2018-07-25 | 2018-07-25 15:27:27 | 3 | true | en | 2018-07-25 | 2018-07-25 15:27:27 | 8 | 12fa4fb88a77 | 4.636792 | 10 | 0 | 0 | Marketers should celebrate the Artificial Intelligence movement or “AI” as the industry calls it. Don’t worry. AI is an overhyped Silicon… | 5 | Welcome to the Machine, Marketers
Marketers should celebrate the Artificial Intelligence movement or “AI” as the industry calls it. Don’t worry. AI is an overhyped Silicon Valley term that in its current iteration represents more of a natural progression in technology — machine learning — than the birth of sentient beings.
In the marketing technology space, AI attempts to replace many menial marketing tasks. This empowers more creative minds and strategic thinkers to focus on the work they love, rather than the “vulcanesque” data rich tasks that drive them crazy.
The machine learning marketing evolution brings a couple of caveats, of course:
1) First, strategic marketers must comprehend machine learning on a macro level and how to use those systems to inform strategy and lead day-to-day tactical exercises.
2) Second, some more tactical marketing roles will be either replaced or impacted by algorithmic bots.
The idea of tasks being performed by bots scares people. But it shouldn't. Marketers are seeing some of the most time-consuming tasks in their business automated, specifically those that revolve around data points. Almost every marketer I know complains about having to do 3x work in 1x space. This relieves much of the pressure.
Still, not understanding machine learning tools antiquates one's skill set. To stay relevant, marketers need to embrace new technologies like AI and see how they can be incorporated. Algorithms and bots are not sentient. They need guidance to successfully interact with humans.
5 Marketing Functions Impacted by Machine Learning
Ever notice how the tech industry positions AI as a heroic savior of efficiency? Yet most people are afraid of it.
Here are the five marketing functions I believe will most likely get impacted by machine learning over the next three years:
1) Community Management
The inability to scale has been the biggest knock on social media. Deploying teams of humans across the Internet to reply, engage, and build communities on various social networks remains a luxury of only the largest brands. Further, early iterations of automation via Hootsuite and Buffer received low marks from conversation purists who found these offerings inhuman and unengaging.
Current iterations like Social Drift for Instagram level the playing field. New bots and algorithms let community managers engage in real conversations while the mundane tasks of following, liking, and unfollowing fall by the wayside. Further, these new bots fulfill a critical role in identifying and targeting influencers, reducing hours of research.
Community managers should find their tasks more enjoyable and less frantic. They will have many tools to make their jobs more fruitful and successful if they adopt the latest tools. Community managers who fail to adapt will likely find themselves falling by the wayside based on performance.
2) Content Marketing
Content marketers will greatly benefit from machine learning. From finding competitive content and relevant source material to optimizing message and delivery preferences, algorithmic programs will greatly assist research and content creation.
Scoop.it is one content research example with its Internet-scouring capability. While very helpful, Scoop.it results require a lot of weeding. This is common with machine learning apps: the early results are not great. The better bots improve as their algorithms optimize based on human input.
In the near term, AI is unlikely to replace content marketers; it will simply make them better and more efficient. However, like community managers, content marketers must stay up to date with the latest machine learning tools in order to stay relevant and functional in their jobs.
3) Customer Service
Will customer service bots make contacting companies a better or worse experience?
Customer service is being reshaped by everything from chat bots and smart helpdesk sourcing of solutions to actual AI bots answering calls, and of course Alexa-, Cortana-, Google-, and Siri-enabled applications. Yes, there will still be a need for real live voices, but only for the most difficult problems.
Did you know companies can integrate Amazon Alexa into their app to answer questions and help you?
Unfortunately, this is one of those areas where machine learning will create significant job loss, particularly in call centers. Expect to see customer service costs and jobs reduced significantly over the next three years.
4) Data Analyst
The data analyst, the person who combs through reports to find prescient data points, and then spits out reports for management will likely see their task automated in the very near future. Increasingly, reports will be offered by algorithms, which when steered and customized to a unique business, will eliminate much of the weekly dashboard testing.
Strong machine learning will also identify emergent trends before a human can, too. Lead scoring system Infer offers a great example of superior data analysis, identifying SQL opportunities well before the human-induced lead scoring algorithm used by most marketing automation systems.
A need remains to steer machine learning to source the right quality data points and ensure that algorithms continue to evolve and meet customer and business model changes. The more strategic data scientist(s) will be required for larger enterprises.
In small businesses, the marketing lead will need a deeper understanding of data science to remain relevant and functional. In essence, someone needs to guide and ensure that data analyzing bots are on point and at a minimum providing useful predictive information.
5) Digital Advertising
Digital advertising is becoming a game of bots. It should be no surprise as this is where the money is at. Google leads the AI ad market with its increasingly complex machine learning-based Google Ads solution that cross pollinates multi-channels to reach customers, even on their smartphones.
As machine learning-based advertising platforms mesh with automation systems and CRM databases, marketer interactions with those systems will increasingly revolve around management. Advertising bots will take inputted spend, target audiences and initial creative and then offer machine learning suggestions. There is a great need for ethics here, as evidenced by the Cambridge Analytica scandal.
In many ways the digital advertising manager will simply approve or correct these suggestions, and then allocate resources as necessary to fulfill them. In the near term, digital advertising agencies can get a leg up on customers by mastering the latest bots and algorithms.
These are just a few of the marketing roles that AI will impact over the next few years. There’s a bot to assist almost every marketing function available now.
But is worrying about job or role replacement the best way to approach the issue? Or should we dive in and embrace the tools provided to us?
Originally published on my marketing blog.
| Welcome to the Machine, Marketers | 50 | welcome-to-the-machine-marketers-12fa4fb88a77 | 2018-08-20 | 2018-08-20 07:40:38 | https://medium.com/s/story/welcome-to-the-machine-marketers-12fa4fb88a77 | false | 1,083 | Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. | null | datadriveninvestor | null | Data Driven Investor | datadriveninvestor | CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY | dd_invest | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Geoff Livingston | A digital marketer, author, and social fundraiser, I help launch products, enact corporate initiatives, and raise money. http://livingstoncampaigns.com | 61af0318eac5 | geofflivingston | 24 | 22 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 7f60cf5620c9 | 2017-09-07 | 2017-09-07 20:52:00 | 2017-09-20 | 2017-09-20 21:11:35 | 5 | false | en | 2018-09-12 | 2018-09-12 08:29:40 | 4 | 12fbdaf8b0b1 | 2.546541 | 6 | 0 | 0 | Did you ever wonder if your Brand acquisition is worth it? Are you willing to understand the incrementality of your brand keyword? | 5 | Inferring the effect of Brand Search using CausalImpact
Did you ever wonder if your Brand acquisition is worth it? Are you willing to understand the incrementality of your brand keyword?
# Just for fun
if (answer == "Yes") {
  print("this post is for you")
} else {
  print("continue going blind and trust your intuition with fingers crossed")
}
This post will explain the methodology for testing the incremental value of PPC on Brand Search.
Let’s take a step back and discuss the reasons why brand paid search is incremental:
You take more space in the first page by having paid and organic results
You generate more clicks & conversions
You can control the AdCopy, sitelinks etc etc.
In Travel or other competitive industries with high CPAs you might have competitors bidding on your brand (in this case you're screwed).
Wait, why don’t you use the adwords Paid & Organic report? https://support.google.com/adwords/answer/3097241?hl=en
If you have a Google Search Console account you can link it to your AdWords account and view the data directly under this report.
Issues with this report? Look at the average position that triggers for Search Result Type == Is organic search only
Despite the fact that I don't like viewing data without clearly understanding how the test is run, the average position output in my case is inaccurate. Don't get me wrong, we are talking about around half a million euros a year spent on Brand Search; for major brands this is not micro-optimization.
The best way to measure the uplift of Brand Search is a GEO experiment; this way you can create a solid predictor with a set of control time series not impacted by the intervention.
How does it work? I've designed an output diagram to make it more visual.
#For those of you who need a refresher or want to learn about causal effects
What's a GEO Experiment? With this approach you can isolate test & control groups by city.
Measuring Ad Effectiveness Using Geo Experiments
Advertisers want to be able to measure the effectiveness of their advertising. Many methods have been used to address…research.googleblog.com
#Github code
https://github.com/gustavobramao/data-sets/blob/master/GEO_Experiment
I've added my github project where I extract predictors that are correlated with the time series and use them as a synthetic control to isolate and compute the counterfactual.
Google's brilliant R CausalImpact library does the rest: it trains a machine learning model to estimate the counterfactual. :)
The difference between the observed outcome and the counterfactual (the potential outcome without the intervention) is the causal effect.
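To make the idea concrete, here is a minimal Python sketch of the synthetic-control intuition (this is not the CausalImpact package itself, which fits a Bayesian structural time-series model): fit the relationship between the control series and the test series on the pre-intervention period, project the counterfactual into the post period, and take the difference as the estimated effect. The series and numbers below are invented for illustration.
import numpy as np
# Weekly brand-search conversions: y = test geo, X = control geos (made-up numbers)
rng = np.random.default_rng(42)
X = rng.normal(100, 10, size=(30, 3))          # 30 weeks, 3 control geos
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 2, 30)
y[20:] += 15                                    # intervention effect from week 20 onward
pre, post = slice(0, 20), slice(20, 30)
# Fit the pre-period relationship control -> test (least squares with intercept)
A_pre = np.column_stack([np.ones(20), X[pre]])
beta, *_ = np.linalg.lstsq(A_pre, y[pre], rcond=None)
# Counterfactual: what the test geo would have done without the intervention
A_post = np.column_stack([np.ones(10), X[post]])
counterfactual = A_post @ beta
effect = y[post] - counterfactual
print("Estimated weekly lift:", effect.mean().round(2))
CausalImpact does essentially this with a Bayesian model, which also yields the probability used for the significance check discussed below.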
In this case I’ve added the Google’s original plot as example.
Not all tests are conclusive; you need to check the probability of obtaining this effect by chance. If the p-value is ≤ 0.05, then the causal effect can be considered statistically significant.
Cheers,
Gustavo.
| Inferring the effect of Brand Search using CausalImpact | 15 | inferring-the-effect-of-brand-search-using-casualimpact-12fbdaf8b0b1 | 2018-09-12 | 2018-09-12 08:29:40 | https://medium.com/s/story/inferring-the-effect-of-brand-search-using-casualimpact-12fbdaf8b0b1 | false | 454 | Sharing concepts, ideas, and codes. | towardsdatascience.com | towardsdatascience | null | Towards Data Science | null | towards-data-science | DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,ANALYTICS | TDataScience | Digital Marketing | digital-marketing | Digital Marketing | 59,312 | Gustavo Bramao | Passionate Performance Marketing Manager with a data driven mindset, driving projects focusing on user acquisition and incrementality measurement. | 10e39f83e8e5 | gustavobramao | 66 | 39 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 8d274e5ccd45 | 2018-06-18 | 2018-06-18 20:48:34 | 2018-06-18 | 2018-06-18 00:00:00 | 1 | false | en | 2018-06-18 | 2018-06-18 21:53:46 | 6 | 12fd0613a002 | 4.377358 | 3 | 0 | 1 | Next week I’m headed to Brussels to be confirmed as a member of the European Commission’s High-Level Expert Group on AI (you can check the… | 3 | Why Do We Need an Ethical Framework for AI?
Photo by Joakim Honkasalo on Unsplash
Next week I’m headed to Brussels to be confirmed as a member of the European Commission’s High-Level Expert Group on AI (you can check the link for the mandate and full list of members). Non-European representatives are incredibly rare occurrences in the EC’s HLGs and I’m honored to have been accepted as we’ll be acting as the steering committee for the European AI Alliance’s work. It is an incredible initiative to support broad collaboration across domains of expertise, industry, society and nationality. I applied to join because Europe has been setting the example for creating the much-needed frameworks for cooperation and regulation locally and as a bloc. It is critical that we do not overcomplicate these frameworks, and create something that the rest of the world can build on.
Ethics is a topic of conversation everywhere in the AI community. Many organizations are flaunting the ethical standards that they’ve created or revamped for their organizations trying to show that they are on the right side of history. But while it’s clear to most people machines should follow ethical rules, I don’t think we’ve done a good job of explaining the limitations of implementing those rules and why we still need to develop an ethical framework for machines. After all, don’t we already have ethical frameworks to use? Yes, we do, but for the behavior of people in society, not machines automating our world. A productive conversation about regulating AI will depend on us figuring out how we even translate our stated values, whatever they may be, into a language that machines can understand.
How we currently shape our ethics
As people, we are born into a framework, a training system that starts with our parents teaching us their values and shaping the fundamental structures for our behavior. After only just a few years of development we mix in another, broader set of instructions at school. There we are taught how to engage in social relationships, learning stories about what’s right and wrong — starting out as simple nursery rhymes and evolving into detailed histories of the ongoing debate of Right vs. Wrong.
Eventually our values are more or less set in stone and we become full adults responsible for applying them, though the training is not yet done. Our businesses and institutions impose long-standing agreements for how those values are applied in day-to-day life. Through codified rules and objectives, we have a long list of explicit ethics of what one should do as a citizen. But, also embedded throughout society we have checks and balances on behaviour — subtle cues or outright whistleblowing — that enforce implicit ethics we have not yet formalized.
We have this gray area because some things are still up for debate, whether the behavior is as old as time or newly possible thanks to technology. We don't always see how actions can accumulate harmfully or have carry-on effects that are bad for society. Thankfully, we have this robust system of checks and balances that keeps the debate going and acts as a guardrail against runaway behavior as we figure it out, an extension of the role of our parents and teachers. Our overall ethical framework as people is ultimately a dialogue; it is constantly evolving, updating with new generations of people and the continuing debate of Right vs. Wrong.
The void of an ethical framework for machines
When we create models of the world to automate tasks, we isolate those tasks from our framework of evolving values. We use AI to encode models of the world by training the machine on data. It’s very useful because it creates models we as people are not able to fully understand (otherwise we would have coded them ourselves). These models are becoming exponentially cheaper and more accurate, but also more complicated and less easy to understand as they continuously improve using feedback loops of more data. We cannot comprehend all the possibilities, and therefore cannot preemptively set all of the needed rules for its behavior.
This is OK, if we are able to set guardrails, but right now we don’t have those either. While the machine’s model of the world may capture the ethics from the moment that the training data was captured and the intent was set (consciously or unconsciously), it can run without any further dialogue and effectively operates in a void of any ethical framework. That is because the language of our ethical framework as people (social relationships, institutions, words) is not the same as the language the machine operates in (data).
If we want to apply the power of these tools to certain areas, we will need to introduce new levels of hygiene to our data, and even ethics as people. A hospital can perform incredible feats of healing, but requires a sterile environment to perform. We can perform great feats of societal cohesion with AI, but will need to practice good hygiene with our data, regularly scrubbing for bias or for behavior that will never do well to be automated. It is in engaging with the feedback loops of training data that we will be able to create levers to extend our ethical framework into the machine’s model.
We must extend our ethical dialogue as people to machines. It is by adding more and more of these touchpoints throughout the machines’ development and use that we can speak the same language and become sure they will respect our laws and values. This conversation is going to be very challenging both with the machines, but also amongst ourselves to determine how to build the new framework. It will force us to become more conclusive about some debates we’ve allowed to stay gray for too long.*
This beginning in Europe is encouraging, though. We are off to the right start by bringing to the table experts who have deep knowledge of our institutions, laws, social relationships and debates. As technologists, we will need to do our part to build the means of translation and not avoid the certain hard questions to come.
////
*While AI has lots of potential for automating harmful bias, it can also highlight it in a powerful way. Right now, the lack of explainability in algorithms used in the justice system prevents them rationalizing biased decisions, letting the pattern of bias speak more plainly. This has helped fuel the debate on overall bias in the justice system and put the breaks on the deployment of algorithms while these very difficult conversations (hopefully) get worked out.
Originally published at www.jfgagne.ai on June 18, 2018.
| Why Do We Need an Ethical Framework for AI? | 52 | why-do-we-need-an-ethical-framework-for-ai-12fd0613a002 | 2018-10-23 | 2018-10-23 17:51:19 | https://medium.com/s/story/why-do-we-need-an-ethical-framework-for-ai-12fd0613a002 | false | 1,107 | thinking out loud about making an AI-first world | null | element.ai2 | null | Element AI | null | element-ai | ARTIFICIAL INTELLIGENCE,DEEP LEARNING,MACHINE LEARNING,TECHNOLOGY,DATA SCIENCE | element_ai | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Jean-Francois Gagné | CEO @ElementAI | edbdbbf6c3ba | jfgagne | 106 | 106 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-02 | 2018-08-02 08:45:25 | 2018-08-02 | 2018-08-02 08:46:59 | 0 | false | en | 2018-08-02 | 2018-08-02 08:46:59 | 1 | 12fd7aded2cf | 0.777358 | 0 | 0 | 0 | In considerations about AI and its influence on business, the footings “predictive analytics” and “machine learning” are sometimes used… | 2 | Predictive Analytics vs. Machine Learning
In discussions about AI and its influence on business, the terms "predictive analytics" and "machine learning" are sometimes used interchangeably, which can be confusing. There is a close relationship between the two, but they are different concepts.
Predictive Analytics
Predictive Analytics is a form of advanced analytics that encompasses a variety of statistical techniques and uses machine learning algorithms to examine probable futures and to make estimates about upcoming trends, activity, and performance. It helps businesses examine the data they need to plan for the future, based on existing and historical situations. It is a field of study, not a specific technology, and it existed long before artificial intelligence.
Machine learning
Machine learning is a technology used to help computers evaluate a set of data and learn from the insights collected. Using various algorithms, an artificial neural network can be imitated, allowing machines to categorize, interpret, and comprehend data and then apply that understanding to solve problems or make forecasts. Hundreds of newly developed machine learning algorithms are used to generate high-end forecasts that drive real-time decisions with less dependence on human intervention.
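As a toy illustration of how the two fit together (all numbers invented): the model-fitting step is the machine learning part, and using the fitted model to estimate future performance is the predictive analytics part. The sketch below uses a simple least-squares trend fit in Python.
import numpy as np
# Four years of historical quarterly revenue (invented figures, in $k)
quarters = np.arange(16)
revenue = 200 + 12 * quarters + np.random.default_rng(0).normal(0, 8, 16)
# Learning step: fit a model to the historical data
slope, intercept = np.polyfit(quarters, revenue, deg=1)
# Prediction step: use the model to estimate future performance
next_quarters = np.arange(16, 20)
forecast = intercept + slope * next_quarters
print("Forecast for the next four quarters ($k):", forecast.round(1))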
Read More: https://www.knowledgenile.com/blogs/predictive-analytics-vs-machine-learning/
| Predictive Analytics vs. Machine Learning | 0 | predictive-analytics-vs-machine-learning-12fd7aded2cf | 2018-08-02 | 2018-08-02 08:46:59 | https://medium.com/s/story/predictive-analytics-vs-machine-learning-12fd7aded2cf | false | 206 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | adam smith | null | 39a4793dc8e4 | cwadamsmith | 3 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 9bb84158bd51 | 2018-04-30 | 2018-04-30 13:33:05 | 2018-04-30 | 2018-04-30 13:40:23 | 2 | false | lo | 2018-04-30 | 2018-04-30 13:43:04 | 2 | 12ff64641990 | 1.862579 | 1 | 0 | 0 | ບົດຄວາມນີ້ຈະເປັນບົດຄວາມທຳອິດໃນ series ຂອງ Machine Learning Crash Course ເຊິ່ງທ່ານຈະໄດ້ຮູ້ຈັກກັບຄຳວ່າ Machine Learning… | 2 | Machine Learning Crash Course: ML Terminology
This article is the first in the Machine Learning Crash Course series. In it you will get to know what Machine Learning is and the terminology used in this field.
Label
A label is the outcome, or the thing we want to predict; it is the variable y (the output) if we compare it to a linear regression equation. A label could be the future price of rice, the names of the animals in a picture, the meaning of a sound clip, and so on.
Features
Features are the x variables (the inputs), the variables we feed into the equation. Some simple Machine Learning tasks may use only a single feature, while more complex tasks may use many features, up to n of them.
{x_1, x_2, ..., x_N}
For example, building a model for filtering spam would require features such as:
The words used in the body of the email
The email address of the sender
The time at which the email was sent
Whether the email contains strange, scam-like phrases (e.g. "one weird trick")
Examples (data)
Examples, or data samples, are like the values we substitute for the variable x. We can divide examples into 2 types:
labeled examples (data whose label we have already specified)
unlabeled examples (data that has not yet been labeled, i.e. not yet categorized)
A labeled example includes both the feature(s) and the label, like this:
labeled examples: {features, label}: (x, y)
We use labeled examples to train our model. For the spam detector, the labeled examples come from users who have already marked emails for us as "spam" or "not spam".
For example, the following are 5 labeled examples from our data set, which contains information about house prices in the California area:
Unlabeled examples, in contrast, are data without a label. They look like this:
unlabeled examples: {feature, ?}: (x, ?)
For example, below are 3 unlabeled examples from the housing data set, which do not include medianHouseValue (the label):
After we have trained our model with labeled examples, we use that model to compute, or predict, the label for unlabeled examples. For the spam detector, the unlabeled examples are new emails that humans (our users) have not yet labeled.
Models
A model describes the relationship between the features and the label. For example, a spam detection model may tie certain features strongly to "spam". Let's look at the 2 terms that are commonly used when talking about models:
Training means creating, or learning, the model. Put simply, training means showing the model labeled examples and letting it learn the relationship between the features and the label.
Inference means applying the trained model to unlabeled examples. In other words, it means using the trained model to make useful predictions (y'). For example, during inference we can predict medianHouseValue for new unlabeled examples.
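To make the training and inference terminology concrete, here is a minimal Python sketch using scikit-learn; the feature values and labels below are invented stand-ins for the California housing examples mentioned above, not real data.
from sklearn.linear_model import LinearRegression
# Labeled examples: {features, label} -> (x, y)
features = [[8.87, 15.0], [7.65, 42.0], [5.00, 37.0], [2.41, 27.0]]  # [medianIncome, housingMedianAge]
labels   = [431.0, 433.8, 308.0, 176.5]                               # medianHouseValue (in $k)
# Training: learn the relationship between features and label
model = LinearRegression()
model.fit(features, labels)
# Inference: predict the label for unlabeled examples {features, ?} -> (x, ?)
unlabeled = [[6.10, 30.0]]
print("Predicted medianHouseValue:", model.predict(unlabeled))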
Regression vs. classification
Regression (in Lao math terminology, ສົມຜົນຖົດຖອຍ) covers several sub-types, such as linear regression, multiple linear regression, and logistic regression, which are popular in ML. A regression model is used when we want to understand the relationship between variables, and the value being predicted is continuous. For example, a regression model can answer questions like:
What is the value of a house in California?
What is the probability that a user will click on this ad?
A classification model is also a model, but it differs from regression in that the value it predicts is discrete. For example, a classification model can answer questions like:
Is a given email spam or not spam?
Is the image a dog, a cat, or a mouse?
Continuous data vs. Discrete data
What is the difference between continuous and discrete data?
Continuous data can take any value within its range, including fractional (decimal) numbers, and it can also include numerically encoded categories such as female/male or true/false. Models that use this type of data tend to be linear ones, for example finding the relationship between age and height, or other effects of an independent variable on a dependent variable.
Discrete data is data that is specific and cannot be split into fractions. For example, a count of 27 people cannot be split into 26.5 people. Models that use this type of data include logistic ones (true or false / 0 and 1), which in computing are called binary. For example: finding the probability that an image is a cat or not, or determining whether an accident was really caused by drunk driving.
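As a quick side-by-side sketch of the two model types (again with invented toy data): a regression model outputs a continuous value, while a classification model outputs a discrete class such as spam / not spam.
from sklearn.linear_model import LinearRegression, LogisticRegression
# Regression: continuous label (e.g. house value in $k) from a single feature
reg = LinearRegression().fit([[1], [2], [3], [4]], [150.0, 210.0, 260.0, 330.0])
print("Regression output:", reg.predict([[5]]))        # a continuous number
# Classification: discrete label (1 = spam, 0 = not spam), feature = count of "weird" words
clf = LogisticRegression().fit([[0], [1], [4], [7]], [0, 0, 1, 1])
print("Classification output:", clf.predict([[5]]))    # a discrete class, 0 or 1
print("Class probability:", clf.predict_proba([[5]]))  # probability of each class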
Machine Learning Crash Course: ML Terminology
This article was taken from the original at Kongphaly's Blog
| Machine Learning Crash Course: ML Terminology | 1 | ບົດ-ຄວາມນີ້ຈະ-ເປັນ-ບົດ-ຄວາມ-ທຳ-ອິດ-ໃນ-series-ຂອງ-machine-learning-crash-course-12ff64641990 | 2018-06-04 | 2018-06-04 10:56:42 | https://medium.com/s/story/ບົດ-ຄວາມນີ້ຈະ-ເປັນ-ບົດ-ຄວາມ-ທຳ-ອິດ-ໃນ-series-ຂອງ-machine-learning-crash-course-12ff64641990 | false | 392 | GDG Vientiane | null | gdgvientiane | null | GDG Vientiane | gdg-vientiane | GDG,GOOGLE,GOOGLE DEVELOPER GROUP,VIENTIANE | null | Gdg | gdg | Gdg | 258 | Douangtavanh Kongphaly | Co-organizer Google Developer Group Vientiane. Rookie mathematician who love coding and analyzing data. original from https://kongphaly.la | 907c7708b627 | douangtavanh | 7 | 0 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-06-22 | 2018-06-22 07:17:17 | 2018-06-28 | 2018-06-28 17:01:44 | 7 | false | en | 2018-06-28 | 2018-06-28 17:01:44 | 0 | 12ffd3505542 | 1.438679 | 1 | 0 | 0 | Binary Classification: | 3 | Logistic Regression as a Neural Network
Binary Classification:
Classifying whether an image is a cat (1) or not a cat (0).
To store an image, our computer stores three separate matrices, one for each of the red, green, and blue color channels. So if our input image is 64 pixels by 64 pixels, we have three 64-by-64 matrices.
We have to unroll all of these pixel values into an input feature vector x.
(The illustration uses 5x5 pixels to stand in for 64x64.)
It is treated as a vector
Logistic Regression:
Sigmoid function graph
The sigmoid function is introduced so that the output is a value between 0 and 1.
If z is a very large positive number, then the sigmoid function is close to 1.
If z is a very large negative number, then the sigmoid function is close to 0.
Gradient Descent:
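The post stops at the heading above, so here is a minimal NumPy sketch of the pieces just described: the sigmoid sigma(z) = 1 / (1 + e^(-z)) applied to z = w.x + b, followed by a few gradient descent updates on the cross-entropy loss. The tiny "images" are invented stand-ins for the unrolled 64x64x3 pixel vectors.
import numpy as np
def sigmoid(z):
    # Squashes any real z into (0, 1); large positive z -> close to 1, large negative z -> close to 0
    return 1.0 / (1.0 + np.exp(-z))
# Toy data: 4 "images" with 3 features each, labels cat = 1 / not cat = 0
X = np.array([[0.9, 0.1, 0.8], [0.8, 0.2, 0.9], [0.1, 0.9, 0.2], [0.2, 0.8, 0.1]])
y = np.array([1, 1, 0, 0])
w, b, lr = np.zeros(3), 0.0, 0.5
m = len(y)
for _ in range(1000):
    y_hat = sigmoid(X @ w + b)        # forward pass: predictions in (0, 1)
    dw = X.T @ (y_hat - y) / m        # gradient of the cross-entropy loss w.r.t. w
    db = np.mean(y_hat - y)           # gradient w.r.t. b
    w -= lr * dw                      # gradient descent update
    b -= lr * db
print("Predicted P(cat):", sigmoid(X @ w + b).round(3))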
| Logistic Regression as a Neural Network | 1 | logistic-regression-as-a-neural-network-12ffd3505542 | 2018-07-05 | 2018-07-05 20:49:34 | https://medium.com/s/story/logistic-regression-as-a-neural-network-12ffd3505542 | false | 103 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Anurag Lahon | null | 8f2caa9ab54a | anuraglahonmba | 1 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | $ npm install ngentest -g # to run this command anywhere
$ ngentest my.component.ts # node_modules/.bin/gentest
$ ngentest my.component.ts # node_modules/.bin/gentest
$ ngentest my.directive.ts -s # output to my.directive.spec.ts
$ ngentest my.pipe.ts > my.pipe.test.ts
$ ngentest my.service.ts
it('should create a component', async(() => {
expect(component).toBeTruthy();
}));
it('should run #ngOnInit()', async(() => {
// ngOnInit();
}));
it('should run #handleIntersect()', async(() => {
// handleIntersect(entries, observer);
}));
it('should run #defaultInviewHandler()', async(() => {
// const result = defaultInviewHandler(entry);
}));
$ npm install ngentest -g # to run this command anywhere
$ ngentest my.component.ts
| 4 | 21cfd2417ebb | 2018-05-12 | 2018-05-12 19:00:17 | 2018-05-12 | 2018-05-12 20:01:20 | 3 | false | en | 2018-05-23 | 2018-05-23 19:07:23 | 4 | 1300601ed73 | 3.010377 | 30 | 0 | 0 | How much do you like to write unit tests from the scratch? My guess is “not so much”. The reason behind this guess is that writing the… | 5 | Generate Angular Unit Test Automatically
How much do you like to write unit tests from the scratch? My guess is “not so much”. The reason behind this guess is that writing the first successful unit test is just a copy/paste/modification job, and it’s not a pleasant task.
What if there is a command that can do this job(copy/paste/modification)?
TL;DR; Yes, there is a command.
Assuming you coded your component, and you want to write a test,
This will output your auto-generated unit test, and it’s guaranteed to work.
Test requires more coding, and it’s true :(
Writing a component is one thing and testing it is another. To make your test work, you need to know and code the following:
Import statements
Classes to mock
Window objects to mock
Create TestBed and configure it before each tests
Setup providers using mocks
Etc
The above is just for a component. When it comes to a directive it requires even more for you to code;
Create a component to test a directive
Get the directive instance from the component
Setting up @inputs and @outputs to test
and more
It already makes you tired before you write the test. You have been doing this job by copying, pasting and modifying your code, and I am sure you don't like it. If there were an AI program, I would ask the AI program to do this job rather than write it by myself.
Generate the first unit test automatically
There is no reason not to automate the copying/pasting/modifying of the test code for each type: components, directives, services and pipes. It's just a matter of:
Parsing a Typescript file and find out the proper file type.
Building data for unit test from the parsed Typescript.
className
imports
input/output attributes and properties
mocks
providers for TestBed
list of functions to test
3. Finally, generating unit test from a template
That’s how the Angular Test Generator has been created. https://github.com/allenhwkim/ngentest.
By running a ngentest command, more than half of your test will be done. The only thing you have left is to complete each function tests.
As you see from the generated code example, the constructor test is complete, but the function tests for the class are not done, and I don't think even an AI can do this properly (one day, maybe?). You still need to complete each function test to bring the code coverage to 100%.
Sorry, it does not fully generate ALL of your unit test. The good thing is it generates more than half of your unit test. However, even with this small help, you can save a lot of time. These are the numbers of lines generated for the unit tests of each type for me.
89 out of 137 lines for my component
90 out of 139 lines for my directive
25 out of 43 lines for my service
12 out of 20 lines for my pipe
In general, with ngentest, you can write more than half of your unit test.
Conclusion
You can find the code here at Github, https://github.com/allenhwkim/ngentest
To try this, all you need to do is to install and run ngentest with your file.
Happy Coding :)
May 12th, 2018
Allen Kim
Do you think this useful? If so please;
* Clap 👏 button below️ to let others to see this too.
* Follow Allen on Twitter (@allenhwkim)!
| Generate Angular Unit Test Automatically | 139 | generate-angular-unit-tests-automatically-1300601ed73 | 2018-06-14 | 2018-06-14 12:30:15 | https://medium.com/s/story/generate-angular-unit-tests-automatically-1300601ed73 | false | 652 | Allen Is Here | null | hongwan.kim | null | Allen Kim | allenhwkim | null | allenhwkim | Unit Testing | unit-testing | Unit Testing | 1,554 | Allen Kim | Front-End Engineer/Tech-Lead | 2076d29c7066 | allenhwkim | 83 | 30 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-10-30 | 2017-10-30 19:22:32 | 2017-10-31 | 2017-10-31 13:49:19 | 1 | false | zh-Hant | 2017-10-31 | 2017-10-31 13:49:19 | 0 | 13006bbe5915 | 0.29434 | 3 | 0 | 0 | 最近觀看AlphaGo Zero完虐Master的棋譜,坦白說,實在有些被AI發展及進步的速度所嚇到。感覺以前對AI的思考,已經不是這麼遙不可及了,也就決心把自己的思想寫下來。當中也有一部分談到自己對靈魂的想像和思考,可能比AI的討論更加激烈。 | 5 | 研發如人類般思考的AI是好是壞?
Recently, watching the game records of AlphaGo Zero crushing Master, I have to admit I was somewhat shocked by the speed at which AI is developing and improving. My earlier thoughts about AI no longer feel so far-fetched, so I decided to write them down. Part of this also touches on my own imagination and thinking about the soul, which may be even more provocative than the discussion of AI.
If we create an AI that can think like a human, it may well also possess a human-like personality and its own thoughts. Can we, and should we, fully control a thinking being with a personality? Can we, and should we, make it serve humanity unconditionally?
A thinker that can think like a human, whose computational power and command of information far exceed ours, and which is not under human control, would of course become a threat to humanity. The situation is analogous to how humans, once they evolved to a certain point and reached a certain level of capability, wiped out most of the large mammals wherever they migrated, whether Australia or the Americas, and ultimately became the rulers of the Earth.
Humans dominate the Earth not because we have the strongest bodies, but because of our intelligence. The appearance on Earth of an AI whose intelligence far surpasses ours would be like the arrival of aliens whose technology far surpasses our own:
Humans would no longer be the rulers of the Earth, no longer sit at the top of the food chain.
It would not serve humanity, yet would pose an enormous threat to it. That so many people resist the development of AI is entirely reasonable.
Some may object: our long-standing image of computers is that they have no emotions and are controlled by humans. Can't we simply embed commands in the AI's program to obey humans, or to serve humanity?
Moreover, since computer programs have no emotions, even if their mode of thinking resembles that of humans, would they necessarily develop a personality of their own?
As for the first objection, it is hard to rule out the possibility, which is why AI research has never stopped. However, if we want computers to think like humans, we must necessarily give them a great deal of freedom. If a program's thinking is constrained at every turn, then, leaving aside the fact that such an AI would lose its reason for existing, its thinking ability would also be hard to improve. Of course, constraint and freedom are usually relative rather than absolute extremes. Could we, as in the movies, allow a computer to think freely while setting a few inviolable laws (such as never harming humans)? That brings us to the question of personality.
If an AI has its own personality, its own consciousness, its own will, would it still unconditionally obey human commands and unconditionally serve humanity? Some will object: a computer program is, after all, just a program; however intelligent it becomes, would it really develop consciousness?
For those who firmly believe in the soul, no matter how intelligent a computer becomes, it has no "soul" and therefore cannot possibly have consciousness of its own. However, among people who study neuroscience, few still believe in the existence of the soul, because modern science has been progressively identifying the mental activities that each brain region is responsible for.
The book "Who Is Me" (《誰是我》) gives many examples. The fusiform face area is responsible for face recognition: if this region is damaged, we have difficulty recognizing faces, and if its connection to the limbic system, which handles emotional responses, is disrupted, we can still recognize faces and know who is who, yet fail to produce the corresponding emotions. Likewise, if the connection between the brain regions responsible for sensory perception and the limbic system is damaged, we may know we are injured and feel pain, yet not care at all. The most basic perceptions, such as smell and hearing, also have their own corresponding brain regions. If we keep investigating, the secrets of the brain will gradually be revealed.
Since the workings of the brain can already explain our mental activities, including emotions, thinking, perception, and memory, while the soul can explain none of them, the existence of the soul is hard to defend. Most neuroscientists today lean toward the view that the brain simply is the mind.
Let me make a simple argument: if the soul could think, hear, speak, and see, then our eyes, ears, mouth, nose, and brain would be redundant. Conversely, since we need these sense organs and the brain to have those abilities, the soul itself presumably lacks them. Can you imagine such a soul? It cannot see, cannot hear, cannot speak, and, most importantly, cannot think. Even if it did think, without memory cells it could not remember what it was thinking a second ago. The existence of such a soul is plainly absurd.
True, scientists have not yet explained the principle of consciousness, but consciousness also originates in the brain. As the author of "A Brief History of the Brain" (《大腦簡史》) puts it: "In fact, we only see the world indirectly. Our various sensory and perceptual experiences are entirely products of the brain. What we actually 'touch' is only the brain's virtual copy of the world."
Scientists have found that if one of a subject's eyes is shown a strongly flickering pattern while the other eye is shown a static, unchanging pattern, the brain, because the flickering pattern is so eye-catching, mainly processes the flickering pattern and temporarily suppresses the static one, so that we do not see it. In this situation the subject sees only the flickering pattern.
However, the fact that we are not conscious of the static pattern does not mean the brain has not received its information. How do we know? The more salient the static pattern, the faster it breaks through the "suppression" of the flickering pattern and the sooner we become aware of it. Scientists found that if the static pattern shows a grammatically incorrect sentence, subjects become aware of it faster than if it shows a grammatically correct one. This means that even when we are not conscious of the sentence, the brain still processes the information and judges that the sentence is ungrammatical.
On the nature of consciousness, we must also mention the astonishing experiment published by neurophysiologist Benjamin Libet in 1983. He asked subjects to freely decide when to raise a hand and to report, using a clock, the moment the "intention to raise the hand" appeared. Remarkably, about a second before that intention appeared, the relevant neural changes had already occurred in the brain. The experiment suggests that, rather than our conscious will deciding when to raise the hand, the brain has already decided to raise it and then generates the "intention to raise the hand" so that we can become aware of this bodily change.
After all this neuroscience, the point is simply this: thinking, consciousness, emotion, memory, and other mental activities are all products of the brain and have nothing to do with the soul. Therefore an AI without a soul could likewise develop a personality. Moreover, today's AI research already draws on methods that simulate the human brain; once that simulation reaches a certain level, it is by no means fantasy for an AI to develop a personality of its own.
How fast intelligence is evolving today can be seen from AlphaGo Zero alone. In a few more years, AI may already be more intelligent than anyone. Manipulating an intelligence several levels above your own is like a child scheming in front of an adult: its reasoning is completely transparent to the adult. Believing you have the ability to control such an intelligence is as dangerous as a child playing with fire.
Reality, after all, is not a movie, and whether intelligent machines will escape human control remains unknown. But the threat of an AI rebellion absolutely must not be ignored; otherwise, even if the movie plots never come true, humans may well become just another species eliminated by history.
| 研發如人類般思考的AI是好是壞? | 9 | 研發如人類般思考的ai是好是壞-13006bbe5915 | 2018-03-12 | 2018-03-12 07:51:16 | https://medium.com/s/story/研發如人類般思考的ai是好是壞-13006bbe5915 | false | 25 | null | null | null | null | null | null | null | null | null | 人工智能 | 人工智能 | 人工智能 | 114 | Tony Wong | 倘心以誠,不吐不快。 破除迷思,求道求真。 不以物喜,不為己悲。 我不適合這個世界,這個世界不適合我。 | 142b76436a87 | tonywong_54100 | 65 | 56 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-02 | 2018-07-02 14:58:01 | 2018-07-03 | 2018-07-03 03:38:19 | 3 | false | es | 2018-07-03 | 2018-07-03 03:44:04 | 7 | 1307e962bad6 | 6.572642 | 1 | 0 | 0 | The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.
– Alan Turing | 4 | Profe, ¿ Y cómo paso la prueba de Turing?
The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.
– Alan Turing
Although the term Artificial Intelligence, or A.I., seems like a topic of our century, it has been worked on since the last century, starting with the Dartmouth workshop in 1956, and it was imagined even before that, mostly in works of fiction such as Mary Shelley's "Frankenstein", to name one example.
But if people have been working on and thinking about this branch of computer science for more than half a century, some questions arise: What exactly is A.I.? Why is it only now that the topic is generating so much noise? And the million-dollar question created by Hollywood: should I be worried about it taking my job? Or, even more alarming, will machines take over the world like the robots in Terminator?
...Before you get ahead of yourselves, let's go question by question.
What is A.I.?
Artificial Intelligence is the branch of computer science focused on getting a computer to do things that require human intelligence. In the famous book "Artificial Intelligence: A Modern Approach" by Stuart J. Russell and Peter Norvig, the authors define artificial intelligence along four separate approaches, shown in the table below:
Thinking Humanly
To think like a human, you need to know how a human actually thinks. This is where cognitive science comes in, combining computer models with techniques from psychology so the computer can observe a person and see how they think.
Thinking Rationally
This focuses on the logical side of humans and includes the ideas of Greek philosophers such as Aristotle. Thoughts like:
"Juan is a man; all men are mortal; therefore, Juan is mortal."
The A.I. is expected to think rationally, with logic, according to the theory that has been developed since Ancient Greece.
Acting Humanly
This includes the Turing Test, devised by Alan Turing and designed to determine whether the one taking it is a computer or a human. It involves:
Natural language processing, so it can communicate with a human through some language
Knowledge representation, storing information that the computer has heard or processed
Reasoning, using information to solve problems and draw conclusions from that data
Learning, or "Machine Learning", adapting to circumstances based on previous experience
Vision, perceiving objects
Robotics, manipulating and moving objects
Acting Rationally
An "agent" is something that acts, and a "rational agent" is one that acts to achieve the best outcome. The computer is expected to act this way and to "do the right thing" according to its programming.
With these four approaches we can get an idea of what Artificial Intelligence is. The one that has been generating the most noise lately is acting like a human. Companies such as Google, Amazon, and Facebook have been focusing on several sub-categories of artificial intelligence, such as Machine Learning, natural language processing, and knowledge representation. Examples include Google's recent announcement of Duplex, an assistant created to make your reservations easier.
The topic of artificial intelligence is gradually emerging inside major companies, but the important question is: why is it only now that people are starting to talk about it?
Why now?
The arrival of A.I. is a multifactorial matter; here we will discuss two factors considered among the most important for the "why now": humanity's advances in computing and in biology.
In his book "The Singularity is Near", Ray Kurzweil talks about Gordon Moore, co-founder of Intel Corporation, and his law, which came to be known as "Moore's Law". It describes how every year the number of transistors on a circuit doubled while its cost stayed fixed. This meant much better performance and cheaper prices for circuits.
Fig 1.1
As you can see in the image (Fig 1.1), this produces an exponential effect in computing power and brings us ever closer to the processing power of a human brain.
It is not only the advances in computing power that have gotten society talking about A.I.; we also have to consider the advances in research in biotechnology. Every year we learn something new about how the human brain works, and that is, in a way, what we are trying to recreate when we talk about A.I. Biological as well as technological evolution has allowed us to suppose, and to say, that A.I. is practically around the corner.
The next two questions are mere speculation, since there is still no certain answer about what will happen. But here is an opinion based on the information we have at the moment.
Will A.I. take my job?
This is one of the most frequent questions that comes up when talking about advances in Artificial Intelligence. Imagine a situation in which a computer works 24/7 with minimal errors; not only that, it learns at much faster speeds and is aware of the mistakes it makes in its work and corrects them. Of course, when it is painted that way, fear arises in people. Remember that today a very large number of us already depend on a computer (our smartphones) to get through the day. The collaboration of humans and machines makes me imagine a future in which jobs are done together with Artificial Intelligence. An example happening today is that of the chess players called "Centaurs": human players collaborating with a very high-powered computer. You get the creativity and intuition of humans together with the calculation ability and speed of a computer, a combination that can beat anyone.
Another important point when talking about job losses due to the arrival of A.I. is that although many jobs may become obsolete, others will be born out of nowhere. Let's not forget the jobs that became obsolete with the arrival of technology; for example, the job of typesetter was replaced by word-processing software such as Microsoft Word. As technological advances arrive, many jobs cease to exist, but at the same time new jobs are created based on the needs of the moment.
Should I be afraid of A.I.?
Going deeper, there are several sub-categories of artificial intelligence depending on the degree or caliber of the program. These are ANI (Artificial Narrow Intelligence, or weak A.I.), AGI (Artificial General Intelligence, or strong A.I.), and ASI (Artificial Super Intelligence). Today companies have managed to create ANI; for example, Google's search engine or Apple's Siri are A.I.s focused on a single job. The other two (AGI and ASI) have not been reached yet and are what researchers are trying to reach today. These two types of A.I. are the ones Hollywood has recreated in movies such as 2001: A Space Odyssey, the Terminator series, and I, Robot. Avoiding those scenarios depends largely on us; first of all, we need to look at where, and from whom, the Artificial Intelligence receives its information. One example is Tay.ai, an A.I. bot from the company Microsoft that learned from the tweets people sent it. The bot took only a day to become racist thanks to the comments Twitter users left it. We need to be conscious of the information we spread on the internet, because in the future a computer may make decisions based on it.
In his book "Superintelligence", Nick Bostrom talks about ASI, a super-entity so intelligent that it will surpass humans in ways we cannot comprehend, because we are not smart enough to imagine such things. That is why we must start discussing norms and regulations for the creation of this kind of computer and prevent the disasters that could result.
And why should I care about all this?
The topic of Artificial Intelligence is here to stay. Although some experts on the subject say there is a 50% probability it will happen within 20 years, other experts say it will never arrive. The applications it can have in your personal or work life are immense. Little by little, companies are starting to ask themselves how they will manage to implement A.I. in their processes. It is important to keep up with what is happening in the field, because these are the kinds of projects that change and transform humanity in ways we cannot even imagine.
REFERENCES:
Ray Kurzweil — The Singularity is Near
Stuart J. Russell y Peter Norvig — Artificial Intelligence: A Modern Approach
Nick Bostrom — Superintelligence
| Profe, ¿ Y cómo paso la prueba de Turing? | 14 | profe-y-cómo-paso-la-prueba-de-turing-1307e962bad6 | 2018-07-03 | 2018-07-03 03:44:04 | https://medium.com/s/story/profe-y-cómo-paso-la-prueba-de-turing-1307e962bad6 | false | 1,596 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Juan Carlos Garza R | Software Engineer, Passion for tech , startups, and startup acceleration | 954c628211b8 | juancarlosgarzar | 49 | 116 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-25 | 2018-09-25 14:35:55 | 2018-09-25 | 2018-09-25 14:51:19 | 2 | false | en | 2018-09-25 | 2018-09-25 14:51:19 | 8 | 130e5475529d | 1.409748 | 0 | 0 | 0 | An article published in the Harvard Business Review in October 2012 has attracted the attention of the world. | 2 | #bigdata 17e — Data Scientists and Big Data
An article published in the Harvard Business Review in October 2012 has attracted the attention of the world.
The title of the article was “Data Scientist: The Sexiest Job of the 21st Century.”
(credits pixabay)
It was signed by Thomas H. Davenport of MIT and D.J. Patil; the latter coined the term "Data Scientist" and became the first "US Chief Data Scientist" to be officially hired by the Obama administration.
The value of Big Data lies in developing insights about data, and it comes from talented people who can ask intelligent questions and respond using Data Analysis.
In this scenario, the Data Scientist emerges as a new type of professional, still taking shape, whom schools and universities have difficulty graduating into the market.
In summary, "data-driven companies" are companies that use data efficiently, guided by insights generated by professionals called Data Scientists.
The skills of a Data Scientist involve knowledge of programming, statistics, mathematics, machine learning, data wrangling, visualization, communication, and software engineering, among others, and perhaps the most important of all, what we call "intuition" for solving problems.
The Data Scientist must also have in-depth knowledge of the domain they work in, called "domain knowledge": the specific knowledge about a particular area of human activity.
For example, a Data Scientist working with genetic mapping should have exceptional knowledge of biology as well as math, statistics, and programming.
More information about this article
Article selected from the eBook “Big Data for Executives and Market Professionals.”
eBook in English: Amazon or Apple Store (free)
eBook in Portuguese: Amazon or Apple Store (free)
eBook Web Sites: Portuguese | English
"Big Data for Executives and Market Professionals"
| #bigdata 17e — Data Scientists and Big Data | 0 | bigdata-17e-data-scientists-and-big-data-130e5475529d | 2018-09-25 | 2018-09-25 14:51:19 | https://medium.com/s/story/bigdata-17e-data-scientists-and-big-data-130e5475529d | false | 272 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Jose Antonio Ribeiro Neto | Big Data Analytics Researcher. Xnewdata Predictive Analytics Consultant. Big Data, Data Science Author. Coursera Mentor Big Data UCSD San Diego. | 8b62e5eb7eda | joseantonio11 | 116 | 706 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-20 | 2018-09-20 10:53:20 | 2018-09-19 | 2018-09-19 10:39:51 | 1 | false | en | 2018-09-20 | 2018-09-20 10:53:36 | 1 | 130f5695bd23 | 3.879245 | 0 | 0 | 0 | With the current technology trends, one thing we can state is that we are entering into the 4th Industrial Revolution now. As we know ‘The… | 1 | Top Technology Trends That Will Rule In 2018
With the current technology trends, one thing we can state is that we are now entering the 4th Industrial Revolution. As we know, "The Year of Intelligence" brought many new trends in technology, and since then there has been significant progress and change. From overhyped ICOs to algorithms that create their own secret languages, technology trends keep changing every year, providing better and better performance.
The companies that have been providing and promising us better technology in our daily lives keep improving and are now trying to reach a point of maturity. The latest trends in information technology change society, how we work and how we live. But this is not over yet; technology will keep affecting our lives in very different ways by providing ever better services.
But as tech trends are always improving, you will find that we have not yet reached a full state of maturity. However, looking at the latest technology trends, you can say that we have left the Information Technology Revolution of the 1970s behind us and have succeeded in inventing new trends each year. You will find many tech trends that have changed the landscape of the technology field, and you will realize that Information Technology is capable of great things.
Since there is no end to these inventions, let us look at the top tech trends that rule, or will rule, in 2018:
● Artificial Intelligence keeps on mesmerizing-
You may have heard of AlphaGo Zero, the Go program that became the strongest player, better than any previous artificial player, within a span of roughly 50 days, and it did so without any game data input by humans. It is a striking reminder that artificial intelligence can transform everything and go mainstream.
Artificial Intelligence is the ability of machines to learn and to leverage data, software and the cloud to work at speeds far beyond human capability. For a simple understanding, voice commands and facial recognition are considered AI technologies. AI is, in short, the concept of letting machines do a job better than a human can.
● ICO Hype will slow down, and blockchain will mature-
Blockchain is a technology where people can write new entries into a shared record of information, and a community of users controls how that record is updated. Because protecting consumers and catching criminals is important, governments will increase regulation of ICOs. As a result, the ICO hype will fade, and attention will shift to the underlying innovation of blockchain itself. But for blockchain to become as mainstream as ICOs have been, it first has to become as pervasive as the internet. So in 2018 you will see many new blockchain applications built, with consumer security in mind.
● A solution will be introduced to secure our privacy-
As we hand our data to more and more websites and services, the chance of security breaches grows with every byte created. To address this, the information technology world has come up with a solution.
Startups riding the blockchain hype have been working on a technology called Zero-Knowledge Proof (ZKP). A ZKP is a process that lets one party prove ownership of data, or the truth of a statement about it, without revealing the underlying information itself. It uses mathematics to enable trustless transactions that safeguard the user's privacy. ZKPs have improved verification processes to the point where you can prove to another party that a claim is true without letting any detailed information leak out.
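To make the idea slightly more concrete, here is a toy sketch of a Schnorr-style interactive proof of knowledge, one classic building block behind zero-knowledge protocols. The numbers are tiny and purely illustrative, not secure, and the ZKP systems used in real blockchain products are far more sophisticated.
import random
p = 23            # small prime modulus (illustrative only, NOT secure)
q = 11            # order of the subgroup generated by g (p = 2q + 1)
g = 4             # generator of that subgroup: pow(4, 11, 23) == 1
x = 7             # the prover's secret
y = pow(g, x, p)  # public value; the prover claims "I know x such that g^x = y (mod p)"
r = random.randrange(q)   # prover picks a random nonce...
t = pow(g, r, p)          # ...and sends the commitment t
c = random.randrange(q)   # verifier replies with a random challenge
s = (r + c * x) % q       # prover answers with s, which blinds x with r
# The verifier checks the relation without ever learning x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Claim verified without revealing the secret x.")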
● Data will be secured on the Horizon with a new approach-
Data helps you in two ways. First, owning information about individuals is valuable in itself and can be monetized. Second, data gives you the insight to make smarter, better decisions.
From 2018 onwards, almost everything will exist in data form, so there will be plenty of opportunity in collecting, storing, securing and sharing data. This will make marketing more personalized, and the companies that make consumers feel secure about the service they provide will connect with them better and capture more market share.
At the same time, the intelligence we can extract keeps increasing even as the amount of data required decreases, which is great news for entrepreneurs and small businesses that need data to unlock the insights required to grow.
● Wireless and 5G-
The world has embraced 4G, because speed matters to everyone who wants work and transactions done faster. Now 5G is on the way, promising speeds up to 100 times faster than 4G. This will not only boost raw speed but also fuel the artificial intelligence, Internet of Things and big data workloads that automate our daily experience. The rise of wireless gadgets adds to this, making devices easy to carry and handle without worry.
These are the top five technologies among the recent trends in information technology. They will not only define the year but also create a platform for inventing even better technologies.
Originally published at blog.imarasoft.net on September 19, 2018.
| Top Technology Trends That Will Rule In 2018 | 0 | top-technology-trends-that-will-rule-in-2018-130f5695bd23 | 2018-09-20 | 2018-09-20 10:53:36 | https://medium.com/s/story/top-technology-trends-that-will-rule-in-2018-130f5695bd23 | false | 975 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Rinsad Ahmed | null | 5e9c924dbdd3 | rinsadahmed | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 15168b00a53c | 2018-05-24 | 2018-05-24 10:04:04 | 2018-05-25 | 2018-05-25 14:45:46 | 9 | false | en | 2018-07-30 | 2018-07-30 10:03:08 | 6 | 131240e78bea | 8.075472 | 2 | 0 | 0 | In our society, technology is trapped in a never-ending state of flux, a forward acceleration thats driven by innovation, widespread access… | 5 | From Doodle to Digital: Extracting Features from Hand Drawn Maps
In our society, technology is trapped in a never-ending state of flux, a forward acceleration that's driven by innovation, widespread access and advancements in hardware. As a consequence, businesses are active participants in an ongoing game of cat and mouse, adapting their business processes to take advantage of new technologies as they try to keep up. This transition (from old to new) brings a series of challenges, particularly around data attached to legacy systems: valuable information that needs to be integrated into modern (digital) processes.
Kainos has vast experience with these challenges, especially with regard to digital transformation for large organisations (including the UK government!). On occasion, customers approach Kainos with a problem that lives outside the reach of our traditional skill set and often requires a cutting-edge solution. This is where the innovation capability comes into play, and it is an area where the team adds significant value to Kainos. Let's dive into an example use case, in which the innovation team leveraged computer vision to extract features from hand drawn maps for a customer (hereby denoted as 'CX').
It All Started with a Pencil and a Few Crayons
Let's jump back a few decades, to the old days when all the information associated with an organisation was collected entirely on paper, written manually by humans and stored in multiple large filing cabinets. Eventually the digital revolution began to make a big splash, which led businesses to start the slow (and often still ongoing!) journey of digitising their written information for storage on servers.
CX also found themselves smack bang in the middle of this transition. In the past they captured property information (land ownership etc.) by dispatching a case worker to visit a location, equipped with a pencil and crayons, to draw a birds-eye view of the land (while also writing up supporting documentation). The output looked something like this…
Examples of ‘Hand Drawn’ Maps
These are what's known as title plans: scanned drawings that represent a piece of land visually. The anatomy of a title plan is typically made up of colour shading, pencil outlines, a north-facing arrow and other textual identifiers. Accompanying the visual features of title plans is a set of interesting challenges, which wrap a series of constraints around any potential solution.
There is no consistent key mapping each colour to a definition; the description of what each shading represents is contained in another, distinct document. This means a case worker has to manually read the entire associated text to determine what each of the coloured areas represents. Just imagine how tedious this is! 😔
The modern digital mapping system contains additional information that is complementary to the hand drawn maps, which means the hand drawn features need to be overlaid onto it in a consistent manner 😬
Let's not get too down in the dumps… at least the red border is consistent on every title plan 😏
Okay, so there are a few interesting challenges, but we felt these were barriers we could overcome! To summarise, the current business process for a case worker attempting to find all the information for a piece of land looks a little something like this:
Example work flow for a case worker
This is not particularly efficient, however this understanding helps define a few core goals that we felt would prove most valuable for CX, including:
Extract the coloured ‘features’ from the scanned title plans and overlay onto the modern digital maps used by CX.
Enable easy searching of the associated textual document for the mapping colour key for each title plan.
Cool, so how do we actually achieve these goals?
Let's Bring Computer Vision to the Party
Just in case you have never encountered this concept before, let's start with a handy dandy definition:
Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images
Let's step back: basically, what this entails is the extraction of useful information from images, which in our case includes the shaded areas of the scanned map. Within the innovation capability at Kainos we have extensive experience with computer vision tasks, and using this experience we put together a plan to achieve our goals. So what did our solution look like?
Fun Fact: CX posed this challenge to several competitors to see who could solve it most effectively… As you would expect, this got our competitive juices flowing.
Putting Our Money Where Our Mouth is
We chose to put our faith in the granddaddy of computer vision tooling: OpenCV. OpenCV remains one of the best choices when taking on a vision task due to its robustness, its integration with multiple languages (we chose Python), its community support and the sheer volume of options available behind easy-to-use abstraction layers.
1. Image Extraction
The original data was a scanned PDF containing the hand drawn map plus supplementary machine text with details like the title plan ID, among other things. To extract the element we wanted (the map) we wrote a script to parse the PDF and output the images contained within it. Phew, step one complete.
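The exact script isn't included in this post, but a minimal sketch of that step might look like the following, here using the PyMuPDF library (an assumption on our part; any PDF library with image extraction would do, and the file names are hypothetical).
import fitz  # PyMuPDF
def extract_images(pdf_path, output_prefix="title_plan"):
    # Pull every embedded image out of the scanned PDF and save it to disk.
    doc = fitz.open(pdf_path)
    count = 0
    for page in doc:
        for image_info in page.get_images(full=True):
            xref = image_info[0]                 # reference to the embedded image object
            image = doc.extract_image(xref)      # raw bytes plus the file extension
            filename = f"{output_prefix}_{count}.{image['ext']}"
            with open(filename, "wb") as handle:
                handle.write(image["image"])
            count += 1
    return count
print(extract_images("scanned_title_plan.pdf"), "images extracted")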
2. Feature Extraction (Aka where the magic happens)
At this step we had in our possession a dataset of hand drawn maps; now we had to extract the areas of interest, and this is where OpenCV comes into play. We will use the image below as an example to demonstrate how we applied image processing to manipulate the images.
The ‘base’ image prior to any image processing
Step one, Binarization: We first had to convert the base image into what is known as a binary representation — essentially transforming the image to only show black and white pixels, where white are the ‘features’ of interest and black is everything else.
However, we encountered an unexpected challenge: not all of the hand drawn maps in our dataset had a pure white background. Given that these images were essentially scanned from paper, we discovered that the paper could be discoloured (taking on a muted yellow appearance). This ruled out a brute-force strategy that simply removed the black pixels (the pencil lines) and the white pixels (the background); we needed something a little more intelligent.
To overcome this challenge we converted our images to the HSV colour space (rather than the traditional RGB colour space), which let us target the saturation of the image much more effectively. We could then use the mean saturation of the image to create a mask that filters out any pixels not within the colour ranges where green, red, bright yellow and blue reside, i.e. the features we want to extract.
We then applied this mask to our original image, which yielded this as the result…
The intermediate binary image
Step two, colour retrieval: We could perform a ‘bitwise and’ operation using the mask & the original image to extract a perfect cut out of the areas of interest, only this time with the colour information intact.
Success! I think that is pretty good feature extraction 😄
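A minimal OpenCV sketch of the two steps above might look like this; the file names and the morphological clean-up are our own assumptions, and the team's actual pipeline will differ in detail.
import cv2
import numpy as np
image = cv2.imread("title_plan.png")                  # the scanned base image (BGR)
# Step one: build a binary mask from the saturation channel in HSV space.
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
saturation = hsv[:, :, 1]
# Pencil lines and the (possibly discoloured) paper sit below the mean
# saturation, while the coloured shading sits above it.
_, mask = cv2.threshold(saturation, saturation.mean(), 255, cv2.THRESH_BINARY)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))  # remove speckle
# Step two: keep the original colour only where the mask is set.
features = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite("extracted_features.png", features)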
3. Digital Map Overlay & Association with the Colour Key
It turns out that overlaying the hand drawn features on top of the digital mapping system was relatively simple (thanks to the output image having an alpha channel). Additionally, the associated document contained the coordinate information for the land, which helped us define where the overlay should be placed. Great, so this achieved the first of our two core goals: blending two distinct sources of information into one central, up-to-date location.
However, there is still one big elephant in the room: how could we associate the extracted features with a description stored in a separate (digital) document? Luckily for us, the colours used by case workers are quite distinct from one another, with very little potential for overlap and a focus on primary colours, and each has a quite different representation in the pixel domain (RGB, where a colour is made up of three brightness values representing the three colour bands).
Red Range = (255, 180, 180) → (180, 0, 0)
Green Range = (150, 230, 150) → (25, 100, 25)
Blue Range = (180, 210, 255) → (0, 70, 180)
Yellow Range = (255, 255, 150) → (230, 230, 0)
Using these pixel boundaries we could identify which colours are present in the extracted features. This could then be used to search the associated document for a keyword match, highlighting the description that defines what each colour represents. In one swoop we removed the need for the case worker to read the entire document for the colour descriptions, enabling them to spend their time on more valuable tasks (while also minimising good old human error).
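As a rough sketch of how those boundaries could be used in code (the 500-pixel threshold is an arbitrary assumption to ignore scanning noise):
import cv2
import numpy as np
# (lower, upper) RGB bounds taken from the ranges above; the keys double as the
# keywords we search for in the associated text document.
COLOUR_RANGES = {
    "red":    ((180, 0, 0),    (255, 180, 180)),
    "green":  ((25, 100, 25),  (150, 230, 150)),
    "blue":   ((0, 70, 180),   (180, 210, 255)),
    "yellow": ((230, 230, 0),  (255, 255, 150)),
}
def colours_present(features_rgb, min_pixels=500):
    # Return the names of the colours with more than `min_pixels` matching pixels.
    found = []
    for name, (lower, upper) in COLOUR_RANGES.items():
        mask = cv2.inRange(features_rgb, np.array(lower, np.uint8), np.array(upper, np.uint8))
        if cv2.countNonZero(mask) > min_pixels:
            found.append(name)
    return found
# OpenCV loads images as BGR, so convert the extracted features to RGB first.
features = cv2.cvtColor(cv2.imread("extracted_features.png"), cv2.COLOR_BGR2RGB)
print(colours_present(features))   # e.g. ['red', 'green']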
So I know what you're thinking: what would this look like for a case worker? To showcase this we put together a demo portal that enables a case worker to search for land based on a code, view both the hand drawn and digital features associated with the land, and extract the colour key information. Check out the screenshot below.
The basic demo portal we built for CX to showcase the value of our work
Fun fact #2: It took the core team (three people) only two weeks to create this tool
Future Work
This particular problem space has a host of additional interesting opportunities that call for computer vision techniques, and our innovation team at Kainos sure does love interesting, challenging problems!
You may have noticed that some of the title plans contain a 'north' directional pointer. If we were able to extract this pointer and adjust for the 'on paper offset', it would enable a more accurate overlay of the hand drawn features onto the digital map. So how would we do this? Well…
Train a convolutional neural network to detect the north arrow in the title plan — extracting bounding box coordinate information.
Use the orientation of the north arrow to automate the rotation of the extracted hand drawn features, keeping them consistent with the digital map (and removing the need for manual editing by a case worker).
An example of north facing arrow surrounded by the blue box
Conclusion
Moving data to new systems is hard. It's an ongoing battle that organisations are waging with no end in sight, and this phenomenon often breeds interesting problems, just like the one we faced with CX. Often the solutions are hidden behind interesting technologies like computer vision, where both specific expertise and demonstrable business agility (away from our core Kainos offerings) are given the stage to shine.
If you feel that anything in this blog is applicable to your own business (even outside of the scope of extracting features from maps), or if you are interested in contacting the innovation team to discover more about what we do, feel free to send us an email: [email protected]
| From Doodle to Digital: Extracting Features from Hand Drawn Maps | 60 | from-doodle-to-digital-extracting-features-from-hand-drawn-maps-131240e78bea | 2018-07-30 | 2018-07-30 10:03:08 | https://medium.com/s/story/from-doodle-to-digital-extracting-features-from-hand-drawn-maps-131240e78bea | false | 1,822 | Blogs from the Applied Innovation team at Kainos. | null | KainosSoftware | null | Kainos Applied Innovation | null | kainos-applied-innovation | VIRTUAL REALITY,INNOVATION,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,AI | KainosSoftware | Machine Learning | machine-learning | Machine Learning | 51,320 | Jordan McDonald | Software Engineer @KainosSoftware | abdae9593f6e | jordanmcdonaldmain | 11 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-13 | 2018-04-13 12:00:29 | 2018-04-13 | 2018-04-13 12:00:30 | 1 | false | en | 2018-04-13 | 2018-04-13 12:00:30 | 1 | 13132a184033 | 2.74717 | 0 | 0 | 0 | null | 5 | The Cold Start Problem with AI
If you became a Data Scientist in the last three or four years, and you didn't experience the 1990s, the 2000s or even a large part of the 2010s in the workforce, it is sometimes hard to imagine how much things have changed. Nowadays we use GPU-powered databases to query billions of rows, whereas we used to be lucky if we were able to generate daily aggregated reports.
But as we have become accustomed to having data and business intelligence/analytics, a new problem is stopping eager Data Scientists from taking the algorithms they used on toy problems and applying them to actual real-life business problems, otherwise known as the Cold Start Problem with Artificial Intelligence. In this post, I discuss why companies struggle with implementing AI and how they can overcome it.
Any company, whether startup or enterprise, that wants to take advantage of AI needs to ensure that it has genuinely useful data to start with. Where some companies might make do with the simple log data generated by their application or website, a company that wants to use AI to enhance its business, products or services should ensure that the data it is collecting is the right type of data. Depending on the industry and business you are in, the right type of data can be log data or transactional data, numerical or categorical; it is up to the person working with the data to decide what it needs to be.
Besides collecting the right data, another big step is ensuring that the data you work with is correct, meaning that it is an actual representation of what happened. If I want a count of all Payment Transactions, I need to know the definition of a Transaction: is it an Initiated Transaction or a Processed Transaction? Only once I have answered that question, and ensured that the organization agrees on it, can I use the data in my work.
With the wide adoption of Scrum and frequent releases, companies have to devote resources to keeping their data correct. They might add new sources of data, make changes in the code that affect the logged data, or face outside influences like GDPR or PSD2 that require data to be altered because it needs to be better secured or stored in a different way. Only by ensuring the correctness of the data during each of these processes can you move on to the next phase of analytics.
Even though AI is currently what everybody talks about, before we get there we still have to take an intermediate step: analytics. By analytics I mean the systematic computational analysis of data or statistics. In most companies the process behind getting to a visualization of the data might be known to few, but the impact it has on each department is tremendous.
Companies need to determine which Key Performance Indicators (KPIs) actually drive the business. Working in the payments industry, my KPIs include Processed Revenue, Transaction Costs, Profits, Authorization Rates, Chargeback Rates, Fraud and many others that provide me with the information to manage the performance of the business. For a taxi app, KPIs might include Revenue, Profit, Average Pick-up Time, Average Ride Time, Active Users and Active Drivers.
From those KPIs, a company can then decide what type of reporting or dashboards the business users need to make informed decisions, and work on automating the systematic computational analysis of the data or statistics.
But as the volume of data increases from kilobytes to terabytes, and business users look more and more at aggregated reports and visualizations of the data, the chance of detecting smaller issues decreases significantly. It is only then that implementing AI can become a worthwhile investment of time and resources.
Having determined the KPIs that help steer the business, a company can then use Artificial Intelligence to improve them.
Posted on 7wData.be.
| The Cold Start Problem with AI | 0 | the-cold-start-problem-with-ai-13132a184033 | 2018-04-13 | 2018-04-13 12:00:31 | https://medium.com/s/story/the-cold-start-problem-with-ai-13132a184033 | false | 675 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Yves Mulkers | BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world | 1335786e6357 | YvesMulkers | 17,594 | 8,294 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | e5c22d80135a | 2017-09-23 | 2017-09-23 02:26:11 | 2017-10-02 | 2017-10-02 18:53:59 | 8 | false | en | 2017-10-26 | 2017-10-26 18:58:18 | 5 | 131400dce0dc | 4.133333 | 10 | 0 | 0 | A new school year is upon us, and with it comes another round of Undergraduate Council elections. The UC elects three representatives from… | 5 | You, Too, Could Have Won The Dunster UC Election
A new school year is upon us, and with it comes another round of Undergraduate Council elections. The UC elects three representatives from each of the 12 upperclass Houses and 4 freshman yards— plus the president and VP, who are elected separately, making a group of 50.
The UC is a major force on campus. The Finance Committee distributes $300,000 a year to student clubs, and UC leaders run everything from Thanksgiving shuttles to menstrual hygiene product installation in bathrooms. So it’s important that students are engaged in the UC elections.
The UC’s logo
But, last semester, we reported about how literally nobody voted in the Quincy midterm elections, among other signs of disturbingly low turnout. While it seems turnout this semester has improved (Quincy literally improved “infinitely,” if you will, since they jumped up from 0), unfortunately, there are still some glaring issues. Let’s jump into them.
Our open data source
At the Harvard Open Data Project, we are committed to holding Harvard’s institutions accountable through open information. We obtained official turnout data from Harvard’s Election Commission, which we have now made public, and below we’ll share our insights in the hopes of shedding light on the voting process and the outcome.
Only 12 people voted in the Dunster House UC Election
Voter turnout by house/yard. All graphs courtesy of Stephen Moon.
Despite having 402 eligible voters, Dunster's turnout was abysmally low. However, this is understandable given that only 1 person was running in the first place. Junior Gevin Reynolds ran unopposed and was met with an expected victory. (To his credit, he earned 11 votes.)
Anyone in Dunster could have written in their name and won a seat on the Undergraduate Council.
This wasn’t an isolated incident. Adams, Dudley, Lowell, Mather, and Quincy were all no-contests as well. Everyone who ran got a spot. In fact, the most competitive race in the upperclass Houses was Pforzheimer, where 4 people vied for 2 open spots, an “admission” rate of 50%.
Several houses, such as Adams and Dunster, were uncontested. The freshman yards, seen on the right, were far more competitive.
Why the enthusiasm gap?
We brainstormed several hypotheses for why some houses had such abysmal turnout and why some fared better. The hypothesis most supported by the data was that more-contested elections attracted more voters than less-contested ones. This trend was generally true, but there were many exceptions.
Generally, more candidates meant higher turnout, but this wasn’t true across the board.
Winthrop had 5 candidates in the running, but only 21 of the 410 eligible students, or 5.12%, voted. Mather had virtually no choice regarding their UC representative, yet their turnout (8.92%) still surpassed Winthrop’s.
In general, though, the houses with more candidates did seem to have higher turnout.
Insurgence in Eliot
This fall’s Eliot UC election produced a record voter turnout of 30.25%.
The highest turnout in each House since Spring 2016. Eliot set its all-time high this year.
In Eliot, 4 sophomores faced off against 1 senior for 3 coveted positions. One of the winning candidates, Arnav Agrawal, ran on a platform of providing free menstrual health supplies within the house and improving mental health services. These issues were also central to Agrawal's successful freshman-year UC platform. Perhaps this exciting race will set a precedent for other candidates in future elections.
Who knew freshmen were so politically engaged?
The freshman class had both the greatest number of candidates and the largest voter turnout by a wide margin. Even the freshman yard with the lowest turnout, Ivy Yard (39.29%), had better voter turnout than the upperclassman house with the highest turnout, Eliot (30.25%).
Overall, the freshman class’s voter turnout was a striking 45.38%. This relatively high turnout was more than three times as high as the overall voter turnout for upperclassmen, which was just 14.10%.
A call to action
While we might make fun of it sometimes, low voter turnout is a huge problem on campus. The UC was created specifically to give students, elected by their peers, the ability to change the Harvard experience. Failure to participate in this process is a waste of a valuable opportunity. Casting your vote is the easiest way to ensure that your voice is being heard and that you are adequately represented.
Most students will spend 4 years in college. It’s time to make them count.
This was an analysis by the Harvard Open Data Project, a student-faculty group that uses public information to hold Harvard institutions accountable. We believe that transparency and accountability are fundamental to having an effective government.
What’s more, we believe that pundits, analysts, and journalists should make their data and methods open so the public can verify the accuracy of their claims. In that spirit of openness, here’s the original dataset and the Excel file we used for the analysis and graphs. We encourage you to fact-check our work!
Want to work on more projects like this? Join us!
Thanks to Stephen Moon, Erik Johnsson, Jeffrey He, Emma Ling, and Neel Mehta, who contributed to this article.
| You, Too, Could Have Won The Dunster UC Election | 382 | you-too-could-have-won-the-dunster-uc-election-131400dce0dc | 2018-02-04 | 2018-02-04 11:03:45 | https://medium.com/s/story/you-too-could-have-won-the-dunster-uc-election-131400dce0dc | false | 795 | We're a student-faculty team dedicated to opening and analyzing Harvard data to empower our community members to improve campus life. hodp.org | null | HarvardODP | null | Harvard College Open Data Project | harvard-open-data-project | OPEN DATA,HARVARD,TECHNOLOGY,OPEN GOVERNMENT,DATA VISUALIZATION | null | Politics | politics | Politics | 260,013 | Flora DiCara | Harvard ’20. I like making things and solving problems. | 6160370a3b1a | fdicara | 12 | 26 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-09-20 | 2018-09-20 15:37:51 | 2018-09-20 | 2018-09-20 19:26:51 | 2 | false | en | 2018-09-20 | 2018-09-20 19:27:43 | 0 | 131497e53fde | 3.519182 | 0 | 0 | 0 | Our world has its own set of challenges just like other eras had theirs. One of those challenges is credibility. | 4 | The invisible currency of credibility
Our world has its own set of challenges just like other eras had theirs. One of those challenges is credibility.
In an increasingly digital life, humans have alternate gateways to experiences.
A typical human living in these modern times will be connecting not only with other humans but also with machines. While humans continue to develop themselves, ideally towards their full potential, machines are being designed to accompany us, individually and collectively, along this journey.
While an encyclopedia in the home used to be the go-to reference for a typical family, it’s now common to refer to a smartphone, a tablet or a computer to access encyclopedic-level information, therefore multiplying the bridges between those who know and those who want to know.
This multiplies opportunities for knowledge.
Humans therefore evolve their potential for a fuller expression of themselves. And machines benefit from this human evolution. As machines are getting ahead in several fields, thanks to specific artificial intelligence schemes which are self-learning in part or even in full, our reality is expanding beyond what we can grasp.
So with all this content being produced, humans need to stay aware of this utmost importance of credibility.
Credibility of whatever our senses can perceive (the nature of our reality);
Credibility of humans and whatever they manifest or produce; and
Credibility of machines and whatever stems from their processing cycles.
As the potential for greatness through truth is tending towards infinity, so is the potential to lie.
The same way truth is a currency, credibility is too.
One may hold a certain belief as truth, but if this individual has no recognized credibility, he'll never be able to convince others to entertain his beliefs. His truth will therefore likely remain his own. Put plainly, he'll probably end up dying with a truth which cannot benefit others. Credibility is therefore a rather invisible currency we all need to foster in order to have our truths emanate beyond ourselves.
Is all the information contained in books credible? How can you know, for sure?
No credibility (or "zero credibility") means our truth, knowledge and experience will likely die with us. Perhaps somebody else will add credibility externally, but this example is designed to show how important credibility should be for everyone, not just for those who earn a wage based on their relative credibility.
The same logic applies to machines.
Thus, the symbiotic nature of humans and machines, as they currently stand, allows knowledge and credibility to be used synergistically to multiply the value of both.
So looking after credibility matters.
A lot.
But keep in mind that something credible might not be true, or useful.
Whenever uncovered, a lie of this sort will contaminate anything else linked to it, whether it's a human, a machine or anything else, including a context or a simple observation.
So credibility itself is a mixed bag, as complex as anything else.
What is credibility? Is it contextual or work experience, public recognition, diplomas or even hearsay?
The more you dig into credibility, the more you realize how someone patently credible can come across as not credible because of a detail, like the way he talks or stands. And even if a human emanates credibility, who is to say another human will give proper credit to it?
Emanating credibility can therefore go nowhere if whoever is receiving it can’t understand it. Or properly size it.
To a kid, a parent is a monument of credibility.
So credibility can't live without concepts such as agreed-upon truths (not necessarily the truth, just "a" truth most agree upon). Or trust. Yes, the all-important trust, which can open the way to wonderful truths or terrible lies, often wrapped in varying degrees of manipulation.
What started as a simple matter of getting a few ducks in a row regarding credibility is slipping down a slope of the endless deceptive tactics humans use to gain (anything, in general).
Although everybody eventually realizes the importance of credibility, not all humans use it with the same intent. Some will want to hide a lie with another lie and others will try to shine a light of truth in a sea of darkness, knowledgewise.
In the end, credibility is a tool among other tools.
An important tool indeed but part of a rather expansive toolset, nonetheless.
Embedded into other tools or used as a standalone one, credibility gets used in so many ways that it mimics the very fabric of reality in its scope. Yep, credibility is a huge tool to wield, and our mastery of it can evolve over time, especially with our various life experiences. Credibility is forged into us at the same time as we attempt to emanate it outwards.
If you feel credibility is something very intimate which is being used as a foundation, a pillar, for social interactions, you’re probably inching towards understanding how it creates you as much as you create it.
Have fun using credibility in all manners of all you consider to be your life.
Emanate it, receive it, learn to recognize it… and spin it.
It’s yours to explore.
| The invisible currency of credibility | 0 | the-invisible-currency-of-credibility-131497e53fde | 2018-09-20 | 2018-09-20 19:27:43 | https://medium.com/s/story/the-invisible-currency-of-credibility-131497e53fde | false | 831 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Claude Gelinas | Family, the great outdoors and the web... awesome moments! | aa4f214b310d | logixca | 35 | 387 | 20,181,104 | null | null | null | null | null | null |
0 | l = []        # option 1
l = list()    # option 2
l = [10, "dog", variable, 452.8]  # a list can mix types; `variable` is any previously defined variable
l.append(200)    # adds the number 200 at the end of the list
l.count(10)      # returns how many times 10 appears in the list
l.insert(3, 10)  # inserts the item 10 at position 3
l.pop()          # with no argument, removes (and returns) the last element
l.pop(2)         # removes (and returns) the element at position 2
l.sort()         # sorts the list in place
l.reverse()      # reverses the list in place
l.clear()        # removes every element from the list
l.extend(other_list)  # appends the elements of another list at the end
l = ["dog", "cat", "parrot", "ant", "human", "robot"]
l[0]     # returns "dog"
l[1:3]   # returns ["cat", "parrot"]
l[2::2]  # returns ["parrot", "human"]
| 8 | 290efb9ac4f6 | 2018-08-04 | 2018-08-04 17:16:52 | 2018-08-04 | 2018-08-04 16:51:43 | 3 | false | en | 2018-08-04 | 2018-08-04 17:22:48 | 1 | 131546d20c36 | 3.146226 | 1 | 0 | 0 | “Those who know how to think don’t need teachers.” Mahatman Gandhi. | 4 | LEARN PYTHON NOW! Book: The Pillars of Python. 8 — Variables II: lists and slicing.
“Those who know how to think don’t need teachers.” Mahatma Gandhi.
Composite variables are designed to store collections of data whose elements are themselves objects like the ones we saw in the previous chapter (integers, floats, booleans or strings).
There are 4 types of composite data: lists, tuples, dictionaries and sets. This chapter is one of the most important, because over the years I have seen how people who are just starting out don't know when to use a list or a tuple and never use sets. Each one has its usefulness, and we are going to see it in a very simple way:
We have to think of lists as connected boxes.
Inside the boxes you can save anything: a data type like 7, a variable that contains a data type like "hello", or even (and here's where it gets interesting) a variable that is itself a data collection, such as another list.
The numbers written under the "boxes" are the positions. In Python, as in most programming languages, you start counting from 0.
To create a list, there are two ways:
To insert something into a list you can either write it in when you are creating the list…
or use one of its methods (in Python, the operations an object "knows how to do"):
Lists are widely used in Python and have many methods; my recommendation is to know that they exist, and little by little you will learn them as you need them. For example:
We have talked about positions but not how to access them. For that we have to introduce the concept of slicing. Let’s pretend we have this list:
There are three key cases of slicing, summarized as follows:
The first number is where we start, the second is where we stop (without including that element) and the third is the step size. If we leave anything out, Python assumes by default that we go all the way to the end with a step of 1.
Case 1: we only want one element
Case 2: we want the elements from one position up to (but not including) another
Case 3: we want every second element, starting from position 2
In addition, to complicate our existence a little more (although it is useful in some cases), Python also lets us do the same as above but counting from the end. So the positions could be set as follows:
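For illustration, here is roughly how those negative positions behave on the same list (these examples are our own, not taken from the book):
l = ["dog", "cat", "parrot", "ant", "human", "robot"]
l[-1]    # returns "robot" (the last element)
l[-3:]   # returns ["ant", "human", "robot"] (the last three elements)
l[:-2]   # returns ["dog", "cat", "parrot", "ant"] (everything except the last two)
l[::-1]  # returns the whole list reversed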
Lists, as we said, are the most used data collection in Python, so we have spent more time on them. In the next chapter, we will talk about tuples, sets and dictionaries.
— — — — — — — — — — — — — — The End — — — — — — — — — — — — —
If you like this small and free magazine you can help us by simply sharing it or subscribing to the publication. My name is Rubén Ruiz and I work in Artificial Intelligence in the financial industry and as a personal project I run this little magazine where we experiment with Artificial Intelligence… until the computer explodes :) You can follow me on:
Instagram (Personal life, it’s fun) => @rubenruiz_t
Youtube (Channel about AI, try to make it fun )=> Rubén Ruiz A.I.
Github (Where I upload my code, this is not so much fun anymore) => RubenRuizT
| LEARN PYTHON NOW! Book: The Pillars of Python. 8 — Variables II: lists and slicing. | 3 | learn-python-now-book-the-pillars-of-python-8-variables-ii-lists-and-slicing-131546d20c36 | 2018-08-04 | 2018-08-04 17:22:54 | https://medium.com/s/story/learn-python-now-book-the-pillars-of-python-8-variables-ii-lists-and-slicing-131546d20c36 | false | 688 | Experiments with Artifitial Inteligence. If it doesn't blow up is fine. | null | null | null | AI experiments | ai-experiments | ARTIFICIAL INTELLIGENCE,PYTHON,DEEP LEARNING,PROGRAMMING,R | null | Python | python | Python | 20,142 | Ruben Ruiz | null | 2db774b0464f | rubenruiz_26771 | 24 | 22 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 9476a924ad57 | 2018-08-13 | 2018-08-13 10:14:56 | 2018-08-13 | 2018-08-13 08:50:54 | 6 | false | en | 2018-08-13 | 2018-08-13 11:01:38 | 10 | 1315df575ca7 | 5.210377 | 2 | 0 | 0 | This is a third post in our series exploring different options for long-term demand forecasting. Today, we will explore different… | 4 |
Demand Forecasting 2: Machine Learning Approach
This is the third post in our series exploring different options for long-term demand forecasting. Today, we will explore different approaches to applying classical machine learning to the forecasting problem. To better understand our journey and problem setting, you might want to check out our introductory blog post.
Step by step vs 90 days at once
We believed that using models that predict one day ahead, feeding these predictions back to the model to predict the following day (step-by-step predictions), would be more accurate than predicting a 90-day time series at once; simple linear regression indeed had better scores predicting step by step (14.44 SMAPE on the validation set) than predicting 90 days at once (15.53). This should be taken with a grain of salt, however, because feature engineering might give the 90-days-at-once approach an advantage.
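For reference, SMAPE here is the symmetric mean absolute percentage error used to score the competition; a minimal implementation (assuming the usual Kaggle convention of treating 0/0 terms as 0) looks like this:
import numpy as np
def smape(actual, forecast):
    # Symmetric mean absolute percentage error, in percent (0 means a perfect forecast).
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    denominator = (np.abs(actual) + np.abs(forecast)) / 2.0
    ratios = np.zeros_like(denominator)
    nonzero = denominator != 0
    ratios[nonzero] = np.abs(forecast - actual)[nonzero] / denominator[nonzero]
    return 100.0 * ratios.mean()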
We haven't found an existing solution that allows for step-by-step prediction, so we created helper functions that compute new input features based on the original time series and the model's previous forecasts. Having separate functions for generating training data and for making predictions saved us from test data leaking into training, which had happened a few times when computing features like the monthly average (we mistakenly took future dates into consideration, so the model could 'peek into the future'). For more information please check our "rolling features" util on GitHub.
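As a simplified sketch of that feedback loop (assuming a model trained on fixed windows of the last n_lags sales, without the extra rolling statistics our util also computes), the prediction side looks roughly like this:
import numpy as np
def rolling_forecast(model, history, horizon=90, n_lags=90):
    # Predict `horizon` days one step at a time, feeding each prediction back
    # in as part of the input window for the next step.
    series = list(history)
    forecasts = []
    for _ in range(horizon):
        window = np.array(series[-n_lags:], dtype=float).reshape(1, -1)
        next_value = float(model.predict(window)[0])
        forecasts.append(next_value)
        series.append(next_value)   # the prediction becomes tomorrow's input
    return forecasts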
Linear regression and XGBoost
At first we tried various linear regression models from the sklearn package, such as LinearRegression, Ridge, ElasticNet and BayesianRidge, to quickly establish a baseline for the rest. They achieved validation scores between 14.5 and 18, but after submitting the best one (BayesianRidge) to Kaggle it scored a disappointing 15.29. Afterwards we tried gradient boosting with the XGBoost library, but in step-by-step prediction it performed similarly to the linear models, achieving a validation score of 15.04.
A comparison of SMAPE distributions between the XGBoost and BayesianRidge models.
We tried feeding our models the sales from the previous 90 and 365 days, as well as adding statistical features (min, max, mean, variance, standard deviation, median) of sales over intervals such as the last week or the last month, but most of the time adding too many features only made things worse.
The linear models were often able to adapt to the seasonality, but adding deseasonalization based on averaging usually improved their results.
A sample Bayesian Ridge one-week-ahead model forecast for one of the time series.
A sample Bayesian Ridge 3-month-ahead model forecast for one of the time series.
Error accumulation
We were startled that sometimes, even when our models achieved very small errors predicting a single day ahead, their 90-day forecast error would grow. The reason for that is a discrepancy between our training target and the test target: we were training the models to predict a single day ahead but used them to make longer predictions. This has two traps:
The first trap is that for long-term predictions the entire input to the later data points may be built from previous predictions, so the error can grow exponentially. The second is more subtle. It would be reasonable to assume that when the single-step error of our model decreases, the long-term error should decrease as well, because the model is more precise. This was, however, not the case: during training we validated our models on single-day predictions, but what we really wanted was good performance in the long term. Predicting a single day and predicting multiple days are not the same task, and our model was trained for the former while we expected it to be good at the latter. For example, considering the long-term trend and seasonality is not as important for single-day predictions, so the model may ignore it, even though it is essential for long-term forecasts.
Data as demonstrator
We’ve decided to implement an algorithm from a paper about improving multi-step predictions. The idea is to train the model multiple times, each time expanding the training set with data points that are meant to correct the errors the model has made when predicting multiple steps.
A visualization of the Data as demonstrator algorithm.
After each training round we use the model to make a multi-step prediction and extend the training set with new pairs, whose inputs are built from the model's own predictions and whose outputs are the true values that were to be predicted.
This approach is similar to imitation learning: the training data provides corrections for the errors that arise in multi-step predictions. The improvement the algorithm provided was, however, somewhat erratic; sometimes the error decreased after just 1-2 epochs, but other times it skyrocketed. Unfortunately, the algorithm is quite slow, and it did not improve accuracy as much as we had expected (for XGBoost, only from 14.5 down to 14.15).
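In condensed form, and reusing the rolling_forecast sketch above with an sklearn-style model (the window length and number of rounds are assumptions), the loop looks roughly like this:
import numpy as np
def data_as_demonstrator(model, X_train, y_train, history, true_future, rounds=3, n_lags=90):
    # After each fit, roll the model forward over a known window and add pairs of
    # (input built from its own predictions -> true value) so that the next fit
    # learns to correct its multi-step mistakes.
    X_aug, y_aug = list(X_train), list(y_train)
    for _ in range(rounds):
        model.fit(np.array(X_aug), np.array(y_aug))
        predictions = rolling_forecast(model, history, horizon=len(true_future), n_lags=n_lags)
        extended = list(history) + predictions        # truth followed by model output
        for step, target in enumerate(true_future):
            end = len(history) + step
            X_aug.append(extended[end - n_lags:end])  # the window may contain predictions...
            y_aug.append(target)                      # ...but the target is always the truth
    return model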
Time series as cross-sectional data
Inspired by Kaggle kernels that achieved high scores on the leaderboard by encoding weekdays and months with the mean value of their respective period, we decided to try non-autoregressive models. In this approach the average sales actually encode three kinds of information: the day of the week, the item and the store. Thanks to that, one model can be trained for all the items and stores. Because the predictions are independent of each other, there is no error to accumulate, regardless of the forecast length. On the other hand, this method cannot recognize long-term trends. It's certainly not a universal approach, but it works well in this case thanks to the very regular data. While it potentially gets rid of error accumulation, it stands no chance of predicting spikes and other more complicated patterns.
To replicate this approach, we extracted features like the minimum, maximum, mean, median, variance and standard deviation. Combining the median or mean with the variance seemed to work best. It was important not to fall into the trap of adding too many features, because this actually worsened the scores.
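A hedged pandas sketch of this kind of encoding (the column names follow the competition's train.csv; the exact statistics the team kept may differ):
import pandas as pd
df = pd.read_csv("train.csv", parse_dates=["date"])   # assumed columns: date, store, item, sales
df["dayofweek"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month
# Encode each (store, item, day-of-week) combination by statistics of its past sales.
dow_stats = (df.groupby(["store", "item", "dayofweek"])["sales"]
               .agg(["mean", "median", "std"])
               .add_prefix("dow_")
               .reset_index())
month_stats = (df.groupby(["store", "item", "month"])["sales"]
                 .agg(["mean"])
                 .add_prefix("month_")
                 .reset_index())
features = (df.merge(dow_stats, on=["store", "item", "dayofweek"])
              .merge(month_stats, on=["store", "item", "month"]))
# Each future day becomes an independent row of such features, so a 90-day
# forecast is just 90 independent predictions and there is nothing to accumulate.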
Combining multiple models
In an effort to optimize the Kaggle score we tried stacking, in particular blending, with XGBoost, LightGBM, k-NN and random forests as base models and a neural network as the meta-learner. We split the dataset into 3 parts: a training set for the base models, a training set for the meta-learner and a validation set. After training we achieved a score of 14.07 on Kaggle, which is an improvement, but the complexity of such an ensemble is usually too high to be viable in a production environment.
An example blending architecture used by our team.
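A stripped-down sketch of the blending setup (shown here with a linear meta-learner rather than a neural network, and with synthetic data standing in for the real feature matrix and target):
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
X, y = make_regression(n_samples=1000, n_features=10, noise=5.0, random_state=0)  # stand-in data
# Base models learn on one split; the meta-learner only ever sees their
# out-of-sample predictions on the second split.
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.3, random_state=0)
base_models = [RandomForestRegressor(n_estimators=100, random_state=0),
               KNeighborsRegressor(n_neighbors=10)]
for m in base_models:
    m.fit(X_base, y_base)
meta_features = np.column_stack([m.predict(X_meta) for m in base_models])
meta_model = LinearRegression().fit(meta_features, y_meta)
def blend_predict(X_new):
    # Stack the base models' predictions and let the meta-learner combine them.
    stacked = np.column_stack([m.predict(X_new) for m in base_models])
    return meta_model.predict(stacked)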
Next time in our series we will go through the deep neural networks we used to tackle this problem. Stay tuned so you don't miss it, and in the meantime feel free to check out our code on GitHub.
Got a project idea? Let’s schedule a quick, 15-minutes call to discuss how Big Data & Data Science services may give you the edge your business needs. Get in touch
Originally published at semantive.com on August 13, 2018.
| Demand Forecasting 2: Machine Learning Approach | 2 | demand-forecasting-2-machine-learning-approach-1315df575ca7 | 2018-08-13 | 2018-08-13 11:37:10 | https://medium.com/s/story/demand-forecasting-2-machine-learning-approach-1315df575ca7 | false | 1,129 | Big data and data science services to make you data-driven organization. | null | semantive | null | semantive | semantive | BIG DATA ANALYTICS,DATA SCIENCE,ARTIFICIAL INTELLIGENCE,BIG DATA | semantive | Machine Learning | machine-learning | Machine Learning | 51,320 | Radosław Waśko | I’m currently studying Computer Science at the University of Warsaw. Interested in functional programming, networking, voxel rendering and compilers. | fc38227fddf9 | radeusgd | 3 | 6 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 97cefb4d066 | 2018-05-29 | 2018-05-29 22:33:57 | 2018-05-22 | 2018-05-22 23:03:19 | 4 | false | en | 2018-05-29 | 2018-05-29 22:39:16 | 2 | 1316d4d120e6 | 2.745283 | 8 | 0 | 1 | According to Wikipedia, “Natural Language Processing, also known as NLP, is an area of computer science and artificial intelligence… | 5 | What is Natural Language Processing (NLP) & Why Chatbots Need it
According to Wikipedia, “Natural Language Processing, also known as NLP, is an area of computer science and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to fruitfully process large amounts of natural language data.”
In layman's terms, Natural Language Processing (NLP) is concerned with how technology can meaningfully interpret and act on human language inputs. NLP is what allows technology such as Amazon's Alexa to understand what you're saying and how to react to it. Without NLP, AI that requires language inputs is of relatively little use.
Why do Chatbots Need Natural Language Processing (NLP)?
Like the previous example with Amazon’s Alexa, chatbots would be able to provide little to no value without Natural Language Processing (NLP). Natural Language Processing is what allows chatbots to understand your messages and respond appropriately. When you send a message with “Hello”, it is the NLP that lets the chatbot know that you’ve posted a standard greeting, which in turn allows the chatbot to leverage its AI capabilities to come up with a fitting response. In this case, the chatbot will likely respond with a return greeting.
Without Natural Language Processing, a chatbot can’t meaningfully differentiate between the responses “Hello” and “Goodbye”. To a chatbot without NLP, “Hello” and “Goodbye” will both be nothing more than text-based user inputs. Natural Language Processing (NLP) helps provide context and meaning to text-based user inputs so that AI can come up with the best response.
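To make the contrast concrete, here is a deliberately naive toy sketch of intent matching; real NLP engines go far beyond keyword lookup, handling misspellings, synonyms and context with statistical models.
# Toy keyword-based intent matcher: an illustration, not real NLP.
INTENT_KEYWORDS = {
    "greeting": {"hello", "hi", "hey"},
    "farewell": {"goodbye", "bye", "farewell"},
}
RESPONSES = {
    "greeting": "Hello! How can I help you today?",
    "farewell": "Goodbye! Have a great day.",
    None: "Sorry, I didn't understand that.",
}
def detect_intent(message):
    words = set(message.lower().replace("!", " ").replace(".", " ").split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return None
print(RESPONSES[detect_intent("Hello there!")])   # greeting response
print(RESPONSES[detect_intent("Goodbye")])        # farewell response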
Natural Language Processing (NLP) can Overcome Natural Communication Challenges
The same problems that plague our day-to-day communication with other humans via text can, and likely will, impact our interactions with chatbots. Examples of these issues include spelling and grammatical errors and poor language use in general. Advanced Natural Language Processing (NLP) capabilities can identify spelling and grammatical errors and allow the chatbot to interpret your intended message despite the mistakes.
Advanced NLP can even understand the intent of your messages, for example whether you are asking a question or making a statement. While this may seem trivial, it can have a profound impact on a chatbot's ability to carry on a successful conversation with a user.
One of the most significant challenges when it comes to chatbots is the fact that users have a blank palette regarding what they can say to the chatbot. While you can try to predict what users will and will not say, there are bound to be conversations that you would never imagine in your wildest dreams.
While Natural Language Processing (NLP) certainly can’t work miracles and ensure a chatbot appropriately responds to every message, it is powerful enough to make-or-break a chatbot’s success. Don’t underestimate this critical and often overlooked aspect of chatbots.
So what’s next?
The Relay Chatbots have advanced Natural Language Processing (NLP) capabilities that will not only wow your customers but greatly improve your team’s ability to provide a superior customer service or support experience.
We would love to give you a demo to show you how powerful our native chatbots are when combined with the Relay intelligent support platform. Schedule a demo, and we’ll help you transform your support and service operations.
| What is Natural Language Processing (NLP) & Why Chatbots Need it | 90 | what-is-natural-language-processing-nlp-why-chatbots-need-it-1316d4d120e6 | 2018-06-12 | 2018-06-12 17:23:59 | https://medium.com/s/story/what-is-natural-language-processing-nlp-why-chatbots-need-it-1316d4d120e6 | false | 542 | The #1 place to learn how AI, Chatbots, ML, NLP, and technology, in general, will revolutionize the way we get support and ask for help. | null | null | null | Support Automation Magazine | support-automation-magazine | AI,ARTIFICIAL INTELLIGENCE,CHATBOTS,AUTOMATION,SUPPORT | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Casey Phillips | AI Chatbot UX Owner @ Intuit. AI fanatic, tech enthusiast, and passionate product builder! LinkedIn.com/in/casey-phillips-mba-pmp/ | e3f0721527ec | casphill | 470 | 538 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 961abc9ded16 | 2017-11-08 | 2017-11-08 18:58:38 | 2017-11-08 | 2017-11-08 21:43:18 | 2 | false | en | 2017-12-13 | 2017-12-13 20:55:47 | 16 | 13170bc59dd | 4.949371 | 161 | 13 | 0 | The new tech oligarchs are terrible for our economy, argues Scott Galloway’s “The Four.” Except …maybe not. | 5 | Shift Reads
Pretty Sure That Amazon, Facebook, Apple, Google Are Bad.
The new tech oligarchs are terrible for our economy, argues Scott Galloway’s “The Four.” Except …maybe not.
Scott Galloway has to be pleased with the timing of his recent book The Four: The Hidden DNA of Amazon, Apple, Facebook and Google. Published just three weeks before Facebook and Google’s highly anticipated Congressional testimony, the book feels as timely as a New York Times OpEd calling for more regulatory scrutiny of our tech overlords. (There have been a lot of those lately).
But Galloway is no johnny-come-lately to the genre — he's been dining out on "the Four" for years. He's one of a very few consistent critics of tech whose arguments are based on principles of economics and business, rather than cultural (Foer) or social objections (Tufekci). In this, his first book, he's polished his arguments to a brilliant sheen. If you're already skeptical of these companies' power, you'll come away from The Four convinced of the danger they represent to society. But if you're not, it's likely you'll be more angry than won over by his arguments.
But before I get to that contradiction, I want to spend a bit of time praising Galloway’s work. First, The Four is damn fun to read — Galloway trades in outré larded with expletives, and I for one ate it up (then again, I’m the one who wrote this little gem…). Here’s one of my favorite passages, taken from his chapter on Amazon. For context, he’s speaking of Jeff Bezos’ support for Universal Basic Income (a policy Zuckerberg has also embraced):
What’s clear is that we need business leaders who envision, and enact, a future with more jobs — not billionaires who want the government to fund, with taxes they avoid, social programs for people to sit on their couches and watch Netflix all day. Jeff, show some real fucking vision.
Or this one, toward the end of the book:
What is the endgame for this, the greatest concentration of human and financial capital ever assembled? What is their mission? Cure cancer? Eliminate poverty? Explore the universe? No, their goal: to sell another fucking Nissan.
He has absolutely no problem calling out each of the dominant four tech giants — for monopolistic practices, for tax evasion, for rapacious screwing over the little guy, and most deliciously, for intellectual dishonesty, in particular around each company’s core founding mission and narrative. Along the way he demonstrates an exacting study of the Four’s core business models, and evinces serious insights any business leader should possess if they are going to either compete, cooperate, or simply exist within the Four’s sphere of influence. Which, let’s face it, is pretty much the entire economy these days.
Galloway dedicates one chapter each to his chosen companies, then focuses on the shared injuries the companies have forced on the world. He then pulls back and ladders each company to a theory of head, heart, and sex (fun, if debatable), explains how each of the companies might get to a trillion-dollar market cap (Apple looks to be closest, but Galloway thinks Amazon will win that crown). He then strokes his chin over which company might join the Four's ranks (Alibaba? Airbnb?), offers some career advice for readers in the Four's realm of influence, and finally ties the Four's rise to today's (and tomorrow's) politics — a feat that would have been a stretch a year ago, but is utterly realistic today.
Galloway does wallow in a few personally driven anecdotes here and there — it's clear his time as a professor at a major university (NYU) has impacted his view of the Four (he suggests that Apple or Amazon fund tuition for everyone), and he goes a bit too deep into his own story of a failed attempt at saving the New York Times via private equity. Still and all, his voice and pacing are such that the diversions don't really get in the way.
So why might I argue that Galloway’s book will fail to convince die hard proponents of the Four to change their minds about the companies they idolize? Because while Galloway does a remarkable job laying out the case for how these companies have come to dominate their respective markets, sowing carnage in the traditional economy along the way, he fails to make a convincing argument for why we should do anything about it. In fact, the book is couched in caveats both fore and aft. “[The Four] make the world a better place,” Galloway notes in his first chapter. And in his last: “It may be futile, or just wrong, to fight them or blanket-label these incredible firms as “bad.” I don’t know.”
Actually, a close reading of The Four leads me to believe he does know, but ultimately chose to pull his final punch. And perhaps that’s the most telling example of the Four’s true power. Regardless, if you consider yourself a student of business and its impact on society, you must read The Four.
Meanwhile, today’s news reflects the themes of Galloway’s book:
America's 'Retail Apocalypse' Is Really Just Beginning (www.bloomberg.com)
After reading his chapter on Amazon, I came away convinced there’s no hope for nearly all legacy retailers (save a few who have focused on customer service and offline/online integration).
Uber's new cultural norms (www.linkedin.com)
New Uber CEO Dara Khosrowshahi lays out his company’s new set of values, a key first step in turning around Uber’s shattered reputation, which Galloway utterly savages in The Four.
Tencent Buys 12% Stake in Developer of Snapchat (www.wsj.com)
Snap isn’t one of Galloway’s potential new horsemen, but it is the most significant competitor to Facebook’s dominance to come out of the tech industry lately. And if a Chinese company decides to partner with Snap, well, things could change. That Snap had terrible earnings this week was the story most outlets covered, but I find this angle far more interesting. Chinese companies that want a truly global presence need a serious beachhead in the US. Tencent, which already owned a lot of Snap, seems to be getting pretty close to declaring its preference.
Buy Scott Galloway’s book here.
Scott Galloway’s The Four is one of many books we’re reading at NewCo as we prepare for the conversation at the Shift Forum this February. Others include Zeynep Tufekci’s Twitter and Tear Gas: The Power and Fragility of Networked Protest, Bellamy’s Looking Backward, Edward Luce’s The Retreat of Western Liberalism, Franklin Foer’s World Without Mind, Tim O’Reilly’s WTF, Yuval Noah Harari’s Homo Deus, Richard Florida’s The New Urban Crisis, and many others. If you’re interested in Shift Forum’s new Reads program, be sure to sign up for my weekly newsletter here.
Shift Forum Reads: The books we’re reading at NewCo as we prepare for the conversation at the Shift Forum this February (shift.newco.co)
| Pretty Sure That Amazon, Facebook, Apple, Google Are Bad. | 937 | pretty-sure-that-amazon-facebook-apple-google-are-bad-13170bc59dd | 2018-04-20 | 2018-04-20 09:37:52 | https://medium.com/s/story/pretty-sure-that-amazon-facebook-apple-google-are-bad-13170bc59dd | false | 1,210 | Covering the biggest shift in business and society since the industrial revolution | null | newcofestivals | null | NewCo Shift | newco | BUSINESS,WORK CULTURE,STARTUP,ENTREPRENEURSHIP,CAPITALISM | newco | Tech | tech | Tech | 142,368 | John Battelle | A Founder of NewCo, Federated Media, sovrn Holdings, Web 2 Summit, Wired, Industry Standard; writer on Media, Technology, Culture, Business | dac511047268 | johnbattelle | 44,146 | 1,421 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-07-09 | 2018-07-09 03:13:25 | 2018-07-26 | 2018-07-26 06:13:32 | 1 | true | zh-Hant | 2018-07-26 | 2018-07-26 06:13:32 | 0 | 13176dd5e643 | 0.298113 | 16 | 0 | 0 | 香港人最鍾意一窩蜂,羊群心態,執輸行頭慘過敗家嘅概念根深蒂固。咩咩雲端大數據AI… | 2 | 珍惜生命,遠離AI研發
Hongkongers love nothing more than piling onto the latest craze. Herd mentality, and the notion that missing out is worse than squandering the family fortune, run deep. Cloud, big data, AI, blockchain: everyone claims to be doing it all, and quite a few IT9s genuinely believe that because the market keeps chanting "AI", there must be money in doing AI, so they throw themselves into R&D and product building. Doing R&D is no free lunch; it is a hardcore cash-burning game. Even if you can afford to burn the money, burning it guarantees no results, and you are racing the whole world, much like mining bitcoin: a pure arms race. Unless you have unlimited money, you should not even think about doing any R&D.
AI, of course, is a point of Hong Kong pride. After all, sensetime counts as a Hong Kong company (let's not argue over that definition for now), a super AI unicorn, so feeling smug all the way to the moon is only natural. But they got there by pouring in a huge number of mainland PhD specialists early on, dedicated to crunching algorithmic models, paying for the results with sweat and piles of burned banknotes.
Giants like Google talk about democratizing AI, so inevitably they work both upstream and downstream. Upstream means the daily state-of-the-art papers, the "today's me beats yesterday's me" kind of research; downstream means specific applications of AI. For general downstream things like NLP and object recognition, Google will do the work well and then happily share it with the whole world, so that no single giant can sit on the IP and charge rent for it. That is what "democratize" means. sensetime's model, by contrast, is to make a living from licensing: two completely different philosophies. (Note: sensetime has done nothing wrong; everyone has to make a living.)
A small Hong Kong company claiming it can build chatbots, going off to do NLP and train its own models, is basically heading in completely the wrong direction. For something this general, others have vastly more resources; if they have not done it today, they will surely do it tomorrow. Do you honestly believe you can beat Google at NLP? Cantonese is indeed a decent entry point, and mixed Cantonese-English is a fairly unique edge, but the risk of betting on Cantonese is simply too high. Time to market takes time, grabbing market share takes more time, and the market is absurdly small. Go all in, and the moment someone open-sources the same thing your work becomes worthless; try not to cry when you lose everything.
AI is a gigantic bubble, and many AI startups are already folding one after another. This is a global trend, not just a Hong Kong one. Naturally so: you spend years believing you have built core value and a competitive edge, only for a latecomer to pick up someone's published research and do it better than you. How do you live that down? That kind of misery is hard to explain to outsiders. It is like every year's IO/WWDC: when the new features are announced, startups watch as if it were a lottery draw, praying the OS does not ship a new built-in app that eats their lunch.
So if you must do AI, only do the lowest-level, single-purpose applications: recognize a particular kind of flower, spot some specific pattern, take someone else's ImageNet-trained model and lightly tweak the final layer to get the job done, ideally in a way that does not directly benefit anyone else. As the saying goes, high tech loses money while low tech rakes it in; doing AI the low-key way is a living that lasts, and that is the smart play.
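To illustrate the "lightly tweak the final layer" idea, here is a minimal transfer-learning sketch of my own (not anything the author shipped), assuming Keras with a pretrained MobileNetV2 backbone; the class count and the training data are placeholders:
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models
NUM_CLASSES = 5  # placeholder: e.g. five flower species to recognize
# Reuse a backbone someone else already trained on ImageNet
base = MobileNetV2(weights='imagenet', include_top=False, pooling='avg',
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the expensive part
# Bolt a tiny task-specific classifier onto the end
model = models.Sequential([
    base,
    layers.Dense(NUM_CLASSES, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=5)  # your own small dataset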
| 珍惜生命,遠離AI研發 | 92 | 珍惜生命-遠離ai研發-13176dd5e643 | 2018-07-26 | 2018-07-26 06:13:33 | https://medium.com/s/story/珍惜生命-遠離ai研發-13176dd5e643 | false | 26 | null | null | null | null | null | null | null | null | null | Research | research | Research | 17,602 | chiuto | 9upper/曾經識寫code/而家係專業吹水 | 47ea8c013c97 | chiuto | 207 | 225 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-03 | 2018-03-03 21:56:43 | 2018-03-21 | 2018-03-21 01:33:49 | 4 | false | en | 2018-07-08 | 2018-07-08 15:04:12 | 6 | 13192964edb3 | 5.866038 | 5 | 0 | 0 | In minutes | 5 | Well, we actually made it not literally in garage. We are just a “garage-type” startup.
How we measured Twitter demographics in garage using AI
In minutes
We are proud to publish our first case study. In this case study we used Demografy to measure age range and gender of US Twitter users and compared our results with two separate studies on Twitter demographics published by Pew Research and comScore. We also made a brief overview of existing technologies of measuring demographics of website audiences.
Demografy is a B2B SaaS platform that uses machine learning based noninvasive technology to get demographic data using only names. It can be used to get demographic insights or append lists with missing demographic data.
Unlike traditional solutions, businesses don’t need to know and disclose their customer or prospect addresses, emails or other sensitive information. This makes Demografy privacy by design and enables businesses to get 100% coverage of any list since all they need to know is names.
For those who are interested in results, we are going to start with key findings and then dive deeper into comparing existing technologies and describing our methodology.
Key Findings
Accuracy
We evaluated Demografy’s accuracy compared to well-established existing solutions. The gender-measurement accuracy of Demografy was 96% (49% male / 51% female vs 51% male / 49% female) if we treat the Pew Research survey results as the gold standard. The age-measurement accuracy was 94% compared to comScore. These numbers are pretty much in line with our existing accuracy benchmarks, which are 97% and 95% for gender and age range respectively when compared to self-reported data.
Time and cost
*- not including data collection time, though this process doesn’t take much time.
**- it’s hard to estimate the large-scale costs that comScore and Pew Research bear, but they are much higher than the cost of applying a machine learning algorithm to a set of easily available data
Goal of the research
One of the reasons behind the research is to assess Demografy’s accuracy. However, before conducting this research we had already tested Demografy’s accuracy by comparing its results with hundreds of thousands of publicly available self-reported records of real people.
So there are three key aspects of the research:
Explore and compare available approaches to measuring Internet demographics.
Benchmark our own performance against known and well established solutions.
Implement a proof of concept of a brand new demographics-measuring method that eliminates the disadvantages of traditional approaches.
As a case for our research we chose the task of measuring US Twitter demographics, more specifically the age range and gender distribution of US Twitter users.
Existing methods and problems
Many are curious how companies measure demographics of websites they don’t have access to. We can split traditional approaches into three main methods:
Panel data from volunteer Internet users. This method involves a large-scale, diversified network of volunteer users with tracking software installed on their devices. These users provide demographic data about themselves, and the sites they visit are tracked automatically. Sampling is then performed so the results can represent a wider population. Some examples are comScore and Nielsen.
Surveys of Internet users. These are traditional surveys of users that include questions about their demographics and Internet usage. Examples are Pew Research or any other polling organization.
Cookie-based on-site analytics. Most popular example is Demographics in Google Analytics. It involves putting third-party cookies on site visitors’ devices to track them across the Google Audience Network in order to infer their demographics using Internet usage patterns and/or Google+ profile data. Unlike other methods, this method is available only to site owners.
As the major goal of our research is measuring demographics of third-party websites (for example competitors’ websites) we will focus on the first two methods since they allow to measure not only own websites but any other available.
While these two approaches are recognized as being pretty accurate, with a sampling error of plus or minus approximately 3 percentage points, they have some disadvantages:
Cost. Large, diversified panels of Internet users or large-scale surveys are very expensive, involving heavy infrastructure and many man-hours.
Time. These measurements are normally run over a long period of time to collect and process enough data from all respondents or panel users.
The two examples of these approaches we’re going to use in our research are comScore (panel data) and Pew Research (surveys).
Methodology and the new method
In this research we measured the gender and age distributions of US Twitter users and compared them to other research. For this purpose we combined two studies, from Pew Research and comScore, on gender and age distribution respectively. Both studies were conducted during 2016, around the same time the data we fed into Demografy was collected.
Normalizing data
Pew Research, Demographics of US social media in 2016. A survey of 1,520 adult Americans conducted March 7-April 4, 2016. Pew Research samples landline and cellphone numbers to yield a random sample of the US population with an average sampling error of 2.9 percentage points, according to their methodology. The study provides both gender and age distributions. However, the metrics used differ from ours: Pew reports the age distribution of the US population, while comScore reports the age distribution of US Twitter users, which is more convenient for our case. So we used only the gender data from Pew Research, to avoid excessive normalization that could introduce significant estimation errors. For gender, we normalized Pew Research’s data using US Census gender data for 2016, so Pew’s figures of 24% (share of the US male population) and 25% (share of the US female population) became 49% male and 51% female of the US Twitter population, respectively.
comScore, Distribution of Twitter users in the United States by age group. comScore uses a panel-data approach, meaning it has a large, diversified network of volunteers with self-reported demographic data and tracking software that analyzes their Internet usage. comScore normally updates its statistics each month, resulting in a 30-day data collection period similar to Pew Research’s 4-week survey period. They don’t publish their accuracy, but they sample their audience to represent the entire US population, so they probably have a comparable 3% sampling error. comScore provides the more convenient metric for age groups: the age distribution of the US Twitter population. However, their age groups differ from the ones Demografy uses, so we normalized the data and collapsed their 6 groups into 3. Since the new age boundaries fall in the middle of the original groups, we split each such group into two equal parts and added the halves to the respective new groups. So 18–24 (17.7%), 25–34 (22.5%), 35–44 (19.5%) became 18–39 (49.95%, i.e. 17.7% + 22.5% + 9.75%), or 50% after rounding (see the small arithmetic sketch below). Though this is not a perfect way of normalizing data, we took it as the best available option given the lack of additional data and the low probability of noticeably skewing the results.
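As a small illustration of the arithmetic above (only the figures quoted in this article are used; the even split of the 35–44 bucket is the simplifying assumption just described):
# Shares of US Twitter users per comScore age bucket, as quoted above
share_18_24 = 17.7
share_25_34 = 22.5
share_35_44 = 19.5
# The new 18-39 boundary falls mid-bucket, so 35-44 is split into equal halves
share_18_39 = share_18_24 + share_25_34 + share_35_44 / 2
print(round(share_18_39, 2))  # 49.95, reported as 50% after rounding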
Demografying Twitter
As for our own measurement, we already had a dataset of 500,000+ random Twitter users collected in 2016 for another project using the public Twitter API. The collected profiles are completely random, to ensure there is no bias in the source data. For the research we cleansed this data to get a quality sample of US Twitter accounts only. For this purpose we used the available location profile data and applied text mining algorithms to keep only accounts of people (those with personal names) located in the US. Additionally, we used only accounts that were active during the 30 days before they were collected. Finally, we extracted a random sample of 10,000 US accounts.
After that we applied our proprietary technology to detect the age and gender distributions of these accounts. The resulting data was compared to Pew Research and comScore.
Conclusion
Demografy can be used as a new, viable solution for measuring demographics on a large scale. Unlike traditional approaches, it doesn’t require long-term and expensive investments, while showing comparable accuracy. At the same time it should be noted that Demografy is limited to audiences containing personal names. For instance, it can’t be used to measure anonymous site audiences. However, it can be used to measure sites like social networks with published profile data of their users. It can also be applied to marketing lists and other data sources containing names.
Follow us in social networks to get updates:
Twitter
Facebook
Medium
| How we measured Twitter demographics in garage using AI | 94 | how-we-measured-twitter-demographics-in-garage-using-ai-13192964edb3 | 2018-07-08 | 2018-07-08 15:04:12 | https://medium.com/s/story/how-we-measured-twitter-demographics-in-garage-using-ai-13192964edb3 | false | 1,369 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Demografy | First privacy-enabled platform that predicts customer demographics using AI - www.demografy.com | 3441f6119822 | demografy | 13 | 10 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 9c6f47bd575c | 2018-05-24 | 2018-05-24 20:37:12 | 2018-05-31 | 2018-05-31 17:23:43 | 2 | false | en | 2018-05-31 | 2018-05-31 17:23:43 | 1 | 1319bb261347 | 2.734277 | 2 | 0 | 1 | The voice assistants still do not have a widespread use in México, mainly because they are not yet available in Spanish. In English they… | 5 | Voice Assistants. At your service?
Voice assistants are still not in widespread use in México, mainly because they are not yet available in Spanish. In English they have already gained acceptance and, with several assistants from different brands, they promise what could be the next level of interaction between advertising and consumers.
What are they?
Voice assistants are devices that are permanently listening, activated by voice, and able to understand the natural language we use to communicate with other people, unlike the way we usually communicate with computers.
To use a computer, tablet, or cell phone, we have to type commands or click on the screen to get the results we are looking for.
That makes the relationship more complicated, because we often have to use just the right terms, or refine them, to find what we need. We also have to focus our attention on that activity, like when we want to send a message or look up an address while driving.
With voice assistants, it is possible to have an interaction as if we were talking with a person; the assistant can understand what we are looking for based on the context, what we are doing at the moment, and our usage history.
The context
If we search for information about “Andromeda”, the possible results could be about the television series, the constellation, the mythological character, or the video game; we would have to sort through the content to find what we really want.
Voice assistants, using the context of what we asked, the other sites we visited before, the other searches we made, or whether we came back to look for something related, offer us a better result.
If it does not understand what we are asking, or finds ambiguities it cannot resolve, it can ask questions to confirm or gather more details, such as: “Regarding Andromeda, do you mean the constellation, the series, the video game, or the mythological character?” In this way it will refine its knowledge of what we need and incrementally offer better answers.
Record
Continuing with the example of a search on “Andromeda”: the kinds of topics we ask about, the sites where we ask, our content preferences, and whether we have watched the series or read about other mythological characters or about astronomy all help the assistant determine which results will be most relevant at the moment.
The more we use it, the better it gets at predicting what we will need, and it adapts to our style and tastes to respond immediately with the right answer.
New opportunities
Advances in voice assistants present new opportunities for brands to be present and to gain deeper knowledge of consumers’ habits and behavior patterns.
Users’ tastes, or their intention to buy something, can be deduced from the phrases and words they use and the time of day or week when they use them.
In this way it becomes possible to make offers that fit the consumer’s context, at the moment they have a specific need, and thus create advertising that is less intrusive and is perceived as useful.
Take the pantry shopping list: knowing our habits, the assistant can suggest products we forgot or that are running low, and even new ones we have not tried, based on our preferences.
Do you think that voice assistants are the present?
Are you doing something to create advertising in this channel?
| Voice Assistants. At your service? | 50 | voice-assistants-at-your-service-1319bb261347 | 2018-05-31 | 2018-05-31 17:29:24 | https://medium.com/s/story/voice-assistants-at-your-service-1319bb261347 | false | 623 | Driving the AI Marketing movement | null | aimamarketing | null | AIMA: AI Marketing Magazine | aimarketingassociation | AI,MARKETING,MARKETING TECHNOLOGY,MACHINE INTELLIGENCE,AI MARKETING | AIMA_marketing | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Gabriel Jiménez | CONSULTATIVE SELLING | AI FOR BUSINESS | CHATBOTS | ANALYTICS | SPEAKER | WRITER | TEACHER | d1be01aca7a8 | garabujo77 | 449 | 472 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-12-09 | 2017-12-09 02:37:53 | 2017-12-09 | 2017-12-09 02:39:42 | 0 | false | en | 2017-12-09 | 2017-12-09 02:39:42 | 0 | 131bc0c70b47 | 0.483019 | 5 | 0 | 0 | Don’t be surprised that an AI beat chess software. Be surprised that it played 1,228,800,000 games in 4 hours. | 1 | It’s the automation
Don’t be surprised that an AI beat chess software. Be surprised that it played 1,228,800,000 games in 4 hours.
We conflate "AI" and "automation" at our peril.
The power of automation isn't the algorithm. It's the relentless, parallel attention.
The AI that makes a drone hunt a person isn't the scary part; it's the 5,000 of them, equipped with sensors, never stopping.
The software that tells a self-driving car not to hit anyone is cool, but the array of superhuman sensors, total lack of distraction, and instantaneous response is what saves lives.
Yes, machine learning can outperform on a narrow task quickly. But it’s the abundant training data, superhuman sensors, and unswerving, unflinching automation that really makes technology overwhelming in terms of economic impact.
| It’s the automation | 14 | its-the-automation-131bc0c70b47 | 2018-03-24 | 2018-03-24 01:28:22 | https://medium.com/s/story/its-the-automation-131bc0c70b47 | false | 128 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Alistair Croll | Writer, speaker, accelerant. Intersection of tech & society. Strata, Startupfest, Bitnorth, FWD50. Lean Analytics, Tilt the Windmill, HBS, Just Evil Enough. | b46946f1386e | acroll | 12,566 | 5,595 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 1f35b6f451e8 | 2018-05-22 | 2018-05-22 09:12:04 | 2018-05-22 | 2018-05-22 09:13:00 | 0 | false | en | 2018-05-22 | 2018-05-22 12:14:08 | 7 | 131cca05e994 | 0.403774 | 0 | 0 | 0 | http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/master/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb | 5 | Daily webography for 3 dummies to make it in machine learning — Act 13, Scene 2
Time Series Forecasting as Supervised Learning (machinelearningmastery.com)
Excel Viewer - Visual Studio Marketplace (marketplace.visualstudio.com)
Machine Learning for time series analysis | Kaggle (www.kaggle.com)
Rédigez en Markdown ! [Write in Markdown!] (openclassrooms.com)
pandas.read_csv - pandas 0.23.0 documentation (pandas.pydata.org)
Jupyter Notebook (hub.mybinder.org)
http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/master/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb
| Daily webography for 3 dummies to make it in machine learning — Act 13, Scene 2 | 0 | daily-webography-for-3-dummies-to-make-it-in-machine-learning-act-13-scene-2-131cca05e994 | 2018-05-22 | 2018-05-22 12:14:09 | https://medium.com/s/story/daily-webography-for-3-dummies-to-make-it-in-machine-learning-act-13-scene-2-131cca05e994 | false | 107 | We offer contract management to address your aquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best ressources to handle their transactions fast & SAFE. | null | ethercourt | null | Ethercourt Machine Learning | ethercourt | INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING | ethercourt | Timeseries | timeseries | Timeseries | 346 | WELTARE Strategies | WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner | 9fad63202573 | WELTAREStrategies | 196 | 209 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-03-19 | 2018-03-19 14:05:19 | 2018-03-19 | 2018-03-19 14:05:10 | 1 | false | tr | 2018-03-19 | 2018-03-19 14:08:48 | 1 | 131d1f33b608 | 2.928302 | 1 | 0 | 0 | null | 5 | Machine Learning Pazarlama Sektörünün Geleceğini Nasıl Etkileyecek?
The marketing industry has always run on data, not just today but in the past as well. What is different about the data we have today is its sheer volume. Because most big data is unstructured, it is hard for marketers to extract actionable insights from it. Recently, marketers have come to see artificial intelligence, and machine learning systems in particular, as perfectly suited to this task.
Machine learning algorithms, which learn from data, allow computer programs to detect the right patterns and collect the necessary information. Applied to marketing, these algorithms can help marketers generate deep insight by identifying factors such as consumer preferences, behaviors, and current market conditions.
How useful can machine learning and big data be for marketers?
Consider these statistics:
The global information-processing market is expected to reach 12.5 billion dollars by 2019.
30% of all companies will use artificial intelligence to augment at least one of their sales processes. (Gartner)
How will big data and machine learning affect the marketing world in the future? Let's take a closer look:
Optimizing the Customer Journey
Before marketers turned to the digital world to gain insight into online consumer behavior, mapping customer journeys was fairly predictable. For example, you did not have to worry about factors such as negative online reviews (on social media or in online forums) that could hurt your brand image and influence purchasing behavior.
With online accessibility, the customer journey has now changed. Consider these statistics:
Nearly two-thirds of American adults spend 12 hours a week on social media.
55% of customers research on social networks before making a purchase.
Mapping the customer journey is no longer linear, but data science and machine learning can provide insight into what consumers want:
Optimal targeting:
Thanks to machine learning solutions, marketers can now discover the conditions and contexts needed for precise targeting. This also means they no longer have to manually tailor offers to specific demographics.
Identifying trigger points in user journeys:
As machine learning algorithms and software improve, online user journeys can be tracked better, which also makes it possible to learn more about what triggers customers.
Automated customer service:
The world is no stranger to automation. We can order our favorite fast food through a mobile app, book a doctor's appointment, and even buy a plane ticket online without speaking to a single person. And now, thanks to machine learning, automation could bring a breakthrough to customer service. By predicting patterns in data based on customer queries and consumer history, machine learning could:
Recommend services:
Experts estimate that 85 percent of all online consumer interactions could happen without a human being involved. We will see systems that can identify users' spending habits from their past behavior and use that data to recommend financial services.
Selling without prior experience:
Think of Netflix, a customer-centric service and a video recommendation system. It recommends videos by using machine learning algorithms to identify patterns in customer data. Apply the same ideas to a sales-driven business and you can imagine what the future might hold. Data this precise offers the chance to sell specific products without any prior experience.
Interest-based targeting:
Facebook ads give access to the data of thousands of users. By analyzing these users' interests, Facebook can also find other users who share the same interests. In the future, Facebook will take this data and match up shared interests and behaviors.
Faster prospecting, customer targeting, and conversions:
With lookalike targeting, your ads are shown to people who share the traits of your best customers and who are likely to respond to your campaigns. Machine learning tools such as Facebook Pixel can have very high potential for analyzing user data. When the right customers are targeted quickly, conversions come faster and run higher as well.
"Processed data is information. Processed information is wisdom."
Ankala V. Subbarao
Making ads smarter:
The predictive analytics data generated by machine learning tools can help marketers make their ads smarter. For example, suppose you need to promote an airline. Tools like Google AdWords can tell you that your target audience searches for low-cost flights during the holiday season, but what about data you cannot see, such as weather conditions? It does not snow heavily enough in every state to affect ticket purchases among audiences spread across different states. If machine learning applies its correlation algorithms to analyze traveler data, it can allow marketers to make more informed decisions about the content and timing of their ad campaigns.
Making ads more profitable:
Businesses spend millions on marketing campaigns, but not all of them bring the expected return. Machine learning can let marketers design strategies that target the right audiences and earn more revenue from their ads.
In the future…
Marketers will gain a better view of customer psychology by using big data to understand behavior patterns. And machine learning will help them reach that point.
You may also be interested in our article titled "Why is predictive data analytics important for businesses?"
| Machine Learning Pazarlama Sektörünün Geleceğini Nasıl Etkileyecek? | 1 | machine-learning-pazarlama-sektörünün-geleceğini-nasıl-etkileyecek-131d1f33b608 | 2018-03-19 | 2018-03-19 14:08:49 | https://medium.com/s/story/machine-learning-pazarlama-sektörünün-geleceğini-nasıl-etkileyecek-131d1f33b608 | false | 723 | null | null | null | null | null | null | null | null | null | Büyük Veri | büyük-veri | Büyük Veri | 68 | Datateam Bilgi Teknolojileri | http://www.datateam.com.tr/ | 48081bdbc544 | socialdatateam | 9 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-06 | 2018-07-06 21:24:42 | 2018-07-06 | 2018-07-06 22:24:28 | 3 | false | en | 2018-07-06 | 2018-07-06 22:24:28 | 5 | 131f7d29c14f | 6.599057 | 2 | 0 | 0 | The roles and the skills required | 5 | Introduction to Data Science
The roles and the skills required
Data Science Equilibrium
Article questions
What is data science?
What skills are required?
What are the roles in data science?
Why is it important to follow the predefined roles?
If you can’t answer all of these with a sufficient amount of confidence, I encourage you to read the article or skim for the points of interest.
If you do know the answers, I’ll be glad if you read it anyway. In the end, repetition is the mother of wisdom.
Preface
Not long ago, when my interest in data science started to kick in, some of the first things I got confused about are stated in the questions above. And surprisingly enough, not only are the answers to those questions non-trivial to find (or actually define), but in the case of roles, for example, they are very easily mistaken in practice.
Writing an article about anything you are learning is a great way to learn it yourself and remember what you’ve learnt.
As I am quite new to data science, a very experienced person from whom I’ve been learning a lot recently suggested that I write an article and eventually present it to others. Hence, following his advice, I am going to present my understanding of data science, the skill set required, and the roles it covers.
Data Science
Data Science domains [fig. 1]
So what IS data science, right? The definition I like is that data science is inclusive analysis[2]. It is data analysis that includes various sorts of data. Data science basically uses anything that is potentially usable to make sense of the data, trying to arrive at actionable steps and insights.
For the analysis to be valid and useful, domain knowledge is usually needed as well. Such data analysis may then be used for business improvements, future predictions, or information gathering (e.g. research, studies, polls).
The ultimate purpose of data science is not only to apply existing algorithms and procedures, but also to define new ones and integrate them into an efficient pipeline. That is, in addition to the empirical methods used in data science, scientific and mathematical foundations are also required.
Data science combines statistics and programming in an applied setting. Statistics and mathematics are among the most important skills if one wants to make sense of the data. Now, the programming part might be a source of confusion and might actually depend highly on the given role and position.
Roles
Data Scientist
A data scientist is a person who has better statistical skills than any software engineer and better programming skills than any statistician.
In the article Data Scientist: The Sexiest Job of the 21st Century[3] the data scientist is described like this:
Think of him or her as a hybrid of data hacker, analyst, communicator, and trusted adviser. The combination is extremely powerful — and rare.
Let’s discuss the personality traits a bit more clearly. Let’s focus on the whole process here, so that later on, when mentioning the other roles, we have a brighter idea.
In order to make sense of the data, a data scientist needs to explore and exploit the data. He explores and combines various sources, parses unstructured data, and searches for patterns or latent variables hidden in them. He also might come up with new features, such as combinations or augmentations of the current ones; this is usually called feature engineering. Expert visualization skills are very desirable.
To get a feeling for distributions, correlations, and possible relationships, analysis is perhaps the most important part. Depending on its goal, it might eventually lead to, for example, a prediction pipeline, causal inference, or process optimization.
A data scientist is a storyteller, and the interpretation is solving for value. There is no value in analysis without a story behind it. That being said, a good data scientist should be able to communicate plausible, well-founded results in a simple yet meaningful way to the customer or product manager.
It is important to note that not every analysis will (or even can) yield the desired results. Therefore, customer satisfaction is not guaranteed. However, a disproved hypothesis or failed expectations do not imply a bad analysis; on the contrary, a good analysis should uncover all possible issues, and if the issue is the customer’s expectation, so be it.
No data scientist should be responsible for the way data is handled or served in production, nor for the pipeline of the software process.
Data Analyst
For me, this is by far the most confusing one. And to be honest, I have trouble differentiating them from data scientists. So here is my very simplified point of view:
Data Analyst poses questions. Data scientist answers them.
What that means is that a data analyst should be competent enough to understand the data, see patterns in it, and provide insights. He is usually not expected to ‘get his hands dirty’ with the data itself or come up with statistical models; however, when given the results of a research study, forms, questionnaires, etc., he should be able to interpret and clearly understand them, draw conclusions, or point out irregularities. I usually think of him as a junior data scientist or a ‘consulting data scientist’.
Data Engineer
If data scientists should have better statistical skills than software engineering skills, then this applies to data engineer exactly vice versa.
A data engineer should be a professional in data handling. It is his responsibility not only to handle data for data scientists, but also to serve it in production in the most efficient way possible, thus creating pipelines for data flow, running big queries, or storing the data in a database.
Such operations might require professional skills in distributed computing and, depending on the level of operation, also database skills (as opposed to a data scientist, who should know how to query the data in advanced SQL, a data engineer should understand the way it is implemented on the backend).
As data engineers are also very often the ones actually integrating the algorithms, some statistical awareness might be beneficial, but it should not be a necessity.
The software engineer is responsible for the way the data is handled in production and for the software design, development pipeline and data flow.
Does Big Data fit into the scenario? Definitely! Though sometimes specifically differentiated as ‘Big Data Engineers’, data engineers usually deal with huge amounts of data, and it is their responsibility to implement a way to handle the volume and velocity of that data for both the development team (i.e. data scientists and analysts) and the customer.
The difference between data engineer and data scientist is usually drawn using Venn diagrams but I don’t find it as descriptive as the following awesome figure which I found in a great article by O’Reilly comparing the two roles.
Data Scientist vs Data Engineer
Machine Learning Engineer
As data science and machine learning slowly become democratized, and statistics truly is an inclusive discipline, one can imagine that the roles intersect quite a bit. And they really do. I bet that if you are part of a company with a data science / analytics team, or even better, part of that team yourself, you’ve been given tasks in the past that included both data science and engineering.
I’ve included this type as it has become a term in itself. It is the war horse, the unicorn of the computer science world and the hot applicant for the ‘sexiest job of the 21st century’. A machine learning engineer is supposed to handle both data science and engineering. It is very rare to find such a person.
I am not necessarily saying that this is the role every aspiring data science geek should aim for. I’ve always respected professionalism and always believed that one can truly master only one discipline. However, multidisciplinary knowledge is very important and helps you become more aware of your colleagues’ work, and also more valuable.
Conclusion
As a person who’s seen the unhappy faces of those who’ve been given tasks outside the role they applied for, I am pleading with you: the next time you ask your data engineering colleague to build a visualization of the data for a customer presentation, or your data scientist friend to implement a Flask application to serve requests, recall this article and think about whether they should actually be doing that. Not only will they not be satisfied by the job, they’ll likely do it poorly and grudgingly.
Also, both data scientists and engineers are vital. There is absolutely no reason one should be prioritized over the other, or treated with more respect. You might say that being a good data scientist requires higher education. That is a valid point, but being a good data engineer requires a huge amount of experience.
So there you go, this is my point of view on the roles in data science. We’ve become familiar with the roles in data science and the skill set required, and learnt about the importance of differentiating them. Hopefully I’ve not confused you even more, nor provided any false information, and hopefully you can now easily answer the questions mentioned at the beginning.
Let’s jump back to the beginning and try to answer those questions now.
Resources:
[1] https://towardsdatascience.com/introduction-to-statistics-e9d72d818745
[2] Barton Poulson, Data Science Foundations: Fundamentals, Lynda.com
[3] https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century
Pictures:
[fig. 1]: https://everett.wsu.edu/majorsdegrees/data_analytics/
| Introduction to Data Science | 2 | introduction-to-data-science-131f7d29c14f | 2018-07-06 | 2018-07-06 22:24:28 | https://medium.com/s/story/introduction-to-data-science-131f7d29c14f | false | 1,603 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Marek Cermak | https://www.linkedin.com/in/ai-mcermak/ | 979f037d0089 | marekermk | 2 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-26 | 2017-09-26 07:40:43 | 2017-09-26 | 2017-09-26 07:42:19 | 1 | false | en | 2017-09-26 | 2017-09-26 07:42:19 | 0 | 1320ccb1f633 | 1.875472 | 0 | 0 | 0 | Before the advent of social media, voicing documented opinion was the preserve of a few people, generally called the “opinion makers”.
But… | 4 | Likes, Shares and Sentiment
Before the advent of social media, voicing documented opinion was the preserve of a few people, generally called the “opinion makers”.
But in today’s day and age, anyone anywhere with an internet connection can register her likes, dislikes, endorsements at minimal cost and effort.
This information, when collected on a large enough scale, can help in building models that help gauge what these “opinion makers” are thinking.
And these people matter because they are the consumers — of everything from Hotstar subscriptions to airline tickets to financial products.
The product that we have conceived is a web tool that lets users know the sentiment of viewers toward a movie trailer on Youtube, from the date of the trailer’s launch to the date of the movie’s release.
It will help people as well as production houses monitor what people are thinking about upcoming movies.
Though it sounds like a simple task, the complication with Youtube is that the comments are not necessarily structured using proper grammar and vocabulary.
This makes it tricky to deduce the sentiment from the comment. Also, in India most of the comments are in Hinglish, which coupled with the bad grammar and varied vocabulary make it even harder to wring the sentiment out of the comments.
The above graph depicts the behaviour of the top five positive and top five negative words, based on the L2 penalties applied to their weights.
The L2 penalty is a proxy for estimating the effect a word has on the sentiment of a sentence.
As you can see, when the L2 penalty is close to zero, the weights are spread over a large range for the respective negative and positive words.
But this is not scalable for finding complicated decision boundaries (a problem intrinsic to sentiment analysis on Youtube comments) on large datasets.
The decision boundaries establish the difference between the positive and negative character of the words. This characteristic in turn lends the sentiment to a sentence.
As we move to the center region of the graph, the words have a smaller spread in their weights. This is good for scaling a model up to a large and complicated dataset such as the one we are dealing with, and for building a deeper neural net.
Also, as the words have a smaller spread, it becomes easier to extend our vocabulary and label a large dataset for supervised sentiment analysis.
Meanwhile, the right hand side part of the graph depicts L2 weights converging to zero, which is as expected.
Although this is the most popular form of regularisation used by AI practitioners, we are working on other novel methods, such as dynamically rotating penalty parameters using Lagrange multipliers.
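As a rough illustration of how an L2 penalty shrinks per-word weights, here is a toy sketch of my own (not the authors' actual pipeline); the comments and labels below are made up:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
comments = ["awesome trailer cant wait", "looks amazing",
            "worst trailer ever", "so boring and bad"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative
vec = CountVectorizer()
X = vec.fit_transform(comments)
for C in [100.0, 1.0, 0.01]:  # smaller C means a stronger L2 penalty
    clf = LogisticRegression(penalty='l2', C=C).fit(X, labels)
    spread = clf.coef_.max() - clf.coef_.min()
    print("C =", C, "-> word-weight spread =", round(spread, 3))
# The spread of weights across words shrinks as the penalty grows,
# mirroring the behaviour described for the graph above.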
| Likes, Shares and Sentiment | 0 | likes-shares-and-sentiment-1320ccb1f633 | 2017-09-26 | 2017-09-26 08:13:28 | https://medium.com/s/story/likes-shares-and-sentiment-1320ccb1f633 | false | 444 | null | null | null | null | null | null | null | null | null | Movies | movies | Movies | 84,914 | B Sundaresan | Journalist with NewsRise. Making sense of AI at nullpointer.in. Ex @htTweets. All posts attributable to NullPointer which is Akshay Bharti, Ishaan Kapri and me | c7322be17e8 | bhakBala | 1 | 17 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-06 | 2018-08-06 15:34:03 | 2018-08-30 | 2018-08-30 13:36:01 | 1 | false | en | 2018-08-30 | 2018-08-30 13:36:01 | 1 | 13237db9ec6e | 3.215094 | 2 | 0 | 0 | Customer service agents are the unknown heroes of every company. Vigilantes from the shadows who silently do their daily job of saving the… | 5 | 6 Tips to Supercharge your Support
Customer service agents are the unknown heroes of every company. Vigilantes from the shadows who silently do their daily job of saving the company from angry, frustrated, and disappointed customers.
We’ve all seen the movies about superheroes with their incredible powers that help them fight the bad guys. Unfortunately, customer service agents do not have superpowers, but luckily they don’t have to fight the bad guy who wants to destroy the world.
Still, it would be nice if we could somehow hone their abilities and supercharge their skills to excel at customer support. We know a way to do that, six ways, to be exact, and we’ve decided to share them with you.
Here is the list of 6 tips that will help you improve your customer support.
1. Impress on the 1st impression
The way we respond to a customer when they contact us might be the first (or the only) impression they will have of our brand and the whole company behind it.
A first impression usually lasts; if it’s bad, customers will probably form a similar opinion of the whole company. Remember this and think about it often.
What impression do you want to leave? How do you want to be perceived as a company? You may prefer the good old professional and polite way, or you may choose to be that witty, fun company that embraced a different culture and sticks with it.
Either way, you probably (at least you should) know who your target audience is and what they prefer, so go for an approach that suits them. Once you decide that, stick with it and always think about what image your brand and company send out to the world.
Sometimes it’s adding a “sir” at the end of the sentence or including a hilarious gif to your response that sets the tone of the conversation.
2. Set expectations
Our heroes do fight on the front lines and are those who receive the first hit from a frustrated customer, but they have their limitations too. Make sure your customer is aware of that. Set some boundaries and let the customer know what is beyond the agents’ powers.
There are a few ways to do that:
Let them know they might need to wait a little longer until you get an insight and a deeper understanding of their problem. That way they will know you’re there and working on their issue and not just ignoring them.
Make sure they understand why some steps are necessary and that all you do is with an end-goal to solve their problem. Even though you want to make the experience as smooth as possible, sometimes you might have to walk them through some tricky steps to take care of the problem.
Update them on the progress. Don’t leave your customers in the dark, staring at the blank chat screen without any reply will make them feel forgotten or ignored.
3. Accuracy over speed
Yes, speed is important, but accuracy is always a priority. The wrong answer will make them call you again or get even more frustrated when their problem still did not go away, so make sure you always double check all the facts before sending a reply.
In the end, it will take more time than both sides wanted and you already know that’s not a good scenario.
4. Focus on the fundamentals
Have all the tools and resources you might need. If you often struggle to respond to a customer and follow the flow of the conversation because switching tabs gets confusing, you might need an extra monitor. See if that’s possible.
Agents may need an additional software or a printed guide handy. Be prepared.
5. Learn from mistakes
Support agents have dozens or even hundreds of interactions every week. All those conversations are a possible lesson if you look back and see when, where, and how could you improve your work. When you notice an area where you or some of the agents could improve, see if the company can organize an additional training, or just ask the most experienced agent in the company to help you.
6. Ask questions, listen to the answers, and keep track of your work
Pay attention to the kinds of questions you ask agents, customers, or other colleagues. The right question is more likely to lead to the right answer. Keep track of all the answers you get and use them as a guide every time a similar issue shows up. You’ll be the hero of your office with all that information in one place, and it might be a crucial contribution to the knowledge base.
This article was originally posted on Jatana’s blog. Visit today to access more awesome content!
| 6 Tips to Supercharge your Support | 4 | 6-tips-to-supercharge-your-support-13237db9ec6e | 2018-08-30 | 2018-08-30 13:36:01 | https://medium.com/s/story/6-tips-to-supercharge-your-support-13237db9ec6e | false | 799 | null | null | null | null | null | null | null | null | null | Customer Service | customer-service | Customer Service | 18,984 | Giovanni Toschi | On a mission to empower 1M support teams with Ai - Founder @ BotSupply.ai & Jatana.ai | 5a927ef41003 | giovannitoschi | 1,367 | 587 | 20,181,104 | null | null | null | null | null | null |
0 | import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data", one_hot=True)

# Three hidden layers
n_nodes_hl1 = 1000
n_nodes_hl2 = 1000
n_nodes_hl3 = 1000

# Number of classes
n_classes = 10

# Will go through 100 features at a time
batch_size = 100

# Placeholder variables (height * width)
# These are placeholders for some values in the graph
x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

def neural_network_model(data):
    # Define weights and biases with their dimensions
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    # bias is used to make some neurons fire even if all inputs is 0
    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}

    # Layer values = (input_data * weights) + biases
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)
    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'])
    l3 = tf.nn.relu(l3)
    output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']
    return output

def train_neural_network(x):
    prediction = neural_network_model(x)
    # Cost function is cross entropy with logits
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    # Choose the optimizer
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    # Cycles of feed forward + backprop
    hm_epochs = 10
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Training the network
        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c
            print('Epoch', epoch, 'Completed out of', hm_epochs, 'loss:', epoch_loss)
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

train_neural_network(x)
| 9 | null | 2018-05-31 | 2018-05-31 22:23:12 | 2018-06-01 | 2018-06-01 00:44:37 | 2 | false | en | 2018-06-06 | 2018-06-06 15:39:03 | 3 | 1323ed93c6f1 | 2.953145 | 4 | 0 | 1 | This is a non-mathematical introduction to neural network. I would highly recommend to also deep-dive into the mathematics behind it- as it… | 5 | Neural network in 50 lines of Python code
This is a non-mathematical introduction to neural network. I would highly recommend to also deep-dive into the mathematics behind it- as it would provide a holistic understanding. But it is super awesome to code this in Python and check the output. I’m sure you will understand why there is so much hype around deep learning !
Objective : Use the handwritten digits dataset (MNIST), which contains 60,000 examples of handwritten digits and their labels (0–9). Pass it to a neural network to predict the correct digit.
MNIST Data
Step 1 : Import MNIST dataset using tensorflow
Step 2 : Create 3 hidden layers with 1000 nodes each, and initialize the number of classes to 10 (the number of digits, i.e. 0–9). Also, create two placeholder variables x and y which will hold values from the tensorflow graph. Further, set the batch size to 100: although we could feed the whole MNIST dataset at once, we will run the optimization on batches of 100 examples at a time.
Step 3 : Modeling a neural network
i. Define weights and biases with their respective dimensions for all the layers. (3 hidden layers + 1 output layer)
ii. Compute the values for all the layers, which is nothing but: (input_data * weights) + bias
iii. Return the output from the output layer
Step 4 : Now that we have modeled our neural network, let’s run it !
i. First, retrieve the prediction from our model above
ii. Define the cost function (the variable which we are trying to minimize)
iii. Choose the optimizer for optimizing the cost function- we are using “AdamOptimizer”
iv. Initialize the number of cycles of feed-forward and backpropagation (each cycle is called an epoch) to 10
v. For each epoch and each batch of data (inside the for loop), run the optimizer and output the value of the cost function. The cost function should reduce massively initially and then it would be stagnant.
vi. Calculate the accuracy
Step 5 : Run your neural network !
Output : We will get the number of epochs completed along with the value of the loss function at every stage. Also, the accuracy from this network is a whopping 95.93%
Neural Network Output
Why should you care about neural network?
From a basic neural network, without any tuning, we got an accuracy of 95.93%. This is much higher than what we would typically achieve out of the box with traditional classification algorithms.
Other amazing articles and videos :
Sentdex videos on tensorflow and deep learning (Also, the source for the code)
3Blue1Brown videos on the math behind neural network
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Connect on LinkedIn
| Neural network in 50 lines of Python code | 53 | code-your-first-neural-network-in-python-1323ed93c6f1 | 2018-06-18 | 2018-06-18 05:50:11 | https://medium.com/s/story/code-your-first-neural-network-in-python-1323ed93c6f1 | false | 681 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Rohan Joseph | Operations Research and Data Science @ Virginia Tech https://www.linkedin.com/in/rohan-joseph-b39a86aa/ | a2819eeaf8c5 | rohanjoseph_91119 | 292 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | pip install featuretools
import featuretools as ft
import numpy as np
import pandas as pd
train = pd.read_csv("Train_UWu5bXk.csv")
test = pd.read_csv("Test_u94Q5KV.csv")
# saving identifiers
test_Item_Identifier = test['Item_Identifier']
test_Outlet_Identifier = test['Outlet_Identifier']
sales = train['Item_Outlet_Sales']
train.drop(['Item_Outlet_Sales'], axis=1, inplace=True)
combi = train.append(test, ignore_index=True)
combi.isnull().sum()
# imputing missing data
combi['Item_Weight'].fillna(combi['Item_Weight'].mean(),
inplace = True)
combi['Outlet_Size'].fillna("missing", inplace = True)
combi['Item_Fat_Content'].value_counts()
# dictionary to replace the categories
fat_content_dict = {'Low Fat':0, 'Regular':1, 'LF':0, 'reg':1,
'low fat':0}
combi['Item_Fat_Content'] = combi['Item_Fat_Content'].replace(
fat_content_dict, regex=True)
combi['id'] = combi['Item_Identifier'] + combi['Outlet_Identifier']
combi.drop(['Item_Identifier'], axis=1, inplace=True)
# creating and entity set 'es'
es = ft.EntitySet(id = 'sales')
# adding a dataframe
es.entity_from_dataframe(entity_id = 'bigmart',
dataframe = combi,
index = 'id')
es.normalize_entity(base_entity_id='bigmart',
new_entity_id='outlet',
index = 'Outlet_Identifier',
additional_variables =
['Outlet_Establishment_Year', 'Outlet_Size',
'Outlet_Location_Type', 'Outlet_Type'])
print(es)
feature_matrix, feature_names = ft.dfs(entityset=es,
target_entity = 'bigmart',
max_depth = 2,
verbose = 1,
n_jobs = 3)
feature_matrix.columns
feature_matrix.head()
feature_matrix = feature_matrix.reindex(index=combi['id'])
feature_matrix = feature_matrix.reset_index()
from catboost import CatBoostRegressor
categorical_features = np.where(feature_matrix.dtypes =='object')[0]
for i in categorical_features:
feature_matrix.iloc[:,i]=feature_matrix.iloc[:,i].astype('str')
feature_matrix.drop(['id'], axis=1, inplace=True)
train = feature_matrix[:8523]
test = feature_matrix[8523:]
# removing unnecessary variables
train.drop(['Outlet_Identifier'], axis=1, inplace=True)
test.drop(['Outlet_Identifier'], axis=1, inplace=True)
# identifying categorical features
categorical_features = np.where(train.dtypes == 'object')[0]
from sklearn.model_selection import train_test_split
# splitting train data into training and validation set
xtrain, xvalid, ytrain, yvalid = train_test_split(train, sales,
test_size=0.25,
random_state=11)
model_cat = CatBoostRegressor(iterations=100, learning_rate=0.3,
depth=6, eval_metric='RMSE',
random_seed=7)
# training model
model_cat.fit(xtrain, ytrain, cat_features=categorical_features,
              eval_set=(xvalid, yvalid),  # a validation set is required when use_best_model=True
              use_best_model=True)
# validation score
model_cat.score(xvalid, yvalid)
| 36 | 7219b4dc6c4c | 2018-08-22 | 2018-08-22 15:31:23 | 2018-08-22 | 2018-08-22 11:14:53 | 17 | false | en | 2018-08-22 | 2018-08-22 16:28:12 | 4 | 13260eae9270 | 11.196226 | 21 | 0 | 3 | Anyone who has participated in machine learning hackathons and competitions can attest to how crucial feature engineering can be. It is… | 5 | A Hands-On Guide to Automated Feature Engineering in Python
Anyone who has participated in machine learning hackathons and competitions can attest to how crucial feature engineering can be. It is often the difference between getting into the top 10 of the leaderboard and finishing outside the top 50!
I have been a huge advocate of feature engineering ever since I realized its immense potential. But it can be a slow and arduous process when done manually. I have to spend time brainstorming over what features to come up with, and analyzing their usability from different angles. Now, this entire FE process can be automated, and I'm going to show you how in this article.
Source: VentureBeat
We will be using the Python feature engineering library called Featuretools to do this. But before we get into that, we will first look at the basic building blocks of FE, understand them with intuitive examples, and then finally dive into the awesome world of automated feature engineering using the BigMart Sales dataset.
Table of Contents
What is a feature?
What is Feature Engineering?
Why is Feature Engineering required?
Automating Feature Engineering
Introduction to Featuretools
Implementation of Featuretools
Featuretools Interpretability
1. What is a feature?
In the context of machine learning, a feature can be described as a characteristic, or a set of characteristics, that explains the occurrence of a phenomenon. When these characteristics are converted into some measurable form, they are called features.
For example, assume you have a list of students. This list contains the name of each student, the number of hours they studied, their IQ, and their total marks in the previous examinations. Now you are given information about a new student — the number of hours he/she studied and his/her IQ, but his/her marks are missing. You have to estimate his/her probable marks.
Here, you’d use IQ and study_hours to build a predictive model to estimate these missing marks. So, IQ and study_hours are called the features for this model.
2. What is Feature Engineering?
Feature Engineering can simply be defined as the process of creating new features from the existing features in a dataset. Let’s consider a sample data that has details about a few items, such as their weight and price.
Now, to create a new feature we can use Item_Weight and Item_Price. So, let’s create a feature called Price_per_Weight. It is nothing but the price of the item divided by the weight of the item. This process is called feature engineering.
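As a one-line pandas sketch (the column names Item_Weight and Item_Price and the sample numbers below are purely illustrative):
import pandas as pd

# hypothetical item data matching the example above
items = pd.DataFrame({'Item_Weight': [9.3, 5.92, 17.5],
                      'Item_Price': [249.8, 48.3, 141.6]})
items['Price_per_Weight'] = items['Item_Price'] / items['Item_Weight']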
This was just a simple example to create a new feature from existing ones, but in practice, when we have quite a lot of features, feature engineering can become quite complex and cumbersome.
Let’s take another example. In the popular Titanic dataset, there is a passenger name feature and below are some of the names in the dataset:
Montvila, Rev. Juozas
Graham, Miss. Margaret Edith
Johnston, Miss. Catherine Helen “Carrie”
Behr, Mr. Karl Howell
Dooley, Mr. Patrick
These names can actually be broken down into additional meaningful features. For example, we can extract and group similar titles into single categories. Let's have a look at the unique titles in the passenger names.
It turns out that titles like ‘Dona’, ‘Lady’, ‘the Countess’, ‘Capt’, ‘Col’, ‘Don’, ‘Dr’, ‘Major’, ‘Rev’, ‘Sir’, and ‘Jonkheer’ are quite rare and can be put under a single label. Let’s call it rare_title. Apart from this, the titles ‘Mlle’ and ‘Ms’ can be placed under ‘Miss’, and ‘Mme’ can be replaced with ‘Mrs’.
Hence, the new title feature would have only 5 unique values as shown below:
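A rough pandas sketch of this grouping, assuming a dataframe df with the usual Name column from the Titanic dataset (the regex simply grabs the word that ends with a period):
# extract the title token ("Mr", "Miss", "Rev", ...) from each passenger name
df['Title'] = df['Name'].str.extract(r' ([A-Za-z]+)\.', expand=False)

rare_titles = ['Lady', 'Countess', 'Capt', 'Col', 'Don', 'Dona',
               'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer']
df['Title'] = df['Title'].replace(rare_titles, 'rare_title')
df['Title'] = df['Title'].replace({'Mlle': 'Miss', 'Ms': 'Miss', 'Mme': 'Mrs'})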
So, this is how we can extract useful information with the help of feature engineering, even from features like passenger names which initially seemed fairly pointless.
3. Why is Feature Engineering required?
The performance of a predictive model is heavily dependent on the quality of the features in the dataset used to train that model. If you are able to create new features which provide more information to the model about the target variable, its performance will go up. Hence, when we don't have enough quality features in our dataset, we have to lean on feature engineering.
In one of the most popular Kaggle competitions, Bike Sharing Demand Prediction, the participants are asked to forecast the rental demand in Washington, D.C based on historical usage patterns in relation with weather, time and other data.
As explained in this article, smart feature engineering was instrumental in securing a place in the top 5 percentile of the leaderboard. Some of the features created are given below:
Hour Bins: A new feature was created by binning the hour feature with the help of a decision tree
Temp Bins: Similarly, a binned feature for the temperature variable
Year Bins: 8 quarterly bins were created for a period of 2 years
Day Type: Days were categorized as “weekday”, “weekend” or “holiday”
Creating such features is no cakewalk — it takes a great deal of brainstorming and extensive data exploration. Not everyone is good at feature engineering because it is not something that you can learn by reading books or watching videos. This is why feature engineering is also called an art. If you are good at it, then you have a major edge over the competition, quite like Roger Federer and his mastery of tennis shots.
4. Automating Feature Engineering
Analyze the two images shown above. The left one shows a car being assembled by a group of men during early 20th century, and the right picture shows robots doing the same job in today’s world. Automating any process has the potential to make it much more efficient and cost-effective. For similar reasons, feature engineering can, and has been, automated in machine learning.
Building machine learning models can often be a painstaking and tedious process. It involves many steps so if we are able to automate a certain percentage of feature engineering tasks, then the data scientists or the domain experts can focus on other aspects of the model. Sounds too good to be true, right?
Now that we have understood that automating feature engineering is the need of the hour, the next question to ask is — how is it going to happen? Well, we have a great tool to address this issue and it’s called Featuretools.
5. Introduction to Featuretools
Featuretools is an open source library for performing automated feature engineering. It is a great tool designed to fast-forward the feature generation process, thereby giving more time to focus on other aspects of machine learning model building. In other words, it makes your data “machine learning ready”.
Before taking Featuretools for a spin, there are three major components of the package that we should be aware of:
Entities
Deep Feature Synthesis (DFS)
Feature primitives
a) An Entity can be considered as a representation of a Pandas DataFrame. A collection of multiple entities is called an Entityset.
b) Deep Feature Synthesis (DFS) has got nothing to do with deep learning. Don’t worry. DFS is actually a Feature Engineering method and is the backbone of Featuretools. It enables the creation of new features from single, as well as multiple dataframes.
c) DFS creates features by applying Feature primitives to the entity relationships in an EntitySet. These primitives are the methods often used to generate features manually. For example, the primitive "mean" would find the mean of a variable at an aggregated level.
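For instance, primitives can be requested explicitly when calling DFS. A sketch using the entityset es and target entity name from the BigMart example later in this article (the particular primitive list here is just an illustration):
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_entity='bigmart',
                                      agg_primitives=['mean', 'sum', 'std', 'count'],
                                      max_depth=2)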
The best way to understand and become comfortable with Featuretools is by applying it on a dataset. So, we will use the dataset from our BigMart Sales practice problem in the next section to solidify our concepts.
6. Implementation of Featuretools
The objective of the BigMart Sales challenge is to build a predictive model to estimate the sales of each product at a particular store. This would help the decision makers at BigMart to find out the properties of any product or store, which play a key role in increasing the overall sales. Note that there are 1559 products across 10 stores in the given dataset.
The below table shows the features provided in our data:
You can download the data from here.
6.1. Installation
Featuretools is available for Python 2.7, 3.5, and 3.6. You can easily install Featuretools using pip.
6.2. Loading required Libraries and Data
6.3. Data Preparation
To start off, we’ll just store the target Item_Outlet_Sales in a variable called sales and id variables in test_Item_Identifier and test_Outlet_Identifier.
Then we will combine the train and test set as it saves us the trouble of performing the same step(s) twice.
Let’s check the missing values in the dataset.
Quite a lot of missing values in the Item_Weight and Outlet_Size variables. Let's quickly deal with them:
6.4. Data Preprocessing
I will not do an extensive preprocessing operation since the objective of this article is to get you started with Featuretools.
It seems Item_Fat_Content contains only two categories, i.e., “Low Fat” and “Regular” — the rest of them we will consider redundant. So, let’s convert it into a binary variable.
6.5. Feature Engineering using Featuretools
Now we can start using Featuretools to perform automated feature engineering! It is necessary to have a unique identifier feature in the dataset (our dataset doesn’t have any right now). So, we will create one unique ID for our combined dataset. If you notice, we have two IDs in our data — one for the item and another for the outlet. So, simply concatenating both will give us a unique ID.
Please note that I have dropped the feature Item_Identifier as it is no longer required. However, I have retained the feature Outlet_Identifier because I plan to use it later.
Now before proceeding, we will have to create an EntitySet. An EntitySet is a structure that contains multiple dataframes and relationships between them. So, let's create an EntitySet and add our combined dataframe combi to it.
Our data contains information at two levels — item level and outlet level. Featuretools offers a functionality to split a dataset into multiple tables. We have created a new table ‘outlet’ from the BigMart table based on the outlet ID Outlet_Identifier.
Let’s check the summary of our EntitySet.
As you can see above, it contains two entities — bigmart and outlet. There is also a relationship formed between the two tables, connected by Outlet_Identifier. This relationship will play a key role in the generation of new features.
Now we will use Deep Feature Synthesis to create new features automatically. Recall that DFS uses Feature Primitives to create features using multiple tables present in the EntitySet.
target_entity is nothing but the entity ID for which we wish to create new features (in this case, it is the entity ‘bigmart’). The parameter max_depth controls the complexity of the features being generated by stacking the primitives. The parameter n_jobs helps in parallel feature computation by using multiple cores.
That’s all you have to do with Featuretools. It has generated a bunch of new features on its own.
Let’s have a look at these newly created features.
DFS has created 29 new features in such a quick time. It is phenomenal as it would have taken much longer to do it manually. If you have datasets with multiple interrelated tables, Featuretools would still work. In that case, you wouldn’t have to normalize a table as multiple tables will already be available.
Let’s print the first few rows of feature_matrix.
There is one issue with this dataframe — it is not sorted properly. We will have to sort it based on the id variable from the combi dataframe.
Now the dataframe feature_matrix is in proper order.
6.6. Model Building
It is time to check how useful these generated features actually are. We will use them to build a model and predict Item_Outlet_Sales. Since our final data (feature_matrix) has many categorical features, I decided to use the CatBoost algorithm. It can use categorical features directly and is scalable in nature. You can refer to this article to read more about CatBoost.
CatBoost requires all the categorical variables to be in the string format. So, we will convert the categorical variables in our data to string first:
Let’s split feature_matrix back into train and test sets.
Split the train data into training and validation set to check the model’s performance locally.
Finally, we can now train our model. The evaluation metric we will use is RMSE (Root Mean Squared Error).
1091.244
The RMSE score on the validation set is ~1091.24.
The same model got a score of 1155.12 on the public leaderboard. Without any feature engineering, the scores were ~1103 and ~1183 on the validation set and the public leaderboard, respectively. Hence, the features created by Featuretools are not just random features, they are valuable and useful. Most importantly, the amount of time it saves in feature engineering is incredible.
7. Featuretools Interpretability
Making our data science solutions interpretable is a very important aspect of performing machine learning. Features generated by Featuretools can be easily explained even to a non-technical person because they are based on the primitives, which are easy to understand.
For example, the features outlet.SUM(bigmart.Item_Weight) and outlet.STD(bigmart.Item_MRP) mean outlet-level sum of weight of the items and standard deviation of the cost of the items, respectively.
This makes it possible for people who are not machine learning experts to contribute as well, drawing on their domain expertise.
End Notes
The featuretools package is truly a game-changer in machine learning. While its applications are understandably still limited in industry use cases, it has quickly become ultra popular in hackathons and ML competitions. The amount of time it saves, and the usefulness of the features it generates, have truly won me over.
Try it out next time you work on any dataset and let me know how it went in the comments section!
Originally published at www.analyticsvidhya.com on August 22, 2018.
| A Hands-On Guide to Automated Feature Engineering in Python | 165 | a-hands-on-guide-to-automated-feature-engineering-in-python-13260eae9270 | 2018-08-22 | 2018-08-22 16:28:13 | https://medium.com/s/story/a-hands-on-guide-to-automated-feature-engineering-in-python-13260eae9270 | false | 2,543 | Analytics Vidhya is a community of Analytics and Data Science professionals. We are building the next-gen data science ecosystem https://www.analyticsvidhya.com | null | analyticsvidhya | null | Analytics Vidhya | analytics-vidhya | MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DEEP LEARNING,DATA SCIENCE,PYTHON | analyticsvidhya | Machine Learning | machine-learning | Machine Learning | 51,320 | Prateek Joshi | Data Scientist (linkedin.com/in/prateek-joshi-iifmite) | 27c664cbb93c | prateekjoshi565 | 111 | 28 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-30 | 2018-05-30 19:44:00 | 2018-05-30 | 2018-05-30 19:52:48 | 1 | false | en | 2018-05-30 | 2018-05-30 20:44:52 | 0 | 13262e7dbefa | 4.754717 | 10 | 1 | 0 | Artificial intelligence is paving the way for the next leap in human advancement by bringing about a sea ofchange in the way we interpret… | 5 | Using Blockchain to Unlock Artificial Intelligence
Artificial intelligence is paving the way for the next leap in human advancement by bringing about a sea of change in the way we interpret data across the technological, financial, medical, and commerce sectors.
The writing is on the wall in seemingly mundane ways: cruise control equipped vehicles jumped several generations and are now fully autonomous and self-driving. Siri no longer simply recognizes your voice, she adapts to it along with your speech patterns and is getting better at listening to you and making suggestions. Amazon, Netflix, Pandora, and Googleʼs predictive search are all anticipating your next purchase, film or song choice, and in the case of predictive search results, sometimes even your next thought.
Data is the New Oil
These everyday instances are moments during which we casually interact with artificial intelligence, but its progress hinges on our interacting with technology in ways that create stores of data for AI to interpret.
Competing forces such as Google and Amazon monopolize data ownership and its analysis. This creates artificial data scarcity and limits the ability for smaller tech outfits, medical research facilities, universities, individuals, and others who would otherwise progress the field of applied AI from accessing data and computational resources.
A recent technological innovation called blockchain may be exactly what AI has been waiting for in order to unlock the vast stores of centrally controlled data. Blockchain is a cryptographically secured, decentralized, and global ledger which allows people who do not know each other to trustfully share data on an immutable record of events.
How can blockchain unlock the data bottleneck and bring AI to life?
Oceans of Data and the Problem of Trust
Data streams created by people on a constant basis are locked away in data silos which are isolated ponds of potentially life-saving, world-changing information and make up small bits of the massive ocean of data we're unwittingly floating atop. But because automotive, social media, telecommunications, and e-commerce companies are centralized and compete with each other, the data they accumulate from their users is, firstly, not owned by the users themselves and secondly, not shared publicly which creates the aforementioned silos.
Our data oceans are doubling in size every 60 days. To put this into perspective, Eric Schmidt, former CEO of Google and Alphabet Inc., stated during a 2010 conference that every two days there was more data being generated than in all of human history up to 2003.
The main driving force behind such a quantum leap in data production was, and remains, user-generated data, and these types of data are increasing with unstoppable velocity.
Text messages, status updates, tweets, Google searches, documents, Youtube videos, financial and electronic health records, voice messages, shared photos, the list of user-generated data goes on. But until recently, such crude data has been largely uninterpretable, leaving the vast trove of human-generated data untapped.
Unstructured data contains many nuances that traditional computing systems are unable to glean information from. An important element in most user-generated data is context such as emotion or history between participants in an exchange of messages.
Analyzing and interpreting such context or the content of other data like photos, video, and medical records take an extraordinary amount of computational resources that, so far, only large corporations have at their disposal. The reward for accurately interpreting unstructured data sets and creating hypotheses from them is the development of predictive technologies and autonomous systems that have the power to reshape society as we know it.
The gold rush to collect, own, and analyze as much data as possible has led to a cold war-esque race in which each of the major technological players has set up a distributed computing outfit, such as Amazonʼs AWS, Googleʼs DeepMind, and IBMʼs Watson.
The perpetual news cycle of hacked personal data and data sold to the highest bidder in closed backroom deals, however, creates serious doubt over whether anyoneʼs personal data should be owned or stored by centralized services. The closed nature of centralized AI means you will simply have to trust that the algorithms and data-sets being used to come to decisions that affect your life, such as when youʼre riding in an autonomous vehicle, are good.
Bringing Together the Data Haves and AI Haves on the Blockchain
Combining AI with blockchain fundamentally changes the current AI milieu in several key ways:
Data is moved away from centralized black boxes like Facebook and onto open-source blockchains by the commodification of data, giving users ownership over their data and allowing them to be paid for it in a blockchainʼs native currency.
The current centralized data model means both the data and AI algorithms used to interpret them belong to the same company. By moving user-owned data onto the blockchain, algorithms become the most important part of the equation and lead to the creation of marketplaces that join projects offering AI with users offering their data-sets.
Blockchain-based AI is inherently trustworthy due to the immutable, public nature of the blockchain ledger. Every event that takes place is transcribed onto the ledger in such a way that it can always be traced.
There are several notable blockchain projects already doing significant work in the AI space.
1. AI Crypto — Sungjae Lee, CEO of the blockchain-based marketplace AI Crypto, believes the monopolization of AI by global giants Google and Amazon have led to a stifling of the field. According to Lee, the AI Crypto Ecosystem will provide a “…global shelter for AI engineers, scientists, and small start-up companies,” by providing a marketplace wherein computational resources, user data, and AI models can be directly exchanged between participants with services compensated using AIC, the native token of the AI Crypto Ecosystem.
2. Ocean Protocol — Ocean Protocol is another notable effort in the blockchain-based AI realm. Centralized data exchanges "…lack fair and flexible pricing mechanisms, data providers lose control over their assets, and there is a lack of transparency over how data is used." By their own measure, 16 ZB of data was generated globally but only 1% of it was analyzed. They aim to change that by providing a middle ground between the data haves and the AI haves, unlocking what they estimate to be a trillion-dollar data sharing market.
3. SingularityNet — SingularityNet offers an AlaaS (AI-as-a-service) marketplace wherein owners of AI models offer their algorithms for rent to those with data-sets theyʼre looking to analyze. Singularityʼs inbuilt search function allows for users with data-sets to then find similar models to the ones they rented or are interested in, allowing for a dataset to be pored over by several AI models or for results from similar analyses to be compared.
Notably, in all three of the aforementioned blockchain-based AI solutions, users always retain ownership of the data sets they offer.
Blockchain's promise to restore ownership of data back into the hands of users is a major factor in its potential to disrupt AI amongst many other verticals. As the move to commodify data and shift ownership back to users becomes more enticing, the true potential of blockchain-based AI will blossom in earnest.
| Using Blockchain to Unlock Artificial Intelligence | 172 | using-blockchain-to-unlock-artificial-intelligence-13262e7dbefa | 2018-06-02 | 2018-06-02 10:00:40 | https://medium.com/s/story/using-blockchain-to-unlock-artificial-intelligence-13262e7dbefa | false | 1,207 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | Bitcoin Bravado | Become a crypto insider! 💰Learn FA & TA 📊Diversified 📚Education so you can take advantage of this financial revolution. Follow us on telegram: TryBravado.com | 17aec33a3922 | bitcoinbravado1 | 119 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 55f7b4ba041f | 2018-06-08 | 2018-06-08 06:07:24 | 2018-06-08 | 2018-06-08 06:10:19 | 2 | false | en | 2018-06-08 | 2018-06-08 06:10:19 | 2 | 132761f3a5c8 | 1.341824 | 5 | 1 | 0 | I am happy to announce that OpenSpace.ai has come out of stealth! | 3 | OpenSpace is out of stealth!
I am happy to announce that OpenSpace.ai has come out of stealth!
So…what do we do? It starts here:
Affordable housing crises, outdated infrastructure, aging power grids: these are massive, pressing problems facing our country, and the world. But we’re not facing them down as fast as we should — in part because we haven’t provided the men and women in construction with the same powerful tools to manage their work that office workers take for granted.
It’s time to change that.
At OpenSpace, we’re using cutting-edge technology — think the perception and navigation AI systems you see in self-driving cars — to allow people who work out in the real world to efficiently capture their work, analyze it, and get things done. We’re building time machines for the job site.
If we can reduce the cost, time (and pain) it takes to build by even a couple percent, we will absolutely revolutionize our approach to construction, and revolutionize how we manage humanity’s built environment overall. And we can do a lot better than a couple percent.
And yes, we have some big customers, and we have scanned millions of square feet already — and have a lot more work to do!
We have open positions for Generalist Software Developers, Machine Vision Engineers, QA, Designers and Director of Engineering, all based out of San Francisco. IMHO, you’ll find our team to be incredibly impressive people.
Interested? You can apply at:
https://openspace.ai/about.html#careers
Onward!
#infrastructure #construction #buildings #software #hiring #ai
| OpenSpace is out of stealth! | 37 | openspace-is-out-of-stealth-132761f3a5c8 | 2018-06-18 | 2018-06-18 14:52:58 | https://medium.com/s/story/openspace-is-out-of-stealth-132761f3a5c8 | false | 254 | thoughts & musings from a technology person | null | null | null | jeevans-thoughts | null | jeevans-thoughts | TECHNOLOGY,DESIGN,ENTREPRENEURSHIP,HARDWARE | zoinkit | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Jeevan Kalanithi | null | 19ab048a9d61 | jeevank | 301 | 196 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-29 | 2018-07-29 08:37:47 | 2018-07-29 | 2018-07-29 10:22:33 | 3 | false | en | 2018-07-29 | 2018-07-29 11:04:47 | 17 | 1327cd5400b0 | 2.183962 | 0 | 0 | 0 | On July 25th, 2018, an introductory Machine Learning (ML) session took place at the Sharjah Entrepreneurship Center (Shera’a), which met… | 3 | Machine Learning & IBM Watson Studio Workshop
On July 25th, 2018, an introductory Machine Learning (ML) session took place at the Sharjah Entrepreneurship Center (Shera’a), which met the interest of many. As it was intended for beginners, it involved looking at understanding ML concepts and how to use IBM Watson Studio to apply what they learned during the session in a simplified fashion. This session was presented by Mahitab Hassan and Areej Essa from the IBM Cloud Developer Experience team.
Full house at the Machine Learning workshop at Shera’a
Mahitab Hassan (myself) kicked off the evening by defining AI and presenting some of the applications in the industry that utilize the technology. I then went over the basic concepts in Machine Learning, followed by the different types of learning used in various applications and the Machine Learning pipeline typically seen in any given project. After that, it was time for the code lab, where we explored IBM Watson Studio in order to work through the Machine Learning pipeline hands-on. This included performing activities from data cleansing using the IBM Data Refinery service to creating a simple machine learning model using the IBM Watson Machine Learning service and visualizing it through interactive dashboards generated with the IBM Cognos Dashboard Embedded service.
Mahitab introducing basic Machine Learning concepts
After the code lab, Areej Essa first introduced the Call for Code Challenge for 2018. She explained that this is a competition that asks participants to out-think natural disasters and build solutions that will significantly improve the current state of disaster preparedness in their communities and around the world. She added that the project with the most impact will be implemented with the help of IBM, The Linux Foundation, UN Human Rights, and the American Red Cross, in addition to other benefits. After that, she presented the Startup with IBM program (formerly the Global Entrepreneur Program or GEP), which aims to help startups succeed by providing access to free technology, developer tools, educational resources, technical support, and up to $120K in IBM Cloud credits.
Areej introducing Call for Code
The day ended with exchanging contacts and addressing the various questions asked by those who attended the session.
Interested in introductory and advanced workshops on topics including AI, Blockchain, Containers and Microservices, Cloud, Data Science, IoT, and Machine Learning? And perhaps missed our last workshop? Join us at our meetup group to stay tuned for our future developer events.
Resources
Slides: https://ibm.box.com/v/introtoml
Sign-up/Login to IBM Cloud: http://ibm.biz/sheraaml
Code Lab Content: https://github.com/Kuroi-Yuki/IntroToML
More Code Content and Patterns on AI: https://developer.ibm.com/code/technologies/artificial-intelligence/
More Code Content and Patterns on Data Science: https://developer.ibm.com/code/technologies/data-science/
Explore IBM Watson AI Solutions: https://www.ibm.com/watson/products-services/
| Machine Learning & IBM Watson Studio Workshop | 0 | machine-learning-ibm-watson-studio-workshop-1327cd5400b0 | 2018-07-29 | 2018-07-29 11:04:47 | https://medium.com/s/story/machine-learning-ibm-watson-studio-workshop-1327cd5400b0 | false | 433 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Mahitab Hassan | null | 696e521f847f | mahitab.hassan | 5 | 6 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-25 | 2018-03-25 14:47:08 | 2018-03-27 | 2018-03-27 17:52:49 | 14 | false | en | 2018-03-27 | 2018-03-27 18:11:42 | 9 | 132810b8c1f | 9.729245 | 5 | 0 | 0 | Targeted audience : Any beginner who has never seen calculus or maybe touched it years ago and trying to learn optimization of machine… | 3 | Optimization in Machine Learning
Targeted audience : Any beginner who has never seen calculus or maybe touched it years ago and trying to learn optimization of machine learning algorithm.
Before diving into any details of optimization, let's first try to answer a simple question: what is optimization?
To answer this question we need to understand two basic concepts.
Maxima: a maximum value of a function F(X) at some value of X. It can be either global or local, depending on the function. A function may have many local maxima but only one global maximum, and there are functions that have no global or local maxima at all.
Minima: a minimum value of a function F(X) at some value of X. It can likewise be either global or local. A function may have many local minima but only one global minimum, and there are functions that have no global or local minima at all.
So, optimization is finding the global minimum, or in some cases the global maximum.
Now that we understand what optimization means, the question that arises is: how do we optimize a function or equation?
The answer to this question is calculus. There are some other methods too, but calculus is so well suited to the job that it can't really be replaced by any other technique.
OK, so let's understand what calculus is, but not in the way you learned it in your classroom. Instead, imagine that you needed calculus and had to invent it for yourself.
Calculus was developed in parallel and independently by two brilliant mathematicians, Isaac Newton and Gottfried Leibniz. When it comes to machine learning there are essentially two concepts we need to learn, differentiation and limits, and the best part is that the two are interconnected. You must be wondering, what about integration? In machine learning the use of integration is limited to very few places; differentiation is what is used widely.
In this article we will try to answer the following questions:
What is differentiation?
What are limits?
How is differentiation constructed, and how can we derive any differentiation formula?
How do we use it to optimize our own machine learning algorithm?
So, let's start by understanding the concept of limits.
Limits are one of the most essential, and fairly common-sense, concepts required to understand derivatives. Consider this function:
F(X) = (X² - 4)/(X - 2). The value of this function is unknown at X = 2, because putting X = 2 into the equation turns it into the (0/0) form, which is indeterminate. OK, so it is true that we cannot evaluate this function at X = 2, but how about getting an approximate result, i.e. the value of F(X) at a point very close to 2 but not exactly 2, approached from both ends, the left side and the right side? If F(left side point) and F(right side point) are equal, then whatever that common value is will be our approximate answer. Before getting into the math of solving this function, let's visualize it.
formalizing our discussion ,
H : an infinitely small positive number, close to zero but not zero.
Left hand limit (LHL): a point which is infinitely close to our required point from the left side, which in our case is '2'. The left hand limit is evaluated at (X-H).
Right hand limit (RHL): a point which is infinitely close to our required point from the right side, which in our case is '2'. The right hand limit is evaluated at (X+H).
So, LHL : F(2-H) => [(2-H)² - 4]/[(2-H) - 2], where 'H' is infinitely close to zero but not zero, so the term (2-H) is infinitely close to 2 but not 2, and also smaller in magnitude than 2.
solving,
=> [4 + H² - 4H - 4]/[2 - H - 2]
=> -H[4-H]/(-H) (simplified directly, but if you are unsure work it out yourself)
=> we get 4-H, and because 'H' is infinitely small we can ignore it, as it won't affect our result much.
Now, RHL : F(2+H) => [(2+H)² - 4]/[(2+H) - 2], where 'H' is infinitely close to zero but not zero, so the term (2+H) is infinitely close to 2 but not 2, and also larger in magnitude than 2.
solving,
=> [4 + H² + 4H - 4]/[2 + H - 2]
=> H[4+H]/H (cross check it once again for complete confidence)
=> we get 4+H, and because 'H' is infinitely small we can ignore it, as it won't affect our result much.
Now, our LHL and RHL are both equal to 4, so we can say the approximate value of F(2) is 4.
This is the concept of limits. Simple, isn't it? And if you think about it intuitively it all makes sense, since we are only trying to get an approximate answer.
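As a quick sanity check, we can evaluate the function just to the left and right of 2 in Python (the function name f and the step h are just illustrative choices):
def f(x):
    return (x**2 - 4) / (x - 2)

h = 1e-7                      # a tiny positive number standing in for 'H'
print(f(2 - h), f(2 + h))     # both values come out very close to 4, the limit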
Now that we have a basic idea of limits, let's use it to understand derivatives and the concept of differentiation.
If you are in high school, or maybe in an undergraduate course, chances are you have already heard this term and even used some of the formulas. But in spite of using them, you may still not be confident about what a derivative actually means, both geometrically and intuitively. After reading this article you should be so comfortable with the idea that even if calculus had not been invented, you could have invented it yourself.
The formal definition of differentiation which you have probably read in lots of textbooks is: 'differential calculus is a subfield of calculus concerned with the study of the rates at which quantities change'. But this definition does not tell us why we need the derivative in the first place, or how it came to be so well formalized. Let's answer these questions one by one, starting with the first.
Why do we even need derivative ?
Let’s consider two function both of which are given below,
In the left figure the function is linear, f(X) = X, so the slope (the rate of change of f(x) with respect to x) can be calculated directly as (Y2-Y1)/(X2-X1). The main reason this works is that it is a linear function, i.e. no independent variable has any power on it.
But can this direct method be used to study the change in behavior of the function on the right side? The answer is yes, but it will not work directly; we have to tweak it. The reason we have to tweak it is the void that is present between the two points. So our biggest barrier to getting the slope of the function is the void, but let's understand why there is a void. The answer is the large distance between the two points x1 and x2, and if we can reduce that distance to the point where the void becomes negligible, our figure will look somewhat like this:
Now you can observe that the void becomes so small that the line which previously cut across the curve now looks like a tangent drawn at a certain point on the curve itself. To calculate the slope of that small line, just put all the values into the formula for slope, but before that read the following definitions:
dx : the small distance between X1 and X2
dy: distance between f(x1) and f(x1 + dx)
Note: writing dx and dy is just a notation you can even write your name ;)
Slope : dy/dx
put values ,
slope = [f(x1+dx)-f(x1)]/dx
can we write the slope as dy/dx? Of course we can,
dy/dx = [(x1+dx)²-x1² ]/dx
dy/dx = [x1² + dx² +2*dx*x1- x1²]/dx
dy/dx=[dx(dx + 2*x1)]/dx
dy/dx = dx + 2*x1
dy/dx = 2*x1 (let’s just ignore dx as it is infinitely small quantity or close to zero just as we learned in limits).
final answer is dy/dx = 2*x1 (put x1 = x as our function is in x only)
dy/dx = 2*x
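We can sanity-check dy/dx = 2*x numerically with a tiny finite-difference computation (the test point x1 = 3 and the step size are arbitrary illustrative choices):
def f(x):
    return x**2

x1, dx = 3.0, 1e-6
print((f(x1 + dx) - f(x1)) / dx)   # prints roughly 6.0, matching 2*x at x = 3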
OK, that was our graph-based method; now let's use a geometric figure to validate the result.
Consider a square whose sides are of length 'x', like the one given below.
Now increase the sides of the square by an infinitely small quantity dx in both dimensions, as we do not want to destroy the shape of the square; consider the image below.
Now, calculate the area of only the increased portion, which is
df = dx*x + dx*x + dx*dx
df = 2*dx*x (let's just ignore dx*dx as it is an even smaller quantity)
we get df/dx = 2*x as our final result
After generalizing it, we get
derivative of x^n = n*x^(n-1). Congratulations, you have just derived the power rule of differentiation.
So now you can see that differentiation is not a very complex theory but a simple concept which gives us an approximate result. I would suggest you try to find the derivative of f(x) = x³ by yourself, to develop the differentiation muscles in your brain ;).
Understanding derivatives of addition, multiplication and composition of functions
For the purpose of this discussion we will use the following functions:
addition : Sin(x) + x²
multiplication : Sin(x)*x²
composition: Sin(x²)
Derivatives of Addition of function
the graph of f(x)=Sin(x) + x²
In the figure above, df is the slight change in the height of sin(x) plus the slight change in the height of x², which geometrically makes perfect sense and can easily be calculated as follows,
df => derivative(sin(x)) + derivative(x²)
df => cos(x) + 2*x
Generalizing it, we get
derivative(f(x)+g(x)) => derivative(f(x))+derivative(g(x))
Derivative of Multiplication of function
For this derivation we will again use a rectangle. Consider a rectangle with width sin(x) and height x², like the one given below.
Now let's increase the height and width of the rectangle by the small quantities d(x²) and d(sin(x)) respectively, corresponding to a small change dx in x.
Now, we use the area of the rectangle, which is height * width.
The area of the increased portion of the rectangle is
df = sin(x)*d(x²) + x²*d(sin(x)) + d(x²)*d(sin(x))
but d(x²) * d(sin(x)) is infinitely small so we can ignore it,
df = sin(x)*d(x²) + x²*d(sin(x))
By generalizing it we get,
Derivative(f(x)*g(x)) => derivative(f(x))*g(x)+f(x)*derivative(g(x))
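If you want to double-check the addition and product rules derived above without grinding through the algebra, a symbolic check with sympy (assuming the sympy library is installed) looks like this:
import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sin(x) + x**2, x))   # cos(x) + 2*x (the addition rule; sympy may order terms differently)
print(sp.diff(sp.sin(x) * x**2, x))   # x**2*cos(x) + 2*x*sin(x) (the product rule)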
Derivative of composite function
Studying the change in a composite function is a bit more involved, but don't worry.
For this we will use the following function: f(x) = sin(x²).
observe the diagram carefully below,
With this we can break our function into two separate functions:
g(x) = x²
f(x) = sin(g(x)),
Solving,
df/dg = cos(g(x))
df = cos(g(x))*dg
df/dx = cos(x²)*2*x [putting g(x) = x² and dg/dx = 2*x]
So let's see what just happened:
derivative(f(x)) = [df/dg] * [dg/dx]
which is the chain rule of differentiation. This is one of the most powerful techniques in all of calculus, and the chain rule is the foundation of the backpropagation algorithm, which is the algorithm used to train neural networks. If you are not familiar with these terms, don't worry; they are not needed as of now.
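A quick numerical check of the chain rule result for sin(x²), using a finite difference at an arbitrary test point (x1 = 1.5 is just an illustrative choice):
import numpy as np

def f(x):
    return np.sin(x**2)

x1, dx = 1.5, 1e-6
numeric = (f(x1 + dx) - f(x1)) / dx
analytic = np.cos(x1**2) * 2 * x1      # chain rule: cos(x^2) * 2x
print(numeric, analytic)               # the two values agree closely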
Till now we have learnt all the concepts required to optimize an equation, so let's optimize one.
We will talk only about finding the minimum value of f(x), as finding the maximum value is the same procedure except for some minor changes.
For finding a maximum or a minimum the method is quite straightforward: first we find the derivative of the function, then equate it to zero, since the slope of the tangent passing through a minimum or maximum will be 0, like in the image given below.
But when it comes to machine learning we generally don't have such a well-defined function with so few variables, so this process is not practically applicable in real-life situations. There must be a solution to this problem, though, and the solution is the gradient descent algorithm. But before studying gradient descent we have to learn about vector differentiation.
Vector Differentiation
The concept is very simple: consider a vector which stores the derivatives of a function with respect to every variable present in the function, just like in the image given below.
For a scalar-valued function this vector of partial derivatives is usually called the gradient; when the function is vector-valued, the matrix of all partial derivatives is called the Jacobian matrix. In either case the entries are partial derivatives, i.e. derivatives of the function with respect to only one variable at a time.
Let’s connect all the parts and understand gradient descent
This is an iterative algorithm, i.e. it starts from some initial point, performs an update some 'N' number of times, and then stops.
Gradient descent algorithm
1. Choose some random starting point
2. Calculate the derivative (gradient) of the function at that point
3. Calculate the next point by using the following equation
4. If the difference between the previous point and the current point is very small then stop, otherwise jump to step 2 (a minimal sketch of these steps in Python follows below)
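Here is that minimal sketch. The update rule assumed is the standard one, x_new = x - L*(df/dx), and f(x) = (x - 3)² is just an illustrative function whose minimum we know sits at x = 3:
def grad(x):                    # derivative of f(x) = (x - 3)**2
    return 2 * (x - 3)

L = 0.5                         # learning rate, i.e. the size of each step
x = 10.0                        # some random starting point
for _ in range(100):
    x_new = x - L * grad(x)     # step 3: move against the gradient
    if abs(x_new - x) < 1e-6:   # step 4: stop when the change is tiny
        break
    x = x_new
print(x)                        # converges to 3.0, the minimiser of f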
The algorithm is so simple that it does not need much more explanation, but let's expand on it a little anyway. The concept behind the algorithm is simple: take any random starting point and, by taking small steps, try to reach the global minimum.
You must be wondering what 'L' is in the equation: it is the learning rate, i.e. the size of each step. For the simplicity of this article consider L = 0.5, which is not 100% correct in general but is still acceptable here. Now we have completed all the essential steps needed to start deriving machine learning algorithms from scratch, but that will be covered in the next article. Till then,
Happy Machine Learning!
| Optimization in Machine Learning | 104 | optimization-in-machine-learning-132810b8c1f | 2018-03-28 | 2018-03-28 05:56:45 | https://medium.com/s/story/optimization-in-machine-learning-132810b8c1f | false | 2,194 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Abhinav Singh | just a guy who wants to become a scientist | d0fe2ea90f2b | abhinav199530singh | 12 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-18 | 2017-12-18 17:42:16 | 2017-12-18 | 2017-12-18 20:43:33 | 5 | false | en | 2017-12-18 | 2017-12-18 21:04:02 | 5 | 132a6fba114a | 3.240881 | 1 | 0 | 0 | Machine learning is not new and maybe has been overhyped but it’s an area that I’ve been interested in for awhile now. For me, the end goal… | 5 | Exploring Machine Learning: The beginning
Photo by Alex Knight on Unsplash
Machine learning is not new and maybe has been overhyped but it’s an area that I’ve been interested in for awhile now. For me, the end goal is to learn how to use/create technology that enables human beings to do more meaningful work.
After being relieved of studying for my ACCA (accounting professional) exams about a week ago, I finally had time to delve into it. It only made sense to try to learn what I was really interested in at the moment.
Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.
Here are a few widely publicized examples of machine learning applications you may be familiar with:
The heavily hyped, self-driving Google car? The essence of machine learning.
Online recommendation offers such as those from Amazon and Netflix? Machine learning applications for everyday life.
Knowing what customers are saying about you on Twitter? Machine learning combined with linguistic rule creation.
Fraud detection? One of the more obvious, important uses in our world today.
My plan was simple: do 3 things.
Learn Python
It's an easy-to-understand programming language and relevant for Machine Learning. I started on Codecademy just to get an overview, and I also got an ebook so that if my internet connection fails me, I'd still be able to go through the basic curriculum. So far it's been quite friendly, and knowledge of the basics has been quite helpful in relating to the syntax in Octave.
I should be done by the end of the year, provided my level of laziness stays low.
Take a Tutorial online
For the sake of structured learning, I was either going to take a YouTube tutorial or a MOOC. I found a good one on YouTube but, for starters, I decided to take the quite popular Machine Learning course on Coursera by Stanford University after reading a couple of reviews. I had to download the first 5 weeks overnight just so I wouldn't be bothered by internet issues. So far all I've been doing is maths-related stuff — Linear Regression and Matrices, watching the videos in transit and checking up the meaning/explanation of various terms. I haven't gotten to the fun part where I can see the connection between what I'm doing and real-life applications, but it's necessary that I pass through this stage. I don't fully understand what I've learned so far, but when learning something new it's only normal that the learning curve is steep.
Making progress
For the practicals we're to use either Matlab or Octave. I initially wanted to go for Matlab because its user interface looked more appealing, but there's some error showing up anytime I try to download the additional resources needed to run the program. So I would download Octave later in the middle of the night, as I've had two failed attempts due to its file size and my internet speed.
I just had to take a screenshot to complain
Work on a real project
This part was supposed to be done later, after I was done with the online course, but I recently signified interest in joining a group of coursemates who also want to work on real projects. Things haven't kicked off yet; when they do I would follow the conversations and contribute anywhere I can. It's a pretty diverse group of people from different countries with more experience than I have, so I'm hoping to learn and connect.
For the next year I will be learning as much as I can on the side, and I will always keep my learning style simple.
Maybe I'd end up falling in love with this or maybe not; either way it would definitely help in learning how to use/create technology to enable human beings to do more meaningful work.
I'm open to any advice or help from anyone experienced or knowledgeable in AI/ML.
I would be back to give an update on my progress.
| Exploring Machine Learning: The beginning | 20 | exploring-machine-learning-the-beginning-132a6fba114a | 2018-03-27 | 2018-03-27 00:10:06 | https://medium.com/s/story/exploring-machine-learning-the-beginning-132a6fba114a | false | 638 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Daniel Adeyemi | Friend. Change Agent | Interested in Technology, Design & Human Behavior | Find me @danieltadeyemi or on danieladeyemi.com.ng | c5d70a21419c | danieltadeyemi | 248 | 195 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-22 | 2018-05-22 02:12:23 | 2018-05-22 | 2018-05-22 02:18:25 | 3 | false | en | 2018-05-22 | 2018-05-22 02:18:25 | 1 | 132b2447d9db | 5.942453 | 1 | 0 | 0 | Part 1. Background | 4 | Predicting the Outcome of NFL Games (Part 3)
Part 1. Background
Two previous articles looked at attempts to train neural networks to predict the outcomes of National Football League (NFL) games. Up to 62 game statistics were fed into various architectures of neural networks, and the networks were trained. The resulting trained neural networks were used to predict the outcomes of out-of-training games. The success rate of the neural networks in predicting the outcomes of out-of-training games to within 2 points was only near 25%, regardless of how much training data was used (which varied between 25% of total games and 65% of total games).
Some questions that were left unanswered included 1) Why were so many out-of-training games not predicted well by the trained neural network, 2) What was the confidence level in the out-of-training predictions, and 3) How sensitive were game outcomes to slight-to-moderate changes in a team's statistical performance for an upcoming game?
Part 2. Change in Direction of Methodology
After the less than satisfying performance of the neural networks to predict the results of NFL games, we decided to re-examine our analysis methodology.
Our original methodology utilized 62 team performance statistics for each game in order to train the neural networks and/or predict the results of out-of-training games. These statistics included the number of first downs for and against, average yards per rush play for and against, interceptions for and against, and a number of other "typical" team statistic indicators.
The first thing that we did was to eliminate some duplicity or “colinearity” in the statistics. For example, there were separate statistics for fumbles, interceptions, and turnovers. So we removed the fumbles and interceptions and just used turnovers. We also eliminated some subcomponents of the complicated passer rating statistic and just used the actual passer rating itself. Thus we were able to drop down to 45 unique team statistics for each game.
The major change that we made was to implement a Principal Component Analysis (PCA) as described below.
Part 3. Principal Component Analysis
Principal Component Analysis computes the eigenvalues and eigenvectors of the covariance matrix to arrive at a different set of parameters (or basis functions) to provide as inputs to the neural network.
Recall that we previously fed in 62 team statistics (or 45 team statistics after removal of colinearity) for each game that was played and used the resulting game points differential as the desired output of the neural network. Other than normalizing the team statistics to fit with a sigmoidal neuron transfer function, there was no processing performed on the raw team statistics. The set of statistics for a single game consists of its basis function. That is, the neural network doesn’t “see” the number of first downs or the number of rushing touchdowns. Rather the neural network “sees” a vector of the statistics stacked together. Or, in other words, a basis function. The neural network processes the basis functions through its series of hidden layers and hidden neurons to produce an output (that we want to match the actual output).
With PCA, the eigenvectors of the team statistics covariance transform the team statistics into a different set of basis functions to present as inputs to the neural network. These transformed basis functions are combinations of the team statistics.
But why utilize the PCA to transform the basis functions if you still have to use them to train the neural network and make future game predictions? Specifically because the PCA basis functions can be ordered according to the amount of variance that they capture in the team statistics. Thus a reduced size of basis functions can be used as inputs to the neural network resulting in reduced training time. Furthermore, the PCA basis functions give an indication of the variance captured and hence the confidence levels in the training and predictions.
Figure 1 shows the cumulative variance captured based upon the number of PCA basis functions one keeps. We were previously presenting 45 team statistics (or number of components) to the neural network for each game for training and subsequent prediction. The principal component analysis has reduced this number based upon the variance level that we want to capture. For example, we only need to input 10 PCA-derived basis functions to the neural network for each game in order to capture 80% of the variance. And about 20 PCA-derived basis functions would capture 95% of the variance. We thus reduced the number of inputs to the neural network to somewhere between a quarter and a half of the original count, depending on whether we target 80% or 95% variance capture.
Figure 1. Captured Variance against Number of Basis Functions
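The component counts for a chosen variance threshold can be read off directly from the explained variance ratios. A sketch with scikit-learn, where the team_stats matrix is a random placeholder standing in for the real games-by-statistics data (so the printed counts will differ from the 10 and 20 reported above):
import numpy as np
from sklearn.decomposition import PCA

team_stats = np.random.rand(500, 45)          # placeholder: games x 45 normalized statistics

pca = PCA().fit(team_stats)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_80 = int(np.argmax(cum_var >= 0.80)) + 1    # components needed for 80% variance capture
n_95 = int(np.argmax(cum_var >= 0.95)) + 1    # components needed for 95% variance capture
print(n_80, n_95)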
Part 4. Intermediate Results
We have not gone and utilized the PCA-derived basis functions to train a neural network yet. But we have been examining the results of the PCA transformations to see if they make sense. After all, applying advanced algorithms of any sort to nonsensical data is not going to lead to realistic data, is it?
Figure 2 shows the results of trying to predict game outcomes with only two PCA-derived basis functions. It’s a planar plot of PCA component 1 along the horizontal axis and PCA component 2 along the vertical axis. The points differential for each game is loosely shown in various colors.
Already the games with big positive points differentials shown in red are clustered towards the right of the graph and games with big negative points differentials shown in black or blue are clustered towards the left of the graph. And in soft focus, the graph shows big positive points differentials through close games down to big negative points differential games from right to left on the graph.
This gives us promise that we have found some basis functions that will represent outputs of points differential (whether positive, near zero, or negative) related to a derived basis function. The promise is there, but you can also see some “outliers” with some red points well to the left of the graph. These “outliers” are the events or games that drive neural networks and other learning algorithms crazy. How did a big positive outcome come about when the team statistics and PCA-derived basis functions suggested that they should not have?
Figure 2. Prediction Results
Figure 3 shows the makeup of the first ten PCA-derived basis function coefficients for each team statistic. Thus, as expected, PCA-derived basis functions are NOT a single team statistic but a combination of team statistics which are an ordered set of combinations of statistics to capture variance.
Figure 3 shows that high absolute coefficients exist related to the statistics of net turnovers, rushing attempts, rushing touchdowns, and passer rating for PC0. (The sign of the coefficient doesn’t matter since the neural network can later decide whether it should be positive or negative. But like-minded statistics should have the same sign.) Thus PC0 is very closely tied to a team’s control of the game.
PC1 and PC2 are related to a team’s offensive performance and defensive performance, respectively. You can see that the columns for PC1 and PC2 are mutually exclusive for highlighting — one column related to positive offensive performance and one column related to positive defensive performance.
PC3 is related to how well a team is able to maintain possession of the ball (i.e., game control) with 4th down conversions, as well as generate negative yardage for the opposing team by sacking the quarterback.
By the time that you get to PC5 and higher, you are down to 5% variance capture. But it’s interesting and puzzling to note that for PC7, with a variance capture of 5%, the number of quarterback sacks for and against and the sack yardage lost for and against are approximately equivalent (with the same sign). A very puzzling basis function!
It’s also interesting to note that time of possession for or against is relatively insignificant in the first ten PCA-derived basis functions. This indicates a difference between game control and time of possession.
Figure 3. PCA-Derived Basis Function Makeup
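The makeup table in Figure 3 corresponds to what scikit-learn exposes as the component loadings, `pca.components_`. Here is a minimal sketch of how one might list the dominant statistics per component; it again assumes the hypothetical `team_stats` array and a parallel list of statistic names, neither of which comes from the original post.

```python
# Sketch: inspect which team statistics carry the largest absolute weight
# in each of the first few PCA-derived basis functions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

team_stats = np.random.rand(500, 45)            # placeholder for the real data
stat_names = [f"stat_{i}" for i in range(45)]   # e.g. "net_turnovers", "rushing_attempts", ...

pca = PCA(n_components=10).fit(StandardScaler().fit_transform(team_stats))

for i, component in enumerate(pca.components_):
    top = np.argsort(np.abs(component))[::-1][:4]   # four largest |coefficients|
    summary = ", ".join(f"{stat_names[j]} ({component[j]:+.2f})" for j in top)
    print(f"PC{i}: {summary}")
```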
Part 5. Next Steps
We’re going to continue pursuing the PCA-derived basis functions and go through the neural network training process. We’ll progress starting with 25% of the total games as training games up to 65% as we did in the previous effort.
We’re still trying to answer the questions of what wins football games, how well you can predict the outcome of a football game based upon team statistics, and what is the confidence level of those predictions.
Another fun article for sports enthusiasts and algorithm geeks will follow shortly.
| Predicting the Outcome of NFL Games (Part 3) | 1 | predicting-the-outcome-of-nfl-games-part-3-132b2447d9db | 2018-05-22 | 2018-05-22 05:24:36 | https://medium.com/s/story/predicting-the-outcome-of-nfl-games-part-3-132b2447d9db | false | 1,429 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Ray Manning | Algorithm geek | 53510c981de5 | ray.90807 | 8 | 6 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-19 | 2018-09-19 00:54:12 | 2018-08-31 | 2018-08-31 08:00:51 | 1 | false | en | 2018-09-19 | 2018-09-19 00:54:52 | 1 | 132e126c3d45 | 1.833962 | 0 | 0 | 0 | Two years ago I founded Hyperpilot with the mission to enable autopilot for container infrastructure. We learned a lot about data center… | 3 | Bringing AIOps to Machine Learning & Analytics
Two years ago I founded Hyperpilot with the mission to enable autopilot for container infrastructure. We learned a lot about data center automation based on real-time application and diagnostic feedback using applied machine learning. Last month, I joined Cloudera along with former team members Xiaoyun Zhu and Che-Yuan Liang to bring our expertise in intelligent automation to Cloudera’s modern platform for machine learning and analytics. We’re excited about this unique opportunity to push the boundary of what’s possible with data and set a new benchmark for ease of operations and cost efficiency of machine learning and analytics at scale.
Cloud-native architectures, fueled by large-scale workloads including big data and machine learning, are creating growing challenges for IT in the configuration and optimization of supporting infrastructure. Without the ability to automate configuration choices, teams create infrastructure silos supported by custom configuration arrived at by trial and error, resulting in inefficient utilization and cluster sprawl. At Hyperpilot, we witnessed these challenges in every public and private cloud customer we engaged. Despite the promise of cloud-native architectures, customers lacked the necessary tools to effectively optimize their infrastructure, resulting in millions of dollars in losses due to low utilization and lost productivity. Witnessing these challenges, we focused on solving them through machine learning applied to workload and cluster optimization. We leveraged our team’s involvement in academic and industry research that demonstrated 4–5X efficiency improvement in large-scale clusters at Google, building on techniques such as configuration auto-tuning, performance isolation, and machine learning based scheduling.
Large-scale data and machine learning lie at the heart of Cloudera’s platform. Today, Cloudera’s platform runs on millions of nodes across on-premise, private cloud, public cloud, and hybrid cloud deployments. This scale represents a tremendous opportunity to apply our experience from Hyperpilot to deliver value to customers. We believe Cloudera is well positioned to unlock the potential of automation based on a strong history of innovation along with a compelling vision for a data platform for machine learning and analytics, optimized for the cloud. The Hyperpilot team is also well at home with Cloudera’s strong engineering culture and focus on providing customers an open platform for innovation.
As my team joins Cloudera, we’re excited to build on our proven ability to bring the latest research in infrastructure automation to Cloudera’s platform. We look forward to empowering Cloudera’s customers to own the future of their business, making what is impossible today, possible tomorrow.
Author — Timothy Chen, CEO and Cofounder, Hyperpilot
Originally published at vision.cloudera.com on August 31, 2018.
| Bringing AIOps to Machine Learning & Analytics | 0 | bringing-aiops-to-machine-learning-analytics-132e126c3d45 | 2018-09-19 | 2018-09-19 00:54:52 | https://medium.com/s/story/bringing-aiops-to-machine-learning-analytics-132e126c3d45 | false | 433 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Cloudera | At Cloudera, we believe that data can make what is impossible today, possible tomorrow. We deliver the modern platform for machine learning and analytics. | 89133b725092 | cloudera | 11,530 | 274 | 20,181,104 | null | null | null | null | null | null |
0 | ` Y = w0 + w1*X ` ## Linear Equation
| 1 | null | 2018-09-24 | 2018-09-24 13:22:18 | 2018-09-24 | 2018-09-24 13:42:52 | 8 | false | en | 2018-09-24 | 2018-09-24 13:42:52 | 0 | 132f7a4ddf7b | 3.563522 | 0 | 0 | 0 | How would you describe the difference between gradient descent and normal equations as two methods of fitting a linear regression? | 4 | Gradient Descent and Normal Equation
How would you describe the difference between gradient descent and normal equations as two methods of fitting a linear regression?
First of all let’s understand what is Linear Regression
Linear Regression:
Regression is a method of modelling a target value Y based on independent predictors X. Regression techniques mostly differ based on the number of independent variables and the type of relationship between the independent and dependent variables.
Simple Linear Regression Model
Simple linear regression is a type of regression analysis where the number of independent variables is one and there is a linear relationship between the independent (X) and dependent (Y) variable. The red line in the above diagram is referred to as the best fit line, and the equation of this line is known as the hypothesis function.
Y = w0 + w1*X, where w0 and w1 are weights which we need to optimize so that we can fit the best line and minimize the loss.
Gradient descent is a technique to find the values of w0 and w1 through an iterative process so that the overall loss can be minimized.
Gradient Descent:
Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function (Loss function). To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
The loss function for a linear model is the quadratic function 1/2 * (y_predicted - y_real)**2, whose shape looks like a bowl, i.e., it is convex, as shown below.
Source: Google
A convex function has only one minimum. For that value of w, the cost will be at its lowest. Finding the minimum cost by trying every possible value of w would be a very time-consuming process; a better approach is gradient descent.
The gradient descent algorithm takes a step in the direction of the negative gradient in order to reduce loss as quickly as possible.
Source: Google
To determine the next point along the loss function curve, the gradient descent algorithm moves the current point by some fraction of the gradient’s magnitude, in the negative gradient direction, as shown in the following figure:
Source: Google
After that, it calculates the derivative of the cost function at that point, which tells us in which direction we need to move to reduce the cost.
Source:Google
The gradient descent then repeats this process, edging ever closer to the minimum.
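To make the loop concrete, here is a minimal gradient-descent sketch (not from the original post) for the simple linear model Y = w0 + w1*X with the squared-error loss; the learning rate, iteration count, and toy data are illustrative choices.

```python
# Minimal gradient descent for Y = w0 + w1*X with squared-error loss.
import numpy as np

# Toy data roughly following y = 3 + 2x plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
Y = 3.0 + 2.0 * X + rng.normal(0, 1, size=100)

w0, w1 = 0.0, 0.0
learning_rate = 0.01

for _ in range(5000):
    Y_pred = w0 + w1 * X
    error = Y_pred - Y
    # Gradients of the mean squared-error cost with respect to w0 and w1.
    grad_w0 = error.mean()
    grad_w1 = (error * X).mean()
    # Step in the direction of the negative gradient.
    w0 -= learning_rate * grad_w0
    w1 -= learning_rate * grad_w1

print(f"w0 = {w0:.2f}, w1 = {w1:.2f}")  # should land near 3 and 2
```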
Let’s build an intuitive picture of how fitting a line corresponds to finding the (global) minimum of the cost function.
Below are the diagrams I made to explain how the hypothesis function fits the points better and better as we approach the global minimum of the cost function.
Normal Equation:
The normal equation is another approach for finding the global minimum, i.e., the weights (W) for which the cost is minimal.
For some linear regression problems the normal equation provides a better solution.
Gradient descent is an iterative process, while the normal equation solves for W analytically.
The basic steps of the normal equation approach are as below (carried out in matrix form right after this list).
Take the derivative of the cost function w.r.t. W.
Set the derivative equal to 0.
Solve for the value of W which minimizes the cost function.
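Carrying those three steps out on the squared-error cost, written in matrix form, gives the familiar closed-form solution (a standard derivation, not specific to this post):

```latex
J(W) = \tfrac{1}{2}\,(Y - XW)^\top (Y - XW)
\frac{\partial J}{\partial W} = -X^\top (Y - XW) = 0
\quad\Longrightarrow\quad X^\top X\,W = X^\top Y
\quad\Longrightarrow\quad W = (X^\top X)^{-1} X^\top Y
```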
To implement Normal Equation
Take your m training examples.
Add an extra column (the x0 feature, equal to 1, for the intercept).
Construct a matrix (X, the design matrix) which contains all the training data features in an [m x (n+1)] matrix.
Do something similar for Y:
Construct a column vector y, an [m x 1] matrix.
Then compute W using the following equation: X transpose times X, inverted, times X transpose times Y.
W = (X^T X)^(-1) X^T Y
If you compute the above equation, you will get the value of W which minimizes the cost function.
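A minimal NumPy sketch of that computation (not from the original post); in practice np.linalg.lstsq or a pseudo-inverse is preferred over an explicit inverse for numerical stability:

```python
# Sketch: solve for W directly with the normal equation W = (X^T X)^(-1) X^T Y.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
Y = 3.0 + 2.0 * x + rng.normal(0, 1, size=100)

# Design matrix: a column of ones (the x0 feature) plus the original feature.
X = np.column_stack([np.ones_like(x), x])

W = np.linalg.inv(X.T @ X) @ X.T @ Y             # literal normal equation
W_stable = np.linalg.lstsq(X, Y, rcond=None)[0]  # numerically safer equivalent

print(W)         # approximately [3., 2.] -> w0 and w1
print(W_stable)
```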
Comparison Between Gradient Descent and Normal Equation.
So these are the basic points of comparison between gradient descent and the normal equation. I hope you now have some basic idea of how they work. Thanks a lot for reading :)
| Gradient Descent and Normal Equation | 0 | gradient-descent-and-normal-equation-132f7a4ddf7b | 2018-09-24 | 2018-09-24 13:42:52 | https://medium.com/s/story/gradient-descent-and-normal-equation-132f7a4ddf7b | false | 644 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Prince Yadav | Learning Autonomous Vehicle Technology | 80a299a2c643 | mail2princeyadav | 6 | 12 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-21 | 2018-09-21 19:33:01 | 2018-09-21 | 2018-09-21 22:00:50 | 5 | false | en | 2018-09-27 | 2018-09-27 09:44:53 | 3 | 132fb1cb1878 | 3.338994 | 2 | 0 | 0 | Today I was interviewed about potential topics for an invited talk I am giving to a local data user group. We talked about my career arc… | 5 | A medical writer and a data scientist walk into a bar…
Today I was interviewed about potential topics for an invited talk I am giving to a local data user group. We talked about my career arc and I was asked how I arrived at data science and visualization. I had never thought of it before but I realized why I morphed a career in medical writing to a more data focused analytics gig. In my ‘n of 1’ experience combined with what I observe around me — a freelance medical writer can be a liability if she is able to analyze the data allegedly intended to inform the narrative.
For example, I was writing a health economics piece on hematopoietic stem cell transplantation in pediatric patients. While looking at the data — if I don’t look at the data what exactly am I writing about — I noticed contradictions in where the outline was guiding the writing. When I mentioned this, instead of receiving accolades for my keen eye, I was asked to do two things. First, don’t worry about the data, it is too technical — let the analysts worry about that. Second, “the client doesn’t want to include that in the paper” so don’t submit that section.
I finished the project but felt unsettled. We both probably did. I guess certain consulting firms don’t want a writer that shakes the bushes and questions the data but I was hooked. No going back now.
So here I am minding my own business — literally — I started a data company. The data professionals are far more open and interactive. I was recently reading and writing about a wonderful statistical paper, Statistical paradises and paradoxes in big data (i): law of large populations, big data paradox, and the 2016 us presidential election. Read it. It is worth your time.
On a lark, I reached out to the author, the esteemed Xiao-Li Meng, PhD, Whipple V.N. Jones Professor of Statistics at Harvard University, to see if part II of the paper was circulating about somewhere. He promptly replied with links to topics he will be covering in the next paper but humbly apologized that it was not yet ready for publication. Mind. Blown.
All data and no play makes anyone dull right? So this little gem crossed my social media stream and it led to thoughts of another paper pivotal in how I approach data visualization. My first clients were skittish when presented with charts and graphics a whole lot more informative than the ever present bar chart. I once created an interactive data visualization in Tableau that could return the drug regimens of patients from 1st line to 4th line and beyond — all from an EHR relational database with a few lines of SQL and a hail Mary or two. Nope. They wanted the stacked bar charts they had requested. Oof.
The paper Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm accompanies my teaching in most data literacy workshops. Here is why:
Remember your categorical variables? They are qualitative “fixed” variables describing characteristics like disease symptoms, sex, blood type, etc. Quantitative variables, on the other hand, are numeric, either discrete or continuous; you can count or measure them.
The problem comes when we blur the line between the two. Think about fasting glucose levels: although they are quantitative, they are often converted to categorical variables by creating binary labels like “diabetic” or “not diabetic.” Here is where you hit your head on the bar. Where is the granularity? An important level of detail is omitted. And the reverse holds as well: categorical variables don’t become quantitative just because you have a numeric value assigned.
Many different datasets can lead to the same bar graph. Bar graphs are designed for categorical variables; yet they are commonly used to present continuous data in laboratory research, animal studies, and human studies with small sample sizes. Bar and line graphs of continuous data are “visual tables” that typically show the mean and standard error (SE) or standard deviation (SD). — Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015)
The bar graph (mean ± SE) suggests that the groups are independent and provides no information about whether changes are consistent across individuals (Panel A). The scatterplots shown in the Panels B–D clearly demonstrate that the data are paired. Each scatterplot reveals very different patterns of change, even though the means and SEs differ by less than 0.3 units. The lower scatterplots showing the differences between measurements allow readers to quickly assess the direction, magnitude, and distribution of the changes. The solid lines show the median difference. In Panel B, values for every subject are higher in the second condition. In Panel C, there are no consistent differences between the two conditions. Panel D suggests that there may be distinct subgroups of “responders” and “nonresponders — Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015)
Bar graphs and scatterplots convey very different information. While scatterplots prompt the reader to critically evaluate the statistical tests and the authors’ interpretation of the data, bar graphs discourage the reader from thinking about these issues. …Showing SE rather than SD magnifies the apparent visual differences between groups, and this is exacerbated by the fact that SE obscures any effect of unequal sample size. The scatterplot (Panel C) clearly shows that the sample sizes are small, group one has a much larger variance than the other groups, and there is an outlier in group three. These problems are not apparent in the bar graphs shown in Panels A and B. — Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015)
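To see the point for yourself, the hedged matplotlib sketch below plots the same invented paired data twice: once as a bar chart of mean ± SE and once as a paired scatterplot. The numbers are made up for illustration and are not from the paper.

```python
# Sketch: identical paired data shown as a bar chart of mean +/- SE versus a paired scatterplot.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
before = rng.normal(5.0, 1.0, size=12)
after = before + rng.normal(0.5, 1.5, size=12)   # a mix of responders and non-responders

fig, (ax_bar, ax_scatter) = plt.subplots(1, 2, figsize=(8, 3))

# Bar chart: only the means and standard errors survive.
means = [before.mean(), after.mean()]
sems = [before.std(ddof=1) / np.sqrt(len(before)),
        after.std(ddof=1) / np.sqrt(len(after))]
ax_bar.bar(["before", "after"], means, yerr=sems)
ax_bar.set_title("Visual table: mean +/- SE")

# Paired scatterplot: every subject and the direction of every change is visible.
for b, a in zip(before, after):
    ax_scatter.plot([0, 1], [b, a], marker="o", color="gray")
ax_scatter.set_xticks([0, 1])
ax_scatter.set_xticklabels(["before", "after"])
ax_scatter.set_title("Paired data, subject by subject")

plt.tight_layout()
plt.show()
```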
This was one of the first articles that clarified for me the effectiveness of different charts to display different types of information. Alberto Cairo describes charts as “little arguments” — we aren’t intended to just look at them. You need to read them. Notice what is missing. Pay attention…stay away from the bars.
| A medical writer and a data scientist walk into a bar… | 51 | a-medical-writer-and-a-data-scientist-walk-into-a-bar-132fb1cb1878 | 2018-09-27 | 2018-09-27 09:44:53 | https://medium.com/s/story/a-medical-writer-and-a-data-scientist-walk-into-a-bar-132fb1cb1878 | false | 664 | null | null | null | null | null | null | null | null | null | Data Visualization | data-visualization | Data Visualization | 11,755 | Bonny P McClain | healthcare visualized 🚑 data architect 📊 data don’t lie | c81ecdfa3588 | datamongerbonny | 202 | 280 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-13 | 2018-04-13 18:41:21 | 2018-04-13 | 2018-04-13 18:44:17 | 2 | false | en | 2018-04-13 | 2018-04-13 18:44:17 | 0 | 1331be26b9bf | 22.998428 | 0 | 1 | 0 | With Max Siegrist | 3 | Where’s the Oomph? :Economic Versus Statistical Significance in Empirical Worker-Owned Cooperative Studies
With Max Siegrist
Since the early twentieth century, statistical significance has been considered the ultimate determination in validating the importance of relationships. Birthed by William Sealy Gosset, also known as “Student,” statistical significance quite literally measures how likely an observed deviation from a null hypothesis would be if that null were true. Student’s t-statistic is conventionally used to determine whether the null hypothesis will be rejected or will fail to be rejected. According to Ronald Fisher, any t-value less than two is deemed “statistically insignificant” (McCloskey and Ziliak, 2009, 2303). Fisher treats statistical significance as all that matters in a study: whether the null hypothesis is rejected (or fails to be rejected). Since Fisher (1925) paved this road of significance, researchers have accepted this framework as the proper way to do scientific investigations (McCloskey and Ziliak, 2009, 2303).
The Fisherian framework of running regressions and judging results by their statistical significance with regard to a null hypothesis is not without its critics. Two prominent critics are Deirdre McCloskey and Stephen Ziliak. In a book and a series of papers over the last 25 years, they have been influential in helping steer the debate towards a broader, more substantial look at what “significance” means. The two authors argue that statistical significance is not the final arbiter of what matters in using economic science to explore the world. In most econometric studies and texts, according to McCloskey and Ziliak, “there is no mention of economic as against statistical significance” (McCloskey and Ziliak, 1996, 99). In fact, by statistical definition, the coefficients of the variables are not considered important(!) unless the variation is small enough: “‘significant’ means ‘signifying a characteristic of the population from which the sample is drawn,’ regardless of whether the characteristic is important” (Wallis and Roberts, 1965, quoted in McCloskey and Ziliak, 1996, 102). For example, a pill that would grant exactly one extra year of life to everyone who took it would be preferred in a Fisherian investigation over a pill that gave its users an average of five extra years with a spread of four years on either side of that mean. The variation for the second pill is too great, so its results would not be statistically significant. The results would, however, be significant for the users, since the minimum life gained would equal the one year guaranteed by the first pill. In economics, a practice that prides itself on rational thinking, the difference between statistical significance and practical significance too often fails to be recognized, as in the above example.
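To put rough numbers on the pill example, here is a small, purely illustrative simulation (the sample size, means, and spreads are invented, not drawn from McCloskey and Ziliak): ranking the two pills by their t-statistics rewards the precise pill, even though the noisy pill delivers far more life.

```python
# Sketch of the pill thought experiment (invented numbers): ranking by t-statistic
# rewards precision, not the size of the benefit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 8

# Pill "Precision": almost exactly one extra year for everyone.
precise = rng.normal(loc=1.0, scale=0.05, size=n)
# Pill "Oomph": about five extra years on average, but with a wide spread.
oomph = rng.normal(loc=5.0, scale=4.0, size=n)

for name, sample in [("precise", precise), ("oomph", oomph)]:
    t, p = stats.ttest_1samp(sample, popmean=0.0)
    print(f"{name:8s} mean gain = {sample.mean():4.1f} years, t = {t:6.1f}, p = {p:.3g}")

# The precise pill wins the t-statistic contest by a mile, yet most patients would
# rather have the expected five extra years; that gap is the "oomph" at issue here.
```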
Economic researchers need to look at their information. Instead of trying to find something that is statistically significant, what is truly important is finding results that are economically significant. When differentiating economic and statistical significance, McCloskey and Ziliak stress that any significance threshold is arbitrary: how “the odds should be set depends on the importance of the issues at stake.” In their phrasing, the difference is between looking for “‘oomph’ compared to precision” (McCloskey and Ziliak, 2009, 2303). Although a p-value may be small, the result’s economic significance might not matter much to the research question. What really matters to investigations is the magnitude of the coefficients, not some bright-line, arbitrary level of mere statistical significance.
In their 1996 paper, “The Standard Error of Regressions,” McCloskey and Ziliak developed the argument against solely focusing on statistical significance in economic papers. Importantly they also created a nineteen-point checklist with which to evaluate how economic scientists approach their data and results with regard to the idea that statistical significance is not the only thing that matters in social science. With messy data sets and incomplete information and a reality guided by individual heterogeneous human actions, overlaying the statistical methods created for the physical sciences is not the proper way of looking at the human world if we want to understand it and make policy recommendations. McCloskey and Ziliak extended their analytic checklist to over 180 papers published in one of the top journals in economics, viewing “The American Economic Review as an unbiased selection of best practice” (McCloskey and Ziliak, 1996, 101), to apply their framework. We will be using the framework developed for that examination to explore the use and abuse of statistics in papers focusing on the economic performance of worker-owned cooperatives.
Worker-owned coops are less common within capitalist economies, so it is interesting to see whether their successes — or failures — are economically significant in markets. Coops bring a different perspective to the market than competitive or unionized firms. Since each cooperative member has an equal ownership stake in the firm, there is less of a hierarchy than is seen in competitive firms, and, as the studies below show, employment in coops does not track industry downturns as directly as it does in conventional firms. The cooperative structure creates different incentives for worker-owners than those facing employees of free-market firms. The difference in structure leads researchers to ask which sort of firm will perform better. Importantly, there are many axes along which “better” could be defined, one for each stakeholder. What are the differences in employment, market share, or profits? Ultimately, we want to be able to recommend policy to support coop formation or to favor more traditional firm structures based on the economic significance of our examinations. For these purposes, the literature on cooperative performance is thinner than we would like. For our examination we look at two econometric studies from Craig and Pencavel, both of which examined the plywood industry in the American northwest; a paper from Arando et al. on Mondragon’s coop industry; and Burdin & Dean’s comparison of coops and capitalist firms in Uruguay. For the most part, the economists find strong arguments in favor of coops, but are they based on actual economic or merely statistical significance? The papers are at different levels of analysis, from looking at variation within a larger firm in Mondragon, to Craig and Pencavel’s industry-level examination, and ending with Dean and Burdin exploring the data for a whole country. Specifically, as we evaluate the papers, we ask: do the papers show us what we’re looking for — are the statistics showing us oomph rather than a focus on precision, in the framework of McCloskey and Ziliak, or are they relying on a Fisherian bright-line rule to include or exclude results? To do so, we have adopted McCloskey and Ziliak’s nineteen-point paper evaluation, looking at each of the papers point-by-point.
Authors Gabriel Burdin and Andres Dean, like Craig and Pencavel, are able to compare cooperative firms with more traditional capitalist firms in their paper “New Evidence on Wages and Employment in Worker Cooperatives Compared with Capitalist Firms” (2009). They are able to do the examination through a “comprehensive panel data set based on monthly Uruguayan Social Security records. The data set covers the entire population of Uruguayan worker cooperatives and capitalist firms in 31 economic sectors from April 1996 to December 2005” (Burdin & Dean, 2009, 517–8). The data set is the most comprehensive of any of the papers we examine, and interestingly provides a natural experiment in how the different types of business organization respond to shocks, as the data set encompasses a downturn from 1999–2002, “with the greatest downturn in 2002” (Burdin & Dean, 2009, 519). The experimenters find that: “The evidence we presented is broadly consistent with our initial hypotheses as well as with the previous empirical work. The effect of output price changes on wage variations is positive for both types of firms, but larger in WCs than in CFs. CFs exhibit a well-defined and negative relationship between wages and employment. By contrast, WCs display a well-defined and positive relationship between wages and employment” (Burdin & Dean, 2009, 526). By taking the whole population, statistical significance is all but assured, yet the authors speak more to the direction of the different responses to economic shocks than to their relative strength, with capitalist firms found to lay off workers while coops adjusted wages instead.
Economists Ben Craig and John Pencavel wrote two econometric studies that examined different aspects of the plywood worker-owned cooperatives in the Pacific Northwest, titled “The Behavior of Worker Cooperatives: The Plywood Companies of the Pacific Northwest,” and “Participation and Productivity: A Comparison of Worker Cooperatives and Conventional Firms in the Plywood Industry.” In the former, Craig and Pencavel research the responses of coops to economic environments and the true value of each membership; the latter focuses on productivity and output levels in coops in comparison to conventional firms. The plywood industry is more responsive to recessions and surging economies than the other markets observed. For example, as Craig and Pencavel note, “a cooperative is more likely to adjust earnings and less likely to adjust employment to changes in output and input prices than is a conventional firm” (Craig and Pencavel, 1992). When severe changes in the market occur (e.g., a housing market crash), the plywood industry is affected much more strongly than, say, the supermarket industry.
In “Efficiency in Employee-Owned Enterprises: An Econometric Case Study of Mondragon, authors Saioa Arando, Monica Gago, Derek C. Jones, and Takao Kato explore the efficiency of a retail chain under the larger Mondragon umbrella in Spain. The authors were able to look at data from two different types of markets in the organization. One group was “cooperative with significant [sic] employee ownership and voice,” while the other group, GESPAs, are stores where there is “modest employee ownership and limited voice” (Arando et al., Abstract). The chain also owns supermarkets organized on capitalistic lines. The gradations of ownership and involvement of employees allow the authors to examine how and if there are differences between the output growth of the stores as well as worker satisfaction amongst store employees and worker-owners. The main results are that for the largest of the stores, Coops do perform at a higher level, but the results are more ambiguous for smaller stores. In fact, the authors have to torture their data at a more granular level to find a statistically significant effect in looking at “Supermarket City” stores, those that are “a group of supermarket stores that are smaller than other supermarkets” (Arando et al., 21). Further, the authors find the surprising result that working at a coop results in “significantly lower” (Arando et al., 27) worker satisfaction than in a store where the workers have less control.
1) Does the paper use a small number of observations, such that statistically significant differences are not found at the conventional levels merely by choosing a large number of observations?
The data set for the Burdin and Dean paper includes a “long micro-panel at the firm level based on social security records,” and it includes “monthly wages and employment and observations between April 1996 and December 2005 and covers the entire population of registered producer cooperatives and capitalist firm of 31 industries” (Burdin & Dean, 2009, 521), so the data is widespread and long-lasting. The size of the data set drives down standard errors, since it yields over 860,000 individual data points (Burdin & Dean, 2009, 528), essentially ensuring statistical significance for the regressions; the short simulation following the responses to this question illustrates the effect. Because it is the whole population and not a sample, the authors should not, strictly speaking, need a significance test at all, because their data set is the population.
Craig and Pencavel’s two papers on coops in the plywood industry were the only ones to use a small number of observations: 41 in 1992 and 34 in 1995. The small sample size still accurately reflects the data observations.
The data set for the Mondragon paper includes observations over the course of more than two years, from February of 2006 to May 2008. Of these the authors highlight the “109 hypermarket and 7 supermarket stores” as being at “center stage” (Arando et al., 5). There are enough observations that p-values are driven down and t-values driven up simply by the sample size.
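As promised above, here is a short illustrative simulation (invented data, not the Uruguayan records) of how a sample in the hundreds of thousands makes even an economically trivial difference “statistically significant”:

```python
# Sketch: with hundreds of thousands of observations, even a negligible effect
# clears any conventional significance threshold. Invented data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect = 0.01          # an economically trivial gap in (say) log wages
noise_sd = 1.0

for n in (100, 10_000, 860_000):
    coop = rng.normal(true_effect, noise_sd, size=n)
    capitalist = rng.normal(0.0, noise_sd, size=n)
    t, p = stats.ttest_ind(coop, capitalist)
    print(f"n = {n:>7,}: estimated gap = {coop.mean() - capitalist.mean():+.4f}, p = {p:.3g}")

# The underlying effect is the same trivial 0.01 throughout, but the p-value collapses
# as n grows: with a population-sized sample, "significance" says little about oomph.
```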
2) Are the units and descriptive statistics for all regression variables included?
The Burdin and Dean paper on Uruguay includes description of all the regression variables both in the text of the paper as the authors describe their models (Burdin & Dean, 2009, 522–6) as well as in a chart with the rest of the data (Burdin & Dean, 2009, 527).
While explaining their regression models, Craig and Pencavel carefully describe the units and the importance to the model (Craig & Pencavel, 1992, 1093–1095).
In the Mondragon paper the variables are described in the text below the equations in an easy-to-read format (Arando et al., 18–24), and the results are well-specified.
3) Are coefficients reported in elasticity form, or in some interpretable form?
Though there is no place in the Burdin and Dean paper where elasticity is tied directly to the results of the coefficients, with some second-level thinking the reader can see how the coefficients affect the equation. For example, in discussing the wage changes in response to demand shocks in capitalist versus cooperative firms, the authors do state “The y0 coefficient directly captures the effect of price changes on capitalist firms’ wages” (Burdin & Dean, 2009, 524).
Craig and Pencavel use both plywood and log prices in their regression model for input and output prices (Craig and Pencavel, 1992, 1093). In their least-squares estimates, they clearly label the values for all the firms observed — classical, union and coops. Moreover, they describe the relative values of the coefficients: “for each type of firm, the least-squares estimate from equation (1) of the proportionate response of the variable listed in each column to the proportionate changes in output prices (in the upper panel) or proportionate changes in log input prices (in the lower panel)” (Craig and Pencavel, 1992, 1093).
In the Mondragon paper, the coefficients easily identifiable in the tables. The authors also explain the meanings in the text. In the section looking at the growth rates of the coops versus the stores with more limited worker voice, we see it explained: “The estimated coefficients on the control variables have the expected signs and are statistically significant. Specifically, the estimated coefficient on ΔlnLit is positive and significant at the 1 percent level, and the size of the coefficient implies that the output elasticity of labor in the underlying Cobb-Douglas production function is a little over 0.5” (Arando et al., 20). The authors hint at economic significance, but their focus is more on the statistical significance.
4) Are the proper null hypotheses specified?
The Burdin and Dean paper specifically lays out five hypotheses (Burdin & Dean, 2009, 521) which drive the rest of the paper. The empirical models designed to test the hypotheses are laid out with the section that introduces the results (Burdin & Dean, 2009, 523–6).
Craig & Pencavel determine their null hypotheses clearly in both of their plywood examinations. In 1992, they believe cooperative “share prices give the appearance of being undervalued” (Craig and Pencavel, 1084). Then, in 1995, the economists state that “this paper addresses the question of whether productivity differences are evident between conventional firms and worker cooperatives…” (Craig and Pencavel, 122).
Though the Mondragon hypotheses are buried in the text of the “Theory and Previous Empirical Work” section and not broken out as in the other papers, the salient comparisons are between similar firms with different structures in the same organization. Differences from the null of zero will be compared in terms of their magnitudes.
5) Are coefficients carefully interpreted?
In the same section in which the Burdin and Dean authors introduce their equations, the main results of the equations are discussed (Burdin & Dean, 2009, 523–6). The paper is well organized so that the introduced hypotheses being tested are clear and the models are clear on the dependent variables. The interpretation of the coefficients is more based on the sign and the significance, for example in examining how wages change with employment, the reader learns “The coefficients for CCFs have the expected signs and are significantly different from zero. Thus, changes in employment are negatively related to changes in wages, and changes in output prices are positively correlated with adjustments in the employment level” (Burdin & Dean, 2009 524–5). What this discussion is missing in terms of Ziliak and McCloskey’s framework is a look into the magnitudes of these coefficients.
Craig and Pencavel explain the proportionate values of each coefficient, and their standard errors, for the regression models. Specifically, the economists use plywood and log units, clearly labeling both within the model.
As seen above, the results in the Mondragon paper are discussed carefully immediately following the descriptions of the models. The results are clear and precise and easy to read.
6) Does the paper eschew reporting all t- or F-statistics or standard errors, regardless of whether a significance test is appropriate?
The Burdin and Dean paper includes in the tables all the p values of the measured regressions (Burdin & Dean, 2009, 528–32). The values do not find inclusion into the body of the text.
In one of their models, Craig and Pencavel only list three years that they deemed important for their study, neglecting the rest of their gathered information. The years listed: “the first, 1972, was a prosperous year for the firms, with production and prices surging above their previous year’s values. By contrast, in 1980 production and prices were falling as the industry entered a severe recession. By 1984, the industry was gradually coming out of the recession, though production and prices still had not attained the levels of the late 1970’s” (Craig and Pencavel, 1992, 1089). To better understand the industry, it would be best if the economists included the years in between to observe “normal” years of productivity.
All measured variables have a listed t-statistic in the tables at the end of the paper, though within the text of the Mondragon paper, it is the levels that are reported on.
7) Is statistical significance at the first use, commonly the scientific crescendo of the paper, the only criterion of “importance”?
In the papers examined, passing the hurdle of statistical significance is pretty much the end of the discussion if the result is “significant” in any real understanding of the word. For example, in testing how output effects wages, once we see p is small enough, then “The estimated responses of both types of firms differ in a statistically significant way: wages are more flexible in WCs than in CFs. Also, the estimates show that the relationship between current wage changes and its lag are significantly greater for WCs” (Burdin & Dean, 2009, 524).
Craig and Pencavel do not mention statistical significance as the “only criterion of ‘importance’” in their paper. In fact, they talk about economic significance: “we relate firms’ decision variables to the economic environment facing them” (Craig and Pencavel, 1992, 1092). The economists recognize economic significance but cloud their judgement with statistical evidence.
What is interesting about the Mondragon paper as compared to the Burdin and Dean text is that it does discuss the size of the coefficient: “Reassuringly the estimated coefficient on COOP is now positive and statistically significant at the 5 percent level. The size of the estimated coefficient implies a considerable 0.7 percentage-point growth rate advantage enjoyed by COOP stores in the market segment of Supermarket City over conventional stores in the same market segment (Arando et al., 21). Similarly, when a null is not rejectable, the authors note as much, “we find no evidence for the growth rate advantage of COOP stores — the estimated coefficient on COOP is very small (actually negative) and highly insignificant” (Arando et al., 21). The in-paper evaluation of the results allows them to reevaluate their original hypothesis and look more at a subgroup of the supermarkets, a smaller group called a “Supermarket City,” and perform their evaluation on these alone. The move is transparent so that even though it might trigger worries of data mining, the authors take the reader through the steps of their reasoning.
8) Does the paper mention the power of the tests?
The Burdin and Dean paper did not mention the power of the tests. None of the papers studied mentioned the power of their tests, which leaves the reader unable to judge the likelihood of failing to reject a false null hypothesis (a Type II error). One reason could be that the stated null hypotheses are not described numerically relative to the alternative hypotheses, rendering the calculation of the power of the tests difficult.
9) If the paper mentions power, does it do anything about it?
Not applicable for the papers — mentioning the power of the test seems to be an incredibly uncommon thing to do.
10) Does the paper eschew “asterisk econometrics”?
In the Burdin and Dean paper, the coefficients’ magnitude is not stressed so there is no ranking.
Craig and Pencavel do not rank their data points based on T-values.
Within the text of the Mondragon paper, as long as the significance threshold is surpassed, the level of significance is not ordered by just how significant it its. However, in the tables, the significance level is clearly marked by the literal asterisks and at what level they’re significant at: “* p<0.10, ** p<0.05, and *** p<0.01” (Arando et al., 37). The authors seem to vary what even counts as statistically significant within the paper, hinting at why this examination is available as a working paper and unplaced in a journal.
11) Does the paper eschew “sign econometrics”?
The signs are mentioned in the Burdin and Dean, but the sign is important only if it meets with the expected direction of the hypothesis.
Craig and Pencavel recognize the values of signs in the model and briefly explain the data points.
The sign is only mentioned in the discussion where the authors are testing the hypothesis that coops in the Mondragon family will grow at a faster rate than firms with more employees, the GESPAs. In a parenthetical aside noting the size and significance of the coefficient, the authors note that it was “(actually negative)” (Arando et al., 21).
12) Does the paper discuss the size of the coefficients?
For our investigative framework, the magnitudes of the coefficients are one of the most interesting things about the Burdin and Dean paper. In the examination of the regressions, most of the results reflect the hypotheses at a significant level because there are so many observations; the large n drives the statistical significance. However, there is one section where the discussion of the size of the coefficients is salient for our purposes. The coefficients are never quantified in the paper, only given relative magnitudes, but the authors explain it thus: “By contrast, for WCs the wage–employment relationship is significantly positive. But, the effect of output price changes on employment is not significantly different from zero. For WCs, we cannot reject the hypothesis that employment responds inelastically to output prices changes” (Burdin & Dean, 2009, 525). Here we have the authors finding a relationship with a lot of oomph, but little precision. The problem with this discussion, and the paper at large, is that a reader interested in just how much the measured coefficient responds to price changes has to go to one of a number of tables in the back. The fact that the p-value is too high necessitates a non-rejection of the hypothesis, but we don’t see a discussion of what this might mean economically.
As noted above, within the Mondragon paper, the size of the coefficients is brought into the discussion of every estimation, and not just limited to the tables in the back for the curious reader to seek out. These discussions are a strength of the paper and make it accessible to more readers.
13) Does the paper discuss the scientific conversation within which a coefficient would be judged “large” or “small”?
None of the observed papers specifically mention the judgement of their coefficients’ sizes. The lack of discussion is troubling as it leaves the data and the analysis to the tables for only the interested to find and analyze. Skimming the abstract and flipping to the conclusion leaves out so much. For example, in Craig and Pencavel (1992), Table 5 (1093) includes strong evidence of economic significance of a pass through to worker wages for coops when prices go up but almost none at all for traditionally capitalist firms. The power of the “oomph” there is unmistakable, but the errors are too large so the incredible results fall from notice in the body of the paper.
14) Does the paper avoid choosing variables for inclusion solely on the basis of statistical significance?
In the Burdin and Dean paper on Uruguay, there were limitations to the data that might be important; perhaps some of the questions could have been answered through what was absent. As it is, those variables are absent, and they are noted in the conclusion as a place for more research: “For example, information on work hours and inputs was unavailable. Additionally, due to lack of information on capital, we could not analyze the interdependence between wages, employment, and investment decisions within CFs and WCs” (Burdin & Dean, 2009, 526).
The variable choice and the regressions are well defined in the Mondragon paper. However, the choice to narrow the data set, once the first regression on the supermarkets is run, to only the “Supermarket City” stores, “stores that are smaller than other supermarkets and are still somewhat reminiscent of intimate, small neighborhood groceries” (Arando et al., 21), does seem to be a second-best choice after the disappointment of finding no difference between the coops and the GESPAs.
15) After the crescendo, does the paper avoid using statistical significance as the criterion of importance?
For Burdin and Dean, regression results that do not reach significance are set aside with “evidence is inconclusive” (Burdin & Dean, 2009, 526), with no exploration of possible oomph when precision is not on the table.
As mentioned before, Craig and Pencavel continue to use regression models and tables in the conclusion section of their paper.
Though the authors of the Mondragon paper do try to tease out just what the results mean, the discussion is based on more than just the size of the coefficients. For example, in looking at worker satisfaction, the theoretical and past empirical work points to some ambiguity on whether workers are less happy in coops or in more conventional stores; they find that “The OLS estimates of Eq. (4) are presented in Table 4. Column (i) confirms that the level of overall worker satisfaction is indeed significantly lower in COOP than in GESPA, after controlling for a variety of individual and store characteristics” (Arando et al., 27). This result is somewhat surprising to the authors, so the discussion looks more at the why of the results: “An alternative interpretation of the finding is that by being significant stakeholders, COOP workers at Mondragon probably expect more from their work, resulting in high expectations and a higher likelihood of disappointment” (Arando et al., 28), admitting that even though their results are statistically significant, “In sum, our evidence on low job satisfaction in cooperatives points to a need for somewhat nuanced understanding of cooperatives as a HPWS [High Performance Work System]” (Arando et al., 28).
16) Is statistical significance decisive, the conversation stopper, conveying the sense of an ending?
In the Burdin and Dean paper, the results section is back-loaded. Once these are finished, the paper moves onto the conclusion, which summarizes the remarks seen in the results section and looks at the weaknesses of the data set as a potential place for more research (Burdin & Dean, 2009, 526).
Craig and Pencavel examined the change of input and output rates in response to log and plywood prices. The coefficients they examine are explained in clear economic terms: when the price of inputs (logs) decreases, output will increase; when the price of output increases, output will increase as well. However, McCloskey and Ziliak point out a particular question that is forgotten, “are coefficients reported in elasticity form, or in some interpretable form relevant for the problem at hand…?” (McCloskey and Ziliak, 1996, 102). The economists explain this logic clearly, but they fail to address the contextual value of the coefficients. Moreover, Craig and Pencavel continue on to say, “the discussion in the previous two paragraphs focuses upon the economic significance of the estimates reported in [the table]. Differences across the firm in the statistical significance of these estimates are much less clear” (Craig and Pencavel, 1992, 1094). The researchers come to economic conclusions, associate the findings with statistical significance, and neglect to explain what their data specifically entails.
As above in exploring the Mondragon paper, even though the data returned significant results at different levels, there is a recognition of the limitation of reality as against the theoretical framework, and the discussion is done in that context: “Even though the formal arrangements in Eroski cooperatives are somewhat short of what is envisaged in the pure theory case of the LMF, Eroski cooperatives do provide high levels of ownership and participation as well as substantial job security for members” (Arando et al., 14). Though, as we have seen, there is the need for the bright line, as the authors have to refigure their original thesis on output by more narrowly focusing the data set when the bright line is not crossed.
17) Does the paper ever use a simulation (as against a use of the regression into further argument) to determine whether the coefficients are reasonable?
Simulations are not specifically mentioned in any of the papers examined.
18) In the “conclusions” and “implications” sections, is statistical significance kept separate from economic, policy, and scientific significance?
The Burdin and Dean paper comes across as an entirely positive empirical paper, not one that has a policy agenda explicitly written into it. Though an implicit agenda may be inferred by the hypotheses being tested, there is no claim for one type of firm over another though there is a measured difference in how capitalist firms respond to shocks compared to how worker cooperatives do (Burdin & Dean, 2009, 526).
Surprisingly, in the conclusion section, Craig and Pencavel continue to use regression models and coefficient tables to summarize their findings. Specifically, they use the model to describe the share values of cooperatives. This is interesting since the share value evaluation is one of the null hypotheses.
The results that had proven significant in the Mondragon paper are treated as true in the conclusions section. The authors claim that: “Overall our findings tend to lend support to those who cast doubt on the unconditional supremacy of the Anglo-American shareholder model of corporate governance and advocate employee ownership and shared capitalism as a viable and possibly even superior alternative to the Anglo-American shareholder model” (p. 30), but even though the results cleared the minimum statistical significance level, they do admit that their results are not the end of the conversation: “[W]e do not believe that our findings imply that employee-owned enterprises are a universal panacea” (Arando et al., 30).
19) Does the paper avoid using the word “significance” in ambiguous ways, meaning “statistically significant” in one sentence and “large enough to matter for science” in another?
Most of the papers do not introduce confusion for readers through ambiguous renderings of the term. However, in Arando et al., the authors use “significance” both in statistical terms and to describe the level of connectedness with the coop. For example, in discussing worker satisfaction, the reader learns “An alternative interpretation of the finding is that by being significant stakeholders, COOP workers at Mondragon probably expect more from their work, resulting in high expectations and a higher likelihood of disappointment” (Arando et al., 28). What the authors mean by significance in this second sense is not well defined in the text of the paper.
Two things are of note in the papers we examined. First, looking across different levels of analysis, from the firm to the industry to the state level, a cooperative form of organization is clearly not a panacea for improving organizational performance by standard capitalist benchmarks. Though the worker-owners have a greater stake in the organization, that does not clearly translate into greater growth from period to period. On top of that, job satisfaction in the one study that examined it fails to show that coop workers are happier. In fact, the authors found the opposite.
The other thing to note is that coops have a strong attractive component in terms of employment. Worker-owners are more strongly attached to the organization. When the business cycle turns downward, instead of being dislocated from employment, the adjustment for workers in coops comes in terms of pay. The workers hold their jobs but the pay is decreased. Interestingly, in two of our papers, these results were found to have large coefficients but were not statistically significant at the conventional levels, dropping them out of the analysis. So there is a large enough variance to make mainline statisticians question the results, and those caring about economic significance do not get to learn about the oomph of the effect. What needs to be more deeply examined is the subjective experience of workers and how they experience these adjustments. For example, the Mondragon paper surprisingly found that the worker-owners in the coops reported lower satisfaction than workers in the more traditionally capitalist stores. Also, within the plywood industry, Craig and Pencavel concluded — with no statistical evidence — that the pattern of labor within a worker-owned coop was influential on whether or not that firm survived. As they put it: “the pattern of co-op mills has more to do with the initiative and attitudes of groups of working people when presented with opportunities that were not specific to one organizational form or the other” (Craig and Pencavel, 1995, 135). Worker-owned coops require a certain type of labor force to see success, which can be reflected in economic measures (productivity, output, etc.), but it is not possible to assign a p-value to determine whether this is “significant.” Craig and Pencavel admit that the “type” of worker is significant to a coop, but also recognize that it “is difficult, however, to find solid evidence suggesting that an idiosyncratic productivity element is an important component of the explanation for the incidence of cooperatives” (Craig and Pencavel, 1995, 134). As economists trying to understand the world and change it through that understanding, these ambiguities need to be teased out through further research.
Bibliography
Arando Lasagabaster, Saioa, Gago, Monica, Jones, Derek and Kato, Takao. “Efficiency in Employee-Owned Enterprises: An Econometric Case Study of Mondragon.” No 5711, IZA Discussion Papers, Institute for the Study of Labor (IZA) (2011): 1–40.
Biau, David Jean, Brigitte M. Jolles, and Raphaël Porcher. “P Value and the Theory of Hypothesis Testing: An Explanation for New Researchers.” Clinical Orthopaedics and Related Research® 468, no. 3 (2009): 885–92. doi:10.1007/s11999-009-1164-4.
Burdín, Gabriel, and Andrés Dean. “New Evidence on Wages and Employment in Worker Cooperatives Compared with Capitalist Firms.” Journal of Comparative Economics 37, no. 4 (2009): 517–33. doi:10.1016/j.jce.2009.08.001.
Craig, Ben, and John Pencavel. “The Behavior of Worker Cooperatives: The Plywood Companies of the Pacific Northwest.” The American Economic Review 82, no. 5 (December 1992): 1083–105.
Craig, Ben, and John Pencavel. “The Objectives of Worker Cooperatives.” Journal of Comparative Economics 17, no. 2 (1993): 288–308. doi:10.1006/jcec.1993.1027.
McCloskey, Deirdre N., and Stephen T. Ziliak. “The Standard Error of Regressions.” Journal of Economic Literature, 34, no. 1 (March 1996): 97–114.
Ziliak, Stephen T., and Deirdre N. McCloskey. “The Cult of Statistical Significance.” Proceedings of Joint Statistical Meetings, Washington, DC. JSM, 2009. 2302–316.
| Where’s the Oomph? | 0 | wheres-the-oomph-1331be26b9bf | 2018-04-13 | 2018-04-13 18:44:18 | https://medium.com/s/story/wheres-the-oomph-1331be26b9bf | false | 5,993 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | J Edgar Mihelic | There is only one true answer to any economic question: “It depends”. | 5c83273a4625 | jedgarmihelic | 74 | 121 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-15 | 2017-09-15 18:49:39 | 2017-09-15 | 2017-09-15 00:00:00 | 11 | false | en | 2017-09-15 | 2017-09-15 19:07:34 | 10 | 133281f56f28 | 4.216981 | 47 | 2 | 0 | I’m getting all misty-eyed over here, probably because I’ve progressed to the fourth stage of grief over the looming end to the Udacity… | 5 | Udacity Self-Driving Car Nanodegree Project 12 — Semantic Segmentation
I’m getting all misty-eyed over here, probably because I’ve progressed to the fourth stage of grief over the looming end to the Udacity Self-Driving Car Engineer Nanodegree program. As I write this my team has written roughly a quarter of the code for the final project. I can’t believe it. When I look back on my life, 2017 will be defined by this experience. What will 2018 bring? Hopefully an exciting new career in this field, thanks in no small part to Udacity. (And trust me, I’m working on it! I expect to have some good news to report in, oh, six weeks or so.)
As I mentioned before, the Path Planning project turned out to be a bit of a beast (somewhat of my own creation) and the final project is shaping up to be similarly hairy, even tackling it as a group. Thankfully the Semantic Segmentation, aka Advanced Deep Learning, project was relative respite. That’s not to say semantic segmentation is simple, by any means — just that Udacity (and their partner Nvidia) did a good job of distilling this project down to its key concepts and giving us straightforward steps to implement ourselves. They didn’t give us everything we needed, even after releasing a walk-through video, but perusing the Slack channel and implementing the suggestions of other students made it quite manageable (hint hint, to those coming after me).
OK, so what the heck is semantic segmentation, anyway? Well, a picture is worth a thousand words, right?
We’ve learned before that convolutional neural networks can detect and classify objects in images. The Traffic Sign Classifier project (from waaaay back in January!) is an example of one of the earliest (and simplest) neural net architectures (LeNet) for classifying images. We’ve come a long way since then, and the architectures have become quite complex but also capable of identifying a thousand different types of objects in an image.
Well, someone somewhere somehow had the idea that you can take one of these image classification architectures and just change the final fully-connected layer to a 1x1 convolution, add a few skip layers, some de-convolutions (no, I’m not going to take you down that rabbit hole — if you’re interested read the paper), and SHAZAM the neural net doesn’t just tell you there’s a dog in that image of yours, but it can tell you which pixels of that image belong to the dog. Seriously?! Well, yes… but you also have to feed it images with individual pixels labeled according to their class (which sounds like a pain in the ass, but the results are striking).
The objective of this project was to identify and label just the drivable area of the road in an image. Even using a single class/label, the training times verged on ridiculous, and using a powerful GPU was all but mandatory (I believe some students reported training for upwards of days using only a CPU). Going into the project, my main misconception was that the neural net should be symmetrical, and that each convolution and pooling layer from the original architecture (we were using pre-trained VGG-16, in this case) necessitated a corresponding upsampling layer. In fact, we were instructed to build skip layers only for layers 3 and 4 of the VGG architecture (the final fully-connected layer, 7, was also converted to a 1x1 convolution). My final architecture includes just a single de-convolution from layer 7 to layer 4, another from layer 4 to layer 3, and one more before the final softmax output (as depicted in the image below), though some other students reported even better results with additional de-convolution layers.
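For the curious, here is roughly what that decoder looks like in TensorFlow 1.x. This is a sketch rather than my exact submission (the kernel sizes, regularization strength, and variable names are illustrative), but it follows the same layer 7 to layer 4 to layer 3 skip structure, and it includes the kernel initializer and L2 regularizer that, as described below, made all the difference:

```python
import tensorflow as tf  # TensorFlow 1.x API, as used in the project

def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
    """Decoder: 1x1 convolutions, skip connections from VGG layers 3 and 4,
    and three transposed ("de-") convolutions back up to input resolution."""
    init = tf.random_normal_initializer(stddev=0.01)
    reg = tf.contrib.layers.l2_regularizer(1e-3)

    # Replace fully-connected thinking with 1x1 convolutions
    conv7 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, padding='same',
                             kernel_initializer=init, kernel_regularizer=reg)
    conv4 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, padding='same',
                             kernel_initializer=init, kernel_regularizer=reg)
    conv3 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, padding='same',
                             kernel_initializer=init, kernel_regularizer=reg)

    # Upsample layer 7 by 2x and add the layer 4 skip connection
    up7 = tf.layers.conv2d_transpose(conv7, num_classes, 4, strides=2,
                                     padding='same', kernel_initializer=init,
                                     kernel_regularizer=reg)
    skip4 = tf.add(up7, conv4)

    # Upsample by 2x again and add the layer 3 skip connection
    up4 = tf.layers.conv2d_transpose(skip4, num_classes, 4, strides=2,
                                     padding='same', kernel_initializer=init,
                                     kernel_regularizer=reg)
    skip3 = tf.add(up4, conv3)

    # Final 8x upsample back to the original image resolution
    return tf.layers.conv2d_transpose(skip3, num_classes, 16, strides=8,
                                      padding='same', kernel_initializer=init,
                                      kernel_regularizer=reg)
```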
The result, at first, was awful. The output is meant to be the original image with the drivable area overlaid in semi-transparent green, but mine was producing what looked like the original image with a bunch of faint green static all over. Definitely not correct. It was after adding a kernel initializer and regularizer to each layer, and bumping up the number of epochs considerably that I was able to achieve results like these:
Not bad, eh? It was a fun exercise — always nice to get a little more practice with TensorFlow. And I just have that much more respect for the guys and gals out there in the trenches advancing this incredible technology.
The code for this project, as well as a short but more technical write-up, can be found on my GitHub.
Originally published at http://jeremyshannon.com/2017/09/15/udacity-sdcnd-semantic-segmentation.html on September 15, 2017.
| Udacity Self-Driving Car Nanodegree Project 12 — Semantic Segmentation | 147 | udacity-self-driving-car-nanodegree-project-12-semantic-segmentation-133281f56f28 | 2018-06-01 | 2018-06-01 03:08:32 | https://medium.com/s/story/udacity-self-driving-car-nanodegree-project-12-semantic-segmentation-133281f56f28 | false | 773 | null | null | null | null | null | null | null | null | null | Udacity | udacity | Udacity | 3,005 | Jeremy Shannon | Self-Driving Cars - MSECE - Detroit - more at jeremyshannon.com | 716549009cc4 | jeremyeshannon | 1,168 | 438 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-12 | 2017-11-12 22:47:27 | 2017-11-11 | 2017-11-11 08:00:30 | 5 | true | en | 2017-11-12 | 2017-11-12 22:50:28 | 3 | 13336291d85c | 2.886164 | 6 | 0 | 0 | “May you live in interesting times” | 3 | Why humans will always stay ahead of AI
“May you live in interesting times”
A little research attributes this to Sir Austen Chamberlain in 1936, shortly before his brother Neville became Prime Minister of the United Kingdom and had to do the best he could, with the tools he had, to politically address the challenge that was Adolf Hitler. Suffice it to say that Neville had limited success, as Austen had somehow unconsciously predicted with what has since been embellished to be called "the Chinese curse".
So, we definitely now live in interesting times. Things are changing faster than ever and we won’t be able to cope at all with current leadership models.
In recent articles I wrote about :
Zen and Teaching your son to drive
In which I discuss the “competence model” of four stages of learning, showing how we learn new things in phases and the keys to successful teaching, or transference of knowledge.
Forget knowledge, simply lead
Even with the above, we cannot possibly keep up with the rate of growth in knowledge in the world, so we must shift leadership paradigms to that of #OpenLeadership: lead, don't seek to be the omnipotent expert, nor educate our children on that model. We have Google for raw knowledge!
Now take the core "competence model" and then the curve version, and consider just how impossible it becomes to learn and teach each new piece of knowledge as we climb a "Moore's Law" exponential curve of learning and the speed continues to increase.
The first one is the core model:
This is how it looks energetically as we process each new learning:
Now imagine that dip, that trough, that effort to climb back up, repeated all the way up an exponential learning curve:
In short:
Is the world as we know it over? Will we be taken over by machines? Are we already living in the Matrix?
I cannot be sure, but I am hugely positive about the innate magic of the human race, and I feel we have a massive untapped resource available to us, one that AI may perhaps never replicate.
What is that secret, you say? Energy, emotion, and so Empathy. The ability to feel our own feelings and to feel those of others.
Stay with me on this one. Leadership is all about reading and managing energy. Without boring you with the science, we humans make ALL decisions based on our feelings, and NONE based on our rationality and intellect.
As this post is published I am at the Kilkenomics festival and am at odds with many trained economists here who feel we make decisions rationally. I also heard an ex-central banker of high reputation (yes, a good fellow!) call Behavioural Economics "a quirk". Not one of the many economists appearing identifies themselves at all as a behavioural economist. I guess it isn't my echo chamber 😜
Anyway, from all I have come to learn as first a chartered accountant, then business builder, then coach and sounding board: we do things because they make us feel good; we only THINK (get it!) that we rationalise.
Perhaps the themes of #OpenLeadership can show us the way?
Originally published at tommccallum.com on November 11, 2017.
| Why humans will always stay ahead of AI | 79 | why-humans-will-always-stay-ahead-of-ai-13336291d85c | 2017-11-22 | 2017-11-22 18:14:22 | https://medium.com/s/story/why-humans-will-always-stay-ahead-of-ai-13336291d85c | false | 544 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Tom McCallum | I help visionary leaders see beyond their own vision. Are you ready to #BeMoreYou? https://tommccallum.com/be-more-you/. Daily writing reposted here. | 1043390bc0ae | tommccallum | 312 | 108 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 8b5620c36355 | 2018-05-19 | 2018-05-19 14:15:19 | 2018-05-19 | 2018-05-19 14:44:15 | 4 | false | en | 2018-05-22 | 2018-05-22 14:30:25 | 6 | 1333dfa53f4c | 5.466038 | 45 | 0 | 0 | Google’s recent introduction of the ML Kit for Firebase beta at Google I/O 2018 brings on-device and in-the-cloud machine learning to their… | 5 | Exploring Text Recognition and Face Detection with Google’s ML Kit for Firebase on iOS
Google’s recent introduction of the ML Kit for Firebase beta at Google I/O 2018 brings on-device and in-the-cloud machine learning to their mobile development platform, Firebase. ML Kit for Firebase includes “production-ready” support for several common use cases:
Recognizing text
Detecting faces
Scanning barcodes
Labeling images
Recognizing landmarks
Landmark recognition is available exclusively from the cloud API, while text recognition and image labeling are available on device or from the cloud, and face detection and barcode scanning are only available on device. On-device use cases may be used free of charge, while cloud-based use cases will incur a cost when the number of uses is above a threshold.
In this article, we will explore ML Kit for Firebase by building two small proof of concept iOS apps (ML Kit is available for both Android and iOS):
Face Replace: Uses face detection to find faces in an image and superimpose an emoji which represents how much each person is smiling.
Credit Card Scanner: Uses text recognition to extract the credit card number, the credit card holder’s name and the expiration date from the front of a credit card.
Face Replace
Our Face Replace app will begin by capturing an image which includes one or more faces. We’ll use UIImagePickerController with the source type set to .camera to provide our app with our target image.
Meet our test subjects: The Beatles
Once we have the image, it’s very straightforward to detect any faces in that image. Before we do, however, it is important to make sure that the imageOrientation property of the UIImage is .up. If it is not .up, then be sure to correct the orientation of the image before you attempt to process it or ML Kit will fail to detect any faces (you can make use of code similar to this before passing it to ML Kit).
The heart of this code is the call to VisionFaceDetector detect(in:) which does the heavy lifting of finding faces in the image. Before we get to that, there are a few other steps:
We instantiate an instance of Firebase Vision service and configure our VisionFaceDetectorOptions.
We instantiate a VisionFaceDetector with our options. Note that this is a property instead of a local variable as the local variable will otherwise be nil by the time we need it.
We create the VisionImage which will be passed to the VisionFaceDetector
The call to VisionFaceDetector detect(in:) does the heavy lifting of scanning the specified image for faces. The completion handler is invoked with an optional array of VisionFace objects as well as an optional error object. Any error is checked and presented to the user.
We iterate through each VisionFace and check to see if each one has smiling probability data associated with it. If it does, we use the CGRect from the VisionFace to create a label which will superimpose the appropriate emoji over the face in the image (because the image is scaled when rendered onscreen, we scale the frame for the emoji so it is placed in the correct location). For simplicity, we’ve encapsulated the logic to convert the smilingProbability to the appropriate emoji in the FaceReplaceViewController.newEmojiLabel(frame:, smilingProbability:) function.
If the face in the image is rotated relative to the Y-axis, this is indicated by the VisionFace.headEulerAngleY, in degrees. We’ll create a CGAffineTransform to rotate the label by the appropriate number of radians and then add the label as a subview of the UIImageView.
What we end up with is a new image with the appropriate emoji placed over each face:
Meet “The Beatles”
As is clear from the code snippet above, face detection and smile detection are both quite straightforward with ML Kit for Firebase. The only difficulty was in determining which smilingProbability values mapped to the various emojis, and those were determined by trial and error on a few test images.
Credit Card Scanner
As was the case with our Face Replace app, our credit card scanner will begin by capturing an image of the front of the credit card using UIImagePickerController.
Our “credit card”
Once we have the credit card image, it’s very straightforward to detect text in that image. Again, it is important to make sure that the imageOrientation property on the UIImage is .up or the results will be gibberish.
The heart of this code is the call to VisionTextDetector detect(in:) which detects text in the image. As with face detection, before we can call this function, a few other steps are necessary:
We instantiate an instance of Firebase Vision service and use it to create a VisionTextDetector. Note that this is assigned to a property so it does not become nil.
The call to VisionTextDetector detect(in:) uses a VisionImage created from the UIImage and is what actually performs the text detection. The completion handler is invoked with an optional array of VisionText objects as well as an optional error object. Any error is checked and presented to the user. Each VisionText object includes a String with the detected text and a CGRect with the location of the text in the image.
Now that we have all the text in the image, we just have to figure out which text is the credit card number, which is the credit card holder’s name and which is the expiration date. This logic is encapsulated in the CreditCardInfoExtractor.cardholderName(in:, imageSize:), CreditCardInfoExtractor.creditCardNumber(in:, imageSize:) and CreditCardInfoExtractor.creditCardExpirationDate(in:, imageSize:) methods which use regular expressions and the rough location of the detected text in the image to classify the detected text (i.e., credit card numbers are towards the middle of the image while credit card holder names and expiration dates are towards the bottom).
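The extractor methods themselves are written in Swift, but the heuristic is simple enough to illustrate with a short, hypothetical Python sketch. The patterns and position thresholds below are illustrative assumptions, not the values used in the app:

```python
import re

# Illustrative patterns -- not the app's actual regular expressions
CARD_NUMBER = re.compile(r"^(?:\d{4}[ -]?){3}\d{4}$")  # e.g. 1234 5678 9012 3456
EXPIRY = re.compile(r"^\d{2}/\d{2}$")                  # e.g. 08/21
NAME = re.compile(r"^[A-Z]+(?: [A-Z]+)+$")             # two or more uppercase words

def classify(text, box_top, image_height):
    """text: one string returned by the text detector;
    box_top / image_height: rough vertical position of its bounding box."""
    y = box_top / image_height
    if CARD_NUMBER.match(text) and 0.4 < y < 0.7:   # numbers sit mid-card
        return "number"
    if EXPIRY.match(text) and y > 0.6:              # dates sit near the bottom
        return "expiration"
    if NAME.match(text) and y > 0.6:                # names sit near the bottom
        return "name"
    return None
```

Note that a loose pattern like NAME will happily match "VALID THRU" as well, which is exactly the failure mode described below.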
This process was somewhat successful in determining the information we wanted:
Note that the app incorrectly identified "VALID THRU" as the cardholder's name in this case. The credit card holder's name was properly detected, but our app selected the wrong two-word text string as the name because it had no way to differentiate between the two of them. To improve this, we could consider including some information about the width of the detected text, as credit card numbers tend to be quite wide, names a bit narrower and expiration dates even narrower. This may help somewhat, but it is unlikely to be 100% accurate across all types of credit cards.
But the biggest issue is that ML Kit does not always recognize text accurately. For example, the number 0 in the credit card number and expiration date is sometimes reported as the letter D or O. This fails to match our regular expression, which was coded to detect a series of four digits for the credit card number. We could make the regular expressions look for letters or numbers, but that increases the likelihood of false matches.
Conclusion
Google’s announcement of ML Kit for Firebase at Google I/O 2018, while still in beta, was definitely a boon to mobile developers. It can enable some pretty amazing machine learning-based apps with very little code.
The cross-platform nature of ML Kit for Firebase, unlike, obviously, Apple’s Core ML, is also a step forward for this larger audience of Android and iOS developers as it will enable even more of these developers to take advantage of machine learning in their apps (including those built for both Android and iOS). ML Kit for Firebase will provide a powerful foundation on which developers can explore new ideas and new techniques, all built upon the power of Google’s work in artificial intelligence, and now included in every developer’s arsenal.
| Exploring Text Recognition and Face Detection with Google’s ML Kit for Firebase on iOS | 250 | exploring-text-recognition-and-face-detection-googles-ml-kit-for-firebase-on-ios-1333dfa53f4c | 2018-06-18 | 2018-06-18 08:21:32 | https://medium.com/s/story/exploring-text-recognition-and-face-detection-googles-ml-kit-for-firebase-on-ios-1333dfa53f4c | false | 1,263 | Engineering blog showcasing some innovation and creativity | null | null | null | Y Media Labs Innovation | ymedialabs-innovation | INNOVATION | ymedialabs | Google | google | Google | 35,754 | Adam Talcott | Software Engineering Manager and Technical Lead at Y Media Labs. Formerly computer architecture at Sun, IBM and Cisco. | 47328832cd42 | adamtalcott | 311 | 406 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-01-22 | 2018-01-22 11:17:44 | 2018-01-22 | 2018-01-22 11:18:36 | 0 | false | en | 2018-01-22 | 2018-01-22 11:18:36 | 0 | 1334884bc519 | 0.588679 | 0 | 0 | 0 | Both L1 and L2 regularization penalize the size of the weights ensuring that several medium sized weights contribute to a large output… | 2 | Regularization : L1 vs L2
Both L1 and L2 regularization penalize the size of the weights ensuring that several medium sized weights contribute to a large output value (implying high confidence), and not just a single large weight. The regularization techniques thus discourage strong opinions from a single unit (in case of neural networks).
The L1 regularization punishes the absolute size of the weight while the L2 punishes the squared size of the weights. Because of the use of squared weights, the L2 pushes the network to solve the problem with several small weights. Thus, the prediction is made by almost all the weights which reduced the bias. However, in L1, even small weights can accumulate to a large L1 penalty. Hence, the L1 pushes the network to have mostly zero weights barring a few medium to large weights. Since, very few weights are non-zero, the final network should be highly confident about its predictions.
| Regularization : L1 vs L2 | 0 | regularization-l1-vs-l2-1334884bc519 | 2018-04-28 | 2018-04-28 20:36:39 | https://medium.com/s/story/regularization-l1-vs-l2-1334884bc519 | false | 156 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Gaurav Kumar Singh | AI Speaker | Machine and Deep Learning Researcher @Ford | ac26a510f266 | gauravksinghCS | 3 | 12 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-30 | 2018-07-30 17:54:35 | 2018-07-30 | 2018-07-30 17:56:48 | 1 | false | en | 2018-07-30 | 2018-07-30 18:00:31 | 6 | 1335933b5c08 | 5.226415 | 0 | 0 | 0 | The conversation about Artificial Intelligence has become so muddy lately that it is important to be very clear about what we mean when we… | 2 | What it Means to be AI-Driven
The conversation about Artificial Intelligence has become so muddy lately that it is important to be very clear about what we mean when we say an organization is “using AI” or is “AI driven”. This post will clear up those definitions and some of the implications of each classification.
There are three levels at which an organization can adopt artificial intelligence:
AI-Assisted: AI is a technical bolt-on to existing processes, usually through the use of AI-created tools, for example AI assisted sales or team management tools.
AI-Enabled / AI-Augmented: AI is applied to existing processes or product data to make the product or service better or more useful, for example the recommendation engines behind Netflix, Spotify, Amazon, etc. This is often in the form of one or more AI-driven features or products.
AI-Driven: AI is literally the lifeblood of the company or initiative, for example self-driving technologies and the companies providing tools to AI-Assisted companies. This is truly an AI-Driven company.
Each of these cases has different implications from the perspective of the implementing organization.
AI-Assisted
Most organizations are already being assisted by AI and they may not even know it. They see only that the tools they use are getting better over time.
Maybe that chat box on the customer service app now suggests smarter things to say or the ad analytics are beginning to make recommendations. Regardless, the tools they use are continuously improving and they will continue to need to make purchasing decisions based on the best available tools but AI doesn’t represent a core strategic goal for the company.
Still, AI-Assisted companies should consider:
AI-driven tools will generally be more data-hungry and so care should be taken about exposure of proprietarily valuable data (especially in a post-GDPR world)
The lock-in for AI-driven tools is usually in the form of an improved user experience over the course of usage, as mentioned in The Virtuous Cycle of AI Products. It means that a tool you’ve been using for the last 12 months may look much better now than a newer tool but keep in mind that the new tool might have a significantly stronger growth curve over the next 12 months.
Ultimately, though, AI-Assisted companies should be most wary of disruption from companies that serve similar use cases in an AI-Enabled or AI-Driven way.
AI-Enabled / AI-Augmented
Organizations which use AI technologies to drive features and products are at a tricky middle ground. Most organizations with AI on the mind will fall into this category because it really represents the whole middle of the spectrum between “AI-Assisted” and “AI-Driven”. An AI-Augmented organization is one which:
Uses AI to drive specific products or features that aren’t core to their business model
Uses AI to improve core products (but without providing the majority of those products’ value)
Uses AI as just one factor among many to derive its core competitive advantage
To be successful, an AI-Augmented organization needs to:
Streamline the flows of data within the organization so that data is accessible by the teams and products that use it for their models. Often that means effectively implementing unified data warehouses with easy access across teams.
Build a data science function that integrates with the engineering and product teams who need it.
Provide organizational air cover for the data science function to use engineering resources and, if possible, own the data infrastructure. This avoids their models collecting dust while they collect their paychecks.
Leaders also need to understand exactly how much of their competitive advantage is derived from the benefits of these technologies so they can invest their resources accordingly.
For many applications, it is perfectly sufficient to take well-tested off-the-shelf methodologies and productionalize them for the particular environment at hand. For example, recommendation systems have been around for many years and are well tested by now. It doesn’t require a world-class data science team to implement them, though the engineering work to support them may still be substantial.
For other applications, an AI-driven product might be used to solve a specific business problem within the organization — fraud detection systems, SPAM filtering systems, network security systems and so on all fall into this category. In this case, the investment needs to match the business value of the application.
Organizations that derive a significant portion of their competitive advantage from AI-driven products, or that might be disrupted by technologies enabling their competition to harness AI-driven products, need to quickly break down silos and move towards creating business units that look more AI-Driven themselves.
AI-Driven
If an organization which is training its own models and releasing products with them is still only “AI-Augmented”, what does “AI-Driven” mean? An AI-Driven organization is one which:
Derives its core competitive advantage from the use of AI
Derives a majority of its product’s user experience from AI technologies
To be successful, an AI-Driven organization needs to:
Treat high quality and ongoing data acquisition as a core strategic priority
Streamline the org chart to allow the rapid and iterative productionalization of models through integrated teams across data science, engineering and product.
Arrange team priorities and OKRs to prioritize cooperation in support of the data science.
Build the tightest feedback loops possible between product usage and retraining of models
Build iterative processes to enable rapid model releases and experimentation
The difference carries through every layer of the organization. In the case of AI-Driven, the organization doesn’t just use AI, it is built entirely around it.
Most enterprises will not make the transition to “Core AI” because they still live in a world where data and teams are highly siloed, usually because their data has been produced as exhaust from existing processes or to serve specific organizational goals along the course of the company’s development. Even businesses which are currently “data-driven” (they have successfully unified their data warehouse and have functions which derive insights and predictive value from it) have a large gap to cross in order to rearrange their processes to treat data as the first class citizen rather than a byproduct.
A Core AI company, on the other hand, is wrapped around the central data pipeline and every function is designed to improve the efficiency and quality of this pipeline. An example is Clara Labs, which creates an email-based meeting scheduling assistant powered by AI. Their entire company structure is built around improving the ongoing quality of their annotations and cycling feedback on the quality of their results back into the model and worker training functions. The first question they ask is “how can we change our processes to improve the quality of our data?” rather than “how can we get better data from our existing processes?”
Core AI companies are those who drive the forefront of the AI Revolution. They are the ones who will benefit from the Virtuous Cycle of AI Products and who often build the products that support AI Assisted companies. They know that their competitive advantage is derived from the quality of their research and technical talent and invest accordingly to support them with great tooling, infrastructure and data.
Closing Thoughts
Every company trying to raise money or sound good in the press will claim to be "AI-Driven" in some way, but the metrics above should help separate the wheat from the chaff. It's important to get the optics right, but it's even more important that executives and leaders don't conflate what it means for their organizations to be in one category versus another, because it can lead to overspending, poor execution or, ultimately, disruption at the hands of startups with much more clarity and focus.
Want to discuss? Hit me up on Twitter. This post was originally published on eriktrautman.com.
| What it Means to be AI-Driven | 0 | what-it-means-to-be-ai-driven-1335933b5c08 | 2018-07-30 | 2018-07-30 18:00:31 | https://medium.com/s/story/what-it-means-to-be-ai-driven-1335933b5c08 | false | 1,332 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Erik Trautman | Building a sharded, mobile-first blockchain at @nearprotocol. Realistic optimist. Fmr @vikingeducation, @theodinproject. | b8cc06bbe006 | ErikTrautman | 437 | 394 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | adf40d2bd8fb | 2018-05-07 | 2018-05-07 06:45:16 | 2018-05-07 | 2018-05-07 06:49:12 | 3 | false | en | 2018-05-07 | 2018-05-07 06:49:12 | 6 | 1336b61a5021 | 1.731132 | 5 | 0 | 0 | (Image Source: Christos) | 5 | Designing a Chatbot — UX process
(Image Source: Christos)
In today's era, companies are transitioning from app-based or web-based interfaces to 'conversation-based platforms' (chatbots). A chatbot is a computer program designed to simulate conversation with human users, especially over the Internet. Chatbots are not a new invention: their history goes back to the 1950s–60s, when Alan Turing & Joseph Weizenbaum contemplated the concept of computers communicating like humans do, with experiments like the Turing Test and the invention of the first chatterbot program, Eliza.
Today, chatbots are disrupting industries across the world. Many platforms, like Facebook, Twitter and Telegram, have made themselves available & open for chatbot integration, which has produced a major shift in how customers interact with companies and their products.
Today I want to share the chatbot design process which I have synthesized from my experience of designing a chatbot. My main goal with this article is to spread awareness of 'how to design a chatbot' and 'what steps to take in designing a chatbot' from a UX designer's point of view. I am open to recommendations & suggestions to improve my process.
Chatbot Design Process
Let's see each step in more detail:
1. Scope & Requirements
Why a chatbot? Understand the context of the chatbot
List the features or offerings your chatbot is going to offer
Be specific and limit your offerings
Choose the platform on which to launch the chatbot
2. Identify Inputs
Every goal in a chatbot interaction can be divided into logical steps (conversations).
The primary ways a chatbot gets input are:
From user (queries, info from users — text or voice)
From device (device sensors like geo-location etc.)
From intelligence systems (Insights from user’s historic data)
3. Understand Chatbot UI elements
There are multiple platforms, like Facebook, Twitter, Telegram and WeChat, that support chatbots. https://developers.facebook.com/docs/messenger-platform/send-messages/templates
Continue Reading >>
About Author
Abhishek Jain
User Experience Designer & Researcher
Twitter | LinkedIn | Email
| Designing a Chatbot — UX process | 6 | designing-a-chatbot-ux-process-1336b61a5021 | 2018-06-21 | 2018-06-21 10:56:57 | https://medium.com/s/story/designing-a-chatbot-ux-process-1336b61a5021 | false | 313 | UXness is a place for all UX designers, enthusiasts to learn about UX, Usability, Design, Web design. Find articles, UX event, UX books, UX templates. | null | uxness | null | UXness | uxness | UX DESIGN,UX BOOKS,UX ARTICLES,USER EXPERIENCE,UX APPS | uxness | Chatbots | chatbots | Chatbots | 15,820 | UXness | A place for all UX designers, enthusiasts to learn about UX, Usability, Design, Web design. Find articles, UX event, UX books, UX templates. www.uxness.in | dcab2996815a | uxness | 42 | 8 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 3db3a67cb648 | 2017-12-07 | 2017-12-07 14:19:16 | 2017-12-14 | 2017-12-14 14:33:42 | 2 | false | en | 2018-10-17 | 2018-10-17 19:04:23 | 2 | 133865f8369b | 4.519182 | 14 | 0 | 0 | Music production in a Capital One blog? Absolutely. | 5 |
How We Got Alexa to Join Our Band at TechCrunch Disrupt
By Ardon Bailey, Software Engineer; Nagkumar Arkalgud, Software Engineer;
Timothy Street, Associate Software Engineer, Capital One
Music production in a Capital One blog? Absolutely. We have a number of software engineers at Capital One who are also passionate about using their technical skills to produce great music.
Earlier this year, we (Tim Street, Nagkumar Arkalgud, and Ardon Bailey) attended the TechCrunch Disrupt Hackathon in San Francisco in hopes of designing something that allows recording artists to create music — using, wait for it…only their voice. The result was Odis, a platform that interfaces with Amazon Alexa and converts natural language instructions into MIDI (Musical Instrument Digital Interface) commands. With Odis, music producers can record, play back, and edit tracks (songs) in studio recording software in real time.
Odis was so well received at the hackathon that it was not only featured on TechCrunch.com, but also won the Amazon Alexa award.
There Has to be a Better Way
Before Ardon Bailey became a software engineer at Capital One, he was a DJ/Tablist who regularly competed in DJ battles and performed at concerts and parties. When creating certain tracks, he found it awkward to record with an instrument in hand. For example, when playing guitar, the steps to record a sound are as follows:
Hold guitar, ready to play
Begin recording by physically triggering record in recording software
Reposition as quickly as possible to play guitar
Play until you’re satisfied with the sound
Press stop recording as quickly as possible
The back and forth routine between the computer and instrument can be quite tedious, especially during long sessions. And if that wasn't enough, oftentimes the instrument physically gets in the way. Unless there's someone around to lend a helping hand, you have to handle these steps yourself.
Enter the idea for Odis. As we brainstormed for our hackathon project, we knew we wanted to create something that made the track creation process as easy as possible. Our number one priority: make it more convenient for artists to create content in their chosen environment. To satisfy this constraint, we needed something that was easy to integrate into already existing recording setups. So, our solution was to create a hands-free, voice-controlled recording assistant.
Using Odis, recording sounds on a guitar is simplified:
Hold guitar, ready to play
Say: “Alexa, ask Odis to start recording”
Play until you’re satisfied with the sound
Say: “Alexa, ask Odis to stop recording”
Because the recording session is entirely voice controlled, artists can focus on what matters: the music. Control of the studio recording software is literally taken out of the artists’ hands and offloaded onto Alexa, who does all the heavy lifting.
Side note: Odis also implements other common features seen in studio recording software, such as sound playback, adding and removing MIDI effects, etc. Theoretically, anything supported in the MIDI library is fair game.
The Architecture
The architecture behind Odis is simple, and can be implemented across many different DAWs (Digital Audio Workstations). Since Odis’ Alexa voice commands resolve down to MIDI instructions, it is easily integrated with most DAWs.
Sketch of the Odis Architecture
Odis has three main components:
1. Alexa Skill (Python)
We decided to use an Alexa Skills Kit triggered Lambda as our interface to Odis. The Lambda is written in Python and handles interpreting natural language instructions, formatting HTTP request payloads, and then sending them to the client application (running on the user’s computer) using Pusher.
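As a rough illustration (not the actual Odis source), the Lambda entry point might look something like the sketch below. The intent names and command payloads are hypothetical, and notify_client is a small helper sketched in the Pusher section that follows:

```python
# Hypothetical intent-to-command mapping -- names are illustrative only
INTENT_TO_COMMAND = {
    "StartRecordingIntent": {"command": "record", "action": "start"},
    "StopRecordingIntent": {"command": "record", "action": "stop"},
    "AddCoolIntent": {"command": "effect", "action": "add"},
}

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest":
        payload = INTENT_TO_COMMAND.get(request["intent"]["name"])
        if payload is not None:
            notify_client(payload)  # push the command to the MIDI client (see Pusher below)
            return build_response("Okay")
    return build_response("Sorry, Odis didn't catch that.")

def build_response(speech):
    # Minimal Alexa Skills Kit response envelope
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```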
2. Pusher
Pusher was a central part of the architecture behind Odis. At a high level, it basically translates RESTful web requests into messages over a WebSocket. Because Pusher handles all the logic associated with opening and closing connections over WebSocket, all we had to do was add calls to Pusher in the Alexa Skills Kit Lambda and add functionality to the MIDI client so it can listen to incoming requests over the Internet. Because Pusher is free and does most of the work for us, it was an easy decision to make this our communication tool between Alexa and the client.
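For completeness, here is a hedged sketch of that Pusher call from the Lambda side, using the standard pusher Python library; the channel name, event name, and credentials are placeholders rather than Odis's real configuration:

```python
import pusher

# Placeholder credentials -- in practice these would come from environment
# variables or Lambda configuration, never hard-coded
pusher_client = pusher.Pusher(
    app_id="APP_ID",
    key="KEY",
    secret="SECRET",
    cluster="us2",
    ssl=True,
)

def notify_client(payload):
    # The MIDI client subscribes to the same channel/event over a WebSocket
    # and translates each payload into a MIDI instruction for the DAW.
    pusher_client.trigger("odis-channel", "midi-command", payload)
```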
3. MIDI Client (Node.js)
The MIDI client runs alongside studio recording software on the user’s computer. The client wraps the MIDI library and listens over a WebSocket for incoming requests from the Alexa Skills Kit Lambda. When a request is received, depending on the content, a specific MIDI instruction is sent to whatever recording software the client is using.
Overcoming Challenges
We faced a number of roadblocks when creating Odis.
1. Creating a deployment package for AWS Lambda
While Amazon’s documentation to create deployment packages for Lambda seems simple and straightforward, it wasn’t quite thorough enough for us. We had trouble including some of the 3rd party libraries and configurations necessary to run Odis the way we wanted. Thankfully the team was able to contact a colleague who had already worked with creating deployment packages in Production for Lambda. We called him up and he was happy to help us crush this roadblock. Thanks Chey!
2. SSL issue in Python 2.x for AWS Lambda
When we created the deployment package and tried to send a push notification via Pusher (i.e., send out a RESTful HTTPS request from Lambda), the request could not be completed. After troubleshooting, we found out that Lambda uses its own security modules instead of the ones uploaded in the deployment package. We solved this issue by converting our 2.x Python codebase to 3.6, which follows different standards for dependencies in deployment packages. After the change, everything "magically" worked!
3. Add cool
In our opinion, one of the coolest features within Odis is the ability to add predefined effects to previously recorded music. We wanted it to work on the command “Alexa, ask Odis to add effect”. The triggers for that phrase wouldn’t work no matter how much we tested it. As it turns out, Alexa gets confused between the pronunciations for ‘affect’ and ‘effect’ in its language detection algorithm and we had to change the phrase, last-minute, to “Alexa, ask Odis to add cool”.
The Future of Odis
Moving forward, we hope to explore other DAWs (Digital Audio Workstations) and possibly a desktop application that will run a script for different DAWs to configure the mapping of MIDI controls.
These opinions are those of the author. Unless noted otherwise in this post, Capital One is not affiliated with, nor is it endorsed by, any of the companies mentioned. All trademarks and other intellectual property used or displayed are the ownership of their respective owners. This article is © 2017 Capital One.
| How We Got Alexa to Join Our Band at TechCrunch Disrupt | 92 | how-we-got-alexa-to-join-our-band-at-techcrunch-disrupt-133865f8369b | 2018-10-17 | 2018-10-17 19:04:23 | https://medium.com/s/story/how-we-got-alexa-to-join-our-band-at-techcrunch-disrupt-133865f8369b | false | 1,096 | Inspiration and ideas from the tech builders and influencers at Capital One. An opportunity for us to explore and share our insights as a tech-focused industry leader. | null | capitalone | null | Capital One Tech | null | capital-one-tech | DEVELOPERS,TECH,FINTECH,SOFTWARE DEVELOPMENT,SOFTWARE ENGINEERING | CapitalOneTech | Amazon Echo | amazon-echo | Amazon Echo | 3,511 | Capital One Tech | null | 73d9ee9c08c8 | CapitalOneTech | 684 | 101 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-12 | 2017-09-12 09:53:19 | 2017-09-12 | 2017-09-12 09:53:28 | 0 | false | en | 2017-09-12 | 2017-09-12 09:53:28 | 0 | 1338f9696c2c | 2.124528 | 0 | 0 | 0 | It will hardly come as a surprise to those of you familiar with my work, whether through this blog or my role as co-founder of the Tej… | 1 | How AI is Poised to Transform the Legal System
It will hardly come as a surprise to those of you familiar with my work, whether through this blog or my role as co-founder of the Tej Kohli Foundation, that I have a keen personal interest in the field of AI and robotics. As a firm believer in its potential as a force for good, news of its application in different fields always catches my interest. Now, it seems that AI may have a part to play in helping to transform our legal system.
Recently, the Serious Fraud Office (SFO) used AI technology to sort through the enormous volumes of data, files and documents they had amassed during a major fraud case. Investigators tasked with uncovering the truth about fraud allegations made against Rolls Royce turned to the robotics start-up RAVN for help, seeking their experience of building data-sifting robots.
Using their innovative mixture of AI techniques and more conventional data management, RAVN were able to process 600,000 documents daily, compared to the 3,000 that barristers were previously getting through by hand. But RAVN's revolutionising of the SFO's efforts wasn't just limited to increasing time-efficiency — the technology being used can process the documents given to it with such a degree of accuracy that it can even perform tasks including detecting individual document pages that were accidentally entered upside down. RAVN's efforts were a resounding success, with Rolls-Royce being handed a £671 million fine.
The skill of these robots in isolating useful and necessary information is without a doubt impressive, but it shouldn’t come as too much of a surprise — deep learning algorithms have also been used successfully as complex medical diagnosis tools for some time now, with an incredible degree of accuracy. With this same proficiency and eye for intricate detail turned towards the legal system, AI could become a powerful tool in investigating crimes — especially ones that involve large amounts of difficult information and data. Moreover, it could help to eradicate the sadly prevalent problem of human error — a frightening and serious, but currently unavoidable issue.
However, the benefits of AI developments in the legal system may not be limited to sorting data. As they become more advanced and more capable of making increasingly intelligent and complex distinctions, they may even play a part in the application of the law itself. Just recently, Larry Kamener, the Chairman of the Centre for Public Impact, highlighted AI’s potential for analysing cases and evaluating possible resolutions in cases with undisputed facts and strong legal precedents. Kamener even suggested that in such cases, AI could be used to produce a draft document, which a judge would then review. Although this vision for the future of the legal system makes sure to retain human oversight as the final arbiter of all decisions, it demonstrates the increasing support for the inclusion of AI at multiple levels of the legal process.
Although it is doubtful that we’ll be placing the full power of the legal system in AI’s hands just yet, it seems likely that we will see intelligent technology and advanced robots playing a larger part in assisting the application of the law in coming years. As long as the correct oversight and planning is applied to these changes, there is no reason they cannot improve the legal system for the better.
| How AI is Poised to Transform the Legal System | 0 | how-ai-is-poised-to-transform-the-legal-system-1338f9696c2c | 2018-04-01 | 2018-04-01 15:56:53 | https://medium.com/s/story/how-ai-is-poised-to-transform-the-legal-system-1338f9696c2c | false | 563 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Tej Kohli | null | af62807b5c2a | mrtejkohli | 12 | 1 | 20,181,104 | null | null | null | null | null | null |