Dataset columns: markdown (string, lengths 0–37k), code (string, lengths 1–33.3k), path (string, lengths 8–215), repo_name (string, lengths 6–77), license (string, 15 classes)
Initialize service code data structures Service code / service name map Service code histogram
h_file = open("./serviceCodesCount.tsv", "r")

code_name_map = {}
code_histogram = {}

patternobj = re.compile('^([0-9a-z]+)\s\|\s([0-9a-z\s]+)$')

for fields in csv.reader(h_file, delimiter="\t"):
    matchobj = patternobj.match(fields[0])
    cur_code = matchobj.group(1)
    code_name_map[cur_code] = matchobj.group(2)
    code_histogram[cur_code] = float(fields[1])

h_file.close()
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Plot Cincinnati 311 Service Code Statistics References Descending Array Sort Change Plot Font Size
total_count_fraction = code_histogram.values()
total_count_fraction.sort()
total_count_fraction = total_count_fraction[::-1]
total_count_fraction /= np.sum(total_count_fraction)
total_count_fraction = np.cumsum(total_count_fraction)

sns.set(font_scale=2)
f, h_ax = plt.subplots(1, 2, figsize=(12, 6))

h_ax[0].bar(range(0, len(code_histogram.values())), code_histogram.values())
h_ax[0].set_xlim((0, len(total_count_fraction)))
h_ax[0].set_xlabel('Service Code #')
h_ax[0].set_ylabel('Service Code Count')
h_ax[0].set_title('Cincinnati 311\nService Code Histogram')

h_ax[1].plot(total_count_fraction, linewidth=4)
h_ax[1].set_xlim((0, len(total_count_fraction)))
h_ax[1].set_xlabel('Sorted Service Code #')
h_ax[1].set_ylabel('Total Count Fraction')

f.tight_layout()
plt.savefig("./cincinatti311Stats.png")
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Cluster service code names Compute Term Frequency Inverse Document Frequency (TF-IDF) feature vectors Apply the K-means algorithm to cluster service code names based on their TF-IDF feature vector References: Rose, B. "Document Clustering in Python" Text pre-processing to reduce dictionary size
from nltk.stem.snowball import SnowballStemmer def tokenize(text): """ Extracts unigrams (i.e. words) from a string that contains a service code name. Args: text: String that stores a service code name Returns: filtered_tokens: List of words contained in a service code name""" tokens = [word.lower() for word in nltk.word_tokenize(text)] filtered_tokens =\ filter(lambda elem: re.match('^[a-z]+$', elem) != None, tokens) filtered_tokens =\ map(lambda elem: re.sub("\s+"," ", elem), filtered_tokens) return filtered_tokens def tokenize_and_stem(text): """ Applies the Snowball stemmer to unigrams (i.e. words) extracted from a string that contains a service code name. Args: text: String that stores a service code name Returns: filtered_tokens: List of words contained in a service code name""" stemmer = SnowballStemmer('english') tokens = [word.lower() for word in nltk.word_tokenize(text)] filtered_tokens =\ filter(lambda elem: re.match('^[a-z]+$', elem) != None, tokens) filtered_tokens =\ map(lambda elem: re.sub("\s+"," ", elem), filtered_tokens) filtered_tokens = [stemmer.stem(token) for token in filtered_tokens] return filtered_tokens def compute_tfidf_features(code_name_map, tokenizer, params): """ Constructs a Term Frequency Inverse Document Frequency (TF-IDF) matrix for the Cincinnati 311 service code names. Args: code_name_map: Dictionary that stores the mapping of service codes to service names tokenizer: Function that transforms a string into a list of words params: Dictionary that stores parameters that configure the TfidfVectorizer class constructor - mindocumentcount: Minimum number of term occurrences in separate service code names - maxdocumentfrequency: Maximum document frequency Returns: Tuple that stores a TF-IDF matrix and a TfidfVectorizer class object. Index: Description: ----- ----------- 0 TF-IDF matrix 1 TfidfVectorizer class object""" token_count = 0 for key in code_name_map.keys(): token_count += len(tokenize(code_name_map[key])) num_codes = len(code_name_map.keys()) min_df = float(params['mindocumentcount']) / num_codes tfidf_vectorizer =\ TfidfVectorizer(max_df=params['maxdocumentfrequency'], min_df=min_df, stop_words = 'english', max_features = token_count, use_idf=True, tokenizer=tokenizer, ngram_range=(1,1)) tfidf_matrix =\ tfidf_vectorizer.fit_transform(code_name_map.values()) return (tfidf_matrix, tfidf_vectorizer) def cluster_311_services(tfidf_matrix, num_clusters, random_seed): """Applies the K-means algorithm to cluster Cincinnati 311 service codes based on their service name Term Frequency Inverse Document Frequency (TF-IDF) feature vector. 
Args: tfidf_matrix: Cincinnati 311 service names TF-IDF feature matrix num_clusters: K-means algorithm number of clusters input random_seed: K-means algorithm random seed input: Returns: clusterid_code_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service code clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name""" km = KMeans(n_clusters = num_clusters, random_state=np.random.RandomState(seed=random_seed)) km.fit(tfidf_matrix) clusters = km.labels_.tolist() clusterid_code_map = defaultdict(list) clusterid_name_map = defaultdict(list) codes = code_name_map.keys() names = code_name_map.values() for idx in range(0, len(codes)): clusterid_code_map[clusters[idx]].append(codes[idx]) clusterid_name_map[clusters[idx]].append(names[idx]) return (clusterid_code_map, clusterid_name_map) def compute_clusterid_totalcounts(clusterid_code_map, code_histogram): """ Computes the total Cincinnati 311 requests / service names cluster Args: clusterid_code_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service code code_histogram: Dictionary that stores the number of occurrences for each Cincinnati 311 service code Returns: clusterid_total_count: Dictionary that stores the total Cincinnati 311 requests / service names cluster""" clusterid_total_count = defaultdict(int) num_clusters = len(clusterid_code_map.keys()) for cur_cluster_id in range(0, num_clusters): for cur_code in clusterid_code_map[cur_cluster_id]: clusterid_total_count[cur_cluster_id] +=\ code_histogram[cur_code] return clusterid_total_count def print_cluster_stats(clusterid_name_map, clusterid_total_count): """ Prints the total number of codes and total requests count for each Cincinnati 311 service names cluster. Args: clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name clusterid_total_count: Dictionary that stores the total Cincinnati 311 requests / service names cluster Returns: None""" num_clusters = len(clusterid_total_count.keys()) for cur_cluster_id in range(0, num_clusters): print "clusterid %d | # of codes: %d | total count: %d" %\ (cur_cluster_id, len(clusterid_name_map[cur_cluster_id]), clusterid_total_count[cur_cluster_id]) def eval_maxcount_clusterid(clusterid_code_map, clusterid_total_count, code_histogram): """ This function performs the following two operations: 1.) Plots the requests count for each service name in the maximum count service names cluster. 2. 
Prints the maximum count service name in the maximum count service names cluster Args: clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name clusterid_total_count: Dictionary that stores the total Cincinnati 311 requests / service names cluster code_histogram: Dictionary that stores the number of occurrences for each Cincinnati 311 service code Returns: None""" num_clusters = len(clusterid_code_map.keys()) contains_multiple_codes = np.empty(num_clusters, dtype=bool) for idx in range(0, num_clusters): contains_multiple_codes[idx] = len(clusterid_code_map[idx]) > 1 filtered_clusterid =\ np.array(clusterid_total_count.keys()) filtered_total_counts =\ np.array(clusterid_total_count.values()) filtered_clusterid =\ filtered_clusterid[contains_multiple_codes] filtered_total_counts =\ filtered_total_counts[contains_multiple_codes] max_count_idx = np.argmax(filtered_total_counts) maxcount_clusterid = filtered_clusterid[max_count_idx] cluster_code_counts =\ np.zeros(len(clusterid_code_map[maxcount_clusterid])) for idx in range(0, len(cluster_code_counts)): key = clusterid_code_map[maxcount_clusterid][idx] cluster_code_counts[idx] = code_histogram[key] plt.bar(range(0,len(cluster_code_counts)),cluster_code_counts) plt.grid(True) plt.xlabel('Service Code #') plt.ylabel('Service Code Count') plt.title('Cluster #%d Service Code Histogram' %\ (maxcount_clusterid)) max_idx = np.argmax(cluster_code_counts) print "max count code: %s" %\ (clusterid_code_map[maxcount_clusterid][max_idx]) def add_new_cluster(from_clusterid, service_code, clusterid_total_count, clusterid_code_map, clusterid_name_map): """Creates a new service name(s) cluster Args: from_clusterid: Integer that refers to a service names cluster that is being split servicecode: String that refers to a 311 service code clusterid_code_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service code clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name Returns: None - Service names cluster data structures are updated in place""" code_idx =\ np.argwhere(np.array(clusterid_code_map[from_clusterid]) ==\ service_code)[0][0] service_name = clusterid_name_map[from_clusterid][code_idx] next_clusterid = (clusterid_code_map.keys()[-1])+1 clusterid_code_map[from_clusterid] =\ filter(lambda elem: elem != service_code, clusterid_code_map[from_clusterid]) clusterid_name_map[from_clusterid] =\ filter(lambda elem: elem != service_name, clusterid_name_map[from_clusterid]) clusterid_code_map[next_clusterid] = [service_code] clusterid_name_map[next_clusterid] = [service_name] def print_clustered_servicenames(cur_clusterid, clusterid_name_map): """Prints the Cincinnati 311 service names(s) for a specific Cincinnati 311 service names cluster Args: cur_clusterid: Integer that refers to a specific Cincinnati 311 service names cluster clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name""" for cur_name in clusterid_name_map[cur_clusterid]: print "%s" % (cur_name) def plot_cluster_stats(clusterid_code_map, clusterid_total_count): """Plots the following service name(s) cluster statistics: - Number of service code(s) / service name(s) cluster - Total number of requests / service name(s) cluster Args: clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name clusterid_total_count: Dictionary that stores the total Cincinnati 311 requests / 
service names cluster Returns: None""" codes_per_cluster =\ map(lambda elem: len(elem), clusterid_code_map.values()) num_clusters = len(codes_per_cluster) f,h_ax = plt.subplots(1,2,figsize=(12,6)) h_ax[0].bar(range(0,num_clusters), codes_per_cluster) h_ax[0].set_xlabel('Service Name(s) cluster id') h_ax[0].set_ylabel('Number of service codes / cluster') h_ax[1].bar(range(0,num_clusters), clusterid_total_count.values()) h_ax[1].set_xlabel('Service Name(s) cluster id') h_ax[1].set_ylabel('Total number of requests') plt.tight_layout()
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Apply a word tokenizer to the service names and construct a TF-IDF feature matrix
params = {'maxdocumentfrequency': 0.25, 'mindocumentcount': 10} (tfidf_matrix, tfidf_vectorizer) = compute_tfidf_features(code_name_map, tokenize, params) print "# of terms: %d" % (tfidf_matrix.shape[1]) print tfidf_vectorizer.get_feature_names()
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Apply the K-means algorithm to cluster the Cincinnati 311 service names based on their TF-IDF feature vector
num_clusters = 20 kmeans_seed = 3806933558 (clusterid_code_map, clusterid_name_map) = cluster_311_services(tfidf_matrix, num_clusters, kmeans_seed) clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Plot the service code histogram for the maximum size cluster
eval_maxcount_clusterid(clusterid_code_map, clusterid_total_count, code_histogram)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Apply a word tokenizer (with stemming) to the service names and construct a TF-IDF feature matrix
params = {'maxdocumentfrequency': 0.25, 'mindocumentcount': 10} (tfidf_matrix, tfidf_vectorizer) = compute_tfidf_features(code_name_map, tokenize_and_stem, params) print "# of terms: %d" % (tfidf_matrix.shape[1]) print tfidf_vectorizer.get_feature_names()
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Apply the K-means algorithm to cluster the Cincinnati 311 service names based on their TF-IDF feature vector
num_clusters = 20 kmeans_seed = 3806933558 (clusterid_code_map, clusterid_name_map) = cluster_311_services(tfidf_matrix, num_clusters, kmeans_seed) clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count) plot_cluster_stats(clusterid_code_map, clusterid_total_count)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Create a separate service name(s) cluster for the 'mtlfrn' service code
add_new_cluster(1, 'mtlfrn', clusterid_total_count, clusterid_code_map, clusterid_name_map)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Evaluate the service name(s) cluster statistics
clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Create a separate service name(s) cluster for the 'ydwstaj' service code
add_new_cluster(1, 'ydwstaj', clusterid_total_count, clusterid_code_map, clusterid_name_map)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Create a separate service name(s) cluster for the 'grfiti' service code
add_new_cluster(1, 'grfiti', clusterid_total_count, clusterid_code_map, clusterid_name_map)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Create a separate service name(s) cluster for the 'dapub1' service code
add_new_cluster(1, 'dapub1', clusterid_total_count, clusterid_code_map, clusterid_name_map)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Evaluate the service name(s) cluster statistics
clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count) plot_cluster_stats(clusterid_code_map, clusterid_total_count)
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Label each service name(s) cluster
cur_clusterid = 0 clusterid_category_map = {} clusterid_category_map[cur_clusterid] = 'streetmaintenance' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'miscellaneous' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'trashcart' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'buildinghazzard' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'buildingcomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'repairrequest' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'propertymaintenance' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'defaultrequest' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'propertycomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'trashcomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'servicecompliment' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'inspection' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'servicecomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'buildinginspection' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'buildingcomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'signmaintenance' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'requestforservice' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'litter' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'recycling' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid +=1 clusterid_category_map[cur_clusterid] = 'treemaintenance' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'metalfurniturecollection' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'yardwaste' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'graffitiremoval' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'deadanimal' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Plot Cincinnati 311 Service Name Categories
import pandas as pd category_totalcountdf =\ pd.DataFrame({'totalcount': clusterid_total_count.values()}, index=clusterid_category_map.values()) sns.set(font_scale=1.5) category_totalcountdf.plot(kind='barh')
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Write service code / category map to disk Storing Python Dictionaries
servicecode_category_map = {}

for clusterid in clusterid_name_map.keys():
    cur_category = clusterid_category_map[clusterid]

    for servicecode in clusterid_code_map[clusterid]:
        servicecode_category_map[servicecode] = cur_category

with open('serviceCodeCategory.txt', 'w') as fp:
    num_names = len(servicecode_category_map)
    keys = servicecode_category_map.keys()
    values = servicecode_category_map.values()

    for idx in range(0, num_names):
        if idx == 0:
            fp.write("%s{\"%s\": \"%s\",\n" % (" " * 12, keys[idx], values[idx]))
        #----------------------------------------
        elif idx > 0 and idx < num_names-1:
            fp.write("%s\"%s\": \"%s\",\n" % (" " * 13, keys[idx], values[idx]))
        #----------------------------------------
        else:
            fp.write("%s\"%s\": \"%s\"}" % (" " * 13, keys[idx], values[idx]))
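The hand-rolled string formatting above reproduces the notebook's original dictionary-literal layout. A simpler alternative (our suggestion, not what the notebook does) is to serialize the same mapping with the standard json module; the file name serviceCodeCategory.json is hypothetical:

import json

# Hypothetical alternative: write the service code / category map as JSON,
# letting json handle quoting and delimiters instead of manual formatting.
with open('serviceCodeCategory.json', 'w') as fp:
    json.dump(servicecode_category_map, fp, indent=4)

# Reading it back later:
with open('serviceCodeCategory.json', 'r') as fp:
    servicecode_category_map = json.load(fp)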
ClusterServiceCodes.ipynb
mspcvsp/cincinnati311Data
gpl-3.0
Tabular data
df = pd.read_csv("data/titanic.csv") df.head()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Starting from reading this dataset, we can answer questions about the data in a few lines of code: What is the age distribution of the passengers?
df['Age'].hist()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
How does the survival rate of the passengers differ between sexes?
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Or how does it differ between the different classes?
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Are young people more likely to survive?
df['Survived'].sum() / df['Survived'].count() df25 = df[df['Age'] <= 25] df25['Survived'].sum() / len(df25['Survived'])
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
All the needed functionality for the above examples will be explained throughout this tutorial. Data structures Pandas provides two fundamental data objects, for 1D (Series) and 2D data (DataFrame). Series A Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created:
s = pd.Series([0.1, 0.2, 0.3, 0.4]) s
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Attributes of a Series: index and values The series has a built-in concept of an index, which by default is the numbers 0 through N - 1
s.index
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
You can access the underlying numpy array representation with the .values attribute:
s.values
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
We can access series values via the index, just like for NumPy arrays:
s[0]
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Unlike the NumPy array, though, this index can be something other than integers:
s2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd']) s2 s2['c']
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value. In fact, it's possible to construct a series directly from a Python dictionary:
pop_dict = {'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3, 'United Kingdom': 64.9, 'Netherlands': 16.9} population = pd.Series(pop_dict) population
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
We can index the populations like a dict as expected:
population['France']
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
but with the power of numpy arrays:
population * 1000
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
DataFrames: Multi-dimensional Data A DataFrame is a tabular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects which share the same index. <img src="img/dataframe.png" width=110%> One of the most common ways of creating a dataframe is from a dictionary of arrays or lists. Note that in the IPython notebook, the dataframe will display in a rich HTML view:
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'], 'population': [11.3, 64.3, 81.3, 16.9, 64.9], 'area': [30510, 671308, 357050, 41526, 244820], 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']} countries = pd.DataFrame(data) countries
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Attributes of the DataFrame Besides an index attribute, a DataFrame also has a columns attribute:
countries.index countries.columns
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
To check the data types of the different columns:
countries.dtypes
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
An overview of that information can be given with the info() method:
countries.info()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
A DataFrame also has a values attribute, but note that when you have heterogeneous data, all values will be upcast:
countries.values
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
If we don't like what the index looks like, we can reset it and set one of our columns:
countries = countries.set_index('country') countries
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
To access a Series representing a column in the data, use typical indexing syntax:
countries['area']
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Basic operations on Series/Dataframes As you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.
# redefining the example objects
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
                        'United Kingdom': 64.9, 'Netherlands': 16.9})

countries = pd.DataFrame({'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
                          'population': [11.3, 64.3, 81.3, 16.9, 64.9],
                          'area': [30510, 671308, 357050, 41526, 244820],
                          'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']})
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Elementwise-operations (like numpy) Just like with numpy arrays, many operations are element-wise:
population / 100 countries['population'] / countries['area']
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Alignment! (unlike numpy) However, pay attention to alignment: operations between series will align on the index:
s1 = population[['Belgium', 'France']] s2 = population[['France', 'Germany']] s1 s2 s1 + s2
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Reductions (like numpy) The average population number:
population.mean()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
The minimum area:
countries['area'].min()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
For dataframes, often only the numeric columns are included in the result:
countries.median()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
<div class="alert alert-success"> <b>EXERCISE</b>: Calculate the population numbers relative to Belgium </div>
population / population['Belgium'].mean()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
<div class="alert alert-success"> <b>EXERCISE</b>: Calculate the population density for each country and add this as a new column to the dataframe. </div>
countries['population']*1000000 / countries['area'] countries['density'] = countries['population']*1000000 / countries['area'] countries
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Some other useful methods Sorting the rows of the DataFrame according to the values in a column:
countries.sort_values('density', ascending=False)
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
One useful method to use is the describe method, which computes summary statistics for each column:
countries.describe()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
The plot method can be used to quickly visualize the data in different ways:
countries.plot()
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
However, for this dataset, it does not say that much:
countries['population'].plot(kind='bar')
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
You can play with the kind keyword: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin'. Importing and exporting data A wide range of input/output formats are natively supported by pandas: CSV, text, SQL database, Excel, HDF5, JSON, HTML, pickle, ...
# These stubs are meant for interactive tab completion, e.g. pd.read_<TAB> and states.to_<TAB>,
# which list the available readers and writers (left commented so the cell runs as-is):
# pd.read
# states.to
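As a minimal, concrete example of this I/O machinery (reusing the titanic.csv file loaded earlier; the output file name titanic_out.csv is just for illustration):

df = pd.read_csv("data/titanic.csv")            # import: CSV file -> DataFrame
df.to_csv("data/titanic_out.csv", index=False)  # export: DataFrame -> CSV file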
solved - 02 - Data structures.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Exercise 1: Finding Correlations of Non-Linear Relationships a. Traditional (Pearson) Correlation Find the correlation coefficient for the relationship between x and y.
n = 100 x = np.linspace(1, n, n) y = x**5 #Your code goes here
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
quantopian/research_public
apache-2.0
b. Spearman Rank Correlation Find the Spearman rank correlation coefficient for the relationship between x and y using the stats.rankdata function and the formula $$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$ where $d_i$ is the difference in rank of the ith pair of x and y values.
#Your code goes here
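One possible sketch of a solution (not part of the original exercise notebook), applying the formula above with stats.rankdata; it assumes scipy.stats has already been imported as stats, as the exercise text implies, and reuses x, y, and n from the cell above:

# Rank the data, then apply the Spearman rank correlation formula.
x_rank = stats.rankdata(x)
y_rank = stats.rankdata(y)
d = x_rank - y_rank
r_s = 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))
print(r_s)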
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
quantopian/research_public
apache-2.0
Check your results against scipy's Spearman rank function. stats.spearmanr
# Your code goes here
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
quantopian/research_public
apache-2.0
Exercise 2: Limitations of Spearman Rank Correlation a. Lagged Relationships First, create a series b that is identical to a but lagged one step (b[i] = a[i-1]). Then, find the Spearman rank correlation coefficient of the relationship between a and b.
n = 100 a = np.random.normal(0, 1, n) #Your code goes here
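A possible sketch of one approach (not the official solution; assumes scipy.stats is imported as stats):

# Lag a by one step; np.roll wraps the last element to the front,
# so the first pair is dropped before correlating.
b = np.roll(a, 1)
rho, pvalue = stats.spearmanr(a[1:], b[1:])
print(rho)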
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
quantopian/research_public
apache-2.0
b. Non-Monotonic Relationships First, create a series d using the relationship $d=10c^2 - c + 2$. Then, find the Spearman rank correlation coefficient of the relationship between c and d.
n = 100 c = np.random.normal(0, 2, n) #Your code goes here
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
quantopian/research_public
apache-2.0
Exercise 3: Real World Example a. Factor and Forward Returns Here we'll define a simple momentum factor (model). To evaluate it we'd need to look at how its predictions correlate with future returns over many days. We'll start by just evaluating the Spearman rank correlation between our factor values and forward returns on just one day. Compute the Spearman rank correlation between factor values and 10 trading day forward returns on 2015-1-2. For help on the pipeline API, see this tutorial: https://www.quantopian.com/tutorials/pipeline
#Pipeline Setup from quantopian.research import run_pipeline from quantopian.pipeline import Pipeline from quantopian.pipeline.data.builtin import USEquityPricing from quantopian.pipeline.factors import CustomFactor, Returns, RollingLinearRegressionOfReturns from quantopian.pipeline.classifiers.morningstar import Sector from quantopian.pipeline.filters import QTradableStocksUS from time import time #MyFactor is our custom factor, based off of asset price momentum class MyFactor(CustomFactor): """ Momentum factor """ inputs = [USEquityPricing.close] window_length = 60 def compute(self, today, assets, out, close): out[:] = close[-1]/close[0] universe = QTradableStocksUS() pipe = Pipeline( columns = { 'MyFactor' : MyFactor(mask=universe), }, screen=universe ) start_timer = time() results = run_pipeline(pipe, '2015-01-01', '2015-06-01') end_timer = time() results.fillna(value=0); print "Time to run pipeline %.2f secs" % (end_timer - start_timer) my_factor = results['MyFactor'] n = len(my_factor) asset_list = results.index.levels[1].unique() prices_df = get_pricing(asset_list, start_date='2015-01-01', end_date='2016-01-01', fields='price') # Compute 10-day forward returns, then shift the dataframe back by 10 forward_returns_df = prices_df.pct_change(10).shift(-10) # The first trading day is actually 2015-1-2 single_day_factor_values = my_factor['2015-1-2'] # Because prices are indexed over the total time period, while the factor values dataframe # has a dynamic universe that excludes hard to trade stocks, each day there may be assets in # the returns dataframe that are not present in the factor values dataframe. We have to filter down # as a result. single_day_forward_returns = forward_returns_df.loc['2015-1-2'][single_day_factor_values.index] #Your code goes here
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
quantopian/research_public
apache-2.0
b. Rolling Spearman Rank Correlation Repeat the above correlation for the first 60 days in the dataframe as opposed to just a single day. You should get a time series of Spearman rank correlations. From this we can start getting a better sense of how the factor correlates with forward returns. What we're driving towards is known as an information coefficient. This is a very common way of measuring how predictive a model is. All of this plus much more is automated in our open source alphalens library. In order to see alphalens in action you can check out these resources: A basic tutorial: https://www.quantopian.com/tutorials/getting-started#lesson4 An in-depth lecture: https://www.quantopian.com/lectures/factor-analysis
rolling_corr = pd.Series(index=None, data=None) #Your code goes here
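A rough sketch of the loop this part asks for (hypothetical, not from the notebook; it reuses my_factor and forward_returns_df with the same per-day filtering as part a, and assumes scipy.stats is imported as stats):

# Spearman rank correlation between factor values and forward returns
# for each of the first 60 factor dates.
factor_dates = my_factor.index.levels[0].unique()[:60]
rolling_corr = pd.Series(index=factor_dates)

for day in factor_dates:
    day_factor = my_factor[day]
    day_returns = forward_returns_df.loc[day][day_factor.index]
    rolling_corr[day] = stats.spearmanr(day_factor, day_returns)[0]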
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
quantopian/research_public
apache-2.0
b. Rolling Spearman Rank Correlation Plot out the rolling correlation as a time series, and compute the mean and standard deviation.
# Your code goes here
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
quantopian/research_public
apache-2.0
Loading Data First, we load the data. NOTE: targets whose names start with PSR are radio scans of known pulsars, e.g. PSR_B0355+54_0013, while files for cataloged targets such as HIP65960 shouldn't have pulsar characteristics. If you wish to learn more about the data, check out https://ui.adsabs.harvard.edu/abs/2019PASP..131l4505L/abstract The header information gives vital information about the observational setup of the telescope, for example the coarse channel width, the observation time and duration, etc.
from blimpy import Waterfall import pylab as plt import numpy as np import math from scipy import stats, interpolate from copy import deepcopy %matplotlib inline obs = Waterfall('/content/spliced_blc40414243444546o7o0515253545556o7o061626364656667_guppi_58837_86186_PSR_B0355+54_0013.gpuspec.8.0001.fil', t_start=0,t_stop= 80000,max_load=10) obs.info() # Loads data into numpy array data = obs.data data.shape coarse_channel_width = np.int(np.round(187.5/64/abs(obs.header['foff']))) # Here we plot the integrated signal over time. obs.plot_spectrum() fig = plt.figure(figsize=(10,8)) plt.title('Spectrogram With Bandpass') plt.xlabel("Fchans") plt.ylabel("Time") plt.imshow(data[:3000,0,1500:3000], aspect='auto') plt.colorbar()
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
UCBerkeleySETI/breakthrough
gpl-3.0
Band Pass Removal The goal of this process is to clean the data of the artifacts created by combining multiple bands. Our data is created by taking sliding windows of the raw voltage data and computing an FFT of each window. With these FFTs (each containing frequency information about a timestamp) for each coarse channel, we use a bandpass filter to cut off frequencies that don't belong to that coarse channel's frequency range. But we can't achieve a perfect cut, and that's why there is a fall-off at the edges. They're called band-pass filters because they only allow signals in a particular frequency range, called a band, to pass through. When we assemble the products we see these dips in the spectrogram; in other words, they aren't real signals. To remove the bandpass features, we fit a spline to each channel to get a model of that channel's bandpass. By using splines, we can fit the bandpass without fitting the more significant signals. If you want more details on this, check out https://github.com/FX196/SETI-Energy-Detection for a detailed explanation.
average_power = np.zeros((data.shape[2])) shifted_power = np.zeros((int(data.shape[2]/8))) x=[] spl_order = 2 print("Fitting Spline") data_adjust = np.zeros(data.shape) average_power = data.mean(axis=0) # Note the value 8 is the COARSE CHANNEL WIDTH # We adjust each coarse channel to correct the bandpass artifacts for i in range(0, data.shape[2], 8): average_channel = average_power[0,i:i+8] x = np.arange(0,coarse_channel_width,1) knots = np.arange(0, coarse_channel_width, coarse_channel_width//spl_order+1) tck = interpolate.splrep(x, average_channel, s=knots[1:]) xnew = np.arange(0, coarse_channel_width,1) ynew = interpolate.splev(xnew, tck, der=0) data_adjust[:,0,i:i+8] = data[:,0,i:i+8] - ynew plt.figure() plt.plot( data_adjust.mean(axis=0)[0,:]) plt.title('Spline Fit - adjusted') plt.xlabel("Fchans") plt.ylabel("Power") fig = plt.figure(figsize=(10,8)) plt.title('After bandpass correction') plt.imshow(data_adjust[:3000,0,:], aspect='auto') plt.colorbar()
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
UCBerkeleySETI/breakthrough
gpl-3.0
Dedispersion When pulses reach Earth they arrive at different times across frequencies due to dispersion. This dispersion is the result of time delays caused by the interstellar medium, and it creates a "swooping curve" on the radio spectrogram instead of plane waves. If we are going to fold the pulses to increase the SNR, we are assuming the pulses arrive at the same time, so we need to correct the dispersion by shifting each channel down by a time delay that depends on its frequency. We index a frequency column in the spectrogram, split it between the delayed portion and the original data, and swap their positions. The problem, however, is that we don't know the dispersion measure DM of the signal. The DM is the path integral of the electron density $n_e$ along the signal's path through the interstellar medium: $$DM =\int_0^d n_e \, dl$$ What we do is brute-force the DM by running multiple trial DMs and taking the highest SNR produced by dedispersion with a given trial DM.
def delay_from_DM(DM, freq_emitted): if (type(freq_emitted) == type(0.0)): if (freq_emitted > 0.0): return DM / (0.000241 * freq_emitted * freq_emitted) else: return 0.0 else: return Num.where(freq_emitted > 0.0, DM / (0.000241 * freq_emitted * freq_emitted), 0.0) def de_disperse(data,DM,fchan,width,tsamp): clean = deepcopy(data) for i in range(clean.shape[1]): end = clean.shape[0] freq_emitted = i*width+ fchan time = int((delay_from_DM(DM, freq_emitted))/tsamp) if time!=0 and time<clean.shape[0]: # zero_block = np.zeros((time)) zero_block = clean[:time,i] shift_block = clean[:end-time,i] clean[time:end,i]= shift_block clean[:time,i]= zero_block elif time!=0: clean[:,i]= np.zeros(clean[:,i].shape) return clean def DM_can(data, data_base, sens, DM_base, candidates, fchan,width,tsamp ): snrs = np.zeros((candidates,2)) for i in range(candidates): DM = DM_base+sens*i data = de_disperse(data, DM, fchan,width,tsamp) time_series = data.sum(axis=1) snrs[i,1] = SNR(time_series) snrs[i,0] =DM if int((delay_from_DM(DM, fchan))/tsamp)+1 > data.shape[0]: break if i %1==0: print("Candidate "+str(i)+"\t SNR: "+str(round(snrs[i,1],4)) + "\t Largest Time Delay: "+str(round(delay_from_DM(DM, fchan), 6))+' seconds'+"\t DM val:"+ str(DM)+"pc/cm^3") data = data_base return snrs # Functions to determine SNR and TOP candidates def SNR(arr): index = np.argmax(arr) average_noise = abs(arr.mean(axis=0)) return math.log(arr[index]/average_noise) def top(arr, top = 10): candidate = [] # Delete the first and second element fourier transform arr[0]=0 arr[1]=0 for i in range(top): # We add 1 as the 0th index = period of 1 not 0 index = np.argmax(arr) candidate.append(index+1) arr[index]=0 return candidate
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
UCBerkeleySETI/breakthrough
gpl-3.0
Dedispersion Trials The computer now checks multiple DM values, adjusts each frequency channel, and records the resulting SNR. We increment the trial DM by a tunable parameter sens. After the trials, we take the largest SNR produced by adjusting the time delays and use that data to perform the FFTs and record the folded profiles.
small_data = data_adjust[:,0,:] data_base = data_adjust[:,0,:] sens =0.05 DM_base = 6.4 candidates = 50 fchan = obs.header['fch1'] width = obs.header['foff'] tsamp = obs.header['tsamp'] fchan = fchan+ width*small_data.shape[1] snrs = DM_can(small_data, data_base, sens, DM_base, candidates, fchan, abs(width),tsamp) plt.plot(snrs[:,0], snrs[:,1]) plt.title('DM values vs SNR') plt.xlabel("DM values") plt.ylabel("SNR of Dedispersion") DM = snrs[np.argmax(snrs[:,1]),0] print(DM) fchan = fchan+ width*small_data.shape[1] data_adjust[:,0,:] = de_disperse(data_adjust[:,0,:], DM, fchan,abs(width),tsamp) fig = plt.figure(figsize=(10, 8)) plt.imshow(data_adjust[:,0,:], aspect='auto') plt.title('De-dispersed Data') plt.xlabel("Fchans") plt.ylabel("Time") plt.colorbar() plt.show()
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
UCBerkeleySETI/breakthrough
gpl-3.0
Detecting Pulses - Fourier Transforms and Folding Next, we apply the discrete Fourier transform to the data to detect periodic pulses. To do so, we look for the largest magnitudes of the Fourier transform, which indicate potential periods within the data. Then we check for consistency by folding the data by the period the Fourier transform indicates. The folding algorithm is simple: you take each period and fold the signals on top of one another. If the period you guessed matches the true period, then by superposition the SNR increases. This spike in signal-to-noise ratio appears in the following graph. The algorithm is given by the equation in the next section.
# Preforming the fourier transform. %matplotlib inline import scipy.fftpack from scipy.fft import fft N = 1000 T = 1.0 / 800.0 x = np.linspace(0.0, N*T, N) y = abs(data_adjust[:,0,:].mean(axis=1)) yf = fft(y) xf = np.linspace(0.0, 1.0/(2.0*T), N//2) # Magintude of the fourier transform # Between 0.00035 and 3.5 seconds mag = np.abs(yf[:60000]) candidates = top(mag, top=15) plt.plot(2.0/N * mag[1:]) plt.grid() plt.title('Fourier Transform of Signal') plt.xlabel("Periods") plt.ylabel("Magnitude of Fourier Transform") plt.show() print("Signal To Noise Ratio for the Fourier Transform is: "+str(SNR(mag))) print("Most likely Candidates are: "+str(candidates))
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
UCBerkeleySETI/breakthrough
gpl-3.0
Folding Algorithm The idea of the folding algorithm is to see if the signal forms a consistent profile as you fold/integrate the values together. If the profile appears consistent/stable then you're looking at an accurate reading of the pulsar's period. This confirms the implications drawn from the Fourier transform. This is profiling the pulsar: folding the pulses forms a "fingerprint" of the pulsar, and these folds are unique to the pulsar detected. $$s_j = \sum_{k=0}^{N/P-1} D_{j+kP}$$ We are summing over regular intervals of period P. This is implemented below.
# Lets take an example of such a period! # The 0th candidate is the top ranked candidate by the FFT period = 895 fold = np.zeros((period, data.shape[2])) multiples = int(data.data.shape[0]/period) results = np.zeros((period)) for i in range(multiples-1): fold[:,:]=data_adjust[i*period:(i+1)*period,0,:]+ fold results = fold.mean(axis=1) results = results - results.min() results = results / results.max() print(SNR(results)) plt.plot(results) plt.title('Folded Signal Profile With Period: '+str(round(period*0.000349,5))) plt.xlabel("Time (Multiples of 0.00035s)") plt.ylabel("Normalized Integrated Signal") # Lets take an example of such a period! # The 0th candidate is the top ranked candidate by the FFT can_snr =[] for k in range(len(candidates)): period = candidates[k] fold = np.zeros((period, data.shape[2])) multiples = int(data.data.shape[0]/period) results = np.zeros((period)) for i in range(multiples-1): fold[:,:]=data[i*period:(i+1)*period,0,:]+ fold results = fold.mean(axis=1) results = results - results.min() results = results / results.max() can_snr.append(SNR(results)) # print(SNR(results)) print("Max SNR of Fold Candidates: "+ str(max(can_snr))) # Generates multiple images saved to create a GIF from scipy import stats data = data period = candidates[0] fold = np.zeros((period, data.shape[2])) multiples = int(data.data.shape[0]/period) results = np.zeros((period)) for i in range(multiples-1): fold[:,:]=data[i*period:(i+1)*period,0,:]+ fold results = fold.mean(axis=1) results = results - results.min() results = results / results.max() # Generates multiple frames of the graph as it folds! plt.plot(results) plt.title('Folded Signal Period '+str(period*0.000349)+" seconds| Fold Iteration: "+str(i)) plt.xlabel("Time (Multiples of 0.00035s)") plt.ylabel("Normalized Integrated Signal") plt.savefig('/content/drive/My Drive/Deeplearning/Pulsars/output/candidates/CAN_3/multi_chan_'+str(period)+'_'+str(i)+'.png') plt.close() results = fold.mean(axis=1) results = results - results.min() results = results / results.max() print("The Signal To Noise of the Fold is: "+str(SNR(results))) plt.plot(results)
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
UCBerkeleySETI/breakthrough
gpl-3.0
What Happens If The Data Doesn't Contain Pulses? Below we will show that this algorithm detects pulses and excludes targets that do not have this feature. We will do so by loading a target that isn't known to be a pulsar: HIP65960 is a target that doesn't contain repeating signals. Below we apply the same algorithm to this non-pulsar target, without repeating the explanations.
!wget http://blpd13.ssl.berkeley.edu/dl/GBT_58402_66282_HIP65960_time.h5 from blimpy import Waterfall import pylab as plt import numpy as np import math from scipy import stats, interpolate %matplotlib inline obs = Waterfall('/content/GBT_58402_66282_HIP65960_time.h5', f_start=0,f_stop= 361408,max_load=5) obs.info() # Loads data into numpy array data = obs.data coarse_channel_width = np.int(np.round(187.5/64/abs(obs.header['foff']))) obs.plot_spectrum() average_power = np.zeros((data.shape[2])) shifted_power = np.zeros((int(data.shape[2]/8))) x=[] spl_order = 2 print("Fitting Spline") data_adjust = np.zeros(data.shape) average_power = data.mean(axis=0) # Note the value 8 is the COARSE CHANNEL WIDTH # We adjust each coarse channel to correct the bandpass artifacts for i in range(0, data.shape[2], coarse_channel_width): average_channel = average_power[0,i:i+coarse_channel_width] x = np.arange(0,coarse_channel_width,1) knots = np.arange(0, coarse_channel_width, coarse_channel_width//spl_order+1) tck = interpolate.splrep(x, average_channel, s=knots[1:]) xnew = np.arange(0, coarse_channel_width,1) ynew = interpolate.splev(xnew, tck, der=0) data_adjust[:,0,i:i+coarse_channel_width] = data[:,0,i:i+coarse_channel_width] - ynew from copy import deepcopy small_data = data[:,0,:] data_base = data[:,0,:] sens =0.05 DM_base = 6.4 candidates = 50 fchan = obs.header['fch1'] width = obs.header['foff'] tsamp = obs.header['tsamp'] # fchan = fchan+ width*small_data.shape[1] fchan = 7501.28173828125 snrs = DM_can(small_data, data_base, sens, DM_base, candidates, fchan, abs(width),tsamp) plt.plot(snrs[:,0], snrs[:,1]) plt.title('DM values vs SNR') plt.xlabel("DM values") plt.ylabel("SNR of Dedispersion") DM = snrs[np.argmax(snrs[:,1]),0] print(DM) fchan = fchan+ width*small_data.shape[1] data_adjust[:,0,:] = de_disperse(data_adjust[:,0,:], DM, fchan,abs(width),tsamp) # Preforming the fourier transform. %matplotlib inline import scipy.fftpack from scipy.fft import fft N = 60000 T = 1.0 / 800.0 x = np.linspace(0.0, N*T, N) y = data[:,0,:].mean(axis=1) yf = fft(y) xf = np.linspace(0.0, 1.0/(2.0*T), N//2) # Magintude of the fourier transform # Between 0.00035 and 3.5 seconds # We set this to a limit of 200 because # The total tchan is only 279 mag = np.abs(yf[:200]) candidates = top(mag, top=15) plt.plot(2.0/N * mag[1:]) plt.grid() plt.title('Fourier Transform of Signal') plt.xlabel("Periods") plt.ylabel("Magnitude of Fourier Transform") plt.show() print("Signal To Noise Ratio for the Fourier Transform is: "+str(SNR(mag))) print("Most likely Candidates are: "+str(candidates))
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
UCBerkeleySETI/breakthrough
gpl-3.0
NOTICE Notice how the signal-to-noise ratio is a lot smaller: it is two orders of magnitude (100x) smaller than the original pulsar fold. Typically, with an SNR of 1 a signal isn't considered a signal of interest, as it is most likely just noise.
# Lets take an example of such a period! # The 0th candidate is the top ranked candidate by the FFT can_snr =[] for k in range(len(candidates)): period = candidates[k] fold = np.zeros((period, data.shape[2])) multiples = int(data.data.shape[0]/period) results = np.zeros((period)) for i in range(multiples-1): fold[:,:]=data[i*period:(i+1)*period,0,:]+ fold results = fold.mean(axis=1) results = results - results.min() results = results / results.max() can_snr.append(SNR(results)) print("Max SNR of Fold Candidates: "+ str(max(can_snr)))
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
UCBerkeleySETI/breakthrough
gpl-3.0
Introduction This notebook is the French translation of the SymPy course available, among other places, on Wakari, with a few modifications and additions, notably for solving differential equations. Its goal is to let students of different levels experiment with mathematical notions by providing a code base they can modify. SymPy is a Python module that can be used in a Python program or in an IPython session. It provides powerful symbolic computation capabilities. To start using SymPy in a Python program or notebook, import the sympy module:
from sympy import *
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
To get $\LaTeX$-formatted mathematical output:
from sympy import init_printing init_printing(use_latex=True)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Symbolic variables In SymPy we need to create symbols for the variables we want to use. For this we use the Symbol class:
x = Symbol('x')

(pi + x)**2

# alternative way to define several symbols in a single statement
a, b, c = symbols("a, b, c")
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
We can add constraints on symbols when creating them:
x = Symbol('x', real=True) x.is_imaginary x = Symbol('x', positive=True) x > 0
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Complex numbers The imaginary unit is written I in SymPy.
1+1*I I**2 (1 + x * I)**2
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Rational numbers There are three different numeric types in SymPy: Real, Rational, and Integer:
r1 = Rational(4,5) r2 = Rational(5,4) r1 r1+r2 r1/r2
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Numerical evaluation SymPy allows arbitrary-precision numerical evaluation and provides expressions for some constants such as pi, E, and oo for infinity. To evaluate an expression numerically we use the evalf function (or N). It takes an argument n that specifies the number of significant digits.
pi.evalf(n=50)

E.evalf(n=4)

y = (x + pi)**2

N(y, 5)  # shorthand for evalf
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
When evaluating algebraic expressions we often want to substitute a numerical value for a symbol. In SymPy this is done with the subs function:
y.subs(x, 1.5) N(y.subs(x, 1.5))
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
The subs function can also substitute symbols and expressions:
y.subs(x, a+pi)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
We can also combine the evaluation of expressions with NumPy arrays (e.g. to plot a function):
import numpy x_vec = numpy.arange(0, 10, 0.1) y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec]) import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(x_vec, y_vec);
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Algebraic manipulations One of the main uses of a symbolic computation system is to perform algebraic manipulations of expressions. It is possible to expand a product or to factor an expression. The functions for these basic operations appear in the examples of the following sections. Expand and factor The first steps in algebraic manipulation
(x+1)*(x+2)*(x+3) expand((x+1)*(x+2)*(x+3))
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
The expand function takes keyword arguments to indicate the type of expansion to perform. For example, to expand a trigonometric expression we use the trig=True argument:
sin(a+b) expand(sin(a+b), trig=True) sin(a+b)**3 expand(sin(a+b)**3, trig=True)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Run help(expand) for a detailed explanation of the different types of expansion available. The opposite operation to expansion is of course factorization, which is done with the factor function:
factor(x**3 + 6 * x**2 + 11*x + 6) x1, x2 = symbols("x1, x2") factor(x1**2*x2 + 3*x1*x2 + x1*x2**2)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Simplify The simplify function tries to simplify an expression into a nicer-looking one, using various techniques. More specific alternatives to simplify also exist: trigsimp, powsimp, logcombine, etc. The basic usage of these functions is as follows:
# simplify expands a product
simplify((x+1)*(x+2)*(x+3))

# simplify uses trigonometric identities
simplify(sin(a)**2 + cos(a)**2)

simplify(cos(x)/sin(x))
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
simplify can also be used to test the equality of expressions:
exp1 = sin(a+b)**3
exp2 = sin(a)**3*cos(b)**3 + 3*sin(a)**2*sin(b)*cos(a)*cos(b)**2 + 3*sin(a)*sin(b)**2*cos(a)**2*cos(b) + sin(b)**3*cos(a)**3

simplify(exp1 - exp2)

if simplify(exp1 - exp2) == 0:
    print "{0} = {1}".format(exp1, exp2)
else:
    print "exp1 and exp2 are different"
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
apart and together To manipulate fractional expressions we have the apart and together functions:
f1 = 1/((a+1)*(a+2)) f1 apart(f1) f2 = 1/(a+2) + 1/(a+3) f2 together(f2)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
simplify combines the fractions but does not factor:
simplify(f2)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Calculus Besides algebraic manipulations, the other major use of a symbolic computation system is to perform calculus, such as derivatives and integrals of algebraic expressions. Differentiation Differentiation is usually simple. We use the diff function with the expression to differentiate as the first argument and the symbol of the variable to differentiate with respect to as the second:
y
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
First derivative
diff(y**2, x)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
For higher-order derivatives:
diff(y**2, x, x)  # second derivative

diff(y**2, x, 2)  # second derivative with an alternative syntax
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
To compute the derivative of an expression of several variables:
x, y, z = symbols("x,y,z") f = sin(x*y) + cos(y*z)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
$\frac{d^3f}{dxdy^2}$
diff(f, x, 1, y, 2)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Integration Integration is done in a similar way:
f integrate(f, x)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
By providing limits for the integration variable we can evaluate definite integrals:
integrate(f, (x, -1, 1))
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
and also improper integrals for which no antiderivative is known
x_i = numpy.arange(-5, 5, 0.1) y_i = numpy.array([N((exp(-x**2)).subs(x, xx)) for xx in x_i]) fig2, ax2 = plt.subplots() ax2.plot(x_i, y_i) ax2.set_title("$e^{-x^2}$") integrate(exp(-x**2), (x, -oo, oo))
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
As a reminder, oo is the SymPy notation for infinity. Sums and products We can evaluate sums and products of expressions with the Sum and Product functions:
n = Symbol("n")

Sum(1/n**2, (n, 1, 10))

Sum(1/n**2, (n, 1, 10)).evalf()

Sum(1/n**2, (n, 1, oo)).evalf()

N(pi**2/6)  # Riemann zeta(2)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Products are computed in a very similar way:
Product(n, (n, 1, 10)) # 10!
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Limits Limits are evaluated with the limit function. For example:
limit(sin(x)/x, x, 0)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
We can change the direction from which the limit point is approached with the dir keyword argument:
limit(1/x, x, 0, dir="+") limit(1/x, x, 0, dir="-")
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
Series Series expansion is another very useful feature of a symbolic computation system. In SymPy, series expansions are obtained with the series function:
series(exp(x), x)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
By default the expansion is done around $x=0$, but we can expand the series around any other value of $x$ by explicitly including that value in the function call:
series(exp(x), x, 1)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
And we can explicitly define to which order the expansion should be carried out:
series(exp(x), x, 1, 10)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
The series expansion includes the order of approximation. This makes it possible to manage the order of the result of computations that use series expansions of different orders:
s1 = cos(x).series(x, 0, 5) s1 s2 = sin(x).series(x, 0, 2) s2 expand(s1 * s2)
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0
If we do not want to display the order term we use the removeO method:
expand(s1.removeO() * s2.removeO())
Calcul symbolique.ipynb
regisDe/compagnons
gpl-2.0