markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Electric car

[Olin Electric Motorsports](https://www.olinelectricmotorsports.com/) is a club at Olin College that designs and builds electric cars, and participates in the [Formula SAE Electric](https://www.sae.org/attend/student-events/formula-sae-electric) competition.

The goal of this case study is to use simulation to guide the design of a car intended to accelerate from standing to 100 kph as quickly as possible. The [world record for this event](https://www.youtube.com/watch?annotation_id=annotation_2297602723&feature=iv&src_vid=I-NCH8ct24U&v=n2XiCYA3C9s), using a car that meets the competition requirements, is 1.513 seconds.

We'll start with a simple model that takes into account the characteristics of the motor and vehicle:

* The motor is an [Emrax 228 high voltage axial flux synchronous permanent magnet motor](http://emrax.com/products/emrax-228/); according to the [data sheet](http://emrax.com/wp-content/uploads/2017/01/emrax_228_technical_data_4.5.pdf), its maximum torque is 240 Nm at 0 rpm. But maximum torque decreases with motor speed; at 5000 rpm, maximum torque is 216 Nm.
* The motor is connected to the drive axle with a chain drive with speed ratio 13:60, or about 1:4.6; that is, the axle rotates once for every 4.6 rotations of the motor.
* The radius of the tires is 0.26 meters.
* The weight of the vehicle, including driver, is 300 kg.

To start, we will assume no slipping between the tires and the road surface, no air resistance, and no rolling resistance. Then we will relax these assumptions one at a time:

* First we'll add drag, assuming that the frontal area of the vehicle is 0.6 square meters, with coefficient of drag 0.6.
* Next we'll add rolling resistance, assuming a coefficient of 0.2.
* Finally we'll compute the peak acceleration to see if the "no slip" assumption is credible.

We'll use this model to estimate the potential benefit of possible design improvements, including decreasing drag and rolling resistance, or increasing the speed ratio.

I'll start by loading the units we need. | radian = UNITS.radian
m = UNITS.meter
s = UNITS.second
minute = UNITS.minute
hour = UNITS.hour
km = UNITS.kilometer
kg = UNITS.kilogram
N = UNITS.newton
rpm = UNITS.rpm | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
And store the parameters in a `Params` object. | params = Params(r_wheel=0.26 * m,
speed_ratio=13/60,
C_rr=0.2,
C_d=0.5,
area=0.6*m**2,
rho=1.2*kg/m**3,
mass=300*kg) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
`make_system` creates the initial state, `init`, and constructs an `interp1d` object that represents torque as a function of motor speed. | def make_system(params):
"""Make a system object.
params: Params object
returns: System object
"""
init = State(x=0*m, v=0*m/s)
rpms = [0, 2000, 5000]
torques = [240, 240, 216]
interpolate_torque = interpolate(Series(torques, rpms))
return System(params, init=init,
interpolate_torque=interpolate_torque,
t_end=3*s) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Testing `make_system` | system = make_system(params)
system.init | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
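The torque lookup above relies on ModSim's `interpolate` wrapper around SciPy's `interp1d`. As a rough illustration of what that object does, here is a hedged, hand-rolled linear interpolation over the same three data-sheet points; the helper name and the 3500 rpm query are illustrative, not part of the notebook.

```python
# (rpm, Nm) points from the Emrax 228 data sheet, as used in make_system
points = [(0, 240.0), (2000, 240.0), (5000, 216.0)]

def interpolate_torque(rpm):
    """Piecewise-linear interpolation between tabulated torque points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= rpm <= x1:
            return y0 + (y1 - y0) * (rpm - x0) / (x1 - x0)
    raise ValueError("rpm outside tabulated range")

print(interpolate_torque(0))     # 240.0
print(interpolate_torque(5000))  # 216.0
print(interpolate_torque(3500))  # 228.0, halfway down the linear drop-off
```

This reproduces the two endpoint checks in the next cells (240 Nm at 0 rpm, 216 Nm at 5000 rpm).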
Torque and speed

The relationship between torque and motor speed is taken from the [Emrax 228 data sheet](http://emrax.com/wp-content/uploads/2017/01/emrax_228_technical_data_4.5.pdf). The following functions reproduce the red dotted line that represents peak torque, which can only be sustained for a few seconds before the motor overheats. | def compute_torque(omega, system):
"""Maximum peak torque as a function of motor speed.
omega: motor speed in radian/s
system: System object
returns: torque in Nm
"""
factor = (1 * radian / s).to(rpm)
x = magnitude(omega * factor)
return system.interpolate_torque(x) * N * m
compute_torque(0*radian/s, system)
omega = (5000 * rpm).to(radian/s)
compute_torque(omega, system) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Plot the whole curve. | xs = linspace(0, 525, 21) * radian / s
taus = [compute_torque(x, system) for x in xs]
plot(xs, taus)
decorate(xlabel='Motor speed (radian/s)',
ylabel='Available torque (N m)') | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Simulation

Here's the slope function that computes the maximum possible acceleration of the car as a function of its current speed. | def slope_func(state, t, system):
"""Computes the derivatives of the state variables.
state: State object
t: time
system: System object
returns: sequence of derivatives
"""
x, v = state
r_wheel, speed_ratio = system.r_wheel, system.speed_ratio
mass = system.mass
# use velocity, v, to compute angular velocity of the wheel
omega2 = v / r_wheel
# use the speed ratio to compute motor speed
omega1 = omega2 / speed_ratio
# look up motor speed to get maximum torque at the motor
tau1 = compute_torque(omega1, system)
# compute the corresponding torque at the axle
tau2 = tau1 / speed_ratio
# compute the force of the wheel on the ground
F = tau2 / r_wheel
# compute acceleration
a = F/mass
return v, a | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Testing `slope_func` at linear velocity 10 m/s. | test_state = State(x=0*m, v=10*m/s)
slope_func(test_state, 0*s, system) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Now we can run the simulation. | results, details = run_ode_solver(system, slope_func)
details | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
And look at the results. | results.tail() | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
After 3 seconds, the vehicle could be at 40 meters per second, in theory, which is 144 kph. | v_final = get_last_value(results.v)
v_final.to(km/hour) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Plotting `x` | def plot_position(results):
plot(results.x, label='x')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Plotting `v` | def plot_velocity(results):
plot(results.v, label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Stopping at 100 kph

We'll use an event function to stop the simulation when we reach 100 kph. | def event_func(state, t, system):
"""Stops when we get to 100 km/hour.
state: State object
t: time
system: System object
returns: difference from 100 km/hour
"""
x, v = state
# convert to km/hour
factor = (1 * m/s).to(km/hour)
v = magnitude(v * factor)
return v - 100
results, details = run_ode_solver(system, slope_func, events=event_func)
details | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Here's what the results look like. | subplot(2, 1, 1)
plot_position(results)
subplot(2, 1, 2)
plot_velocity(results)
savefig('figs/chap11-fig02.pdf') | Saving figure to file figs/chap11-fig02.pdf
| MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
According to this model, we should be able to make this run in just over 2 seconds. | t_final = get_last_label(results) * s | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
At the end of the run, the car has gone about 28 meters. | state = results.last_row() | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
If we send the final state back to the slope function, we can see that the final acceleration is about 13 $m/s^2$, which is about 1.3 times the acceleration of gravity. | v, a = slope_func(state, 0, system)
v.to(km/hour)
a
g = 9.8 * m/s**2
(a / g).to(UNITS.dimensionless) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
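As a cross-check on these results, here is a self-contained sketch (no ModSim, no units machinery) that integrates the same model with a simple forward-Euler step; the step size and helper names are assumptions, and the torque curve is the same piecewise-linear one built in `make_system`. Passing a constant-torque function also answers the "no drop-off" question posed in the exercise below.

```python
# Parameters copied from the case study
R_WHEEL = 0.26          # m
SPEED_RATIO = 13 / 60   # wheel revolutions per motor revolution
MASS = 300.0            # kg
RAD_S_TO_RPM = 60 / (2 * 3.141592653589793)

def max_torque(rpm):
    """Piecewise-linear peak torque (Nm) from the data-sheet points."""
    if rpm <= 2000:
        return 240.0
    # linear drop from 240 Nm at 2000 rpm to 216 Nm at 5000 rpm
    return 240.0 - 24.0 * (rpm - 2000.0) / 3000.0

def time_to_100kph(torque_fn, dt=1e-4):
    """Forward-Euler integration of v' = F/m until v reaches 100 kph."""
    v, t = 0.0, 0.0
    target = 100 / 3.6  # 100 kph in m/s
    while v < target:
        motor_rpm = (v / R_WHEEL / SPEED_RATIO) * RAD_S_TO_RPM
        force = torque_fn(motor_rpm) / SPEED_RATIO / R_WHEEL
        v += force / MASS * dt
        t += dt
    return t

print(time_to_100kph(max_torque))         # ~2.0 s, matching the simulation
print(time_to_100kph(lambda rpm: 240.0))  # ~1.96 s with no torque drop-off
```

So the torque drop-off costs only a few hundredths of a second in this simplified sketch.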
It's not easy for a vehicle to accelerate faster than `g`, because that implies a coefficient of friction between the wheels and the road surface that's greater than 1. But racing tires on dry asphalt can do that; the OEM team at Olin has tested their tires and found a peak coefficient near 1.5.

So it's possible that our no-slip assumption is valid, but only under ideal conditions, where weight is distributed equally on four tires, and all tires are driving.

**Exercise:** How much time do we lose because maximum torque decreases as motor speed increases? Run the model again with no drop-off in torque and see how much time it saves.

Drag

In this section we'll see how much effect drag has on the results. Here's a function to compute drag force, as we saw in Chapter 21. | def drag_force(v, system):
"""Computes drag force in the opposite direction of `v`.
v: velocity
system: System object
returns: drag force
"""
rho, C_d, area = system.rho, system.C_d, system.area
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
We can test it with a velocity of 20 m/s. | drag_force(20 * m/s, system) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Here's the resulting acceleration of the vehicle due to drag. | drag_force(20 * m/s, system) / system.mass | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
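The numbers above can be checked with plain floats, using the values stored in the `Params` object (note that `C_d` in the code is 0.5):

```python
rho, C_d, area, mass = 1.2, 0.5, 0.6, 300.0  # values from the Params object
v = 20.0                                     # m/s
f_drag = -rho * v**2 * C_d * area / 2
print(f_drag)         # -72.0 N
print(f_drag / mass)  # -0.24 m/s^2
```

At 20 m/s, drag decelerates the car by about 0.24 m/s², small next to the ~14 m/s² from the motor but not negligible.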
We can see that the effect of drag is not huge, compared to the acceleration we computed in the previous section, but it is not negligible.

Here's a modified slope function that takes drag into account. | def slope_func2(state, t, system):
"""Computes the derivatives of the state variables.
state: State object
t: time
system: System object
returns: sequence of derivatives
"""
x, v = state
r_wheel, speed_ratio = system.r_wheel, system.speed_ratio
mass = system.mass
omega2 = v / r_wheel * radian
omega1 = omega2 / speed_ratio
tau1 = compute_torque(omega1, system)
tau2 = tau1 / speed_ratio
F = tau2 / r_wheel
a_motor = F / mass
a_drag = drag_force(v, system) / mass
a = a_motor + a_drag
return v, a | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
And here's the next run. | results2, details = run_ode_solver(system, slope_func2, events=event_func)
details | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
The time to reach 100 kph is a bit higher. | t_final2 = get_last_label(results2) * s | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
But the total effect of drag is only about 2/100 seconds. | t_final2 - t_final | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
That's not huge, which suggests we might not be able to save much time by decreasing the frontal area, or coefficient of drag, of the car.

Rolling resistance

Next we'll consider [rolling resistance](https://en.wikipedia.org/wiki/Rolling_resistance), which is the force that resists the motion of the car as it rolls on tires. The coefficient of rolling resistance, `C_rr`, is the ratio of rolling resistance to the normal force between the car and the ground (in that way it is similar to a coefficient of friction).

The following function computes rolling resistance. | system.set(unit_rr = 1 * N / kg)
def rolling_resistance(system):
"""Computes force due to rolling resistance.
system: System object
returns: force
"""
return -system.C_rr * system.mass * system.unit_rr | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
The acceleration due to rolling resistance is 0.2 (it is not a coincidence that it equals `C_rr`). | rolling_resistance(system)
rolling_resistance(system) / system.mass | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
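Why the deceleration comes out numerically equal to `C_rr` is easy to see with plain numbers: the mass cancels, leaving `-C_rr * unit_rr`, and `unit_rr` is 1 N/kg by construction.

```python
C_rr, mass, unit_rr = 0.2, 300.0, 1.0  # unit_rr plays the role of 1 N/kg
force = -C_rr * mass * unit_rr         # -60.0 N
accel = force / mass                   # mass cancels: -C_rr * unit_rr = -0.2 m/s^2
print(force, accel)
```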
Here's a modified slope function that includes drag and rolling resistance. | def slope_func3(state, t, system):
"""Computes the derivatives of the state variables.
state: State object
t: time
system: System object
returns: sequence of derivatives
"""
x, v = state
r_wheel, speed_ratio = system.r_wheel, system.speed_ratio
mass = system.mass
omega2 = v / r_wheel * radian
omega1 = omega2 / speed_ratio
tau1 = compute_torque(omega1, system)
tau2 = tau1 / speed_ratio
F = tau2 / r_wheel
a_motor = F / mass
a_drag = drag_force(v, system) / mass
a_roll = rolling_resistance(system) / mass
a = a_motor + a_drag + a_roll
return v, a | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
And here's the run. | results3, details = run_ode_solver(system, slope_func3, events=event_func)
details | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
The final time is a little higher, but the total cost of rolling resistance is only 3/100 seconds. | t_final3 = get_last_label(results3) * s
t_final3 - t_final2 | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
So, again, there is probably not much to be gained by decreasing rolling resistance. In fact, it is hard to decrease rolling resistance without also decreasing traction, so that might not help at all.

Optimal gear ratio

The gear ratio 13:60 is intended to maximize the acceleration of the car without causing the tires to slip. In this section, we'll consider other gear ratios and estimate their effects on acceleration and time to reach 100 kph.

Here's a function that takes a speed ratio as a parameter and returns time to reach 100 kph. | def time_to_speed(speed_ratio, params):
"""Computes times to reach 100 kph.
speed_ratio: ratio of wheel speed to motor speed
params: Params object
returns: time to reach 100 kph, in seconds
"""
params = Params(params, speed_ratio=speed_ratio)
system = make_system(params)
system.set(unit_rr = 1 * N / kg)
results, details = run_ode_solver(system, slope_func3, events=event_func)
t_final = get_last_label(results)
return t_final | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
We can test it with the default ratio: | time_to_speed(13/60, params) | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Now we can try it with different numbers of teeth on the motor gear (assuming that the axle gear has 60 teeth): | for teeth in linrange(8, 18):
print(teeth, time_to_speed(teeth/60, params)) | 8 1.3230554808694261
9 1.4683740716590767
10 1.6154033363003908
11 1.763893473709603
12 1.913673186217739
13 2.0646544476416953
14 2.216761311453768
15 2.369962929121199
16 2.5242340753735495
17 2.6795453467447845
| MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Wow! The speed ratio has a big effect on the results. At first glance, it looks like we could break the world record (1.513 seconds) just by decreasing the number of teeth.But before we try it, let's see what effect that has on peak acceleration. | def initial_acceleration(speed_ratio, params):
"""Maximum acceleration as a function of speed ratio.
speed_ratio: ratio of wheel speed to motor speed
params: Params object
returns: peak acceleration, in m/s^2
"""
params = Params(params, speed_ratio=speed_ratio)
system = make_system(params)
a_initial = slope_func(system.init, 0, system)[1] * m/s**2
return a_initial | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
Here are the results: | for teeth in linrange(8, 18):
print(teeth, initial_acceleration(teeth/60, params)) | 8 23.076923076923077 meter * newton / kilogram / second ** 2
9 20.51282051282051 meter * newton / kilogram / second ** 2
10 18.46153846153846 meter * newton / kilogram / second ** 2
11 16.783216783216787 meter * newton / kilogram / second ** 2
12 15.384615384615385 meter * newton / kilogram / second ** 2
13 14.201183431952662 meter * newton / kilogram / second ** 2
14 13.186813186813184 meter * newton / kilogram / second ** 2
15 12.307692307692308 meter * newton / kilogram / second ** 2
16 11.538461538461538 meter * newton / kilogram / second ** 2
17 10.85972850678733 meter * newton / kilogram / second ** 2
| MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
As we decrease the speed ratio, the peak acceleration increases. With 8 teeth on the motor gear, we could break the world record, but only if we can accelerate at 2.3 times the acceleration of gravity, which is impossible without very sticky tires and a vehicle that generates a lot of downforce. | 23.07 / 9.8 | _____no_output_____ | MIT | soln/oem_soln.ipynb | pmalo46/ModSimPy |
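Putting the two sweeps together, a hedged back-of-the-envelope sketch (friction coefficient 1.5 from the team's tire tests; the function names here are illustrative) picks out the smallest motor gear whose launch acceleration stays within the traction limit:

```python
MU, G = 1.5, 9.8                       # peak tire friction, gravity
TAU_MAX, R_WHEEL, MASS = 240.0, 0.26, 300.0

def initial_accel(teeth, axle_teeth=60):
    """Launch acceleration (m/s^2) at full torque, assuming no slip."""
    speed_ratio = teeth / axle_teeth
    return TAU_MAX / speed_ratio / R_WHEEL / MASS

# smallest motor gear the tires can handle at launch
best_teeth = next(t for t in range(8, 18) if initial_accel(t) <= MU * G)
print(best_teeth, initial_accel(best_teeth))  # 13, ~14.2 m/s^2
```

Under these assumptions, 13 teeth is the smallest gear that keeps launch acceleration below the traction limit, which is consistent with the 13:60 ratio the team actually uses.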
Python Solution for Hackerrank By Viraj Shetty Hello World | print("Hello, World!") | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Python If-Else | if __name__ == '__main__':
n = int(input().strip())
if(n%2==1):
print("Weird")
if(n%2==0):
if (n in range(2,5)):
print("Not Weird")
if (n in range(6,21)):
print("Weird")
if (n>20):
print("Not Weird") | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Print Function | if __name__ == '__main__':
n = int(input())
x = ""
for i in range (1,n+1):
x += str(i)
print(x) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Leap Year Function | def is_leap(year):
leap = False
if year % 4 == 0 and year % 100 != 0:
leap = True
elif year % 400 ==0:
leap = True
elif year % 100 == 0:
leap = False
else:
leap = False
return leap
year = int(input())
print(is_leap(year)) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
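The Gregorian rules above can be spot-checked against a few well-known years; the compact one-line form below is an equivalent reformulation, not part of the original solution.

```python
def is_leap(year):
    # equivalent one-line form of the rules: divisible by 4,
    # except centuries, except centuries divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for year, expected in [(1990, False), (2000, True), (1900, False), (2016, True)]:
    print(year, is_leap(year) == expected)  # all True
```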
String Validators | if __name__ == '__main__':
s = input()
print(any(c.isalnum() for c in s))
print(any(c.isalpha() for c in s))
print(any(c.isdigit() for c in s))
print(any(c.islower() for c in s))
print(any(c.isupper() for c in s)) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Runner Up | if __name__ == '__main__':
n = int(input())
arr = map(int, input().split())
def dup(dupl):
fl = []
for num in dupl:
if num not in fl:
fl.append(num)
return fl
arr1 = dup(arr)
arr1.sort()
print(arr1[-2]) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
What’s your Name | def print_full_name(a, b):
print("Hello "+a+" "+b+"! You just delved into python." )
if __name__ == '__main__':
first_name = input()
last_name = input()
print_full_name(first_name, last_name) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
String Split and Join | def split_and_join(line):
line = line.split(" ")
line = "-".join(line)
return line
if __name__ == '__main__':
line = input()
result = split_and_join(line)
print(result) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Project Euler 173 | import math
count = 0
n = int(input())
for i in range(2,int(math.sqrt(n)),2):
b = int(((n/i) - i)/2)
if b > 0:
count+=b
print(count) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
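The closed-form loop above counts square laminae by iterating over the (even) difference between outer and inner side lengths. As a hedged cross-check for small limits, here is a direct brute-force enumeration; Project Euler 173's statement gives exactly 41 laminae using at most 100 tiles.

```python
def count_laminae(n):
    """Count square laminae (outer side a, inner hole b > 0, same parity)
    using at most n tiles; tiles used = a*a - b*b."""
    count = 0
    a = 3
    while a * a - (a - 2) * (a - 2) <= n:  # thinnest lamina for this a fits
        b = a - 2
        while b > 0 and a * a - b * b <= n:
            count += 1
            b -= 2
        a += 1
    return count

print(count_laminae(100))  # 41, the value quoted in the problem statement
```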
List Comprehension | x, y, z, n = (int(input()) for _ in range(4))
print ([[a,b,c] for a in range(0,x+1) for b in range(0,y+1) for c in range(0,z+1) if a + b + c != n ]) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Lists | n_of_commands = int(input())
list_of_commands = []
for command in range(n_of_commands):
x = input()
list_of_commands.append(x)
list_elements = []
for command in list_of_commands:
if command == "print":
print(list_elements)
elif command[:3]=="rem":
x = command.split()
remove_elem = int(x[1])
list_elements.remove(remove_elem)
elif command[:3]=="rev":
list_elements.reverse()
elif command == "pop":
list_elements.pop()
elif command[:3]=="app":
x = command.split()
append_elem = int(x[1])
list_elements.append(append_elem)
elif command == "sort":
list_elements.sort()
elif command[:3]=="ins":
x = command.split()
index = int(x[1])
insert_elem = int(x[2])
list_elements.insert(index,insert_elem)
else:
break | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Solve Me First! | def solveMeFirst(a,b):
m = a+b
return m
num1 = int(input())
num2 = int(input())
sum = solveMeFirst(num1,num2)
print(sum) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Simple Array Sum | import os
import sys
def simpleArraySum(ar):
Sum = sum(ar)
return Sum
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
ar_count = int(input())
ar = list(map(int, input().rstrip().split()))
result = simpleArraySum(ar)
fptr.write(str(result) + '\n')
fptr.close() | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Compare The Triplets | import math
import os
import random
import re
import sys
def compareTriplets(a, b):
counta = 0
countb = 0
for i in range (0,3):
if(a[i]>b[i]):
counta += 1
if(a[i]<b[i]):
countb += 1
return counta,countb
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
a = list(map(int, input().rstrip().split()))
b = list(map(int, input().rstrip().split()))
result = compareTriplets(a, b)
fptr.write(' '.join(map(str, result)))
fptr.write('\n')
fptr.close() | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
A Very Big Sum | import math
import os
import random
import re
import sys
#function is same since python deals with
def aVeryBigSum(ar):
Sum = sum(ar)
return Sum
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
ar_count = int(input())
ar = list(map(int, input().rstrip().split()))
result = aVeryBigSum(ar)
fptr.write(str(result) + '\n')
fptr.close() | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Find the Point (Maths Based Problems) | import os
def findPoint(px, py, qx, qy):
rx = (qx-px) + qx
ry = (qy-py) + qy
return (rx,ry)
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
n = int(input())
for n_itr in range(n):
pxPyQxQy = input().split()
px = int(pxPyQxQy[0])
py = int(pxPyQxQy[1])
qx = int(pxPyQxQy[2])
qy = int(pxPyQxQy[3])
result = findPoint(px, py, qx, qy)
fptr.write(' '.join(map(str, result)))
fptr.write('\n')
fptr.close() | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Power of A to B and mod C | a = int(input())
b = int(input())
m = int(input())
# built-in pow handles arbitrarily large integers exactly,
# unlike math.pow, which returns an imprecise float
print(pow(a, b))
print(pow(a, b, m)) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Map and Lambda | cube = lambda x: x**3
a = []
def fibonacci(n):
first = 0
second = 1
for i in range(n):
a.append(first)
t = first + second
first = second
second = t
return a
if __name__ == '__main__':
n = int(input())
print(list(map(cube, fibonacci(n)))) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Company Logo | from collections import Counter
for letter, counts in sorted(Counter(input()).most_common(), key=lambda x: (-x[1], x[0]))[:3]:
print(letter, counts) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Merge the Tools! | def merge_the_tools(string,k):
num_subsegments = int(len(string)/k)
for index in range(num_subsegments):
t = string[index * k : (index + 1) * k]
u = ""
for c in t:
if c not in u:
u += c
print(u)
if __name__ == '__main__':
string, k = input(), int(input())
merge_the_tools(string, k) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Check Strict Superset | main_set = set(map(int,input().split()))
n = int(input())
output = []
for i in range(n):
x = set(map(int,input().split()))
# strict superset: main_set must contain x and be strictly larger
output.append(main_set > x)
print(all(output)) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Check Subset | n = int(input())
for i in range(n):
alen = int(input())
A = set(map(int,input().split()))
blen = int(input())
B = set(map(int,input().split()))
# subset test, not intersection: every element of A must be in B
print(A.issubset(B)) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Formatted Sorting | l = []
u = []
o = []
e = []
s = input()
all_list = list(s)
for i in all_list:
if i.islower():
l.append(i)
if i.isupper():
u.append(i)
if i.isnumeric():
if (int(i)%2==0):
e.append(i)
else:
o.append(i)
lower = sorted(l)
upper = sorted(u)
odd = sorted(o)
even = sorted(e)
tempr = lower+upper
tempr1 = tempr + odd
last = tempr1 + even
s = "".join(last)
print(s) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Exceptions | import re
n = int(input())
for i in range(n):
x = input()
try:
if re.compile(x):
value = True
except:
value = False
print(value) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Iterables and Iterators | from itertools import combinations
N = int(input())
S = input().split(' ')
K = int(input())
num = 0
den = 0
for c in combinations(S, K):
den += 1
num += 'a' in c
print(num / den) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
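The same probability has a closed form, which makes a handy cross-check: the chance that a size-K combination contains at least one `'a'` is one minus the chance of choosing K letters entirely from the non-`'a'` ones. The sample values below ('a a c d', K=2 → 0.8333) are the ones given in the problem statement.

```python
from math import comb

def prob_contains_a(letters, k):
    """P(at least one 'a' in a random k-combination) = 1 - C(n-a, k)/C(n, k)."""
    n = len(letters)
    a = letters.count('a')
    return 1 - comb(n - a, k) / comb(n, k)

print(prob_contains_a(list("aacd"), 2))  # 5/6 ~ 0.8333, the sample answer
```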
Day of Any MM/DD/YYYY | import calendar as c
d = list(map(int,input().split()))
ans = c.weekday(d[2],d[0],d[1])
days = ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY", "SATURDAY", "SUNDAY"]
print(days[ans]) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
No idea! | n, m = map(int, input().split())
arr = input().split()
A = set(input().split())
B = set(input().split())
# +1 happiness for each element of arr in A, -1 for each in B
print(sum((i in A) - (i in B) for i in arr)) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Collections.Counter() | n = int(input())
arr = list(map(int, input().split()))
l = int(input())
x=0
for i in range(l):
size,price = map(int,input().split())
if (size in arr):
x += price
arr.remove(size)
print(x) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
sWAP cASE | def swap_case(s):
for i in s:
if (i.islower()):
a.append(i.upper())
elif(i.isupper()):
a.append(i.lower())
else:
a.append(i)
b = ''.join(a)
return b
a = []
if __name__ == '__main__':
s = input()
result = swap_case(s)
print(result) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Set discard and pop | n = int(input())
list_of_int = list(map(int,input().split()))
n_of_commands = int(input())
list_of_commands = []
for command in range(n_of_commands):
x = input()
list_of_commands.append(x)
set1 = set(list_of_int)
for command in list_of_commands:
parts = command.split()
if parts[0] == "pop":
set1.pop()
elif parts[0] == "discard":
set1.discard(int(parts[1]))
else:
set1.remove(int(parts[1]))
print(sum(set1)) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Find a String | def count_substring(string, sub_string):
c=0
for i in range(len(string)):
if string[i:].startswith(sub_string):
c +=1
return c
if __name__ == '__main__':
string = input().strip()
sub_string = input().strip()
count = count_substring(string, sub_string)
print(count) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Introduction to Sets | def average(arr):
for i in arr:
if i not in a:
a.append(i)
x = float(sum(a)/len(a))
return x
a = []
if __name__ == '__main__':
n = int(input())
arr = list(map(int, input().split()))
result = average(arr)
print(result) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Set .symmetric_difference : Symmetric Difference can be changed to difference, union and intersection | n = int(input())
e = list(map(int,input().split()))
m = int(input())
f = list(map(int,input().split()))
a = set(e)
b = set(f)
c = 0
res = a.symmetric_difference(b)
for i in res:
c += 1
print(c) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Div-mod | a = int(input())
b = int(input())
print(a//b)
print(a%b)
print(divmod(a,b)) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Symmetric Difference | n = int(input())
list1 = list(map(int,input().split()))
n1 = int(input())
list2 = list(map(int,input().split()))
[print(i) for i in sorted(set(list1).difference(set(list2)).union(set(list2).difference(set(list1))))] | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Collections.deque | from collections import deque
n = int(input())
d = deque()
list_of_commands = []
for i in range(n):
x = input()
list_of_commands.append(x)
for command in list_of_commands:
parts = command.split()
if parts[0] == "append":
d.append(int(parts[1]))
elif parts[0] == "appendleft":
d.appendleft(int(parts[1]))
elif parts[0] == "pop":
d.pop()
else:
d.popleft()
print(*d) | _____no_output_____ | MIT | Python_Hackerrank.ipynb | VirajVShetty/Python-Hackerrank |
Stock walkThis notebook shows how a Python class can inherit from an interface of an extension module (that is, a class in C++). | import xtensor_monte_carlo as xmc
import numpy as np
from bqplot import (LinearScale, Lines, Axis, Figure)
# Definition of a constant diffusion model
class ConstantDiffusionModel(xmc.diffusion_model):
def __init__(self, drift, vol):
xmc.diffusion_model.__init__(self)
self.drift = drift
self.volatility = vol
def get_drift(self, time, spot, drift):
drift.fill(self.drift)
def get_volatility(self, time, spot, vol):
vol.fill(self.volatility)
drift = 0.0016
vol = 0.0888
maturity = 1.
model = ConstantDiffusionModel(drift, vol)
engine = xmc.mc_engine(model)
engine.run_simulation(1., maturity, 10)
res = engine.get_path()
time = np.arange(0, int(maturity * 365) + 1)
sc_x = LinearScale(max=365)
sc_y = LinearScale()
ax_x = Axis(scale=sc_x, label='time')
ax_y = Axis(scale=sc_y, orientation='vertical', label='price')
lines = [Lines(x=time, y=res[i], scales={'x': sc_x, 'y': sc_y}) for i in range(0, res.shape[0])]
figure = Figure(marks=lines, axes=[ax_x, ax_y], title='Stock walk')
figure | _____no_output_____ | BSD-3-Clause | monte_carlo/notebooks/stock_walk.ipynb | oscar6echo/xtensor-finance |
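If the compiled `xtensor_monte_carlo` module isn't available, the same constant-coefficient walk can be sketched in pure Python with an Euler–Maruyama discretization; all names, the seed, and the daily step count here are illustrative assumptions, not part of the extension module's API.

```python
import random

def simulate_path(s0, drift, vol, maturity_years, steps_per_year=365, rng=None):
    """Euler-Maruyama discretization of dS = S * (mu dt + sigma dW)
    with constant drift and volatility."""
    rng = rng or random.Random(0)
    dt = 1.0 / steps_per_year
    n = int(maturity_years * steps_per_year)
    path = [s0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * (1.0 + drift * dt + vol * dt**0.5 * z))
    return path

path = simulate_path(1.0, 0.0016, 0.0888, 1.0)
print(len(path))  # 366 points: the start plus one step per day
```

With `vol=0` the path reduces to deterministic compounding, which gives a quick sanity check on the scheme.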
Before running: `pip install geojsoncontour` | import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import geojsoncontour
# levels to draw contour lines at
levels = [-70, -60, -50, -40, -30, -20, 10, 20, 30, 40] | _____no_output_____ | MIT | archive/NASA_data/archive/geojson_for_tableau.ipynb | ACE-P/ev_temp_map |
The colors of the map don't really matter because Tableau doesn't recognize color information anyway. | # read lon and lat coordinates. FILES ARE IN OUR GOOGLE DRIVE
lon = pd.read_csv('./processed_min/lon.csv', index_col=0)
lat = pd.read_csv('./processed_min/lat.csv', index_col=0)
# mesh x and y (lon and lat coordinates)
x_mesh, y_mesh = np.meshgrid(lon, lat)
# z_mesh
z_mesh = pd.read_csv("./processed_min/0101.csv", index_col=0)
# create the contour plot
contourf = plt.contourf(x_mesh, y_mesh, z_mesh, linestyles='None', levels=levels)
# convert matplotlib contourf to geojson file
os.makedirs("./geojson_files", exist_ok=True)
geojsoncontour.contourf_to_geojson(contourf, geojson_filepath="./geojson_files/0101.geojson") | _____no_output_____ | MIT | archive/NASA_data/archive/geojson_for_tableau.ipynb | ACE-P/ev_temp_map |
Menyanthes File*Developed by Ruben Caljé* Menyanthes is timeseries analysis software used by many people in the Netherlands. In this example a Menyanthes-file with one observation-series is imported, and simulated. There are several stresses in the Menyanthes-file, among which are three groundwater extractions with a significant influence on groundwater head. | # First perform the necessary imports
import matplotlib.pyplot as plt
import pastas as ps
%matplotlib notebook | _____no_output_____ | MIT | examples/notebooks/4_menyanthes_file.ipynb | pgraafstra/pastas |
1. Importing the Menyanthes-fileImport the Menyanthes-file with observations and stresses. Then plot the observations, together with the different stresses in the Menyanthes file. | # how to use it?
fname = '../data/MenyanthesTest.men'
meny = ps.read.MenyData(fname)
# plot some series
f1, axarr = plt.subplots(len(meny.IN)+1, sharex=True)
oseries = meny.H['Obsevation well']["values"]
oseries.plot(ax=axarr[0])
axarr[0].set_title(meny.H['Obsevation well']["Name"])
for i, val in enumerate(meny.IN.items()):
name, data = val
data["values"].plot(ax=axarr[i+1])
axarr[i+1].set_title(name)
plt.tight_layout(pad=0)
plt.show() | _____no_output_____ | MIT | examples/notebooks/4_menyanthes_file.ipynb | pgraafstra/pastas |
2. Run a modelMake a model with precipitation, evaporation and three groundwater extractions. | # Create the time series model
ml = ps.Model(oseries)
# Add precipitation
IN = meny.IN['Precipitation']['values']
IN.index = IN.index.round("D")
IN2 = meny.IN['Evaporation']['values']
IN2.index = IN2.index.round("D")
ts = ps.StressModel2([IN, IN2], ps.Gamma, 'Recharge')
ml.add_stressmodel(ts)
# Add well extraction 1
# IN = meny.IN['Extraction 1']
# # extraction amount counts for the previous month
# ts = ps.StressModel(IN['values'], ps.Hantush, 'Extraction_1', up=False,
# settings="well")
# ml.add_stressmodel(ts)
# Add well extraction 2
IN = meny.IN['Extraction 2']
# extraction amount counts for the previous month
ts = ps.StressModel(IN['values'], ps.Hantush, 'Extraction_2', up=False,
settings="well")
ml.add_stressmodel(ts)
# Add well extraction 3
IN = meny.IN['Extraction 3']
# extraction amount counts for the previous month
ts = ps.StressModel(IN['values'], ps.Hantush, 'Extraction_3', up=False,
settings="well")
ml.add_stressmodel(ts)
# Solve the model (can take around 20 seconds..)
ml.solve() | INFO: Cannot determine frequency of series None
INFO: Inferred frequency from time series None: freq=D
INFO: Inferred frequency from time series None: freq=D
INFO: Cannot determine frequency of series None
INFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide
INFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide
INFO: Time Series None was sampled down to freq D with method timestep_weighted_resample
INFO: Cannot determine frequency of series None
INFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide
INFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide
INFO: Time Series None was sampled down to freq D with method timestep_weighted_resample
INFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide
INFO: Time Series None was sampled down to freq D with method timestep_weighted_resample
INFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide
INFO: Time Series None was sampled down to freq D with method timestep_weighted_resample
INFO: There are observations between the simulation timesteps. Linear interpolation between simulated values is used.
/Users/Raoul/Projects/pastas/pastas/pastas/solver.py:118: RuntimeWarning: invalid value encountered in double_scalars
pcor[i, j] = pcov[i, j] / np.sqrt(pcov[i, i] * pcov[j, j])
| MIT | examples/notebooks/4_menyanthes_file.ipynb | pgraafstra/pastas |
3. Plot the decompositionShow the decomposition of the groundwater head, by plotting the influence on groundwater head of each of the stresses. | ax = ml.plots.decomposition(ytick_base=1.)
ax[0].set_title('Observations vs simulation')
ax[0].legend()
ax[0].figure.tight_layout(pad=0) | _____no_output_____ | MIT | examples/notebooks/4_menyanthes_file.ipynb | pgraafstra/pastas |
Refactor: Wine Quality AnalysisIn this exercise, you'll refactor code that analyzes a wine quality dataset taken from the UCI Machine Learning Repository [here](https://archive.ics.uci.edu/ml/datasets/wine+quality). Each row contains data on a wine sample, including several physicochemical properties gathered from tests, as well as a quality rating evaluated by wine experts.The code in this notebook first renames the columns of the dataset and then calculates some statistics on how some features may be related to quality ratings. Can you refactor this code to make it more clean and modular? | import pandas as pd
df = pd.read_csv('winequality-red.csv', sep=';')
df.head(10) | _____no_output_____ | MIT | Data Analysis Projects/Wine Qulaity Prediction/refactor_wine_quality_.ipynb | nirmalya8/HackPython-21 |
Renaming ColumnsYou want to replace the spaces in the column labels with underscores to be able to reference columns with dot notation. Here's one way you could've done it. | new_df = df.rename(columns={'fixed acidity': 'fixed_acidity',
'volatile acidity': 'volatile_acidity',
'citric acid': 'citric_acid',
'residual sugar': 'residual_sugar',
'free sulfur dioxide': 'free_sulfur_dioxide',
'total sulfur dioxide': 'total_sulfur_dioxide'
})
new_df.head() | _____no_output_____ | MIT | Data Analysis Projects/Wine Qulaity Prediction/refactor_wine_quality_.ipynb | nirmalya8/HackPython-21 |
And here's a slightly better way you could do it. You can avoid making naming errors due to typos caused by manual typing. However, this looks a little repetitive. Can you make it better? | labels = list(df.columns)
labels[0] = labels[0].replace(' ', '_')
labels[1] = labels[1].replace(' ', '_')
labels[2] = labels[2].replace(' ', '_')
labels[3] = labels[3].replace(' ', '_')
labels[5] = labels[5].replace(' ', '_')
labels[6] = labels[6].replace(' ', '_')
df.columns = labels
df.head() | _____no_output_____ | MIT | Data Analysis Projects/Wine Qulaity Prediction/refactor_wine_quality_.ipynb | nirmalya8/HackPython-21 |
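One way to cut the remaining repetition is a single list comprehension over all the labels at once — a sketch using a stand-in DataFrame, since the wine CSV isn't loaded here:

```python
import pandas as pd

# Stand-in for the wine DataFrame; the real one comes from winequality-red.csv
df = pd.DataFrame(columns=['fixed acidity', 'volatile acidity', 'citric acid',
                           'residual sugar', 'chlorides'])

# Replace spaces with underscores in every column label in one pass
df.columns = [label.replace(' ', '_') for label in df.columns]
print(list(df.columns))
```

Labels without spaces (like `chlorides`) pass through unchanged, so no index bookkeeping is needed.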
Analyzing FeaturesNow that your columns are ready, you want to see how different features of this dataset relate to the quality rating of the wine. A very simple way you could do this is by observing the mean quality rating for the top and bottom half of each feature. The code below does this for four features. It looks pretty repetitive right now. Can you make this more concise? You might challenge yourself to figure out how to make this code more efficient! But you don't need to worry too much about efficiency right now - we will cover that more in the next section. | median_alcohol = df.alcohol.median()
for i, alcohol in enumerate(df.alcohol):
if alcohol >= median_alcohol:
df.loc[i, 'alcohol'] = 'high'
else:
df.loc[i, 'alcohol'] = 'low'
df.groupby('alcohol').quality.mean()
median_pH = df.pH.median()
for i, pH in enumerate(df.pH):
if pH >= median_pH:
df.loc[i, 'pH'] = 'high'
else:
df.loc[i, 'pH'] = 'low'
df.groupby('pH').quality.mean()
median_sugar = df.residual_sugar.median()
for i, sugar in enumerate(df.residual_sugar):
if sugar >= median_sugar:
df.loc[i, 'residual_sugar'] = 'high'
else:
df.loc[i, 'residual_sugar'] = 'low'
df.groupby('residual_sugar').quality.mean()
median_citric_acid = df.citric_acid.median()
for i, citric_acid in enumerate(df.citric_acid):
if citric_acid >= median_citric_acid:
df.loc[i, 'citric_acid'] = 'high'
else:
df.loc[i, 'citric_acid'] = 'low'
df.groupby('citric_acid').quality.mean()
| _____no_output_____ | MIT | Data Analysis Projects/Wine Qulaity Prediction/refactor_wine_quality_.ipynb | nirmalya8/HackPython-21 |
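All four analyses above follow the same median-split pattern, so they can collapse into one helper plus a loop. A sketch — the `numeric_to_buckets` name is mine, and the toy DataFrame stands in for the wine data:

```python
import pandas as pd

def numeric_to_buckets(df, column):
    """Label each value 'high' or 'low' relative to the column's median."""
    median = df[column].median()
    return df[column].apply(lambda v: 'high' if v >= median else 'low')

# Toy stand-in for the wine data
df = pd.DataFrame({'alcohol': [9.0, 10.0, 11.0, 12.0],
                   'pH': [3.1, 3.3, 3.2, 3.4],
                   'quality': [5, 5, 6, 7]})

for feature in ['alcohol', 'pH']:
    buckets = numeric_to_buckets(df, feature)
    print(df.groupby(buckets).quality.mean())
```

Returning a new Series (instead of overwriting the numeric column in place) also avoids the dtype-mixing that `df.loc[i, col] = 'high'` causes.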
A Whale off the Port(folio) --- In this assignment, you'll get to use what you've learned this week to evaluate the performance among various algorithmic, hedge, and mutual fund portfolios and compare them against the S&P TSX 60 Index. | # Initial imports
import pandas as pd
import numpy as np
import datetime as dt
from pathlib import Path
%matplotlib inline | _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
Data CleaningIn this section, you will need to read the CSV files into DataFrames and perform any necessary data cleaning steps. After cleaning, combine all DataFrames into a single DataFrame.Files:* `whale_returns.csv`: Contains returns of some famous "whale" investors' portfolios.* `algo_returns.csv`: Contains returns from the in-house trading algorithms from Harold's company.* `sp_tsx_history.csv`: Contains historical closing prices of the S&P TSX 60 Index. Whale ReturnsRead the Whale Portfolio daily returns and clean the data. | # Set file path for CSV
file_path = Path("Resources/whale_returns.csv")
# Read in the CSV into a DataFrame
whale_returns_csv = pd.read_csv(file_path)
whale_returns_csv.head()
# Inspect the first 10 rows of the DataFrame
whale_returns_csv.head(10)
# Inspect the last 10 rows of the DataFrame
whale_returns_csv.tail(10)
# View column data types by using the 'dtypes' attribute to list the column data types
whale_returns_csv.dtypes
# Identify data quality issues
# Identify the number of rows
whale_returns_csv.count()
# Count nulls
whale_returns_csv.isnull()
# Determine the number of nulls
whale_returns_csv.isnull().sum()
# Determine the percentage of nulls for each column
whale_returns_csv.isnull().sum() / len(whale_returns_csv) * 100
# Drop nulls
whale_returns_csv.dropna()
# Check for duplicated rows
whale_returns_csv.duplicated()
# Use the dropna function to drop the whole records that have at least one null value
whale_returns_csv.dropna(inplace=True) | _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
Algorithmic Daily ReturnsRead the algorithmic daily returns and clean the data. | # Calculate and plot daily return
# Calculate and plot cumulative return
# Confirm null values have been dropped 1
whale_returns_csv.isnull()
# Confirm null values have been dropped 2
whale_returns_csv.isnull().sum()
# Reading algorithmic returns
# Count nulls
# Drop nulls
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
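The stubbed cell above could mirror the whale-returns cleanup. A sketch — in the notebook the source would be `Path("Resources/algo_returns.csv")`; a small inline sample is used here so the sketch runs stand-alone:

```python
import io
import pandas as pd

# Inline stand-in for pd.read_csv(Path("Resources/algo_returns.csv"))
sample = io.StringIO("Date,Algo 1,Algo 2\n2019-01-02,0.001,\n2019-01-03,-0.002,0.003\n")
algo_returns = pd.read_csv(sample, index_col='Date', parse_dates=True)

print(algo_returns.isnull().sum())   # count nulls per column
algo_returns.dropna(inplace=True)    # drop any row with a null
print(algo_returns)
```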
S&P TSX 60 ReturnsRead the S&P TSX 60 historic closing prices and create a new daily returns DataFrame from the data. | # Reading S&P TSX 60 Closing Prices
sp_tsx_path = Path("Resources/sp_tsx_history.csv")
# Check Data Types
sp_tsx_df = pd.read_csv(sp_tsx_path)
sp_tsx_df.head()
sp_tsx_df.tail()
# Use the 'dtypes' attribute to list the column data types
sp_tsx_df.dtypes
# Use the 'info' attribute to list additional infor about the column data types
sp_tsx_df.info()
# Use `pd.to_datetime` to convert 'Date' from 'object' to 'datetime64[ns]'
sp_tsx_df['Date'] = pd.to_datetime(sp_tsx_df['Date'])
sp_tsx_df
# Sort by date in ascending order (past to present);
# the 'Date' column is not the index yet, so sort on the column itself
sp_tsx_df.sort_values('Date', inplace=True)
sp_tsx_df.head()
# Confirm datetime64 conversion was proccesed correctly
sp_tsx_df.dtypes
# Set the date as the index to the Dataframe
sp_tsx_df.set_index(pd.to_datetime(sp_tsx_df['Date'], infer_datetime_format=True), inplace=True)
sp_tsx_df.head()
# Drop the extra date column
sp_tsx_df.drop(columns=['Date'], inplace=True)
sp_tsx_df.head()
sp_tsx_df.dtypes
# `to_numeric` is a top-level pandas function, not a DataFrame method;
# strip '$' and ',' characters (if present) before converting
sp_tsx_df['Close'] = pd.to_numeric(sp_tsx_df['Close'].astype(str).str.replace('[$,]', '', regex=True))
sp_tsx_df
daily_returns = sp_tsx_df.pct_change()
daily_returns.head()
# Plot daily close
sp_tsx_df.plot()
# Calculate Daily Returns
# Drop nulls
# Rename `Close` Column to be specific to this portfolio.
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
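The stubbed steps in this cell (daily returns, drop nulls, rename `Close`) might be completed like this — a self-contained sketch with stand-in prices; the `'S&P TSX'` column name is my choice:

```python
import pandas as pd

# Stand-in closing prices; in the notebook this is sp_tsx_df
sp_tsx_df = pd.DataFrame({'Close': [100.0, 102.0, 99.96]},
                         index=pd.date_range('2019-01-02', periods=3))

daily_returns = sp_tsx_df.pct_change()                              # calculate daily returns
daily_returns = daily_returns.dropna()                              # drop the leading NaN row
daily_returns = daily_returns.rename(columns={'Close': 'S&P TSX'})  # portfolio-specific name
print(daily_returns)
```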
Combine Whale, Algorithmic, and S&P TSX 60 Returns | # Join Whale Returns, Algorithmic Returns, and the S&P TSX 60 Returns into a single DataFrame with columns for each portfolio's returns.
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
--- Conduct Quantitative AnalysisIn this section, you will calculate and visualize performance and risk metrics for the portfolios. Performance Analysis Calculate and Plot the daily returns. | # Plot daily returns of all portfolios
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
Calculate and Plot cumulative returns. | # Calculate cumulative returns of all portfolios
# Plot cumulative returns
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
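The cumulative-return cell boils down to compounding `(1 + daily returns)` with `cumprod` — a sketch on a toy series:

```python
import pandas as pd

daily = pd.Series([0.01, -0.02, 0.03])

# Compound each day's return: (1 + r1)(1 + r2)... - 1
cumulative = (1 + daily).cumprod() - 1
print(cumulative)
# In the notebook: cumulative.plot(figsize=(20, 10), title="Cumulative Returns")
```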
--- Risk AnalysisDetermine the _risk_ of each portfolio:1. Create a box plot for each portfolio. 2. Calculate the standard deviation for all portfolios. 3. Determine which portfolios are riskier than the S&P TSX 60. 4. Calculate the Annualized Standard Deviation. Create a box plot for each portfolio | # Box plot to visually show risk
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
Calculate Standard Deviations | # Calculate the daily standard deviations of all portfolios
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
Determine which portfolios are riskier than the S&P TSX 60 | # Calculate the daily standard deviation of S&P TSX 60
# Determine which portfolios are riskier than the S&P TSX 60
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
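Comparing portfolio risk to the index is a filter on the daily standard deviations — a sketch with made-up numbers (a portfolio counts as "riskier" when its daily std exceeds the index's):

```python
import pandas as pd

# Toy daily standard deviations; in the notebook: daily_std = combined_df.std()
daily_std = pd.Series({'SOROS': 0.0079, 'Algo 1': 0.0076,
                       'BERKSHIRE': 0.0129, 'S&P TSX': 0.0070})

riskier = daily_std[daily_std > daily_std['S&P TSX']]
print(riskier.index.tolist())
```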
Calculate the Annualized Standard Deviation | # Calculate the annualized standard deviation (252 trading days)
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
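Annualizing a daily standard deviation multiplies by the square root of 252 trading days — a sketch with a toy value:

```python
import numpy as np

daily_std = 0.0079                       # toy daily standard deviation
annualized_std = daily_std * np.sqrt(252)
print(annualized_std)
```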
--- Rolling StatisticsRisk changes over time. Analyze the rolling statistics for Risk and Beta. 1. Calculate and plot the rolling standard deviation for all portfolios using a 21-day window.2. Calculate the correlation between each stock to determine which portfolios may mimic the S&P TSX 60.3. Choose one portfolio, then calculate and plot the 60-day rolling beta for it and the S&P TSX 60. Calculate and plot rolling `std` for all portfolios with 21-day window | # Calculate the rolling standard deviation for all portfolios using a 21-day window
# Plot the rolling standard deviation
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
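The 21-day rolling standard deviation comes straight from `.rolling(window=21).std()` — a sketch on synthetic returns, since the combined DataFrame isn't built here:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0, 0.01, 100))  # synthetic daily returns

rolling_std = returns.rolling(window=21).std()  # NaN until 21 observations exist
print(rolling_std.tail())
# In the notebook: rolling_std.plot(figsize=(20, 10))
```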
Calculate and plot the correlation | # Calculate the correlation
# Display the correlation matrix
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
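The correlation cell is essentially a one-liner on the combined returns — a sketch with two toy columns:

```python
import pandas as pd

# Toy stand-in for the combined returns DataFrame
combined_df = pd.DataFrame({'Algo 1': [0.01, -0.02, 0.03, 0.0],
                            'S&P TSX': [0.012, -0.018, 0.028, 0.001]})

correlation = combined_df.corr()
print(correlation)
```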
Calculate and Plot Beta for a chosen portfolio and the S&P TSX 60 | # Calculate covariance of a single portfolio
# Calculate variance of S&P TSX
# Computing beta
# Plot beta trend
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
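Beta is the covariance of the portfolio with the market divided by the market's variance. A sketch on synthetic data — constructing `port = 2 * market` makes the expected beta exactly 2, which is handy for checking the formula:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
market = pd.Series(rng.normal(0, 0.01, 252))  # synthetic market returns
port = 2 * market                             # perfectly correlated, double the moves

covariance = port.cov(market)
variance = market.var()
beta = covariance / variance
print(beta)
# Rolling 60-day version: port.rolling(60).cov(market) / market.rolling(60).var()
```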
Rolling Statistics Challenge: Exponentially Weighted Average An alternative way to calculate a rolling window is to take the exponentially weighted moving average. This is like a moving window average, but it assigns greater importance to more recent observations. Try calculating the [`ewm`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html) with a 21-day half life for each portfolio, using standard deviation (`std`) as the metric of interest. | # Use `ewm` to calculate the rolling window
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
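`ewm` takes a `halflife` argument directly, so the exponentially weighted std is one chained call — a sketch on synthetic returns:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
returns = pd.Series(rng.normal(0, 0.01, 100))  # synthetic daily returns

# Exponentially weighted std with a 21-day half-life
ewm_std = returns.ewm(halflife=21).std()
print(ewm_std.tail())
```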
--- Sharpe RatiosIn reality, investment managers and their institutional investors look at the ratio of return-to-risk, and not just returns alone. After all, if you could invest in one of two portfolios, and each offered the same 10% return, yet one offered lower risk, you'd take that one, right? Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot | # Annualized Sharpe Ratios
# Visualize the sharpe ratios as a bar plot
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
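Assuming a zero risk-free rate (as this kind of assignment typically does), the annualized Sharpe ratio is the annualized mean return over the annualized standard deviation — a sketch on toy data:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the combined daily returns
returns = pd.DataFrame({'Algo 1': [0.01, 0.02, -0.01, 0.015],
                        'S&P TSX': [0.005, 0.001, -0.002, 0.003]})

sharpe_ratios = (returns.mean() * 252) / (returns.std() * np.sqrt(252))
print(sharpe_ratios)
# In the notebook: sharpe_ratios.plot.bar(title="Sharpe Ratios")
```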
Determine whether the algorithmic strategies outperform both the market (S&P TSX 60) and the whale portfolios.Write your answer here! --- Create Custom PortfolioIn this section, you will build your own portfolio of stocks, calculate the returns, and compare the results to the Whale Portfolios and the S&P TSX 60. 1. Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.2. Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock.3. Join your portfolio returns to the DataFrame that contains all of the portfolio returns.4. Re-run the performance and risk analysis with your portfolio to see how it compares to the others.5. Include correlation analysis to determine which stocks (if any) are correlated. Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock. | # Reading data from 1st stock
# Reading data from 2nd stock
# Reading data from 3rd stock
# Combine all stocks in a single DataFrame
# Reset Date index
# Reorganize portfolio data by having a column per symbol
# Calculate daily returns
# Drop NAs
# Display sample data
| _____no_output_____ | ADSL | .ipynb_checkpoints/whale.py-checkpoint.ipynb | charbelnehme/pandas-homework |
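The custom-portfolio cell boils down to per-stock daily returns followed by a weighted sum. A sketch — the ticker names are placeholders, and equal weights are used for simplicity (the assignment's "equal number of shares" would instead weight each stock by its price):

```python
import pandas as pd

idx = pd.date_range('2019-01-02', periods=3)
# Placeholder closing prices for three hypothetical stocks
prices = pd.DataFrame({'SHOP': [100.0, 102.0, 101.0],
                       'RY':   [50.0, 50.5, 51.0],
                       'L':    [30.0, 29.7, 30.3]}, index=idx)

daily_returns = prices.pct_change().dropna()

weights = [1/3, 1/3, 1/3]                       # equal weighting
portfolio_returns = daily_returns.dot(weights)  # weighted daily return series
print(portfolio_returns)
```

The resulting Series can then be joined to the combined DataFrame with `pd.concat` and run through the same risk metrics as the other portfolios.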