| content (stringlengths 85–101k) | title (stringlengths 0–150) | question (stringlengths 15–48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35–137) |
|---|---|---|---|---|---|---|---|---|
Q:
Pyspark keep only most recent timestamps that meet condition
I have the following dataset:
id col1 timestamp
1 a 01.01.2022 9:00:00
1 b 01.01.2022 9:01:00
1 c 01.01.2022 9:02:00
1 a 01.01.2022 10:00:00
1 b 01.01.2022 10:01:00
1 d 01.01.2022 10:02:00
2 a 01.01.2022 12:00:00
2 b 01.01.2022 12:01:00
2 a 01.01.2022 13:00:00
2 c 01.01.2022 13:01:00
What I would like to do is keep all the timestamps from the last occurrence of 'a' onward, for each id. This is what the dataset will look like:
id col1 timestamp
1 a 01.01.2022 10:00:00
1 b 01.01.2022 10:01:00
1 d 01.01.2022 10:02:00
2 a 01.01.2022 13:00:00
2 c 01.01.2022 13:01:00
It is important to identify 'a' as the starting point. Any idea on how I can do it?
I was thinking of using groupby and taking the max timestamp, but that only seems to work in some specific situations.
A:
spark.sql("set spark.sql.legacy.timeParserPolicy=LEGACY")
w = Window.partitionBy('id')
( #column cum_a =1 when col1=a else cum_a=0. Once populated, calculate the cumulative sum of cum_a for every id ordered by timestamp
df.withColumn('cum_a', sum(when(col('col1')=='a',1).otherwise(0)).over(w.orderBy(to_timestamp('timestamp','dd.MM.yyyy HH:mm:ss'))))
#Find the maximum cum_a value per id
.withColumn('max', max('cum_a').over(w))
#Filter out where cum_a equals to max value of cum_a per id
.where(((col('cum_a')==col('max'))))
#Drop unwamted intermediatary columns
.drop('cum_a','max')
).show()
+---+----+-------------------+
| id|col1| timestamp|
+---+----+-------------------+
| 1| a|01.01.2022 10:00:00|
| 1| b|01.01.2022 10:01:00|
| 1| d|01.01.2022 10:02:00|
| 2| a|01.01.2022 13:00:00|
| 2| c|01.01.2022 13:01:00|
+---+----+-------------------+
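For reference, a sketch of an alternative that skips the cumulative sum: compute the timestamp of each id's last 'a' with a conditional window max, then keep the rows at or after it (assumes the same df, imports and timestamp format as above).

from pyspark.sql import Window
from pyspark.sql.functions import col, max, to_timestamp, when

w = Window.partitionBy('id')
(
    df.withColumn('ts', to_timestamp('timestamp', 'dd.MM.yyyy HH:mm:ss'))
      # per id, the timestamp of the last 'a' row (non-'a' rows contribute nulls, which max ignores)
      .withColumn('last_a', max(when(col('col1') == 'a', col('ts'))).over(w))
      .where(col('ts') >= col('last_a'))
      .drop('ts', 'last_a')
).show()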
| answers_scores: [1] | non_answers: [] | non_answers_scores: [] | tags: [group_by, pyspark, python] | stackoverflow_0074644247_group_by_pyspark_python.txt |
Q:
add slice of data from a dataframe to a time series dataframe based on multiple criteria
I have two Pandas dataframes. df1 is a time series with 6 columns of values, one of which is the column 'name'. df2 is a list of rows, each with a unique string in the 'name' column and the same date in every row of the 'date' column; all rows carry the same date value, from an observation that occurred on a single date.
df1-->

Date      name  score  stat1  stat2  pick
2022-1-1  ABC   23.3   0.234  34     NaN
2022-1-2  ABC   21.1   0.431  14     NaN
2022-1-3  ABC   29.9   1.310  4      NaN
2022-1-4  ABC   11.3   9.310  3      NaN

df1 Index is Date
df2-->

index  Date      name  pick
0      2022-1-3  QRS   23.3
1      2022-1-3  ABC   21.1
2      2022-1-3  DBA   29.9
3      2022-1-3  KLL   11.3
I would like to merge / add the row from df2 to the row in df1 where 'name' and 'date' criteria are met.
I've reviewed many articles here and elsewhere as well as trying .merge as follows:
df3['pick'] = pd.merge(df1, df2, how='outer', on=['name'])
Success is to get df2 or a new df3 to look like this:
Date      name  score  stat1  stat2  pick
2022-1-1  ABC   23.3   0.234  34     NaN
2022-1-2  ABC   21.1   0.431  14     NaN
2022-1-3  ABC   29.9   1.310  4      21.1
2022-1-4  ABC   11.3   9.310  3      NaN
Where df1 is updated based on the relevant row from df2 OR a new df3 is the resulting combination of df1 and df2 where the 'pick' value of 21.1 is inserted to match the appropriate date and name.
Guidance would be most appreciated.
A:
try this:
df3 = pd.merge(df1,df2, how='left', left_on=['Date','name'], right_on=['Date','name'])
A:
df1 = df1.merge(df2, how='left', on=['Date', 'name'])  # merge df1 and df2 on the 'Date' and 'name' columns; the clashing 'pick' columns become 'pick_x' (from df1) and 'pick_y' (from df2)
df1['pick'] = df1['pick_y'].fillna(df1['pick_x'])  # take df2's 'pick' values, falling back to df1's original ones where df2 had no match
df1 = df1.drop(['pick_x', 'pick_y', 'index'], axis=1)  # drop the 'pick_x', 'pick_y' and 'index' columns as they are no longer needed
output:
Date name score stat1 stat2 pick
0 2022-1-1 ABC 23.3 0.234 34 NaN
1 2022-1-2 ABC 21.1 0.431 14 NaN
2 2022-1-3 ABC 29.9 1.31 4 21.1
3 2022-1-4 ABC 11.3 9.31 3 NaN
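As a further sketch (column names as in the question, toy data rebuilt by hand), the same fill can be done without any suffix handling by joining against a (Date, name) index:

import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    'Date': ['2022-1-1', '2022-1-2', '2022-1-3', '2022-1-4'],
    'name': ['ABC'] * 4,
    'score': [23.3, 21.1, 29.9, 11.3],
    'stat1': [0.234, 0.431, 1.310, 9.310],
    'stat2': [34, 14, 4, 3],
    'pick': [np.nan] * 4,
})
df2 = pd.DataFrame({
    'Date': ['2022-1-3'] * 4,
    'name': ['QRS', 'ABC', 'DBA', 'KLL'],
    'pick': [23.3, 21.1, 29.9, 11.3],
})

# drop the empty 'pick' and pull it from df2 via an index join on (Date, name)
df3 = df1.drop(columns='pick').join(df2.set_index(['Date', 'name']), on=['Date', 'name'])
print(df3)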
| answers_scores: [1, 1] | non_answers: [] | non_answers_scores: [] | tags: [dataframe, pandas, python] | stackoverflow_0074648487_dataframe_pandas_python.txt |
Q:
problem with certain inputs in password checker
Recently, I have been building a password checker for a project I have been working on, but for some reason certain inputs cause errors and I can't figure out what's causing them. If anyone can help, that would be great, as I have been searching for quite a while.
import re

exit = False
allowed_char = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!$%^&*()-_=+"
row1 = "qwertyuiop"
row2 = "asdfghjkl"
row3 = "zxcvbnm"

class password_checker:
    def __init__(self, password, score):
        self.password = password
        self.score = score

    def validate(self):
        length = len(self.password)
        if (length < 8 or length > 24) and [x for x in self.password if x not in allowed_char]:
            print("Invalid Length and Invalid Character")
            raise ValueError
        elif length < 8 or length > 24:
            print("Invalid length")
            raise ValueError
        elif [x for x in self.password if x not in allowed_char]:
            print("Invalid Character")
            raise ValueError

    def scoring(self):
        length = len(self.password)
        if re.search(r"[a-z]", self.password): self.score += 5
        if re.search(r"[A-Z]", self.password): self.score += 5
        if re.search(r"[0-9]", self.password): self.score += 5
        if re.search(r"[!$%^&*()-_=+]", self.password): self.score += 5
        if re.fullmatch(r"[A-Za-z]+", self.password): self.score -= 5
        if re.fullmatch(r"[0-9]+", self.password): self.score -= 5
        if re.fullmatch(r"[!$%^&*()\-_=+]+", self.password): self.score -= 5
        self.password.lower()
        for i in range(length - 2):
            if re.search(self.password[i:i+3], row1) or re.search(self.password[i:i+3], row2) or re.search(self.password[i:i+3], row3):
                self.score -= 5
        print(self.score)

while exit == False:
    try:
        choice = int(input("""1. Check Password
2. Generate Password
3. Quit: """))
        print("")
        if choice == 1:
            password = input("Enter Password: ")
            score = len(password)
            checking = password_checker(password, score)
            checking.validate()
            checking.scoring()
        elif choice == 2:
            pass
        elif choice == 3:
            exit = True
        else:
            raise ValueError
    except ValueError:
        print("")
I have tried multiple inputs which work with my code, for example "sdAqwe12!a^"
Later I tried the test case of "aSD7V^&*gS77+"
I expected this to work fine but for some reason an error happened.
I think this is because of the class function scoring.
This is what I ran and the error I received. Thank you.
1. Check Password
2. Generate Password
3. Quit: 1
Enter Password: aSD7V^&*gS77+
Traceback (most recent call last):
File "main.py", line 53, in <module>
checking.scoring()
File "main.py", line 37, in scoring
if re.search(self.password[i:i+3], row1) or re.search(self.password[i:i+3], row2) or re.search(password[i:i+3], row3):
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/re.py", line 201, in search
return _compile(pattern, flags).search(string)
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/sre_parse.py", line 668, in _parse
raise source.error("nothing to repeat",
re.error: nothing to repeat at position 0
A:
You are looking to see if 3 character extents in the password match 3 character rows from the keyboard using re.search:
re.search(self.password[i:i+3], row1)
But that's the problem. If your password contains a regex control character, re will try to use it. In your example "aSD7V^&*gS77+", you'll try the sequence "*gS". But that's not a valid regex. "*" tells regex to repeat the previous character zero or more times. But there is no character before it, hence re.error: nothing to repeat at position 0
Use in instead.
self.password[i:i+3] in row1
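If you still want the regex route, a small standalone sketch (names taken from the question) showing both fixes side by side: plain substring containment, or escaping the pattern with re.escape first.

import re

row1 = "qwertyuiop"
chunk = "*gS"  # the 3-character window that crashed re.search

print(chunk in row1)                      # False -- no regex semantics at all
print(re.search(re.escape(chunk), row1))  # None -- metacharacters neutralized first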
| answers_scores: [1] | non_answers: [] | non_answers_scores: [] | tags: [python] | stackoverflow_0074648485_python.txt |
Q:
Celery worker concurrency
I have made a scraper to scan around 150 links.
Each link has around 5k sub links to get info from.
I am using Celery to run the scraper in the background and store the data via the Django ORM. I use BeautifulSoup to scrape the URLs.
When I run Celery using this command
celery worker -A ... --concurrency=50

everything works fine, but workers 1 to 50 sleep.
How can I keep Celery working until the scraper finishes its task?
A:
First of all, that command will not start 50 workers, but 1 worker with 50 processes. I'd also recommend using only as many processes as you have cores available. (Let's say 8 for the rest of my answer.)
My guess here is that the other processes are idle because you only perform one task. If you want to do concurrent work, you'll have to split your work into parts that can be executed concurrently. The easiest way to do this is to make a separate task for every link you want to scrape. The worker will then start scraping 8 links, and whenever it finishes one it will pick up the next, until it has finished scraping all 150.
So the code that dispatches your tasks should look roughly like:
for link in links:
    scrape_link.delay(link)

where scrape_link is your task function, which will look something like:
@app.task
def scrape_link(link):
    # scrape the link and its sub-links
    ...
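For reference, a sketch using a Celery group, if the caller wants to block until all 150 scrapes are done (assumes the app instance and the links iterable exist as above):

from celery import group

job = group(scrape_link.s(link) for link in links)  # one task signature per link
result = job.apply_async()                          # dispatch all scrape tasks at once
result.join()                                       # optionally wait for every link to finish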
A:
I know it's been sometime since this was posted but if anyone is researching this you can use Flower to monitor Celery tasks.
| answers_scores: [4, 0] | non_answers: [] | non_answers_scores: [] | tags: [celery, django, python] | stackoverflow_0047953643_celery_django_python.txt |
Q:
Calculating net wins for football teams
As part of a machine learning architecture I'm building, I need to parallelise a certain calculation in PyTorch. For simplicity I'm going to state a modified version of the problem and use numpy so it's easier to understand.
Suppose I have a collection of football teams (say 10) and they play a collection of matches (say 20). Each football team is represented by an ID (a number from 1-10). The match outcomes are saved as tuples (t_1, t_2, win) where t_i is the ID (int) for 'team i', and win=1 if team 1 wins (win=-1 if team 2 wins).
I want to calculate the total number of wins for every team. More specifically I want an numpy array X (of shape (10)) where X[t_i] := wins - losses (of 'team i' from the 20 matches). Assuming the match data is split into numpy arrays match (of shape (20, 2)), and outcome (of shape (20,1)), my current solution for solving this problem is as follows
outcome = np.concatenate((outcome, -outcome), axis=1)
for i in range(20):
    X[match[i]] += outcome[i]
Now as you can guess, I want to get rid of the for loop. If I was to replace this code with
X[match] += outcome
Then clearly it will not work. Does anyone have any ideas how to solve this problem completely in parallel? Like I said, my problem is actually more complicated than what I've stated here. It's closer to wanting to calculate the win/loss total for each player on each team. If possible, could someone provide a solution which is not dependent on there being only two teams in each match? Thanks in advance.
A:
For anyone who sees this, I figured out a solution using 'scatter' in pytorch however it is a bit ad-hoc. Here is the equivalent code for numpy.
outcome = np.concatenate((outcome, -outcome), axis=1)
temp = np.zeros((20, 10))
np.put_along_axis(temp, match, outcome, 1)  # scatter each match's +/- outcome into the two team columns (put_along_axis works in place)
scores = np.sum(temp, axis=0)
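A sketch of the same scatter-add with np.add.at, which accumulates correctly even when a team ID appears in many matches (assumes match holds 0-based team IDs and outcome has shape (20, 1) before the concatenation):

import numpy as np

signed = np.concatenate((outcome, -outcome), axis=1)  # +win for team 1's column, -win for team 2's
X = np.zeros(10)
np.add.at(X, match, signed)  # unbuffered add: repeated indices in match all contribute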
| answers_scores: [0] | non_answers: [] | non_answers_scores: [] | tags: [data_science, machine_learning, numpy, python, pytorch] | stackoverflow_0074643176_data_science_machine_learning_numpy_python_pytorch.txt |
Q:
How to webscrape from a selected tab with an embedded table on a website?
I am trying to scrape data from https://www.onthesnow.com/alberta/lake-louise/historical-snowfall, but the page's default view only shows the monthly totals and not the annual totals. There is a tab you have to select, 'Annual', on the webpage to show the annual totals (website source code showing the table). I can successfully scrape the monthly totals, but I am unsuccessful in returning the annual totals (annual totals table).
import requests
from bs4 import BeautifulSoup

def historic_snowfall():
    # Resort naming convention on OpenSnow.com
    # Revelstoke, BC: british-columbia, revelstoke-mountain
    # Whistler, BC: british-columbia, whistler-blackcomb
    # Lake Louise, AB: alberta, lake-louise
    # Big Sky, MT: montana, big-sky-resort
    # Snowbird, UT: utah, snowbird
    # Palisades: california, squaw-valley-usa
    # Steamboat, CO: colorado, steamboat
    # Copper Mountain, CO: colorado, copper-mountain
    # Aspen, CO: colorado, aspen-snowmass
    # Jackson Hole, WY: wyoming, jackson-hole
    # Taos, NM: taos-ski-valley
    resort_table = ['alberta/lake-louise',
                    'montana/big-sky-resort',
                    'utah/snowbird',
                    'california/squaw-valley-usa',
                    'colorado/steamboat',
                    'colorado/copper-mountain',
                    'colorado/aspen-snowmass',
                    'wyoming/jackson-hole',
                    'new-mexico/taos-ski-valley'
                    ]

    resort = resort_table[1]
    hsnow_url = 'https://www.onthesnow.com/' + resort + '/historical-snowfall'
    page = requests.get(hsnow_url)
    soup = BeautifulSoup(page.content, "html.parser")

    data = []
    table = soup.find('table')          # find the table
    table_body = table.find('tbody')    # find the body of the table
    rows = table_body.find_all('tr')    # find the rows within the table
    for row in rows:
        cols = row.find_all('td')       # within each row, pull the column data
        cols = [ele.text.strip() for ele in cols]
        data.append([ele for ele in cols if ele])  # get rid of empty values
    return data  # returns monthly averages, not yearly results
A:
You can use the Json data embedded inside the HTML page to get the annual info:
import json
import requests
import pandas as pd
from bs4 import BeautifulSoup
url = "https://www.onthesnow.com/alberta/lake-louise/historical-snowfall"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
data = json.loads(soup.select_one("#__NEXT_DATA__").text)
df = pd.DataFrame(data["props"]["pageProps"]["snowfallInfoAnnual"])
print(df.to_markdown(index=False))
Prints:
| date       | totalSnow | snowDays | baseDepth | summitDepth | maxBaseDepth | biggestSnowfall |
|------------|-----------|----------|-----------|-------------|--------------|-----------------|
| 2012-01-01 | 467       | 80       | 25.2892   | 128.724     | 118          | 28              |
| 2013-01-01 | 756       | 89       | 111.013   | 123.061     | 207          | 40              |
| 2014-01-01 | 465       | 64       | 77.3352   | 105.181     | 158          | 34              |
| 2015-01-01 | 339       | 60       | 75.5234   | 95.4746     | 125          | 54              |
| 2016-01-01 | 544       | 95       | 99.4935   | 147.88      | 177          | 30              |
| 2017-01-01 | 737       | 89       | 114.79    | 153.75      | 192          | 41              |
| 2018-01-01 | 505       | 81       | 98.2466   | 124.705     | 158          | 71              |
| 2019-01-01 | 537       | 73       | 76.7257   | 84.805      | 188          | 36              |
| 2020-01-01 | 445       | 76       | 88.5694   | 110.969     | 183          | 38              |
| 2021-01-01 | 669       | 96       | 118.13    | 168.105     | 178          | 54              |
| 2022-01-01 | 111       | 14       | 39.9333   | 47.7667     | 52           | 24              |
| Average    | 507       | 75       | 85        | 118         | 158          | 41              |
A:
I would leave this in a comment but I don't have enough reputation, but you can get the data directly:
Monthly:
https://api.onthesnow.com/api/v2/resort/368/snowfall/monthly
Annual:
https://api.onthesnow.com/api/v2/resort/368/snowfall/annual
Example data returned:
{"snowfallItems":[{"date":"2012-01-01","totalSnow":467,"snowDays":80,"baseDepth":25.2892,"summitDepth":128.72395779996586,"maxBaseDepth":118,"biggestSnowfall":28}]}
Note: Each measurement is in centimetres rather than inches so you might have to convert it.
Edit: The other (BETTER) answer basically does this and makes things beautiful in a table too
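A minimal sketch of calling the annual endpoint directly (the resort ID 368 comes from the URLs above, and the field names follow the sample payload shown):

import requests

url = "https://api.onthesnow.com/api/v2/resort/368/snowfall/annual"
payload = requests.get(url).json()
for item in payload["snowfallItems"]:
    # measurements are reported in centimetres
    print(item["date"], item["totalSnow"], item["snowDays"])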
A:
If you use the find_all() function instead of find() when searching for tables, you can get the annual table as well. So the code would look like:
from bs4 import BeautifulSoup
import requests

def historic_snowfall():
    # Resort naming convention on OpenSnow.com (same list as in the question)
    resort_table = ['alberta/lake-louise',
                    'montana/big-sky-resort',
                    'utah/snowbird',
                    'california/squaw-valley-usa',
                    'colorado/steamboat',
                    'colorado/copper-mountain',
                    'colorado/aspen-snowmass',
                    'wyoming/jackson-hole',
                    'new-mexico/taos-ski-valley'
                    ]

    resort = resort_table[1]
    hsnow_url = 'https://www.onthesnow.com/' + resort + '/historical-snowfall'
    page = requests.get(hsnow_url)
    soup = BeautifulSoup(page.content, "html.parser")

    data = []
    data2 = []
    table = soup.find_all('table')  # find all tables, not just the first

    table_body_1 = table[0].find('tbody')        # body of the monthly table
    rows_table_1 = table_body_1.find_all('tr')   # rows within the monthly table
    for row in rows_table_1:
        cols = row.find_all('td')                # within each row, pull the column data
        cols = [ele.text.strip() for ele in cols]
        data.append([ele for ele in cols if ele])   # get rid of empty values

    table_body_2 = table[1].find('tbody')        # body of the annual table
    rows_table_2 = table_body_2.find_all('tr')   # rows within the annual table
    for row in rows_table_2:
        cols = row.find_all('td')
        cols = [ele.text.strip() for ele in cols]
        data2.append([ele for ele in cols if ele])

    return data, data2  # returns both the monthly and the annual tables


print(historic_snowfall())
| answers_scores: [1, 1, 1] | non_answers: [] | non_answers_scores: [] | tags: [beautifulsoup, python, web_scraping] | stackoverflow_0074648506_beautifulsoup_python_web_scraping.txt |
Q:
How to sort angles in the range 0 to +Pi to -Pi to 0 as a linear range?
I am trying to solve a problem of sorting angles in the range of 0 to +Pi radians and +Pi to -Pi to 0 radians as one continuous range. I know this might be difficult to understand. Let me quote an example below.
The following are examples of the final range I would like to get after sorting a list of jumbled angles:
Example Inputs
Case - 1: Pi/4, 0, Pi/2, -Pi/20, Pi, -Pi/4, -Pi, -3*Pi/4, 3*Pi/4, -Pi/2, -Pi/10
Case - 2: -Pi/20, Pi/2, Pi, -Pi/2, -Pi/10, -Pi/4, Pi/4, 0,-Pi, -3*Pi/4, 3*Pi/4
Expected Output
0, Pi/4, Pi/2, 3*Pi/4, Pi, -Pi, -3*Pi/4, -Pi/2, -Pi/4, -Pi/10, -Pi/20
As you can see in the above list (Expected Output), the sorted list basically represents a continuous range of angles in a circle (starting from 0, making a full 360 degree rotation and ending at 0).
It is easy to sort these numbers if they are simply in a range of 0 to 360. But it gets trickier when the range is split into positive and negative angles like this.
Extra Info:
For some kind of strange performance reasons, I am not allowed to convert the angles into the 0 to 2Pi range for sorting; the range has to be preserved while sorting. My first solution was to convert into the 2Pi range using (theta + 2*pi) % (2*pi), but that solution was rejected. So I am now stuck trying to figure out how I can sort this without converting it into a different range.
A:
Create a custom compare function and pass it to sort/sorted using functools.cmp_to_key:
import functools

def angle_compare(a, b):
    def cmp(a, b):
        return (a > b) - (b > a)

    if (a < 0) == (b < 0):  # both non-negative or both negative
        return cmp(a, b)
    return cmp(b, a)
>>> Pi = 3.14
>>> l1 = [Pi/4, 0, Pi/2, -Pi/20, Pi, -Pi/4, -Pi, -3*Pi/4, 3*Pi/4, -Pi/2, -Pi/10]
>>> sorted(l1, key=functools.cmp_to_key(angle_compare))
[0, 0.785, 1.57, 2.355, 3.14, -3.14, -2.355, -1.57, -0.785, -0.314, -0.157]
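Since each half of the target order is itself ascending, a plain sort key also works, with no cmp_to_key needed: sort by the tuple (angle < 0, angle), so non-negative angles come first and each group stays ascending. A sketch:

from math import pi

angles = [pi/4, 0, pi/2, -pi/20, pi, -pi/4, -pi, -3*pi/4, 3*pi/4, -pi/2, -pi/10]
ordered = sorted(angles, key=lambda a: (a < 0, a))  # False (non-negative) sorts before True (negative)
print(ordered)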
| answers_scores: [1] | non_answers: [] | non_answers_scores: [] | tags: [angle, geometry, python, range, sorting] | stackoverflow_0074648516_angle_geometry_python_range_sorting.txt |
Q:
Pip error when trying to install zlib
I'm new to Python and programming and am having trouble installing zlib through pip, I keep having the issue below:
pip install zlib
Collecting zlib
Could not find a version that satisfies the requirement zlib (from versions: )
No matching distribution found for zlib
A:
zlib is not a Python package; you can try yum/apt-get install zlib-devel or yum/apt-get install zlib.
A:
For Ubuntu:
sudo apt-get install zlib1g-dev
A:
On mac using homebrew:
brew install zlib
| answers_scores: [3, 3, 0] | non_answers: [] | non_answers_scores: [] | tags: [pip, python, zlib] | stackoverflow_0047403874_pip_python_zlib.txt |
Q:
How To Read a CSV File Using Pandas
I am having trouble running my code. I want to load the "Forest Fires" dataset by calling the pandas method read_csv() with the name of the csv file "forestfires.csv" (docs) and store the result in a variable forestfire_df.
The interpreter keeps throwing this error
name 'forestfire_df' is not defined
Here is my code:
import os

import numpy as np
import pandas as pd

if not os.path.exists("forestfires.csv"):
    raise Exception(f"The forestfires.csv is not detected in your local path! "
                    f"You need to move the 'forestfires.csv' file to the same "
                    f"location/directory as this notebook, which is {os.getcwd()}")

# TODO 1.1
display(forestfire_df)
display(pd.read_csv("forestfires.csv"))

([
    (np.all(forestfire_df.iloc[0].values == np.array([7, 5, 'mar', 'fri', 86.2, 26.2, 94.3, 5.1, 8.2, 51, 6.7, 0.0, 0.0],
                                                     dtype=object)), '')
])
Does anybody know how to fix this?
A:
forestfire_df will have to be defined before it is displayed, for example with a line like:
forestfire_df = pd.read_csv("forestfires.csv")
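Putting it together, a minimal sketch (assumes the CSV sits next to the notebook and that display comes from the IPython environment):

import os
import pandas as pd

assert os.path.exists("forestfires.csv"), f"forestfires.csv not found in {os.getcwd()}"
forestfire_df = pd.read_csv("forestfires.csv")
display(forestfire_df.head())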
| answers_scores: [0] | non_answers: [] | non_answers_scores: [] | tags: [google_colaboratory, python] | stackoverflow_0074648703_google_colaboratory_python.txt |
Q:
How to retrieve Python list data from a separate HTML file using bottle
I am creating a web-based python program using the Bottle framework built into PythonAnywhere. I have an HTML file that gets information about restaurants in a given area using an API. In this file, I take the received data and set it to a list with the name y. This all works well inside this file however I am working on another HTML file (in the same directory) that also requires this information but I have not been able to figure out a way to transfer the list data to the other file without calling another API request.
This screenshot shows my use of the list y in the working file that I need to use in a separate HTML file.
I attempted to use a get request I have used to obtain inputted HTML data from other files and was hoping this would also work for the python data in a separate HTML file however this was not the case and resulted in y being null.
This screenshot shows my attempt at retrieving the y list data in this manner to no avail.
A:
Since the HTML page essentially forgets all of the Python state once it has been rendered, I was unable to find an obvious way to transfer this data without using a database.
What I resorted to instead was using an HTML input element with type set to hidden to pass the necessary data along with the form, and then retrieving it on the next page using the request.forms.get() function.
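A minimal sketch of that hidden-field round trip (the route names, the inline template, and the "|" delimiter are all made up for illustration):

from bottle import route, post, request, template

@route("/step1")
def step1():
    y = ["Pizza Place", "Sushi Spot"]  # data that came back from the API call
    # stash the list in a hidden input; assumes "|" never occurs in the values
    return template("<form action='/step2' method='post'>"
                    "<input type='hidden' name='y' value='{{y}}'/>"
                    "<input type='submit'/></form>", y="|".join(y))

@post("/step2")
def step2():
    y = request.forms.get("y").split("|")  # recover the list on the next page
    return str(y)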
| answers_scores: [0] | non_answers: [] | non_answers_scores: [] | tags: [bottle, list, python, web_based] | stackoverflow_0074636492_bottle_list_python_web_based.txt |
Q:
Python - How to create dataFrame from this result
How to create a DataFrame from the result of this print?
for teste2 in Vagas:
    on_click = teste2.get('onclick')
    print(on_click)
print(on_click) prints several strings like the ones below; I want to insert them into a CSV file.
<input id="ctl00_ctl00_Content_Content_rpt_turno_4_ctl01_imb_vaga_1" name="ctl00$ctl00$Content$Content$rpt_turno_4$ctl01$imb_vaga_1" onclick="javascript:window.open('Cadastro.aspx?id_agenda=0&id_turno=29/11/2022 3:00:00;29/11/2022 4:00:00&data=29/11/2022&id_turno_exportador=198397&id_turno_agenda=61298&id_transportadora=23213&id_turno_transp=68291&id_Cliente=40300&codigo_terminal=40300&codigo_empresa=1&codigo_exportador=24978&codigo_transportador=23213&codigo_turno=4&turno_transp_vg=68291','_blank','height=850,width=1000,top=(screen.width)?(screen.width-1000)/2 : 0,left=(screen.height)?(screen.height-700)/2 : 0,toolbar=no,location=no,directories=no,status=no,menubar=no,scrollbars=yes,resizable=no');" src="../App_Themes/SisLog/Images/add-document.gif" style="height:20px;border-width:0px;" title="Vaga disponΓvel." type="image"/>,
<input id="ctl00_ctl00_Content_Content_rpt_turno_6_ctl01_imb_vaga_1" name="ctl00$ctl00$Content$Content$rpt_turno_6$ctl01$imb_vaga_1" onclick="javascript:window.open('Cadastro.aspx?id_agenda=0&id_turno=29/11/2022 5:00:00;29/11/2022 6:00:00&data=29/11/2022&id_turno_exportador=198397&id_turno_agenda=61298&id_transportadora=23213&id_turno_transp=68291&id_Cliente=40300&codigo_terminal=40300&codigo_empresa=1&codigo_exportador=24978&codigo_transportador=23213&codigo_turno=6&turno_transp_vg=68291','_blank','height=850,width=1000,top=(screen.width)?(screen.width-1000)/2 : 0,left=(screen.height)?(screen.height-700)/2 : 0,toolbar=no,location=no,directories=no,status=no,menubar=no,scrollbars=yes,resizable=no');" src="../App_Themes/SisLog/Images/add-document.gif" style="height:20px;border-width:0px;" title="Vaga disponΓvel." type="image"/>,`
A:
You don't need a dataframe
with open('test.csv', 'a') as f:
    for item in Vagas:
        on_click = item.get('onclick')
        f.write(on_click + '\n')

or with a dataframe:

temp = []
for item in Vagas:
    on_click = item.get('onclick')
    temp.append(on_click)

df = pd.DataFrame(temp)
df.to_csv('test.csv', mode='a', header=False, index=False)
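Equivalently, a one-line sketch with a list comprehension (same Vagas collection as above):

import pandas as pd

df = pd.DataFrame({'onclick': [v.get('onclick') for v in Vagas]})
df.to_csv('test.csv', index=False)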
| answers_scores: [0] | non_answers: [] | non_answers_scores: [] | tags: [dataframe, pandas, python, selenium, web_scraping] | stackoverflow_0074648508_dataframe_pandas_python_selenium_web_scraping.txt |
Q:
Wrong inflection points
I want to come up with a plot that shows the inflection points of a curve as follows:
I have a somewhat similar curve and I want to compute somehow the inflection points by using python. My curve looks as follows:
I am using the following code to compute the inflection points:
def find_inflection_points(df, n=1):
    raw = df['consumption'].to_numpy()
    infls = []
    dx = 0
    for i, x in enumerate(np.diff(raw, n)):
        if x >= dx and i > 0:
            infls.append(i * n)
        dx = x

    # plot results
    plt.plot(raw, label='Input Data')
    for i, infl in enumerate(infls, 1):
        plt.axvline(x=infl, color='k', label=f'Inflection Point {i}')
    plt.legend(bbox_to_anchor=(1.55, 1.0))

    return infls
However, I am getting the following plot:
I would expect fewer inflection points. Any idea of what I should change, or any other proposal for an implementation?
EDIT:
The data is the following:
raw = np.array([52.33,
50.154444444444444,
48.69222222222223,
46.49111111111111,
44.01444444444444,
43.30555555555556,
43.034444444444446,
40.62888888888889,
40.38111111111111,
39.07666666666667,
38.339999999999996,
36.41444444444445,
36.37888888888889,
36.17111111111111,
35.666666666666664,
33.827777777777776,
29.35222222222222,
28.60888888888889,
24.43,
22.078888888888887,
21.756666666666664,
20.345555555555556,
19.874444444444446,
19.763333333333335])
A:
So, strictly speaking, an inflection point is indeed a change of sign of curvature. Which, for a 3-times-differentiable function, is a point at which there is a change of sign of the 2nd derivative (the second derivative is 0, and the third derivative is not).
In your case, since the data are very discrete (only 24 data points, one per hour of the day, I surmise), it is quite tricky to talk about second and third derivatives. But if we give it a try, we can see that the points you are interested in are not inflection points: when the second derivative is 0, the first derivative is locally constant, which means the slope is constant. And the points you seem to be interested in are, on the contrary, points where there is a change of slope. The opposite of an inflection point!
They are, though, inflection points of the derivative, since they seem to be local extrema of the second derivative. So, at least if we had a curve continuous enough to dare speak of a 3rd derivative, one could say that the extrema of the second derivative are the zeros of the 3rd derivative. And the 3rd derivative is the 2nd derivative of the 1st derivative. So, you could say that you are interested in the inflection points of the 1st derivative, that is, of the slope.
See code below:
import numpy as np
from matplotlib import pyplot as plt
raw = np.array([52.33, 50.154444444444444, 48.69222222222223, 46.49111111111111, 44.01444444444444, 43.30555555555556, 43.034444444444446, 40.62888888888889, 40.38111111111111, 39.07666666666667, 38.339999999999996, 36.41444444444445, 36.37888888888889, 36.17111111111111, 35.666666666666664, 33.827777777777776, 29.35222222222222, 28.60888888888889, 24.43, 22.078888888888887, 21.756666666666664, 20.345555555555556, 19.874444444444446, 19.763333333333335])
x=np.arange(24)
rawb=np.roll(raw, 1)
rawa=np.roll(raw,-1)
der2=rawb+rawa-2*raw
der2[0]=der2[-1]=np.nan
der2abs=np.abs(der2)
offs = der2abs/(der2abs+np.roll(der2abs, -1))
yoffs = raw*(1-offs) + rawa*offs
chsign=(der2*np.roll(der2,-1))<0
s=1.5
mxder2 = (((der2>np.roll(der2,-1)) & (der2>np.roll(der2,1))) | ((der2<np.roll(der2,-1)) & (der2<np.roll(der2,1)))) & (der2abs>s)
fig, ax=plt.subplots()
ax2=ax.twinx()
ax.plot(raw)
ax2.plot([0]*24)
ax2.plot(der2)
ax.scatter(x[mxder2], raw[mxder2], 80, c='r', marker='*')
ax.scatter((x+offs)[chsign], yoffs[chsign], 30, c='g', marker='o')
plt.show()
It shows
The blue line is your data.
The orange line is the second derivative.
The green dots are the points where the second derivative is 0.
The red stars are the points where the second derivative is an extremum (with a minimum absolute value: we don't count local extrema in areas where the second derivative is almost flatlining at 0).
From what you have shown, you seem more interested in red stars!
The green dots are not just too numerous. Even filtering them (by some unknown criterion) would not do: they are all quite boring!
What makes the situation, and the vocabulary, ambiguous is the fact that we are talking about discrete points in reality. Inflection points are points where the second derivative is 0, that is, where the 1st derivative is an extremum. And you need, on the contrary, points where the second derivative is extreme. On such a discrete set of data, a point can be both. And maybe that was the case in your paper: points with a sharp change of slope are points where the second derivative is extremely positive but surrounded by two extremely negative second derivatives (or the opposite).
But, my point is, you seem more interested in red stars.
As for how I compute that:
der2 is the second derivative, using the discrete scheme y[t-dt] - 2*y[t] + y[t+dt].
der2abs is its absolute value.
offs is a barycenter weighted by successive values of der2abs. Where there is a change of sign of the 2nd derivative between index i and i+1, this gives an estimate of the exact position of the 0: offs is 0 if the 0 is at index i, 1 if it is at index i+1, 0.5 if it is midway between i and i+1, etc. offs makes no sense where there is no change of sign (and we won't use those values).
yoffs is the raw value interpolated with the same barycenter. So, yoffs is raw[i], raw[i+1], or the midpoint value in the 3 previous cases. Like offs, it makes sense only where there is a change of sign of der2.
chsign is precisely what says where those changes of sign occur.
So, we just have to plot yoffs[chsign] vs (x+offs)[chsign] to mark the points where the second derivative crosses 0.
The red stars are easier to compute:
We just find all the points whose second derivative is either bigger or smaller than both of its two neighbors, and filter those with a minimum-magnitude condition (|der2| must be at least 1.5).
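If SciPy is available, a sketch of the same "red star" detection with argrelextrema on a discrete second derivative (the 1.5 threshold mirrors the s used above):

import numpy as np
from scipy.signal import argrelextrema

der2 = np.gradient(np.gradient(raw))  # discrete second-derivative estimate
candidates = np.concatenate([argrelextrema(der2, np.greater)[0],
                             argrelextrema(der2, np.less)[0]])
stars = sorted(i for i in candidates if abs(der2[i]) > 1.5)
print(stars)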
And maybe that was the case in your paper: points with sharp change of slopes are points where second derivative are extremely positive, but is surronded by 2 extremely negative second derivative (or the opposite).\nBut, my point is, you seem more interested in red stars.\nAs for how I compute that:\nder2 is the second derivative, using discrete scheme y[-dt]-2y[0]+y[dt]\nder2abs is its absolute value.\noffs is a barycenter weighted by successive values of der2abs. Where there is a change of sign of the 2nd derivative, between index i and i+1, this account for an estimation of the exact position of the 0: offs is 0 if the 0 is at index i, 1 if it is at index 1, 0.5 if it is in the middle between i and i+1, etc. offs makes no sense where there is no change of sign (and we won't use those values).\nyoffs is the raw value using the same barycenter. So, yoffs is yoffs[i], yoffs[i+1], yoffs[i+0.5] in the 3 previous cases (what would be yoffs[i+0.5] were a legal thing). Like offs, makes sense only where there is a change of sign of der2.\nchsign is precisely what says where those change of sign occur.\nSo, we just have to plot yoffs[chsign] vs (x+offs)[chsign] to filter the cases where the second derivative are 0.\nThe red stars are easier to compute:\nWe just find all the points whose second derivative is either bigger or smaller than its 2 neighbor. And filter those to add a minimum value condition (|secondDerivative| must be at least 1.5)\n"
] | [
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074643194_numpy_python.txt |
Q:
How to fix ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056)?
I am trying to send an email with python, but it keeps saying ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056). Here is my code:
server = smtplib.SMTP_SSL('smtp.mail.com', 587)
server.login("[email protected]", "password")
server.sendmail(
"[email protected]",
"[email protected]",
"email text")
server.quit()
Do you know what is wrong?
A:
The port for SSL is 465, not 587; however, when I used SSL the mail arrived in the junk mail folder.
For me the thing that worked was to use TLS over regular SMTP instead of SMTP_SSL.
Note that this is a secure method as TLS is also a cryptographic protocol (like SSL).
import smtplib, ssl
port = 587 # For starttls
smtp_server = "smtp.gmail.com"
sender_email = "[email protected]"
receiver_email = "[email protected]"
password = input("Type your password and press enter:")
message = """\
Subject: Hi there
This message is sent from Python."""
context = ssl.create_default_context()
with smtplib.SMTP(smtp_server, port) as server:
server.ehlo() # Can be omitted
server.starttls(context=context)
server.ehlo() # Can be omitted
server.login(sender_email, password)
server.sendmail(sender_email, receiver_email, message)
provided thanks to the real python tutorial.
A:
This is how I solved the same problem:
import ssl
sender = "[email protected]"
password = "password123"
where_to_email = "[email protected]"
theme = "this is subject"
message = "this is your message, say hi to reciever"
sender_password = password
session = smtplib.SMTP_SSL('smtp.yandex.ru', 465)
session.login(sender, sender_password)
msg = f'From: {sender}\r\nTo: {where_to_email}\r\nContent-Type: text/plain; charset="utf-8"\r\nSubject: {theme}\r\n\r\n'
msg += message
session.sendmail(sender, where_to_email, msg.encode('utf8'))
session.quit()
A:
Google no longer lets you turn this feature off, meaning it just won't work no matter what you do. Yahoo appears to be the same way.
A:
Code to send email via python:
import smtplib , ssl
import getpass
server = smtplib.SMTP_SSL("smtp.gmail.com",465)
server.ehlo()
server.starttls
password = getpass.getpass() # to hide your password while typing (feels cool)
server.login("[email protected]", password)
server.sendmail("[email protected]" , "[email protected]" , "I am trying out python email through coding")
server.quit()
#turn off LESS SECURE APPS to make this work on your gmail
A:
Connect to a host on a given port.
port – If you are providing a host argument, then you need to specify a port where the SMTP server is listening. Usually, this port would be 25.
If the port is zero, the standard SMTP-over-SSL port (465) is used.
Try this
server = smtplib.SMTP_SSL('smtp.mail.com', 465)
| How to fix ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056)? | I am trying to send an email with python, but it keeps saying ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056). Here is my code:
server = smtplib.SMTP_SSL('smtp.mail.com', 587)
server.login("[email protected]", "password")
server.sendmail(
"[email protected]",
"[email protected]",
"email text")
server.quit()
Do you know what is wrong?
| [
"The port for SSL is 465 and not 587, however when I used SSL the mail arrived to the junk mail.\nFor me the thing that worked was to use TLS over regular SMTP instead of SMTP_SSL.\nNote that this is a secure method as TLS is also a cryptographic protocol (like SSL).\nimport smtplib, ssl\n\nport = 587 # For starttls\nsmtp_server = \"smtp.gmail.com\"\nsender_email = \"[email protected]\"\nreceiver_email = \"[email protected]\"\npassword = input(\"Type your password and press enter:\")\nmessage = \"\"\"\\\nSubject: Hi there\n\nThis message is sent from Python.\"\"\"\n\ncontext = ssl.create_default_context()\nwith smtplib.SMTP(smtp_server, port) as server:\n server.ehlo() # Can be omitted\n server.starttls(context=context)\n server.ehlo() # Can be omitted\n server.login(sender_email, password)\n server.sendmail(sender_email, receiver_email, message)\n\nprovided thanks to the real python tutorial.\n",
"this is how i solved same problem\nimport ssl\n\n\nsender = \"[email protected]\"\npassword = \"password123\"\n \nwhere_to_email = \"[email protected]\"\ntheme = \"this is subject\"\nmessage = \"this is your message, say hi to reciever\"\n \nsender_password = password\nsession = smtplib.SMTP_SSL('smtp.yandex.ru', 465)\nsession.login(sender, sender_password)\nmsg = f'From: {sender}\\r\\nTo: {where_to_email}\\r\\nContent-Type: text/plain; charset=\"utf-8\"\\r\\nSubject: {theme}\\r\\n\\r\\n'\nmsg += message\nsession.sendmail(sender, where_to_email, msg.encode('utf8'))\nsession.quit()\n\n",
"google no longer lets you turn this feature off, meaning it just wont work no matter what you do, yahoo appears to be the same way\n",
"Code to send email via python:\nimport smtplib , ssl\nimport getpass\nserver = smtplib.SMTP_SSL(\"smtp.gmail.com\",465)\nserver.ehlo()\nserver.starttls\npassword = getpass.getpass() # to hide your password while typing (feels cool)\nserver.login(\"[email protected]\", password)\nserver.sendmail(\"[email protected]\" , \"[email protected]\" , \"I am trying out python email through coding\")\nserver.quit()\n\n#turn off LESS SECURE APPS to make this work on your gmail\n",
"Connect to a host on a given port.\nport β If you are providing a host argument, then you need to specify a port, where the SMTP server is listening. Usually, this port would be 25.\nIf the port is zero, the standard SMTP-over-SSL port (465) is used.\nTry this\nserver = smtplib.SMTP_SSL('smtp.mail.com', 465)\n\n"
] | [
64,
1,
1,
0,
0
] | [] | [] | [
"python",
"smtplib",
"ssl"
] | stackoverflow_0057715289_python_smtplib_ssl.txt |
Q:
Using Python for data cleanup, looping over rows to find unique records
I have a file with about 2 million rows. On any given day, around 7,000 active rows should exist. The current SQL job is checking for active rows and inserting that data into a table (with a column for that specific date)
How can I use Python to iterate over the rows using a date index, and if there is a change, that row gets a new end_date (the default would be set to 2100-01-01)? Let's say there are 7000 unique people and only about 1000 changes; how can I use Python to create a new file with only 8000 rows instead of the 2 million?
An example would be:
Example
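For what it's worth, a minimal pandas sketch of that idea (assumptions: the file has a person_id column, a status_date column, and some value columns; all names here are hypothetical, since the example image is not available):
import pandas as pd

df = pd.read_csv("daily_snapshots.csv", parse_dates=["status_date"])  # hypothetical file/columns
df = df.sort_values(["person_id", "status_date"])

# Keep only the rows where something changed versus the previous row for that person
value_cols = [c for c in df.columns if c not in ("person_id", "status_date")]
prev = df.groupby("person_id")[value_cols].shift()
changed = df[(df[value_cols] != prev).any(axis=1)].copy()

# Each record stays open (end_date 2100-01-01) until a newer record supersedes it
changed["end_date"] = changed.groupby("person_id")["status_date"].shift(-1) - pd.Timedelta(days=1)
changed["end_date"] = changed["end_date"].fillna(pd.Timestamp("2100-01-01"))
changed.to_csv("deduplicated.csv", index=False)
With ~7000 people and ~1000 changes, changed would hold roughly 8000 rows instead of 2 million.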
| Using Python for data cleanup, looping over rows to find unique records | I have a file with about 2 million rows. On any given day, around 7,000 active rows should exist. The current SQL job is checking for active rows and inserting that data into a table (with a column for that specific date)
How can I use Python to iterate over the rows using a date index, and if there is a change, that row gets a new end_date (the default would be set to 2100-01-01)? Let's say there are 7000 unique people and only about 1000 changes; how can I use Python to create a new file with only 8000 rows instead of the 2 million?
An example would be:
Example
| [] | [] | [
"If the data is too big then you can read the CSV in chunks with the pandas library fairly easily.\nHere's a link to the documentation: https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html\nAnd a stack overflow that shows how to do it: https://stackoverflow.com/a/25962187/4719158\nTo filter and process just look into the pandas library, there are millions of examples online and your questions to do this kind of operation have most likely been answered here before, so just look it up.\n"
] | [
-1
] | [
"dataframe",
"python"
] | stackoverflow_0074648568_dataframe_python.txt |
Q:
Is it possible to iterate through dates on a PySpark data frame given a date range?
I am not sure if I am going the right way on this, but I am trying to see if it's possible to output multiple date objects, such as dates_1, dates_2, dates_3 (or even an array, if that works), that each span 7 days: dates_1 = ("2022-08-20", "2022-08-27"), dates_2 = ("2022-08-28", "2022-09-04"), dates_3 = ("2022-09-05", "2022-09-12") and so on. I have so far manually input the date ranges:
from pyspark.sql.functions import to_date
from pyspark.sql.functions import col,lit
import datetime
from pyspark.sql.types import*
dates = ("2022-09-18", "2022-09-25") # Manually input date range of 7 days
date_from, date_to = [to_date(lit(s)).cast(TimestampType()) for s in dates]
New = NewSpark.where((NewSpark.datetype >= date_from) & (NewSpark.datetype <= date_to))
New = New.orderBy(asc("datetype"))
New.show(10)
Output dataframe with datetype dates ranging by 7 days ("2022-09-18" - "2022-09-25")
+----------+-------------------+------------------+
| datetype| user_id| course_id|
+----------+-------------------+------------------+
|2022-09-20|-436631365164282159|193340000000013715|
|2022-09-20| 280876762290791430|193340000000016827|
|2022-09-20| 445049443233666362| \N|
|2022-09-20|-322720029403815673|193340000000016900|
|2022-09-20| 491178362284543871| \N|
|2022-09-20| 13092183588131224| \N|
|2022-09-20| \N| \N|
|2022-09-20|-367908093210940595| \N|
|2022-09-20| 72800797765911039|193340000000014279|
|2022-09-20| -14158652536236447|193340000000013898|
+----------+-------------------+------------------+
only showing top 10 rows
I have this whole PySpark Data frame(NewSpark):
+----------+-------------------+------------------+
| datetype| user_id| course_id|
+----------+-------------------+------------------+
|2022-09-15| 425465600693903129| \N|
|2022-09-15| 508873040735657962|193340000000014379|
|2022-09-15| 284347190388427414|193340000000014966|
|2022-09-15|-486512951318998054|193340000000018519|
|2022-09-15| 125529631549145496| \N|
|2022-09-15| 125529631549145496| \N|
|2022-09-15| 557089411379160781|193340000000016522|
|2022-09-15| 522439159932067624| \N|
|2022-09-15|-405858644089907758| \N|
|2022-09-15|-260152078780427420| \N|
+----------+-------------------+------------------+
only showing top 10 rows
that has the following dates:
+----------+------+
| datetype| count|
+----------+------+
|2022-09-21| 1498|
|2022-09-20|305696|
|2022-09-16| 1668|
|2022-09-15|998332|
|2022-09-11| 2345|
|2022-09-10|997655|
|2022-09-05| 6895|
|2022-09-04|993101|
|2022-09-03| 4|
|2022-08-28| 3093|
|2022-08-27|997945|
|2022-08-26|998962|
|2022-08-25| 2493|
|2022-08-24|997507|
|2022-08-19| 2613|
|2022-08-18|999524|
|2022-08-17|997863|
+----------+------+
Any way to do the above code without manually inputting the date range? Or is there a better approach to this?
Possibly use something like this?:
from datetime import timedelta, date
def daterange(date1, date2):
for n in range(int ((date2 - date1).days)):
yield date1 + timedelta(n*7)
start_dt = date(2022, 8, 20)
end_dt = date(2022, 12, 3)
for dt in daterange(start_dt, end_dt):
print(dt.strftime("%Y-%m-%d"))
A:
To output multiple date objects with ranges of 7 days, you can use a loop to generate the date ranges and then apply the same filtering logic to your DataFrame for each range. Here is an example of how you might do this:
from pyspark.sql import SparkSession
from pyspark.sql.functions import asc
import datetime
# Start and end dates for the date range
start_date = datetime.date(2022, 1, 1)
end_date = datetime.date(2022, 12, 31)
# Create a Spark session
spark = SparkSession.builder.appName("Date Iteration").getOrCreate()
# Build the list of week-start dates in plain Python, stepping 7 days at a time
# (spark.range only accepts integers, so it cannot generate a date range directly)
week_starts = []
current = start_date
while current <= end_date:
    week_starts.append(current)
    current += datetime.timedelta(days=7)
# Create a list to store the DataFrames for each date range
date_ranges = []
# Iterate through the week-start dates and filter the data for each range
for date_from in week_starts:
    # Calculate the end date for the range
    date_to = date_from + datetime.timedelta(days=7)
    # Filter the DataFrame for the current date range; the dates are compared
    # as ISO-format strings to match the datetype column
    df = NewSpark.where((NewSpark.datetype >= str(date_from)) & (NewSpark.datetype <= str(date_to))) \
        .orderBy(asc("datetype"))
    # Add the filtered DataFrame to the list
    date_ranges.append(df)
# Print the DataFrame for each date range
for i, df in enumerate(date_ranges):
print(f"Date range {i+1}:")
df.show()
In this example, we build the list of week-start dates with a plain Python loop (stepping 7 days at a time, since spark.range only accepts integers and cannot generate dates directly), then use the where() and orderBy() functions to filter the original DataFrame for each 7-day range and sort the results by the datetype column. Finally, we print the resulting DataFrame for each date range.
I hope this helps!
| Is it possible to iterate through dates on a PySpark data frame given a date range? | I am not sure if I am going the right way on this, but I am trying to see if it's possible to output multiple date objects, such as dates_1, dates_2, dates_3 (or even an array, if that works), that each span 7 days: dates_1 = ("2022-08-20", "2022-08-27"), dates_2 = ("2022-08-28", "2022-09-04"), dates_3 = ("2022-09-05", "2022-09-12") and so on. I have so far manually input the date ranges:
from pyspark.sql.functions import to_date
from pyspark.sql.functions import col,lit
import datetime
from pyspark.sql.types import*
dates = ("2022-09-18", "2022-09-25") # Manually input date range of 7 days
date_from, date_to = [to_date(lit(s)).cast(TimestampType()) for s in dates]
New = NewSpark.where((NewSpark.datetype >= date_from) & (NewSpark.datetype <= date_to))
New = New.orderBy(asc("datetype"))
New.show(10)
Output dataframe with datetype dates ranging by 7 days ("2022-09-18" - "2022-09-25")
+----------+-------------------+------------------+
| datetype| user_id| course_id|
+----------+-------------------+------------------+
|2022-09-20|-436631365164282159|193340000000013715|
|2022-09-20| 280876762290791430|193340000000016827|
|2022-09-20| 445049443233666362| \N|
|2022-09-20|-322720029403815673|193340000000016900|
|2022-09-20| 491178362284543871| \N|
|2022-09-20| 13092183588131224| \N|
|2022-09-20| \N| \N|
|2022-09-20|-367908093210940595| \N|
|2022-09-20| 72800797765911039|193340000000014279|
|2022-09-20| -14158652536236447|193340000000013898|
+----------+-------------------+------------------+
only showing top 10 rows
I have this whole PySpark Data frame(NewSpark):
+----------+-------------------+------------------+
| datetype| user_id| course_id|
+----------+-------------------+------------------+
|2022-09-15| 425465600693903129| \N|
|2022-09-15| 508873040735657962|193340000000014379|
|2022-09-15| 284347190388427414|193340000000014966|
|2022-09-15|-486512951318998054|193340000000018519|
|2022-09-15| 125529631549145496| \N|
|2022-09-15| 125529631549145496| \N|
|2022-09-15| 557089411379160781|193340000000016522|
|2022-09-15| 522439159932067624| \N|
|2022-09-15|-405858644089907758| \N|
|2022-09-15|-260152078780427420| \N|
+----------+-------------------+------------------+
only showing top 10 rows
that has the following dates:
+----------+------+
| datetype| count|
+----------+------+
|2022-09-21| 1498|
|2022-09-20|305696|
|2022-09-16| 1668|
|2022-09-15|998332|
|2022-09-11| 2345|
|2022-09-10|997655|
|2022-09-05| 6895|
|2022-09-04|993101|
|2022-09-03| 4|
|2022-08-28| 3093|
|2022-08-27|997945|
|2022-08-26|998962|
|2022-08-25| 2493|
|2022-08-24|997507|
|2022-08-19| 2613|
|2022-08-18|999524|
|2022-08-17|997863|
+----------+------+
Any way to do the above code without manually inputting the date range? Or is there a better approach to this?
Possibly use something like this?:
from datetime import timedelta, date
def daterange(date1, date2):
for n in range(int ((date2 - date1).days)):
yield date1 + timedelta(n*7)
start_dt = date(2022, 8, 20)
end_dt = date(2022, 12, 3)
for dt in daterange(start_dt, end_dt):
print(dt.strftime("%Y-%m-%d"))
| [
"To output multiple date objects with ranges of 7 days, you can use a loop to generate the date ranges and then apply the same filtering logic to your DataFrame for each range. Here is an example of how you might do this:\nfrom pyspark.sql.functions import col, to_date, asc\nfrom pyspark.sql.types import TimestampType\nimport datetime\n\n# Start and end dates for the date range\nstart_date = \"2022-01-01\"\nend_date = \"2022-12-31\"\n\n# Create a Spark session\nspark = SparkSession.builder.appName(\"Date Iteration\").getOrCreate()\n\n# Create a DataFrame with a single column containing the date range\ndate_range_df = spark.range(start_date, end_date) \\\n .withColumn(\"date\", to_date(col(\"id\")))\n\n# Create a list to store the DataFrames for each date range\ndate_ranges = []\n\n# Iterate through the dates in the DataFrame and filter the data for each range\nfor date in date_range_df.select(\"date\").collect():\n # Calculate the start and end dates for the range\n date_from = date.date\n date_to = date_from + datetime.timedelta(days=7)\n \n # Filter the DataFrame for the current date range\n df = NewSpark.where((NewSpark.datetype >= date_from) & (NewSpark.datetype <= date_to)) \\\n .orderBy(asc(\"datetype\"))\n \n # Add the filtered DataFrame to the list\n date_ranges.append(df)\n\n# Print the DataFrame for each date range\nfor i, df in enumerate(date_ranges):\n print(f\"Date range {i+1}:\")\n df.show()\n\nIn this example, we use a for loop to iterate through the dates in the DataFrame and calculate the start and end dates for each range of 7 days. We then use the where() and orderBy() functions to filter the original DataFrame for each range and sort the results by the datetype column. Finally, we print the resulting DataFrame for each date range.\nI hope this helps!\n"
] | [
0
] | [] | [] | [
"apache_spark_sql",
"datetime",
"pyspark",
"python"
] | stackoverflow_0074648827_apache_spark_sql_datetime_pyspark_python.txt |
Q:
GTK+: How can I add a Gtk.CheckButton to a Gtk.FileChooserDialog?
I want to present a folder chooser to users, and allow them to specify whether that folder should be processed recursively. I tried
do_recursion = False
def enable_recurse(widget, data=None):
nonlocal do_recursion
do_recursion = widget.get_active()
choose_file_dialog = Gtk.FileChooserDialog(use_header_bar=use_header_bar,
title=_(da_title), # _( invokes GLib.dgettext
action=Gtk.FileChooserAction.SELECT_FOLDER)
choose_file_dialog.add_button("_Cancel", Gtk.ResponseType.CANCEL)
choose_file_dialog.add_button("_OK", Gtk.ResponseType.OK)
check_box_1 = Gtk.CheckButton("_RECURSE")
check_box_1.connect("toggled", enable_recurse)
choose_file_dialog.add(check_box_1)
But that fails, and generates the warning:
Gtk-WARNING **: 14:03:31.139: Attempting to add a widget with type GtkCheckButton to a GtkFileChooserDialog, but as a GtkBin subclass a GtkFileChooserDialog can only contain one widget at a time; it already contains a widget of type GtkBox
What is a correct way to do this?
A:
As noted above, an answer is to use set_extra_widget instead of add
check_box_1 = Gtk.CheckButton(label="Recurse source directory")
check_box_1.connect("toggled", enable_recurse)
choose_file_dialog.set_extra_widget(check_box_1)
But I do not like the placement of the checkbox in the lower left corner, so I hope someone has a better answer.
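For completeness, a hedged alternative sketch (assuming GTK 3 / PyGObject, where Gtk.FileChooserDialog inherits get_content_area() from Gtk.Dialog): packing the checkbox into the dialog's content area places it below the file list instead of in the lower left corner.
check_box_1 = Gtk.CheckButton(label="Recurse source directory")
check_box_1.connect("toggled", enable_recurse)
content_area = choose_file_dialog.get_content_area()   # the Gtk.Box inside the dialog
content_area.pack_start(check_box_1, False, False, 0)
check_box_1.show()                                     # widgets added after creation must be shown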
| GTK+: How can I add a Gtk.CheckButton to a Gtk.FileChooserDialog? | I want to present a folder chooser to users, and allow them to specify whether that folder should be processed recursively. I tried
do_recursion = False
def enable_recurse(widget, data=None):
nonlocal do_recursion
do_recursion = widget.get_active()
choose_file_dialog = Gtk.FileChooserDialog(use_header_bar=use_header_bar,
title=_(da_title), # _( invokes GLib.dgettext
action=Gtk.FileChooserAction.SELECT_FOLDER)
choose_file_dialog.add_button("_Cancel", Gtk.ResponseType.CANCEL)
choose_file_dialog.add_button("_OK", Gtk.ResponseType.OK)
check_box_1 = Gtk.CheckButton("_RECURSE")
check_box_1.connect("toggled", enable_recurse)
choose_file_dialog.add(check_box_1)
But that fails, and generates the warning:
Gtk-WARNING **: 14:03:31.139: Attempting to add a widget with type GtkCheckButton to a GtkFileChooserDialog, but as a GtkBin subclass a GtkFileChooserDialog can only contain one widget at a time; it already contains a widget of type GtkBox
What is a correct way to do this?
| [
"As noted above, an answer is to use set_extra_widget instead of add\n check_box_1 = Gtk.CheckButton(label=\"Recurse source directory\")\n check_box_1.connect(\"toggled\", enable_recurse)\n choose_file_dialog.set_extra_widget(check_box_1)\n\nBut I do not like the placement of the checkbox in the lower left corner, so I hope someone has a better answer.\n"
] | [
0
] | [] | [] | [
"gtk",
"python"
] | stackoverflow_0074648765_gtk_python.txt |
Q:
crop frames of a video using opencv
I want to crop each of the frames of this video and save all the cropped images to use them as input for focus stacking software, but my approach:
cap = cv2.VideoCapture(r"C:\Users\HP\Downloads\VID_20221128_112556.mp4")
ret, frames = cap.read()
count=0
for img in frames:
stops.append(img)
cv2.imwrite("../stack/croppedframe%d.jpg" % count,img[500:1300,100:1000])
print(count)
count += 1
Throws this error:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp:801: error: (-215:Assertion failed) !_img.empty() in function 'cv::imwrite'
What can I do?
A:
cap.read() returns a single frame, so looping over the frames variable with a for loop iterates over that one image's rows (the y-axis). If you use a while loop and read the next frame on each iteration, the code will work. You can try the example below.
import cv2

cap = cv2.VideoCapture(r"C:\Users\HP\Downloads\VID_20221128_112556.mp4")
ret, frame = cap.read()
stops = []  # list to collect the full frames
count = 0
while(ret):
    stops.append(frame)
    cv2.imwrite("../stack/croppedframe%d.jpg" % count, frame[500:1300, 100:1000])
    print(count)
    count += 1
    ret, frame = cap.read()
| crop frames of a video using opencv | I want to crop each of the frames of this video and save all the cropped images to use them as input for focus stacking software, but my approach:
cap = cv2.VideoCapture(r"C:\Users\HP\Downloads\VID_20221128_112556.mp4")
ret, frames = cap.read()
count=0
for img in frames:
stops.append(img)
cv2.imwrite("../stack/croppedframe%d.jpg" % count,img[500:1300,100:1000])
print(count)
count += 1
Throws this error:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp:801: error: (-215:Assertion failed) !_img.empty() in function 'cv::imwrite'
What can I do?
| [
"If you take the frames variable with for loop it will give you the image on the y-axis.if you use while loop and read the next frame, the code will work. You can try the example below.\ncap = cv2.VideoCapture(r\"C:\\Users\\HP\\Downloads\\VID_20221128_112556.mp4\")\nret, frame = cap.read()\ncount=0\nwhile(ret):\n stops.append(frame)\n cv2.imwrite(\"../stack/croppedframe%d.jpg\" % count,frame[500:1300,100:1000])\n print(count)\n count += 1\n ret, frame = cap.read()\n\n"
] | [
3
] | [] | [] | [
"image_processing",
"opencv",
"python",
"video_processing"
] | stackoverflow_0074648745_image_processing_opencv_python_video_processing.txt |
Q:
How do you calculate a satellite's position in GCRF from an RA/DEC measurement in Skyfield?
I have a measurement of the RA and Dec for an Earth orbiting satellite, as measured from a sensor on the Earth's surface. I'm trying to calculate the satellite's position vector in the GCRF reference frame (so a vector from the centre of the earth).
Since the object is in earth orbit, I can't assume that the RA/Dec is the same from the centre of the earth as it is from the surface. I therefore can't use this example: https://rhodesmill.org/skyfield/examples.html#what-latitude-and-longitude-is-beneath-this-right-ascension-and-declination
I have tried the following code.
from skyfield.api import load, wgs84, utc
from skyfield.positionlib import position_of_radec
from skyfield.units import Distance
from datetime import datetime
ra = 90
dec = 5
sensorlat = -30
sensorlon = 150
sensoralt = 1000
range = 37000
timestring = "2022-11-18T00:00:00.0Z"
distance = Distance(km=range)
time = datetime.strptime(timestring,'%Y-%m-%dT%H:%M:%S.%fZ')
time = time.replace(tzinfo=utc)
ts = load.timescale()
t = ts.from_datetime(time)
eph = load('de421.bsp')
earth = eph['earth']
sensor = wgs84.latlon(sensorlat,sensorlon,sensoralt)
satellite = position_of_radec(ra/15,dec,distance.au,t=t,center=sensor)
It appears that the sensor is represented as a Geocentric vector, while the satellite position is represented by a Geometric vector, and I can't figure out how to combine the two into a geocentric vector for the satellite.
satellite_icrf = sensor.at(t) + satellite
# Gives the following exception:
# Exception has occurred: TypeError
# unsupported operand type(s) for +: 'Geocentric' and 'Geometric'
I also tried simply changing the centre of the satellite Geometric vector, but that didn't appear to change the vector in any way.
print(satellite.position.km) # Prints [2.25697530e-12 3.68592038e+04 3.22476248e+03]
satellite.center = earth
print(satellite.position.km) # Still prints [2.25697530e-12 3.68592038e+04 3.22476248e+03]
Can someone explain how to convert from this Geometric vector into a GCRF vector?
A:
I think I've found a workaround.
sensor.at(t) + satellite
does not work, but it looks like:
-(-sensor.at(t)-satellite) does work, and gives the required GCRF vector for the satellite.
This seems a bit hacky though, surely there's a more 'correct' method. I won't mark this as the accepted answer just yet, but I will if no one else posts a better method.
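Applying the workaround to the code from the question looks like this (a sketch; satellite_gcrf is just a name chosen here):
satellite_gcrf = -(-sensor.at(t) - satellite)
print(satellite_gcrf.position.km)  # GCRF position vector of the satellite, in km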
| How do you calculate a satellite's position in GCRF from an RA/DEC measurement in Skyfield? | I have a measurement of the RA and Dec for an Earth orbiting satellite, as measured from a sensor on the Earth's surface. I'm trying to calculate the satellite's position vector in the GCRF reference frame (so a vector from the centre of the earth).
Since the object is in earth orbit, I can't assume that the RA/Dec is the same from the centre of the earth as it is from the surface. I therefore can't use this example: https://rhodesmill.org/skyfield/examples.html#what-latitude-and-longitude-is-beneath-this-right-ascension-and-declination
I have tried the following code.
from skyfield.api import load, wgs84, utc
from skyfield.positionlib import position_of_radec
from skyfield.units import Distance
from datetime import datetime
ra = 90
dec = 5
sensorlat = -30
sensorlon = 150
sensoralt = 1000
range = 37000
timestring = "2022-11-18T00:00:00.0Z"
distance = Distance(km=range)
time = datetime.strptime(timestring,'%Y-%m-%dT%H:%M:%S.%fZ')
time = time.replace(tzinfo=utc)
ts = load.timescale()
t = ts.from_datetime(time)
eph = load('de421.bsp')
earth = eph['earth']
sensor = wgs84.latlon(sensorlat,sensorlon,sensoralt)
satellite = position_of_radec(ra/15,dec,distance.au,t=t,center=sensor)
It appears that the sensor is represented as a Geocentric vector, while the satellite position is represented by a Geometric vector, and I can't figure out how to combine the two into a geocentric vector for the satellite.
satellite_icrf = sensor.at(t) + satellite
# Gives the following exception:
# Exception has occurred: TypeError
# unsupported operand type(s) for +: 'Geocentric' and 'Geometric'
I also tried simply changing the centre of the satellite Geometric vector, but that didn't appear to change the vector in any way.
print(satellite.position.km) # Prints [2.25697530e-12 3.68592038e+04 3.22476248e+03]
satellite.center = earth
print(satellite.position.km) # Still prints [2.25697530e-12 3.68592038e+04 3.22476248e+03]
Can someone explain how to convert from this Geometric vector into a GCRF vector?
| [
"I think I've found a workaround.\nsensor.at(t) + satellite\ndoes not work, but it looks like:\n-(-sensor.at(t)-satellite) does work, and gives the required GCRF vector for the satellite.\nThis seems a bit hacky though, surely there's a more 'correct' method. I won't mark this as the accepted answer just yet, but I will if no one else posts a better method.\n"
] | [
2
] | [] | [] | [
"python",
"skyfield"
] | stackoverflow_0074646447_python_skyfield.txt |
Q:
Jupyter - widget to play audio with playhead on graph
Is there any Jupyter widget for visualizing audio synced with a playhead on a time-series plot?
I would like to visualize data derived from an audio sample (e.g. spectrogram and various computed signals), listening to the audio sample while seeing the playhead move across the plots.
I found this old gist https://gist.github.com/deeplycloudy/2152643 which uses pyaudio on the Python backend to play the sound. Any good solutions out there that are a bit less hacky, e.g. ideally entirely JavaScript-based and with playback running fully in the browser?
A:
You can now :). It took me about 10 minutes to put together a demo using a Jupyter proxy widget to load a wavesurfer control into a notebook. It works in Chrome, but I haven't tested it anywhere else. It should work anywhere wavesurfer and Jupyter work.
Here is a screenshot
See the pastable text from the notebook here:
https://github.com/AaronWatters/jp_doodle/blob/master/notebooks/misc/wavesurfer%20demo.ipynb
For information on jp_proxy widgets look here:
https://github.com/AaronWatters/jp_proxy_widget
A:
In the time since I posted this question, a few new solutions have emerged:
Scott Condron: Building Tools to Interact With Your Data
Building Tools to Interact With Your Data (2020-10-21-interactive-audio-plots-in-jupyter-notebook.ipynb)
These solutions use HoloViews, have a playhead linked between the audio and the plots, and can run fully in the browser.
| Jupyter - widget to play audio with playhead on graph | Is there any Jupyter widget for visualizing audio synced with a playhead on a time-series plot?
I would like to visualize data derived from an audio sample (e.g. spectrogram and various computed signals), listening to the audio sample while seeing the playhead move across the plots.
I found this old gist https://gist.github.com/deeplycloudy/2152643 which uses pyaudio on the Python backend to play the sound. Any good solutions out there that are a bit less hacky, e.g. ideally entirely JavaScript-based and with playback running fully in the browser?
| [
"You can now :). It took me about 10 minutes to put together a demo using Jupyter proxy widget to load a wavesurfer control into a notebook. It works in Chrome but I haven't tested it anywhere else. It should work anywhere wavesurfer and Jupyter work.\nHere is a screenshot\n\nSee the pastable text from the notebook here:\nhttps://github.com/AaronWatters/jp_doodle/blob/master/notebooks/misc/wavesurfer%20demo.ipynb\nFor information on jp_proxy widgets look here:\nhttps://github.com/AaronWatters/jp_proxy_widget\n",
"In the time since I posted this question, a few new solutions have emerged:\n\nScott Condron: Building Tools to Interact With Your Data\nBuilding Tools to Interact With Your Data 2020-10-21-interactive-audio-plots-in-jupyter-notebook.ipynb\n\nThese solutions use holoview, have a playhead linked between the audio and the plots, and can run fully on the browser.\n"
] | [
4,
0
] | [] | [] | [
"audio",
"jupyter",
"python",
"signal_processing",
"visualization"
] | stackoverflow_0059641390_audio_jupyter_python_signal_processing_visualization.txt |
Q:
Force array of arrays when using numpy genfromtxt
Sample text input file:
35.6 45.1
21.2 34.1
30.3 29.3
When you use numpy.genfromtxt(input_file, delimiter=' '), it loads the text file as an array of arrays
[[35.6 45.1]
[21.2 34.1]
[30.3 29.3]]
If there is only one entry or row of data in the input file, then it loads the input file as a 1d array
[35.6 45.1] instead of [[35.6 45.1]]
How do you force numpy to always result in a 2d array so that the resulting data structure stays consistent whether the input is 1 row or 10 rows?
A:
Use the ndmin argument (new in 1.23.0):
numpy.genfromtxt(input_file, ndmin=2)
If you're on a version before 1.23.0, you'll have to do something else. If you don't have missing data, you can use numpy.loadtxt, which supports ndmin since 1.16.0:
numpy.loadtxt(input_file, ndmin=2)
Or if you know that your input will always have multiple columns, you can add the extra dimension with numpy.atleast_2d:
numpy.atleast_2d(numpy.genfromtxt(input_file))
(If you don't know how many rows or columns your input will have, it's impossible to distinguish the genfromtxt output for a single row or a single column, so atleast_2d won't help.)
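A quick sanity check of the single-row case (a sketch using an in-memory file so it is self-contained; requires NumPy >= 1.23.0 for the ndmin argument):
import io
import numpy as np

one_row = io.StringIO("35.6 45.1")
print(np.genfromtxt(one_row, ndmin=2))  # [[35.6 45.1]] -- still 2-D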
| Force array of arrays when using numpy genfromtxt | Sample text input file:
35.6 45.1
21.2 34.1
30.3 29.3
When you use numpy.genfromtxt(input_file, delimiter=' '), it loads the text file as an array of arrays
[[35.6 45.1]
[21.2 34.1]
[30.3 29.3]]
If there is only one entry or row of data in the input file, then it loads the input file as a 1d array
[35.6 45.1] instead of [[35.6 45.1]]
How do you force numpy to always result in a 2d array so that the resulting data structure stays consistent whether the input is 1 row or 10 rows?
| [
"Use the ndmin argument (new in 1.23.0):\nnumpy.genfromtxt(input_file, ndmin=2)\n\nIf you're on a version before 1.23.0, you'll have to do something else. If you don't have missing data, you can use numpy.loadtxt, which supports ndmin since 1.16.0:\nnumpy.loadtxt(input_file, ndmin=2)\n\nOr if you know that your input will always have multiple columns, you can add the extra dimension with numpy.atleast_2d:\nnumpy.atleast_2d(numpy.genfromtxt(input_file))\n\n(If you don't know how many rows or columns your input will have, it's impossible to distinguish the genfromtxt output for a single row or a single column, so atleast_2d won't help.)\n"
] | [
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074648907_numpy_python.txt |
Q:
Python how to only accept numbers as an input
mark= eval(raw_input("What is your mark?"))
try:
int(mark)
except ValueError:
try:
float(mark)
except ValueError:
print "This is not a number"
So I need to make a python program that looks at your mark and gives you varying responses depending on what it is.
However I also need to add a way to stop random text which isn't numbers from being entered into the program.
I thought I had found a solution to this but it won't make it past the first statement to the failsafe code that is meant to catch it if it was anything but numbers.
So pretty much what happens is if I enter hello instead of a number it fails at the first line and gives me back an error that says exceptions:NameError: name 'happy' is not defined.
How can I change it so that it can make it to the code that gives them the print statement that they need to enter a number?
A:
remove eval and your code is correct:
mark = raw_input("What is your mark?")
try:
int(mark)
except ValueError:
try:
float(mark)
except ValueError:
print("This is not a number")
Just checking for a float will work fine:
try:
float(mark)
except ValueError:
print("This is not a number")
A:
It is easier to declare a global value than to pass an argument; in my case it also gives an error.
def getInput():
global value
value = input()
while not value.isnumeric():
print("enter a number")
value = input("enter again")
return int(value)
getInput()
print(value)
#can't comment :)
A:
You can simply cast to float or int and catch the exception (if any). You're using eval, which is considered poor practice, and you add a lot of redundant statements.
try:
mark= float(raw_input("What is your mark?"))
except ValueError:
print "This is not a number"
"Why not use eval?" you ask, well... Try this input from the user: [1 for i in range (100000000)]
A:
import re
pattern = re.compile(r"^[0-9][0-9]*\.?[0-9]*$")  # raw string; the trailing $ rejects inputs like "12abc"
status = re.search(pattern, raw_input("Enter the Mark : "))
if not status:
    print "Invalid Input"
A:
You can use the string method called isnumeric(); it is simpler than the try/except method (note that isnumeric() only accepts non-negative integers, so use try/except if floats must be allowed). See the code below.
def getInput(prompt):
value = input(prompt)
while not value.isnumeric():
print("enter a number")
value = input("enter again")
return int(value)
A:
Might be a bit too late, but you can do this:
from os import system
from time import sleep
while True:
try:
numb = float(input("Enter number>>>"))
break
except ValueError:
system("cls")
print("Error! Numbers only!")
sleep(1)
system("cls")
but to make it within a number range you can do this:
from os import system
from time import sleep
while True:
try:
numb = float(input("Enter number within 1-5>>>"))
if numb > 5 or numb < 1:
raise ValueError
else:
break
except ValueError:
system("cls")
print("Error! Numbers only!")
sleep(1)
system("cls")
| Python how to only accept numbers as an input | mark= eval(raw_input("What is your mark?"))
try:
int(mark)
except ValueError:
try:
float(mark)
except ValueError:
print "This is not a number"
So I need to make a python program that looks at your mark and gives you varying responses depending on what it is.
However I also need to add a way to stop random text which isn't numbers from being entered into the program.
I thought I had found a solution to this but it won't make it past the first statement to the failsafe code that is meant to catch it if it was anything but numbers.
So pretty much what happens is if I enter hello instead of a number it fails at the first line and gives me back an error that says exceptions:NameError: name 'happy' is not defined.
How can I change it so that it can make it to the code that gives them the print statement that they need to enter a number?
| [
"remove eval and your code is correct:\nmark = raw_input(\"What is your mark?\")\ntry:\n int(mark)\nexcept ValueError:\n try:\n float(mark)\n except ValueError:\n print(\"This is not a number\")\n\nJust checking for a float will work fine:\ntry:\n float(mark)\nexcept ValueError:\n print(\"This is not a number\")\n\n",
"Is it easier to declare a global value than to pass an argument,\nIn my case it's also gives an error.\ndef getInput():\n global value\n value = input()\n while not value.isnumeric():\n print(\"enter a number\")\n value = input(\"enter again\")\n return int(value)\n\ngetInput()\nprint(value)\n\n#can't comment :)\n",
"You can simply cae to float or int and catch the exception (if any). Youre using eval which is considered poor and you add a lot of redundant statements.\ntry:\n mark= float(raw_input(\"What is your mark?\"))\nexcept ValueError:\n print \"This is not a number\"\n\n\"Why not use eval?\" you ask, well... Try this input from the user: [1 for i in range (100000000)]\n",
"import re\n\npattern = re.compile(\"^[0-9][0-9]\\*\\\\.?[0-9]*\")\n\nstatus = re.search(pattern, raw_input(\"Enter the Mark : \"))\n\nif not status:\n\n print \"Invalid Input\"\n\n",
"you can use the String object method called isnumeric. it's more efficient than try- except method. see the below code.\ndef getInput(prompt):\n value = input(prompt)\n while not value.isnumeric():\n print(\"enter a number\")\n value = input(\"enter again\")\n return int(value)\n\n",
"Might be a bit too late but to do this you can do this:\nfrom os import system\nfrom time import sleep\nwhile True:\n try:\n numb = float(input(\"Enter number>>>\"))\n break\n\n except ValueError:\n system(\"cls\")\n print(\"Error! Numbers only!\")\n sleep(1)\n system(\"cls\")\n\nbut to make it within a number range you can do this:\nfrom os import system\nfrom time import sleep\nwhile True:\n try:\n numb = float(input(\"Enter number within 1-5>>>\"))\n if numb > 5 or numb < 1:\n raise ValueError\n else:\n break\n\n except ValueError:\n system(\"cls\")\n print(\"Error! Numbers only!\")\n sleep(1)\n system(\"cls\")\n\n"
] | [
10,
4,
0,
0,
0,
0
] | [
"Actually if you going to use eval() you have to define more things.\nacceptables=[1,2,3,4,5,6,7,8,9,0,\"+\",\"*\",\"/\",\"-\"]\ntry:\n mark= eval(int(raw_input(\"What is your mark?\")))\nexcept ValueError:\n print (\"It's not a number!\")\nif mark not in acceptables:\n print (\"You cant do anything but arithmetical operations!\")\n\nIt's a basically control mechanism for eval().\n"
] | [
-1
] | [
"python",
"python_2.x"
] | stackoverflow_0027516093_python_python_2.x.txt |
Q:
How do I install pandas into Visual Studio Code?
I want to read an Excel CSV file, and after researching, I realized I need to import pandas as pd. Is there a way to install it in Visual Studio Code? I have tried typing import pandas as pd, but it shows a red line. I'm still new to Python.
Thank you
A:
As pandas is a Python library, you can install it using pip, Python's package management system. If you are using Python 2 >=2.7.9 or Python 3 >=3.4, pip is already installed with your Python. Ensure that Python has been added to PATH.
Then, to install pandas, just simply do:
$ pip install pandas
A:
I think the above answers are very well put already.
Just to add to that:
Windows:
1. Open cmd
2. Type python -m pip install pandas
3. Restart Visual Studio Code
Linux or macOS:
1. Open a terminal
2. Type pip install pandas
3. Restart Visual Studio Code
A:
you can install using pip
pip install pandas
A:
For anyone else in a similar situation, I'd recommend following along with this VS Code official tutorial.
It guides you to use Conda instead of Pip, and setup a Python environment, along with installing various packages like Pandas, Jupyter, etc.
https://code.visualstudio.com/docs/datascience/data-science-tutorial
For example, after installing the Python extension for VSCode and Miniconda or Anaconda:
conda create -n myenv python=3.9 pandas
A:
I also had the same question. As a newbie, I did not understand the answer. Perhaps these notes will help others in the same boat.
You need to type this into command prompt (not visual studio or python): pip install pandas
Before you do that, you must "Ensure that Python has been added to PATH". This did not make sense to me, but there are pages on this if you Google.
Also useful to know: CMD and Terminal = Command Prompt (please correct me if that's not true).
Hopefully this helps others. Thanks
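Once the install finishes, a quick sanity check that the interpreter VS Code is using can actually see pandas (a sketch):
import pandas as pd
print(pd.__version__)  # no red line and a version string means the install worked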
A:
In the terminal in VS Code, check and make sure Python is installed:
py -3 --version
Then you can install libraries with:
py -m pip install packagename
This was a simple solution I came up with since the others weren't working on my system.
Hopefully this helps!
| How do I install pandas into Visual Studio Code? | I want to read an Excel CSV file, and after researching, I realized I need to import pandas as pd. Is there a way to install it in Visual Studio Code? I have tried typing import pandas as pd, but it shows a red line. I'm still new to Python.
Thank you
| [
"As pandas is a Python library, you can install it using pip - the Python's package management system. If you are using Python 2 >=2.7.9 or Python 3 >=3.4, pip is already installed with your Python. Ensure that Python has been added to PATH\nThen, to install pandas, just simply do:\n$ pip install pandas\n\n",
"I think the above answers are very well put already.\njust to add to that.\nWindows:\n1.open cmd\n2.type python -m pip install pandas\n3.restart your visual studio code\nLinux or macOS:\n1.open terminal\n2.type pip install pandas\n3.restart your visual studio code\n",
"you can install using pip \npip install pandas\n",
"For anyone else in a similar situation, I'd recommend following along with this VS Code official tutorial.\nIt guides you to use Conda instead of Pip, and setup a Python environment, along with installing various packages like Pandas, Jupyter, etc.\nhttps://code.visualstudio.com/docs/datascience/data-science-tutorial\nFor example, after installing the Python extension for VSCode and Miniconda or Anaconda:\nconda create -n myenv python=3.9 pandas\n\n",
"I also had the same question. As a newbie, I did not understand the answer. Perhaps these notes will help others in the same boat.\nYou need to type this into command prompt (not visual studio or python): pip install pandas\nBefore you do that, you must \"Ensure that Python has been added to PATH\". This did not make sense to me, but there are pages on this if you Google.\nAlso useful to know: CMD and Terminal = Command Prompt (please correct me if that's not true).\nHopefully this helps others. Thanks\n",
"In terminal on vscode, check and make sure python is installed:\npy -3 --version\nThen you can install libraries with:\npy -m pip install packagename\nThis was a simple solution I came up with since the others weren't working on my system.\nHopefully this helps!\n"
] | [
6,
6,
3,
3,
0,
0
] | [
"You need to start off by installing Anaconda in order to create an environment for Pandas; you can manage this environment with Anaconda.\nGo to your terminal then run conda create -n myenv python=3.9 pandas jupyter seaborn scikit-learn keras tensorflow. It will create environments for all of the libraries mentioned above.\nPS : this is an old post, please check python's latest version\nAfter that click on your Kernel1 (top right) and chose the environment that is associated with Anaconda\n"
] | [
-1
] | [
"python",
"visual_studio_code"
] | stackoverflow_0067946868_python_visual_studio_code.txt |
Q:
Password Generator performance : Python vs Javascript (Google apps script)
I created a random code generator script via Google Apps Script. My goal is to generate 6000 unique random codes (in a spreadsheet) as fast as possible.
The following JavaScript code fails under Google Sheets + Apps Script (it takes too long to execute), while the same code in Python generates 20,000 random codes in less than 1 second... I'm not a JS ninja; do you have any idea how to optimize the JS code below?
Code JS
function main(nbre_car,nbre_pass,number,letter_maj,letter_min,spec_car){
var nbre_car = 6;
var nbre_pass = 6000;
var number = true;
var letter_maj = false;
var letter_min = false;
var spec_car = false;
var prefixe="FOULE";
return generate_password(nbre_car,nbre_pass,number,letter_maj,letter_min,spec_car,prefixe)
}
function combinaison_possible(char_number,lenght_possible_char){
combinaison_nbre=Math.pow(lenght_possible_char,char_number)
return combinaison_nbre
}
function generate_password(nbre_car,nbre_pass,number=true,letter_maj=false,letter_min=false,spec_car=false,prefixe="") {
if (Number.isInteger(nbre_car)&&Number.isInteger(nbre_pass)){
}
else{
return "Veuillez rentrer un nombre entier pour les champs en bleu"
}
var nbre_car = nbre_car || 10;
var nbre_pass = nbre_pass || 3;
var pass_number="123456789";
var pass_letter_maj="ABCDEFGHIJKLMNPQRSTUVWXYZ";
var pass_letter_min="abcdefghijklmnpqrstuvwxyz"
  var pass_spec_car="'(-è_çà)=:;,!."
// Check entry type
  // Create an empty map which will contain all passwords
var col = new Map([]);
var prefixe=prefixe;
var list_char='';
list_char= letter_maj == true ? list_char+pass_letter_maj : list_char
list_char= number == true ? list_char+pass_number : list_char
list_char= letter_min == true ? list_char+pass_letter_min : list_char
list_char= spec_car == true ? list_char+pass_spec_car : list_char
  // Check the possible combinations between the requested number of password characters and the list of available characters
if (combinaison_possible(nbre_car,list_char.length)>=nbre_pass) {
    // Create the unique passwords
while(col.size===0||nbre_pass>col.size) {
Logger.log("col.size : "+col.size)
Logger.log("nbre_pass : "+nbre_pass)
search_new_pass=true;
while (search_new_pass==true){
pass=create_one_password(nbre_car,list_char,prefixe)
Logger.log('nom du password : '+pass)
if (verify_unique(col,pass)!=true)
col.set({}, pass);
Logger.log("valeur de col : "+col)
search_new_pass=false;
}
}
}
else{
col = [];
    col.push("Vous avez demander trop de mots de passe, cela va créer des doublons, Veuillez diminuer le nombre de mots de passe à afficher");
}
final_values=[...col.values()];
//Logger.log('valeur final de col : '+final_values)
console.log(Array.from(col.values()));
return Array.from(col.values());
}
function create_one_password(nbre_car,list_char,prefixe) {
var nbre_car = nbre_car;
s = '', r = list_char;
for (var i=0; i < nbre_car; i++) {
s += r.charAt(Math.floor(Math.random()*r.length));
}
return prefixe+s;
}
Code Python
import random
def combinaison_possible(char_number,lenght_possible_char):
combinaison_nbre=pow(lenght_possible_char,char_number)
return combinaison_nbre
def generate_password(nbre_car,nbre_pass,number=True,letter_maj=True,letter_min=True,spec_car=True,prefixe="FOULE") :
if(not isinstance(nbre_car,int) and isinstance(not nbre_pass,int)) :
print( "Veuillez rentrer un nombre entier pour les champs en bleu")
nbre_car = nbre_car
nbre_pass = nbre_pass
pass_number="123456789"
pass_letter_maj="ABCDEFGHIJKLMNPQRSTUVWXYZ"
pass_letter_min="abcdefghijklmnpqrstuvwxyz"
pass_spec_car="!@#$%^&*()_+"
prefixe=prefixe
list_char=''
col={}
longueur_col=len(col)
list_char= list_char+pass_letter_maj if letter_maj else list_char
list_char= list_char+pass_letter_min if letter_min else list_char
list_char= list_char+pass_number if number else list_char
list_char= list_char+pass_spec_car if spec_car else list_char
if (combinaison_possible(nbre_car,len(list_char))>=nbre_pass) :
while(len(col)==0 or nbre_pass>len(col)):
longueur_col=len(col)
search_new_pass=True
while (search_new_pass==True):
pass_word = prefixe+''.join(random.choice(list_char) for i in range(nbre_car))
if pass_word not in col:
col[longueur_col]=pass_word
search_new_pass=False
print (col)
else :
print("Le nombre de mot de passe à générer est trop important par rapport au nombre de caractères possible")
generate_password(6,20000)
A:
Performance-wise, the main difference between the Apps Script and Python versions is that the Apps Script code logs about 20,000 values in the Apps Script console, which is slow, while the Python code outputs 1 value.
The Apps Script code has several syntactic and semantic errors, including:
verify_unique() is undefined
col.set({}, pass) does not make sense; perhaps use an Array instead of a Map, find if a value is already in the list with col.includes(pass), insert values with col.push(pass), and use col instead of Array.from(col.values()) to retrieve the values
var prefixe = prefixe; is superfluous
See Apps Script at Stack Overflow and Clean Code JavaScript.
A:
I think the code could be quite a bit simpler. I did not study your code extensively, but this would be my approach to solving the problem. As you can see, it takes less than one second to generate 20,000 passwords.
What actually takes a long time is the duplicate check.
Aside from that, be careful when generating passwords without a cryptographically secure random algorithm.
Please have a look at this for how to use Crypto.getRandomValues().
const CHARACTER_POOL =
"123456789ABCDEFGHIJKLMNPQRSTUVWXYZabcdefghijklmnpqrstuvwxyz'(-Γ¨_Γ§Γ )=:;,!.";
const PASSWORDS_TO_GENERATE = 20000;
const PASSWORD_LENGTH = 6;
const PREFIX = "";
const createPassword = () => {
let password = "";
for (let i = 0; i < PASSWORD_LENGTH; i++) {
// this is not secure
password += CHARACTER_POOL.charAt(
Math.floor(Math.random() * CHARACTER_POOL.length)
);
}
return `${PREFIX}${password}`;
};
const generatePassword = () => {
const passwords = [];
while (passwords.length < PASSWORDS_TO_GENERATE) {
const password = createPassword();
if (!passwords.includes(password)) {
passwords.push(password);
}
}
return passwords;
};
const start = new Date().getTime();
const passwords = generatePassword();
console.log(`It took ${(new Date().getTime() - start) / 1000} seconds to generate ${passwords.length} passwords`);
console.log(passwords);
A:
That's a lot of code for what seems a pretty straightforward problem. I didn't look closely at your version, but here's how I might handle the problem. It creates 20000 in about 20 milliseconds on my mid-level machine.
const genPasswords = (chars) => (n, length = 6, pre = '') => Array .from (
{length: n},
() => pre + Array.from({length}, () => chars[~~(Math.random() * chars.length)]) .join('')
)
const pwdGen = genPasswords ("123456789ABCDEFGHIJKLMNPQRSTUVWXYZabcdefghijklmnpqrstuvwxyz'(-è_çà)=:;,!.")
console.time('generate 20000')
const res = pwdGen (20000, 6, 'FOULE')
console.timeEnd('generate 20000')
console .log (res)
.as-console-wrapper {max-height: 100% !important; top: 0}
I store the characters to use in a closure, returning a function that takes the number to generate, their length, and a prefix.
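For comparison, a rough Python rendering of the same closure pattern (a sketch with the same no-uniqueness-check behaviour as the JS version; not code from the question):
import random

def gen_passwords(chars):
    # the character pool is captured in a closure, as in the JS version above
    def generate(n, length=6, pre=''):
        return [pre + ''.join(random.choice(chars) for _ in range(length))
                for _ in range(n)]
    return generate

pwd_gen = gen_passwords("123456789ABCDEFGHIJKLMNPQRSTUVWXYZabcdefghijklmnpqrstuvwxyz")
res = pwd_gen(20000, 6, 'FOULE')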
| Password Generator performance : Python vs Javascript (Google apps script) | I created a random code generator script via Google apps script. My goal is to generate 6000 unique random codes (in spreadsheet) as fast as possible.
The following javascript code crashes with Google spreadsheet + apps script --> too long to execute and the same code under python generates 20,000 random codes in less than 1 second... I'm not a JS ninja, do you have any idea to optimize the JS code below ?
Code JS
function main(nbre_car,nbre_pass,number,letter_maj,letter_min,spec_car){
var nbre_car = 6;
var nbre_pass = 6000;
var number = true;
var letter_maj = false;
var letter_min = false;
var spec_car = false;
var prefixe="FOULE";
return generate_password(nbre_car,nbre_pass,number,letter_maj,letter_min,spec_car,prefixe)
}
function combinaison_possible(char_number,lenght_possible_char){
combinaison_nbre=Math.pow(lenght_possible_char,char_number)
return combinaison_nbre
}
function generate_password(nbre_car,nbre_pass,number=true,letter_maj=false,letter_min=false,spec_car=false,prefixe="") {
if (Number.isInteger(nbre_car)&&Number.isInteger(nbre_pass)){
}
else{
return "Veuillez rentrer un nombre entier pour les champs en bleu"
}
var nbre_car = nbre_car || 10;
var nbre_pass = nbre_pass || 3;
var pass_number="123456789";
var pass_letter_maj="ABCDEFGHIJKLMNPQRSTUVWXYZ";
var pass_letter_min="abcdefghijklmnpqrstuvwxyz"
var pass_spec_car="'(-Γ¨_Γ§Γ )=:;,!."
// Check entry type
// Create an empty map which will contain all password
var col = new Map([]);
var prefixe=prefixe;
var list_char='';
list_char= letter_maj == true ? list_char+pass_letter_maj : list_char
list_char= number == true ? list_char+pass_number : list_char
list_char= letter_min == true ? list_char+pass_letter_min : list_char
list_char= spec_car == true ? list_char+pass_spec_car : list_char
// Test the possible combinations between the number of characters requested for the password and the available character list
if (combinaison_possible(nbre_car,list_char.length)>=nbre_pass) {
// Create the unique passwords
while(col.size===0||nbre_pass>col.size) {
Logger.log("col.size : "+col.size)
Logger.log("nbre_pass : "+nbre_pass)
search_new_pass=true;
while (search_new_pass==true){
pass=create_one_password(nbre_car,list_char,prefixe)
Logger.log('nom du password : '+pass)
if (verify_unique(col,pass)!=true)
col.set({}, pass);
Logger.log("valeur de col : "+col)
search_new_pass=false;
}
}
}
else{
col = [];
col.push("Vous avez demander trop de mots de passe, cela va crΓ©er des doublons,Veuillez diminuer le nombre de mots de passe Γ afficher");
}
final_values=[...col.values()];
//Logger.log('valeur final de col : '+final_values)
console.log(Array.from(col.values()));
return Array.from(col.values());
}
function create_one_password(nbre_car,list_char,prefixe) {
var nbre_car = nbre_car;
s = '', r = list_char;
for (var i=0; i < nbre_car; i++) {
s += r.charAt(Math.floor(Math.random()*r.length));
}
return prefixe+s;
}
Code Python
import random
def combinaison_possible(char_number,lenght_possible_char):
combinaison_nbre=pow(lenght_possible_char,char_number)
return combinaison_nbre
def generate_password(nbre_car,nbre_pass,number=True,letter_maj=True,letter_min=True,spec_car=True,prefixe="FOULE") :
    if(not isinstance(nbre_car,int) or not isinstance(nbre_pass,int)) :
print( "Veuillez rentrer un nombre entier pour les champs en bleu")
nbre_car = nbre_car
nbre_pass = nbre_pass
pass_number="123456789"
pass_letter_maj="ABCDEFGHIJKLMNPQRSTUVWXYZ"
pass_letter_min="abcdefghijklmnpqrstuvwxyz"
pass_spec_car="!@#$%^&*()_+"
prefixe=prefixe
list_char=''
col={}
longueur_col=len(col)
list_char= list_char+pass_letter_maj if letter_maj else list_char
list_char= list_char+pass_letter_min if letter_min else list_char
list_char= list_char+pass_number if number else list_char
list_char= list_char+pass_spec_car if spec_car else list_char
if (combinaison_possible(nbre_car,len(list_char))>=nbre_pass) :
while(len(col)==0 or nbre_pass>len(col)):
longueur_col=len(col)
search_new_pass=True
while (search_new_pass==True):
pass_word = prefixe+''.join(random.choice(list_char) for i in range(nbre_car))
if pass_word not in col:
col[longueur_col]=pass_word
search_new_pass=False
print (col)
else :
print("Le nombre de mot de passe à générer est trop important par rapport au nombre de caractères possible")
generate_password(6,20000)
| [
"Performance-wise, the main difference between the Apps Script and Python versions is that the Apps Script code logs about 20,000 values in the Apps Script console, which is slow, while the Python code outputs 1 value.\nThe Apps Script code has several syntactical and semantical errors, including:\n\nverify_unique() is undefined\ncol.set({}, pass) does not make sense; perhaps use an Array instead of a Map, find if a value is already in the list with col.includes(pass), insert values with col.push(pass), and use col instead of Array.from(col.values()) to retrieve the values\nvar prefixe = prefixe; is superfluous\n\nSee Apps Script at Stack Overflow and Clean Code JavaScript.\n",
"I think the code could be quite a bit easier. I did not study your code extensively. But this would be my approach to solve the problem. As you can see it takes less than one second to generate 20'000 passwords.\nWhat actually really takes a long time is the duplicate check.\nAside from thath be careful when generating passwords without a cryptographically secure random algorithm.\nPlease have a look at this for how to use Crypto.getRandomValues()\n\n\nconst CHARACTER_POOL =\n \"123456789ABCDEFGHIJKLMNPQRSTUVWXYZabcdefghijklmnpqrstuvwxyz'(-Γ¨_Γ§Γ )=:;,!.\";\nconst PASSWORDS_TO_GENERATE = 20000;\nconst PASSWORD_LENGTH = 6;\nconst PREFIX = \"\";\n\nconst createPassword = () => {\n let password = \"\";\n for (let i = 0; i < PASSWORD_LENGTH; i++) {\n // this is not secure\n password += CHARACTER_POOL.charAt(\n Math.floor(Math.random() * CHARACTER_POOL.length)\n );\n }\n return `${PREFIX}${password}`;\n};\n\nconst generatePassword = () => {\n const passwords = [];\n while (passwords.length < PASSWORDS_TO_GENERATE) {\n const password = createPassword();\n if (!passwords.includes(password)) {\n passwords.push(password);\n }\n }\n return passwords;\n};\n\nconst start = new Date().getTime();\nconst passwords = generatePassword();\nconsole.log(`It took ${(new Date().getTime() - start) / 1000} to generate ${passwords.length} passwords`);\nconsole.log(passwords);\n\n\n\n",
"That's a lot of code for what seems a pretty straightforward problem. I didn't look closely at your version, but here's how I might handle the problem. It creates 20000 in about 20 milliseconds on my mid-level machine.\n\n\nconst genPasswords = (chars) => (n, length = 6, pre = '') => Array .from (\n {length: n}, \n () => pre + Array.from({length}, () => chars[~~(Math.random() * chars.length)]) .join('')\n)\n\nconst pwdGen = genPasswords (\"123456789ABCDEFGHIJKLMNPQRSTUVWXYZabcdefghijklmnpqrstuvwxyz'(-Γ¨_Γ§Γ )=:;,!.\")\n\nconsole.time('generate 20000')\nconst res = pwdGen (20000, 6, 'FOULE')\nconsole.timeEnd('generate 20000')\n\nconsole .log (res)\n.as-console-wrapper {max-height: 100% !important; top: 0}\n\n\n\nI store the characters to use in a closure, returning a function that takes the number to generate, their length, and a prefix.\n"
] | [
2,
1,
0
] | [] | [] | [
"algorithm",
"generator",
"google_apps_script",
"javascript",
"python"
] | stackoverflow_0074626387_algorithm_generator_google_apps_script_javascript_python.txt |
Q:
Selenium Python XML parsing
I need to parse XML with Selenium, but the XML is not a file, it is on the web.
Here is the site https://www.thetutorsdirectory.com/usa/sitemap/sitemap_l1.xml and I need to get all the links for example this one
<url>
<loc>https://www.thetutorsdirectory.com/usa/location/private-tutor-anaheim</loc>
<changefreq>weekly</changefreq>
</url>
Please help me :)
I tried multiple solutions that were given on this site
A:
A solution with beautifulsoup:
import requests
from bs4 import BeautifulSoup
url = "https://www.thetutorsdirectory.com/usa/sitemap/sitemap_l1.xml"
headers = {
"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0"
}
soup = BeautifulSoup(requests.get(url, headers=headers).content, "xml")
for link in soup.select("loc"):
print(link.text)
Prints:
...
https://www.thetutorsdirectory.com/usa/location/private-tutor-wichita-falls
https://www.thetutorsdirectory.com/usa/location/private-tutor-wilmington
https://www.thetutorsdirectory.com/usa/location/private-tutor-winston-salem
https://www.thetutorsdirectory.com/usa/location/private-tutor-woodbridge
https://www.thetutorsdirectory.com/usa/location/private-tutor-worcester-usa
https://www.thetutorsdirectory.com/usa/location/private-tutor-yonkers
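If you would rather avoid bs4, a stdlib sketch with xml.etree.ElementTree should work too (the namespace URI below is the standard sitemap namespace; adjust it if this feed declares a different one):
import requests
import xml.etree.ElementTree as ET

url = "https://www.thetutorsdirectory.com/usa/sitemap/sitemap_l1.xml"
headers = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0"
}

root = ET.fromstring(requests.get(url, headers=headers).content)
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}  # standard sitemap namespace
for loc in root.findall(".//sm:loc", ns):
    print(loc.text)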
| Selenium Python XML parsing | I need to parse XML with Selenium, but the XML is not a file, it is on the web.
Here is the site https://www.thetutorsdirectory.com/usa/sitemap/sitemap_l1.xml and I need to get all the links for example this one
<url>
<loc>https://www.thetutorsdirectory.com/usa/location/private-tutor-anaheim</loc>
<changefreq>weekly</changefreq>
</url>
Please help me :)
I tried multiple solutions that were given on this site
| [
"A solution with beautifulsoup:\nimport requests\nfrom bs4 import BeautifulSoup\n\nurl = \"https://www.thetutorsdirectory.com/usa/sitemap/sitemap_l1.xml\"\n\nheaders = {\n \"User-Agent\": \"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0\"\n}\n\nsoup = BeautifulSoup(requests.get(url, headers=headers).content, \"xml\")\n\nfor link in soup.select(\"loc\"):\n print(link.text)\n\nPrints:\n\n...\n\nhttps://www.thetutorsdirectory.com/usa/location/private-tutor-wichita-falls\nhttps://www.thetutorsdirectory.com/usa/location/private-tutor-wilmington\nhttps://www.thetutorsdirectory.com/usa/location/private-tutor-winston-salem\nhttps://www.thetutorsdirectory.com/usa/location/private-tutor-woodbridge\nhttps://www.thetutorsdirectory.com/usa/location/private-tutor-worcester-usa\nhttps://www.thetutorsdirectory.com/usa/location/private-tutor-yonkers\n\n"
] | [
1
] | [] | [] | [
"parsing",
"python",
"selenium",
"xml"
] | stackoverflow_0074648860_parsing_python_selenium_xml.txt |
Q:
Removing all non-numeric characters from string in Python
How do we remove all non-numeric characters from a string in Python?
A:
>>> import re
>>> re.sub("[^0-9]", "", "sdkjh987978asd098as0980a98sd")
'987978098098098'
A:
Not sure if this is the most efficient way, but:
>>> ''.join(c for c in "abc123def456" if c.isdigit())
'123456'
The ''.join part means to combine all the resulting characters together without any characters in between. Then the rest of it is a generator expression, where (as you can probably guess) we only take the characters of the string that match the condition isdigit.
A:
This should work for both strings and unicode objects in Python2, and both strings and bytes in Python3:
# python <3.0
def only_numerics(seq):
return filter(type(seq).isdigit, seq)
# python ≥3.0
def only_numerics(seq):
seq_type= type(seq)
return seq_type().join(filter(seq_type.isdigit, seq))
A:
@Ned Batchelder and @newacct provided the right answer, but ...
Just in case you have commas (,) and decimal points (.) in your string:
import re
re.sub("[^\d\.]", "", "$1,999,888.77")
'1999888.77'
A:
Just to add another option to the mix, there are several useful constants within the string module. While more useful in other cases, they can be used here.
>>> from string import digits
>>> ''.join(c for c in "abc123def456" if c in digits)
'123456'
There are several constants in the module, including:
ascii_letters (abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ)
hexdigits (0123456789abcdefABCDEF)
If you are using these constants heavily, it can be worthwhile to convert them to a frozenset. That enables O(1) lookups, rather than O(n), where n is the length of the constant for the original strings.
>>> digits = frozenset(digits)
>>> ''.join(c for c in "abc123def456" if c in digits)
'123456'
A:
Many right answers here, but in case you want a float directly, without using regex:
x= '$123.45M'
float(''.join(c for c in x if (c.isdigit() or c == '.')))
123.45
You can swap the point for a comma depending on your needs.
Change to this if you know your number is an integer:
x='$1123'
int(''.join(c for c in x if c.isdigit()))
1123
A:
Fastest approach, if you need to perform more than just one or two such removal operations (or even just one, but on a very long string!-), is to rely on the translate method of strings, even though it does need some prep:
>>> import string
>>> allchars = ''.join(chr(i) for i in xrange(256))
>>> identity = string.maketrans('', '')
>>> nondigits = allchars.translate(identity, string.digits)
>>> s = 'abc123def456'
>>> s.translate(identity, nondigits)
'123456'
The translate method is different, and maybe a tad simpler to use, on Unicode strings than it is on byte strings, btw:
>>> unondig = dict.fromkeys(xrange(65536))
>>> for x in string.digits: del unondig[ord(x)]
...
>>> s = u'abc123def456'
>>> s.translate(unondig)
u'123456'
You might want to use a mapping class rather than an actual dict, especially if your Unicode string may potentially contain characters with very high ord values (that would make the dict excessively large;-). For example:
>>> class keeponly(object):
... def __init__(self, keep):
... self.keep = set(ord(c) for c in keep)
... def __getitem__(self, key):
... if key in self.keep:
... return key
... return None
...
>>> s.translate(keeponly(string.digits))
u'123456'
>>>
A:
An easy way:
str.isdigit() returns True if str contains only numeric characters. Call filter(predicate, iterable) with str.isdigit as predicate and the string as iterable to return an iterable containing only the string's numeric characters. Call str.join(iterable) with the empty string as str and the result of filter() as iterable to join each numeric character together into one string.
For example:
a_string = "!1a2;b3c?"
numeric_filter = filter(str.isdigit, a_string)
numeric_string = "".join(numeric_filter)
print(numeric_string)
And the output is:
123
A:
There are a lot of correct answers here. Some are faster or slower than others. The approach used in Ehsan Akbaritabar's and tzot's answers, filter with str.isdigit, is really fast; as is translate, from Alex Martelli's answer, once the setup is done. These are the two fastest methods. However, if you are only doing the substitution once, the setup penalty for translate is significant.
Which way is the best may depend on your use case. One replacement in a unit test? I'd go for filter using isdigit. It requires no imports, uses builtins only, and is quick and easy:
''.join(filter(str.isdigit, string_to_filter))
In a pandas or pyspark DataFrame, with millions of rows, the efficiency of translate is probably worth it, if you don't use the methods the DataFrame provides (which tend to rely on regex).
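For reference, a minimal pandas sketch of that regex-based DataFrame method (the column name col is illustrative):
import pandas as pd

df = pd.DataFrame({"col": ["abc123def456", "x9y8"]})
df["col"] = df["col"].str.replace(r"[^0-9]", "", regex=True)  # vectorized regex removal
print(df["col"].tolist())  # ['123456', '98']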
If you want to take the "use translate" approach, I'd recommend some changes for Python 3:
import string
unicode_non_digits = dict.fromkeys(
[x for x in range(65536) if chr(x) not in string.digits]
)
string_to_filter.translate(unicode_non_digits)
| Method | Loops | Repeats | Best of result per loop |
| --- | --- | --- | --- |
| filter using isdigit | 1000 | 15 | 0.83 usec |
| generator using isdigit | 1000 | 15 | 1.6 usec |
| using re.sub | 1000 | 15 | 1.94 usec |
| generator testing membership in digits | 1000 | 15 | 1.23 usec |
| generator testing membership in digits set | 1000 | 15 | 1.19 usec |
| use translate | 1000 | 15 | 0.797 usec |
| use re.compile | 1000 | 15 | 1.52 usec |
| use translate but make translation table every time | 20 | 5 | 1.21e+04 usec |
That last row in the table is to show the setup penalty for translate. I used the default number and repeat options when creating the translation table every time, otherwise it takes too long.
The raw output from my timing script:
/bin/zsh /Users/henry.longmore/Library/Application\ Support/JetBrains/PyCharm2022.2/scratches/scratch_4.sh
+/Users/henry.longmore/Library/Application Support/JetBrains/PyCharm2022.2/scratches/scratch_4.sh:6> which python
/Users/henry.longmore/.pyenv/shims/python
+/Users/henry.longmore/Library/Application Support/JetBrains/PyCharm2022.2/scratches/scratch_4.sh:7> python --version
Python 3.10.6
+/Users/henry.longmore/Library/Application Support/JetBrains/PyCharm2022.2/scratches/scratch_4.sh:8> set +x
-----filter using isdigit
1000 loops, best of 15: 0.83 usec per loop
-----generator using isdigit
1000 loops, best of 15: 1.6 usec per loop
-----using re.sub
1000 loops, best of 15: 1.94 usec per loop
-----generator testing membership in digits
1000 loops, best of 15: 1.23 usec per loop
-----generator testing membership in digits set
1000 loops, best of 15: 1.19 usec per loop
-----use translate
1000 loops, best of 15: 0.797 usec per loop
-----use re.compile
1000 loops, best of 15: 1.52 usec per loop
-----use translate but make translation table every time
using default number and repeat, otherwise this takes too long
20 loops, best of 5: 1.21e+04 usec per loop
The script I used for the timings:
NUMBER=1000
REPEAT=15
UNIT="usec"
TEST_STRING="abc123def45ghi6789"
set -x
which python
python --version
set +x
echo "-----filter using isdigit"
python -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT "''.join(filter(str.isdigit, '${TEST_STRING}'))"
echo "-----generator using isdigit"
python -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT "''.join(c for c in '${TEST_STRING}' if c.isdigit())"
echo "-----using re.sub"
python -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup="import re" "re.sub('[^0-9]', '', '${TEST_STRING}')"
echo "-----generator testing membership in digits"
python -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup="from string import digits" "''.join(c for c in '${TEST_STRING}' if c in digits)"
echo "-----generator testing membership in digits set"
python -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup="from string import digits; digits = {*digits}" "''.join(c for c in '${TEST_STRING}' if c in digits)"
echo "-----use translate"
python -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup="import string; unicode_non_digits = dict.fromkeys([x for x in range(65536) if chr(x) not in string.digits])" "'${TEST_STRING}'.translate(unicode_non_digits)"
echo "-----use re.compile"
python -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup="import re; digit_filter = re.compile('[^0-9]')" "digit_filter.sub('', '${TEST_STRING}')"
echo "-----use translate but make translation table every time"
echo " using default number and repeat, otherwise this takes too long"
python -m timeit --unit=$UNIT --setup="import string" "unicode_non_digits = dict.fromkeys([x for x in range(65536) if chr(x) not in string.digits]); '${TEST_STRING}'.translate(unicode_non_digits)"
| Removing all non-numeric characters from string in Python | How do we remove all non-numeric characters from a string in Python?
| [
">>> import re\n>>> re.sub(\"[^0-9]\", \"\", \"sdkjh987978asd098as0980a98sd\")\n'987978098098098'\n\n",
"Not sure if this is the most efficient way, but:\n>>> ''.join(c for c in \"abc123def456\" if c.isdigit())\n'123456'\n\nThe ''.join part means to combine all the resulting characters together without any characters in between. Then the rest of it is a list comprehension, where (as you can probably guess) we only take the parts of the string that match the condition isdigit.\n",
"This should work for both strings and unicode objects in Python2, and both strings and bytes in Python3:\n# python <3.0\ndef only_numerics(seq):\n return filter(type(seq).isdigit, seq)\n\n# python β₯3.0\ndef only_numerics(seq):\n seq_type= type(seq)\n return seq_type().join(filter(seq_type.isdigit, seq))\n\n",
"@Ned Batchelder and @newacct provided the right answer, but ...\nJust in case if you have comma(,) decimal(.) in your string:\nimport re\nre.sub(\"[^\\d\\.]\", \"\", \"$1,999,888.77\")\n'1999888.77'\n\n",
"Just to add another option to the mix, there are several useful constants within the string module. While more useful in other cases, they can be used here.\n>>> from string import digits\n>>> ''.join(c for c in \"abc123def456\" if c in digits)\n'123456'\n\nThere are several constants in the module, including:\n\nascii_letters (abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ)\nhexdigits (0123456789abcdefABCDEF)\n\nIf you are using these constants heavily, it can be worthwhile to covert them to a frozenset. That enables O(1) lookups, rather than O(n), where n is the length of the constant for the original strings.\n>>> digits = frozenset(digits)\n>>> ''.join(c for c in \"abc123def456\" if c in digits)\n'123456'\n\n",
"Many right answers but in case you want it in a float, directly, without using regex:\nx= '$123.45M'\n\nfloat(''.join(c for c in x if (c.isdigit() or c =='.'))\n\n123.45\nYou can change the point for a comma depending on your needs.\nchange for this if you know your number is an integer\nx='$1123' \nint(''.join(c for c in x if c.isdigit())\n\n1123\n",
"Fastest approach, if you need to perform more than just one or two such removal operations (or even just one, but on a very long string!-), is to rely on the translate method of strings, even though it does need some prep:\n>>> import string\n>>> allchars = ''.join(chr(i) for i in xrange(256))\n>>> identity = string.maketrans('', '')\n>>> nondigits = allchars.translate(identity, string.digits)\n>>> s = 'abc123def456'\n>>> s.translate(identity, nondigits)\n'123456'\n\nThe translate method is different, and maybe a tad simpler simpler to use, on Unicode strings than it is on byte strings, btw:\n>>> unondig = dict.fromkeys(xrange(65536))\n>>> for x in string.digits: del unondig[ord(x)]\n... \n>>> s = u'abc123def456'\n>>> s.translate(unondig)\nu'123456'\n\nYou might want to use a mapping class rather than an actual dict, especially if your Unicode string may potentially contain characters with very high ord values (that would make the dict excessively large;-). For example:\n>>> class keeponly(object):\n... def __init__(self, keep): \n... self.keep = set(ord(c) for c in keep)\n... def __getitem__(self, key):\n... if key in self.keep:\n... return key\n... return None\n... \n>>> s.translate(keeponly(string.digits))\nu'123456'\n>>> \n\n",
"An easy way:\nstr.isdigit() returns True if str contains only numeric characters. Call filter(predicate, iterable) with str.isdigit as predicate and the string as iterable to return an iterable containing only the string's numeric characters. Call str.join(iterable) with the empty string as str and the result of filter() as iterable to join each numeric character together into one string.\nFor example:\na_string = \"!1a2;b3c?\"\nnumeric_filter = filter(str.isdigit, a_string)\nnumeric_string = \"\".join(numeric_filter)\nprint(numeric_string)\n\nAnd the output is:\n123\n\n",
"There are a lot of correct answers here. Some are faster or slower than others. The approach used in Ehsan Akbaritabar's and tzot's answers, filter with str.isdigit, is really fast; as is translate, from Alex Martelli's answer, once the setup is done. These are the two fastest methods. However, if you are only doing the substitution once, the setup penalty for translate is significant.\nWhich way is the best may depend on your use case. One replacement in a unit test? I'd go for filter using isdigit. It requires no imports, uses builtins only, and is quick and easy:\n''.join(filter(str.isdigit, string_to_filter))\n\nIn a pandas or pyspark DataFrame, with millions of rows, the efficiency of translate is probably worth it, if you don't use the methods the DataFrame provides (which tend to rely on regex).\nIf you want to take the use translate approach, I'd recommend some changes for Python 3:\nimport string\n\nunicode_non_digits = dict.fromkeys(\n [x for x in range(65536) if chr(x) not in string.digits]\n)\nstring_to_filter.translate(unicode_non_digits)\n\n\n\n\n\nMethod\nLoops\nRepeats\nBest of result per loop\n\n\n\n\nfilter using isdigit\n1000\n15\n0.83 usec\n\n\ngenerator using isdigit\n1000\n15\n1.6 usec\n\n\nusing re.sub\n1000\n15\n1.94 usec\n\n\ngenerator testing membership in digits\n1000\n15\n1.23 usec\n\n\ngenerator testing membership in digits set\n1000\n15\n1.19 usec\n\n\nuse translate\n1000\n15\n0.797 usec\n\n\nuse re.compile\n1000\n15\n1.52 usec\n\n\nuse translate but make translation table every time\n20\n5\n1.21e+04 usec\n\n\n\n\nThat last row in the table is to show the setup penalty for translate. I used the default number and repeat options when creating the translation table every time, otherwise it takes too long.\nThe raw output from my timing script:\n/bin/zsh /Users/henry.longmore/Library/Application\\ Support/JetBrains/PyCharm2022.2/scratches/scratch_4.sh\n+/Users/henry.longmore/Library/Application Support/JetBrains/PyCharm2022.2/scratches/scratch_4.sh:6> which python\n/Users/henry.longmore/.pyenv/shims/python\n+/Users/henry.longmore/Library/Application Support/JetBrains/PyCharm2022.2/scratches/scratch_4.sh:7> python --version\nPython 3.10.6\n+/Users/henry.longmore/Library/Application Support/JetBrains/PyCharm2022.2/scratches/scratch_4.sh:8> set +x\n-----filter using isdigit\n1000 loops, best of 15: 0.83 usec per loop\n-----generator using isdigit\n1000 loops, best of 15: 1.6 usec per loop\n-----using re.sub\n1000 loops, best of 15: 1.94 usec per loop\n-----generator testing membership in digits\n1000 loops, best of 15: 1.23 usec per loop\n-----generator testing membership in digits set\n1000 loops, best of 15: 1.19 usec per loop\n-----use translate\n1000 loops, best of 15: 0.797 usec per loop\n-----use re.compile\n1000 loops, best of 15: 1.52 usec per loop\n-----use translate but make translation table every time\n using default number and repeat, otherwise this takes too long\n20 loops, best of 5: 1.21e+04 usec per loop\n\nThe script I used for the timings:\nNUMBER=1000\nREPEAT=15\nUNIT=\"usec\"\nTEST_STRING=\"abc123def45ghi6789\"\nset -x\nwhich python\npython --version\nset +x\necho \"-----filter using isdigit\"\npython -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT \"''.join(filter(str.isdigit, '${TEST_STRING}'))\"\necho \"-----generator using isdigit\"\npython -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT \"''.join(c for c in '${TEST_STRING}' if c.isdigit())\"\necho \"-----using re.sub\"\npython -m timeit --number=$NUMBER --repeat=$REPEAT 
--unit=$UNIT --setup=\"import re\" \"re.sub('[^0-9]', '', '${TEST_STRING}')\"\necho \"-----generator testing membership in digits\"\npython -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup=\"from string import digits\" \"''.join(c for c in '${TEST_STRING}' if c in digits)\"\necho \"-----generator testing membership in digits set\"\npython -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup=\"from string import digits; digits = {*digits}\" \"''.join(c for c in '${TEST_STRING}' if c in digits)\"\necho \"-----use translate\"\npython -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup=\"import string; unicode_non_digits = dict.fromkeys([x for x in range(65536) if chr(x) not in string.digits])\" \"'${TEST_STRING}'.translate(unicode_non_digits)\"\necho \"-----use re.compile\"\npython -m timeit --number=$NUMBER --repeat=$REPEAT --unit=$UNIT --setup=\"import re; digit_filter = re.compile('[^0-9]')\" \"digit_filter.sub('', '${TEST_STRING}')\"\necho \"-----use translate but make translation table every time\"\necho \" using default number and repeat, otherwise this takes too long\"\npython -m timeit --unit=$UNIT --setup=\"import string\" \"unicode_non_digits = dict.fromkeys([x for x in range(65536) if chr(x) not in string.digits]); '${TEST_STRING}'.translate(unicode_non_digits)\"\n\n\n\n"
] | [
376,
127,
23,
18,
10,
8,
5,
2,
0
] | [] | [] | [
"numbers",
"python"
] | stackoverflow_0001249388_numbers_python.txt |
Q:
How to import a python function from a sibling folder
This question has been asked before. Even though I couldn't get an answer that solves this issue.
I have the following directory and subdirectories:
I have a function hello() in test1.py that I want to import in test2.py.
test1.py:
def hello():
print("hello")
test2.py:
import demoA.test1 as test1
test1.hello()
Output:
Traceback (most recent call last):
File "c:/Users/hasli/Documents/Projects/test/demoB/test2.py", line 1, in <module>
import demoA.test1 as test1
ModuleNotFoundError: No module named 'demoA'
This is exactly as explained in https://www.freecodecamp.org/news/module-not-found-error-in-python-solved/ but I can't access hello()
I am using python 3: Python 3.8.9
A:
You need to add demoA's parent directory (the folder that contains both demoA and demoB) to the list of paths used for import.
import sys
sys.path.append('..')
import demoA.test1 as test1
test1.hello()
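Note that '..' is resolved against the current working directory, not against test2.py itself. A sketch of a variant that should work no matter where you launch the script from:
import os
import sys

# append the parent of this file's directory (the folder containing demoA and demoB)
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import demoA.test1 as test1

test1.hello()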
| How to import a python function from a sibling folder | This question has been asked before. Even though I couldn't get an answer that solves this issue.
I have the following directory and subdirectories:
I have a function hello() in test1.py that I want to import in test2.py.
test1.py:
def hello():
print("hello")
test2.py:
import demoA.test1 as test1
test1.hello()
Output:
Traceback (most recent call last):
File "c:/Users/hasli/Documents/Projects/test/demoB/test2.py", line 1, in <module>
import demoA.test1 as test1
ModuleNotFoundError: No module named 'demoA'
This is exactly as explained in https://www.freecodecamp.org/news/module-not-found-error-in-python-solved/ but I can't access hello()
I am using python 3: Python 3.8.9
| [
"You need to add demoA to the list of paths used for import.\nimport sys\nsys.path.append('..')\n\nimport demoA.test1 as test1\n\ntest1.hello()\n\n"
] | [
0
] | [] | [] | [
"import",
"python"
] | stackoverflow_0074647907_import_python.txt |
Q:
How do I Check for an item in multiple lists and keep it simplified?
I'm doing Connect 4 for the semester final in my high school coding class. I've got everything to work, but I cannot get the game to recognize when there are 4 in a row. I don't even know where to start. I should mention that turtles are a requirement for the assignment; all my checkers are turtles.
Note that I refer to arrays as lists many times...
I tried having 7 long custom rectangle turtles, but those turtles covered up the board that another turtle draws, so I decided to hide the turtle so that the drawing would show up. But then I discovered that you cannot click on a hidden turtle. So I decided to put an arrow and a checker (the checker is the same color as whoever's turn it is) above each column; when they are clicked, it drops a checker in that column, down to the lowest row.
I have tried a 7x6 list; that did not work, as it updated a whole column when I tried to update just one integer. So I then made 6 individual lists of 7 integers, which is what I currently have.
I have tried checking for the turtle colors that are around, but all the checker turtles have the same name so that I could put them in a function and not have to code 42 turtles all with different names.
I have tried to code 42 turtles all with different names; I gave up on that for obvious reasons.
This is what I currently have below.
My question
I want to know how to check for a 1 or a 2 in all of my lists and see if they make a 4 in a row like a Connect 4 game would. I could do it by writing thousands of lines of code for all the possible permutations, but I'm not gonna do that. I know there is a better way to check for any possible 4 in a row (diagonal negative slope, diagonal positive slope, horizontal, vertical), and I should note that it also has to count 5, 6 and 7 in a row as victories.
#imports
import turtle as trtl
# variables
errorstatement = "That was not an option, try again."
onetwo = 1
victory = False
x1rowcor = -80
x2rowcor = -80
x3rowcor = -80
x4rowcor = -80
x5rowcor = -80
x6rowcor = -80
x7rowcor = -80
listxcor1 = 0
listxcor2 = 0
listxcor3 = 0
listxcor4 = 0
listxcor5 = 0
listxcor6 = 0
listxcor7 = 0
# turtle for drawing
bdraw = trtl.Turtle()
# turtles for the squares
x1 = trtl.Turtle()
x2 = trtl.Turtle()
x3 = trtl.Turtle()
x4 = trtl.Turtle()
x5 = trtl.Turtle()
x6 = trtl.Turtle()
x7 = trtl.Turtle()
x1.ht()
x2.ht()
x3.ht()
x4.ht()
x5.ht()
x6.ht()
x7.ht()
# 2D list
c4board = [[0]*7 for _ in range(6)]
'''
ideas here.
each turtle will be 30-40 pixels in diameter
each square for the checkers to go will be 40 pixels wide and tall
total size in pixels will be 280 across and 200 tall
COLUMNS ARE UP AND DOWN
ROWS ARE SIDE TO SIDE
turtle shape \/
https://stackoverflow.com/questions/28444497/how-can-i-make-rectangle-shape-in-turtle-python
When you click on the red checker in the bottom left, show how many red moves are remaining, same for yellow but with yellow moves instead.
'''
# drawing the board
ny = 0 # column lines
nx = 0 # row lines
x = -140
y = -100
bdraw.speed("fastest")
bdraw.ht()
while ny < 8: # while loop for columns
bdraw.penup()
bdraw.goto(x, y)
bdraw.setheading(90)
bdraw.pendown()
bdraw.forward(240)
x += 40
ny += 1
x -=40
while nx < 7: # while loop for rows
bdraw.penup()
bdraw.goto(x, y)
bdraw.setheading(180)
bdraw.pendown()
bdraw.forward(280)
y += 40
nx += 1
### make gameboard functional
x1.penup()
x2.penup()
x3.penup()
x4.penup()
x5.penup()
x6.penup()
x7.penup()
x1.turtlesize(2.75)
x2.turtlesize(2.75)
x3.turtlesize(2.75)
x4.turtlesize(2.75)
x5.turtlesize(2.75)
x6.turtlesize(2.75)
x7.turtlesize(2.75)
x1.goto(-120, 150)
x2.goto(-80, 150)
x3.goto(-40, 150)
x4.goto(0, 150)
x5.goto(40, 150)
x6.goto(80, 150)
x7.goto(120, 150)
x1.setheading(270)
x2.setheading(270)
x3.setheading(270)
x4.setheading(270)
x5.setheading(270)
x6.setheading(270)
x7.setheading(270)
x1.st()
x2.st()
x3.st()
x4.st()
x5.st()
x6.st()
x7.st()
###
###
### Checkers. All the checkers.
## bottom left and right checkers, show score and turns done/left when clicked.
redcbl = trtl.Turtle() # red checker bottom left
yellowcbr = trtl.Turtle() # yellow checker bottom right
redcbl.ht()
yellowcbr.ht()
redcbl.shape('circle')
redcbl.turtlesize(2)
yellowcbr.shape('circle')
yellowcbr.turtlesize(2)
redcbl.penup()
redcbl.goto(-200,-150)
yellowcbr.penup()
yellowcbr.goto(200,-150)
redcbl.color("red")
yellowcbr.color("yellow")
redcbl.st()
yellowcbr.st()
##
### turns
currentturn = "yellow"
startingturn = input("Who wants to go first? 'yellow'? or 'red'? ")
if startingturn == "red":
currentturn = "red"
print(currentturn)
onetwo = 1
elif startingturn == "yellow":
currentturn = "yellow"
print(currentturn)
onetwo = 2
else:
print(errorstatement)
startingturn = input("Who wants to go first? 'yellow'? or 'red'? ")
###
###
# checker turtles above board
c1 = trtl.Turtle()
c1.ht()
c2 = trtl.Turtle()
c2.ht()
c3 = trtl.Turtle()
c3.ht()
c4 = trtl.Turtle()
c4.ht()
c5 = trtl.Turtle()
c5.ht()
c6 = trtl.Turtle()
c6.ht()
c7 = trtl.Turtle()
c7.ht()
c1.penup()
c2.penup()
c3.penup()
c4.penup()
c5.penup()
c6.penup()
c7.penup()
c1.goto(-120, 190)
c2.goto(-80, 190)
c3.goto(-40, 190)
c4.goto(0, 190)
c5.goto(40, 190)
c6.goto(80, 190)
c7.goto(120, 190)
c1.shape('circle')
c2.shape('circle')
c3.shape('circle')
c4.shape('circle')
c5.shape('circle')
c6.shape('circle')
c7.shape('circle')
c1.color(currentturn)
c2.color(currentturn)
c3.color(currentturn)
c4.color(currentturn)
c5.color(currentturn)
c6.color(currentturn)
c7.color(currentturn)
c1.st()
c2.st()
c3.st()
c4.st()
c5.st()
c6.st()
c7.st()
##### make game itself functional
def checkerplaced():
global onetwo
global currentturn
if currentturn == "yellow":
currentturn = "red"
print("red's turn")
onetwo = 1
elif currentturn == "red":
currentturn = "yellow"
print("yellow's turn")
onetwo = 2
c1.color(currentturn)
c2.color(currentturn)
c3.color(currentturn)
c4.color(currentturn)
c5.color(currentturn)
c6.color(currentturn)
c7.color(currentturn)
'''
The 'def checkforconnect4():' would go here. I don't know what I need to do
'''
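# A minimal sketch of what checkforconnect4() could look like (an editorial
# assumption, not the assignment's required approach): scan every cell and
# look along the four line directions. Any run of 5, 6 or 7 contains a run
# of 4, so longer runs count as victories too. Call checkforconnect4(c4board)
# at the end of each click handler and stop the game when it returns 1 or 2.
def checkforconnect4(board):
    rows, cols = len(board), len(board[0])
    # right, down, diagonal down-right, diagonal up-right
    directions = [(0, 1), (1, 0), (1, 1), (-1, 1)]
    for r in range(rows):
        for c in range(cols):
            player = board[r][c]
            if player == 0:
                continue  # empty cell, nothing to extend
            for dr, dc in directions:
                end_r, end_c = r + 3 * dr, c + 3 * dc
                if 0 <= end_r < rows and 0 <= end_c < cols and all(
                        board[r + i * dr][c + i * dc] == player for i in range(4)):
                    return player  # 1 or 2: that player has four in a row
    return 0  # no winner yet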
# turtle clicked functions
def x1click(x,y):
global c4board
global currentturn
global x1rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(-120, x1rowcor)
checker.st()
x1rowcor += 40
listxcor1 = ((x1rowcor+80)/40)-1
checkerplaced()
print(listxcor1)
if c4board[5][0] == 0:
c4board[5][0] = onetwo
print(c4board)
elif c4board[4][0] == 0:
c4board[4][0] = onetwo
print(c4board)
elif c4board[3][0] == 0:
c4board[3][0] = onetwo
print(c4board)
elif c4board[2][0] == 0:
c4board[2][0] = onetwo
print(c4board)
elif c4board[1][0] == 0:
c4board[1][0] = onetwo
print(c4board)
elif c4board[0][0] == 0:
c4board[0][0] = onetwo
print(c4board)
if x1rowcor > 120:
c1.ht()
x1.ht()
def x2click(x,y):
global currentturn
global x2rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(-80, x2rowcor)
checker.st()
x2rowcor += 40
listxcor2 = ((x2rowcor+80)/40)-1
print(listxcor2)
checkerplaced()
if c4board[5][1] == 0:
c4board[5][1] = onetwo
print(c4board)
elif c4board[4][1] == 0:
c4board[4][1] = onetwo
print(c4board)
elif c4board[3][1] == 0:
c4board[3][1] = onetwo
print(c4board)
elif c4board[2][1] == 0:
c4board[2][1] = onetwo
print(c4board)
elif c4board[1][1] == 0:
c4board[1][1] = onetwo
print(c4board)
elif c4board[0][1] == 0:
c4board[0][1] = onetwo
print(c4board)
if x2rowcor > 120:
c2.ht()
x2.ht()
def x3click(x,y):
global currentturn
global x3rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(-40, x3rowcor)
checker.st()
x3rowcor += 40
listxcor3 = ((x3rowcor+80)/40)-1
print(listxcor3)
checkerplaced()
if c4board[5][2] == 0:
c4board[5][2] = onetwo
print(c4board)
elif c4board[4][2] == 0:
c4board[4][2] = onetwo
print(c4board)
elif c4board[3][2] == 0:
c4board[3][2] = onetwo
print(c4board)
elif c4board[2][2] == 0:
c4board[2][2] = onetwo
print(c4board)
elif c4board[1][2] == 0:
c4board[1][2] = onetwo
print(c4board)
elif c4board[0][2] == 0:
c4board[0][2] = onetwo
print(c4board)
if x3rowcor > 120:
c3.ht()
x3.ht()
def x4click(x,y):
global currentturn
global x4rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(0, x4rowcor)
checker.st()
x4rowcor += 40
listxcor4 = ((x4rowcor+80)/40)-1
print(listxcor4)
checkerplaced()
if c4board[5][3] == 0:
c4board[5][3] = onetwo
print(c4board)
elif c4board[4][3] == 0:
c4board[4][3] = onetwo
print(c4board)
elif c4board[3][3] == 0:
c4board[3][3] = onetwo
print(c4board)
elif c4board[2][3] == 0:
c4board[2][3] = onetwo
print(c4board)
elif c4board[1][3] == 0:
c4board[1][3] = onetwo
print(c4board)
elif c4board[0][3] == 0:
c4board[0][3] = onetwo
print(c4board)
if x4rowcor > 120:
c4.ht()
x4.ht()
def x5click(x,y):
global currentturn
global x5rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(40, x5rowcor)
checker.st()
x5rowcor += 40
listxcor5 = ((x5rowcor+80)/40)-1
print(listxcor5)
checkerplaced()
if c4board[5][4] == 0:
c4board[5][4] = onetwo
print(c4board)
elif c4board[4][4] == 0:
c4board[4][4] = onetwo
print(c4board)
elif c4board[3][4] == 0:
c4board[3][4] = onetwo
print(c4board)
elif c4board[2][4] == 0:
c4board[2][4] = onetwo
print(c4board)
elif c4board[1][4] == 0:
c4board[1][4] = onetwo
print(c4board)
elif c4board[0][4] == 0:
c4board[0][4] = onetwo
print(c4board)
if x5rowcor > 120:
c5.ht()
x5.ht()
def x6click(x,y):
global currentturn
global x6rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(80, x6rowcor)
checker.st()
x6rowcor += 40
listxcor6 = ((x6rowcor+80)/40)-1
print(listxcor6)
checkerplaced()
if c4board[5][5] == 0:
c4board[5][5] = onetwo
print(c4board)
elif c4board[4][5] == 0:
c4board[4][5] = onetwo
print(c4board)
elif c4board[3][5] == 0:
c4board[3][5] = onetwo
print(c4board)
elif c4board[2][5] == 0:
c4board[2][5] = onetwo
print(c4board)
elif c4board[1][5] == 0:
c4board[1][5] = onetwo
print(c4board)
elif c4board[0][5] == 0:
c4board[0][5] = onetwo
print(c4board)
if x6rowcor > 120:
c6.ht()
x6.ht()
def x7click(x,y):
global currentturn
global x7rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(120, x7rowcor)
checker.st()
x7rowcor += 40
listxcor7 = ((x7rowcor+80)/40)-1
print(listxcor7)
checkerplaced()
if c4board[5][6] == 0:
c4board[5][6] = onetwo
print(c4board)
elif c4board[4][6] == 0:
c4board[4][6] = onetwo
print(c4board)
elif c4board[3][6] == 0:
c4board[3][6] = onetwo
print(c4board)
elif c4board[2][6] == 0:
c4board[2][6] = onetwo
print(c4board)
elif c4board[1][6] == 0:
c4board[1][6] = onetwo
print(c4board)
elif c4board[0][6] == 0:
c4board[0][6] = onetwo
print(c4board)
if x7rowcor > 120:
c7.ht()
x7.ht()
def rcclick(x,y):
print("red checker clicked")
def ycclick(x,y):
print("yellow checker clicked")
dumbvname = 1
# when click call corresponding function
x1.onclick(x1click)
c1.onclick(x1click)
x2.onclick(x2click)
c2.onclick(x2click)
x3.onclick(x3click)
c3.onclick(x3click)
x4.onclick(x4click)
c4.onclick(x4click)
x5.onclick(x5click)
c5.onclick(x5click)
x6.onclick(x6click)
c6.onclick(x6click)
x7.onclick(x7click)
c7.onclick(x7click)
redcbl.onclick(rcclick)
yellowcbr.onclick(ycclick)
| How do I Check for an item in multiple lists and keep it simplified? | I'm doing Connect 4 for the semester final in my high school coding class. I've got everything to work, but I cannot get the game to recognize when there are 4 in a row. I don't even know where to start. I should mention that turtles are a requirement for the assignment; all my checkers are turtles.
Note that I refer to arrays as lists many times...
I tried having 7 long custom rectangle turtles, but those turtles covered up the board that another turtle draws, so I decided to hide the turtle so that the drawing would show up. But then I discovered that you cannot click on a hidden turtle. So I decided to put an arrow and a checker (the checker is the same color as whoever's turn it is) above each column; when they are clicked, it drops a checker in that column, down to the lowest row.
I have tried a 7x6 list; that did not work, as it updated a whole column when I tried to update just one integer. So I then made 6 individual lists of 7 integers, which is what I currently have.
I have tried checking for the turtle colors that are around, but all the checker turtles have the same name so that I could put them in a function and not have to code 42 turtles all with different names.
I have tried to code 42 turtles all with different names; I gave up on that for obvious reasons.
This is what I currently have below.
My question
I want to know how to check for a 1 or a 2 in all of my lists and see if they make a 4 in a row like a Connect 4 game would. I could do it by writing thousands of lines of code for all the possible permutations, but I'm not gonna do that. I know there is a better way to check for any possible 4 in a row (diagonal negative slope, diagonal positive slope, horizontal, vertical), and I should note that it also has to count 5, 6 and 7 in a row as victories.
#imports
import turtle as trtl
# variables
errorstatement = "That was not an option, try again."
onetwo = 1
victory = False
x1rowcor = -80
x2rowcor = -80
x3rowcor = -80
x4rowcor = -80
x5rowcor = -80
x6rowcor = -80
x7rowcor = -80
listxcor1 = 0
listxcor2 = 0
listxcor3 = 0
listxcor4 = 0
listxcor5 = 0
listxcor6 = 0
listxcor7 = 0
# turtle for drawing
bdraw = trtl.Turtle()
# turtles for the squares
x1 = trtl.Turtle()
x2 = trtl.Turtle()
x3 = trtl.Turtle()
x4 = trtl.Turtle()
x5 = trtl.Turtle()
x6 = trtl.Turtle()
x7 = trtl.Turtle()
x1.ht()
x2.ht()
x3.ht()
x4.ht()
x5.ht()
x6.ht()
x7.ht()
# 2D list
c4board = [[0]*7 for _ in range(6)]
'''
ideas here.
each turtle will be 30-40 pixels in diameter
each square for the checkers to go will be 40 pixels wide and tall
total size in pixels will be 280 across and 200 tall
COLUMNS ARE UP AND DOWN
ROWS ARE SIDE TO SIDE
turtle shape \/
https://stackoverflow.com/questions/28444497/how-can-i-make-rectangle-shape-in-turtle-python
When you click on the red checker in the bottom left, show how many red moves are remaining, same for yellow but with yellow moves instead.
'''
# drawing the board
ny = 0 # column lines
nx = 0 # row lines
x = -140
y = -100
bdraw.speed("fastest")
bdraw.ht()
while ny < 8: # while loop for columns
bdraw.penup()
bdraw.goto(x, y)
bdraw.setheading(90)
bdraw.pendown()
bdraw.forward(240)
x += 40
ny += 1
x -=40
while nx < 7: # while loop for rows
bdraw.penup()
bdraw.goto(x, y)
bdraw.setheading(180)
bdraw.pendown()
bdraw.forward(280)
y += 40
nx += 1
### make gameboard functional
x1.penup()
x2.penup()
x3.penup()
x4.penup()
x5.penup()
x6.penup()
x7.penup()
x1.turtlesize(2.75)
x2.turtlesize(2.75)
x3.turtlesize(2.75)
x4.turtlesize(2.75)
x5.turtlesize(2.75)
x6.turtlesize(2.75)
x7.turtlesize(2.75)
x1.goto(-120, 150)
x2.goto(-80, 150)
x3.goto(-40, 150)
x4.goto(0, 150)
x5.goto(40, 150)
x6.goto(80, 150)
x7.goto(120, 150)
x1.setheading(270)
x2.setheading(270)
x3.setheading(270)
x4.setheading(270)
x5.setheading(270)
x6.setheading(270)
x7.setheading(270)
x1.st()
x2.st()
x3.st()
x4.st()
x5.st()
x6.st()
x7.st()
###
###
### Checkers. All the checkers.
## bottom left and right checkers, show score and turns done/left when clicked.
redcbl = trtl.Turtle() # red checker bottom left
yellowcbr = trtl.Turtle() # yellow checker bottom right
redcbl.ht()
yellowcbr.ht()
redcbl.shape('circle')
redcbl.turtlesize(2)
yellowcbr.shape('circle')
yellowcbr.turtlesize(2)
redcbl.penup()
redcbl.goto(-200,-150)
yellowcbr.penup()
yellowcbr.goto(200,-150)
redcbl.color("red")
yellowcbr.color("yellow")
redcbl.st()
yellowcbr.st()
##
### turns
currentturn = "yellow"
startingturn = input("Who wants to go first? 'yellow'? or 'red'? ")
if startingturn == "red":
currentturn = "red"
print(currentturn)
onetwo = 1
elif startingturn == "yellow":
currentturn = "yellow"
print(currentturn)
onetwo = 2
else:
print(errorstatement)
startingturn = input("Who wants to go first? 'yellow'? or 'red'? ")
###
###
# checker turtles above board
c1 = trtl.Turtle()
c1.ht()
c2 = trtl.Turtle()
c2.ht()
c3 = trtl.Turtle()
c3.ht()
c4 = trtl.Turtle()
c4.ht()
c5 = trtl.Turtle()
c5.ht()
c6 = trtl.Turtle()
c6.ht()
c7 = trtl.Turtle()
c7.ht()
c1.penup()
c2.penup()
c3.penup()
c4.penup()
c5.penup()
c6.penup()
c7.penup()
c1.goto(-120, 190)
c2.goto(-80, 190)
c3.goto(-40, 190)
c4.goto(0, 190)
c5.goto(40, 190)
c6.goto(80, 190)
c7.goto(120, 190)
c1.shape('circle')
c2.shape('circle')
c3.shape('circle')
c4.shape('circle')
c5.shape('circle')
c6.shape('circle')
c7.shape('circle')
c1.color(currentturn)
c2.color(currentturn)
c3.color(currentturn)
c4.color(currentturn)
c5.color(currentturn)
c6.color(currentturn)
c7.color(currentturn)
c1.st()
c2.st()
c3.st()
c4.st()
c5.st()
c6.st()
c7.st()
##### make game itself functional
def checkerplaced():
global onetwo
global currentturn
if currentturn == "yellow":
currentturn = "red"
print("red's turn")
onetwo = 1
elif currentturn == "red":
currentturn = "yellow"
print("yellow's turn")
onetwo = 2
c1.color(currentturn)
c2.color(currentturn)
c3.color(currentturn)
c4.color(currentturn)
c5.color(currentturn)
c6.color(currentturn)
c7.color(currentturn)
'''
The 'def checkforconnect4():' would go here. I don't know what I need to do
'''
# turtle clicked functions
def x1click(x,y):
global c4board
global currentturn
global x1rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(-120, x1rowcor)
checker.st()
x1rowcor += 40
listxcor1 = ((x1rowcor+80)/40)-1
checkerplaced()
print(listxcor1)
if c4board[5][0] == 0:
c4board[5][0] = onetwo
print(c4board)
elif c4board[4][0] == 0:
c4board[4][0] = onetwo
print(c4board)
elif c4board[3][0] == 0:
c4board[3][0] = onetwo
print(c4board)
elif c4board[2][0] == 0:
c4board[2][0] = onetwo
print(c4board)
elif c4board[1][0] == 0:
c4board[1][0] = onetwo
print(c4board)
elif c4board[0][0] == 0:
c4board[0][0] = onetwo
print(c4board)
if x1rowcor > 120:
c1.ht()
x1.ht()
def x2click(x,y):
global currentturn
global x2rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(-80, x2rowcor)
checker.st()
x2rowcor += 40
listxcor2 = ((x2rowcor+80)/40)-1
print(listxcor2)
checkerplaced()
if c4board[5][1] == 0:
c4board[5][1] = onetwo
print(c4board)
elif c4board[4][1] == 0:
c4board[4][1] = onetwo
print(c4board)
elif c4board[3][1] == 0:
c4board[3][1] = onetwo
print(c4board)
elif c4board[2][1] == 0:
c4board[2][1] = onetwo
print(c4board)
elif c4board[1][1] == 0:
c4board[1][1] = onetwo
print(c4board)
elif c4board[0][1] == 0:
c4board[0][1] = onetwo
print(c4board)
if x2rowcor > 120:
c2.ht()
x2.ht()
def x3click(x,y):
global currentturn
global x3rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(-40, x3rowcor)
checker.st()
x3rowcor += 40
listxcor3 = ((x3rowcor+80)/40)-1
print(listxcor3)
checkerplaced()
if c4board[5][2] == 0:
c4board[5][2] = onetwo
print(c4board)
elif c4board[4][2] == 0:
c4board[4][2] = onetwo
print(c4board)
elif c4board[3][2] == 0:
c4board[3][2] = onetwo
print(c4board)
elif c4board[2][2] == 0:
c4board[2][2] = onetwo
print(c4board)
elif c4board[1][2] == 0:
c4board[1][2] = onetwo
print(c4board)
elif c4board[0][2] == 0:
c4board[0][2] = onetwo
print(c4board)
if x3rowcor > 120:
c3.ht()
x3.ht()
def x4click(x,y):
global currentturn
global x4rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(0, x4rowcor)
checker.st()
x4rowcor += 40
listxcor4 = ((x4rowcor+80)/40)-1
print(listxcor4)
checkerplaced()
if c4board[5][3] == 0:
c4board[5][3] = onetwo
print(c4board)
elif c4board[4][3] == 0:
c4board[4][3] = onetwo
print(c4board)
elif c4board[3][3] == 0:
c4board[3][3] = onetwo
print(c4board)
elif c4board[2][3] == 0:
c4board[2][3] = onetwo
print(c4board)
elif c4board[1][3] == 0:
c4board[1][3] = onetwo
print(c4board)
elif c4board[0][3] == 0:
c4board[0][3] = onetwo
print(c4board)
if x4rowcor > 120:
c4.ht()
x4.ht()
def x5click(x,y):
global currentturn
global x5rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(40, x5rowcor)
checker.st()
x5rowcor += 40
listxcor5 = ((x5rowcor+80)/40)-1
print(listxcor5)
checkerplaced()
if c4board[5][4] == 0:
c4board[5][4] = onetwo
print(c4board)
elif c4board[4][4] == 0:
c4board[4][4] = onetwo
print(c4board)
elif c4board[3][4] == 0:
c4board[3][4] = onetwo
print(c4board)
elif c4board[2][4] == 0:
c4board[2][4] = onetwo
print(c4board)
elif c4board[1][4] == 0:
c4board[1][4] = onetwo
print(c4board)
elif c4board[0][4] == 0:
c4board[0][4] = onetwo
print(c4board)
if x5rowcor > 120:
c5.ht()
x5.ht()
def x6click(x,y):
global currentturn
global x6rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(80, x6rowcor)
checker.st()
x6rowcor += 40
listxcor6 = ((x6rowcor+80)/40)-1
print(listxcor6)
checkerplaced()
if c4board[5][5] == 0:
c4board[5][5] = onetwo
print(c4board)
elif c4board[4][5] == 0:
c4board[4][5] = onetwo
print(c4board)
elif c4board[3][5] == 0:
c4board[3][5] = onetwo
print(c4board)
elif c4board[2][5] == 0:
c4board[2][5] = onetwo
print(c4board)
elif c4board[1][5] == 0:
c4board[1][5] = onetwo
print(c4board)
elif c4board[0][5] == 0:
c4board[0][5] = onetwo
print(c4board)
if x6rowcor > 120:
c6.ht()
x6.ht()
def x7click(x,y):
global currentturn
global x7rowcor
checker = trtl.Turtle()
checker.ht()
checker.shape("circle")
checker.turtlesize(2)
checker.color(currentturn)
checker.penup()
checker.goto(120, x7rowcor)
checker.st()
x7rowcor += 40
listxcor7 = ((x7rowcor+80)/40)-1
print(listxcor7)
checkerplaced()
if c4board[5][6] == 0:
c4board[5][6] = onetwo
print(c4board)
elif c4board[4][6] == 0:
c4board[4][6] = onetwo
print(c4board)
elif c4board[3][6] == 0:
c4board[3][6] = onetwo
print(c4board)
elif c4board[2][6] == 0:
c4board[2][6] = onetwo
print(c4board)
elif c4board[1][6] == 0:
c4board[1][6] = onetwo
print(c4board)
elif c4board[0][6] == 0:
c4board[0][6] = onetwo
print(c4board)
if x7rowcor > 120:
c7.ht()
x7.ht()
def rcclick(x,y):
print("red checker clicked")
def ycclick(x,y):
print("yellow checker clicked")
dumbvname = 1
# when click call corresponding function
x1.onclick(x1click)
c1.onclick(x1click)
x2.onclick(x2click)
c2.onclick(x2click)
x3.onclick(x3click)
c3.onclick(x3click)
x4.onclick(x4click)
c4.onclick(x4click)
x5.onclick(x5click)
c5.onclick(x5click)
x6.onclick(x6click)
c6.onclick(x6click)
x7.onclick(x7click)
c7.onclick(x7click)
redcbl.onclick(rcclick)
yellowcbr.onclick(ycclick)
| [] | [] | [
"Instead of writing thousands of lines of code for all the possible permutations, better approach would be to use recursive functions.\nFor example you give your function your 2d array and then the function recursively moves till the end of line and then into the next line checking if checker is placed on the current node. If not function will run till the last node in 2D array and then return that there are no 4 in a row. If the function finds a checker, it will check the surronding nodes accordingly to see if there are 4 checkers in a row, horizontally, vertically or diagonally. Since you are always searching left to right, top to bottom, you only need to check 3 directions, right, down and diagonally down/right. After that be smart and dont run the same process when you hit the checker that you already checked but it did not create 4 in a row with a surrounding ones, you could maybe have a temp list that is the copy of the argument list in which after you checked the checkers and they do not constitue 4 in a row, you change their valeus to 0 per say.\nYou run this function after every checker insert. You just need to write one right recursive function (think graphs, DFS, BFS)\n"
] | [
-1
] | [
"multidimensional_array",
"python",
"python_3.x",
"python_turtle"
] | stackoverflow_0074648896_multidimensional_array_python_python_3.x_python_turtle.txt |
Q:
Numpy array with different mean and standard deviation per column
I would like to get a numpy array of shape 1000 rows by 2 columns.
The 1st column will contain Gaussian distributed variables with standard deviation 2 and mean 1.
The 2nd column will contain Gaussian distributed variables with mean -1 and standard deviation 0.5.
How do I create the array using the given values of mean and std?
A:
You can use numpy's random generators.
import numpy as np
# as per kwinkunks suggestion
rng = np.random.default_rng()
arr1 = rng.normal(1, 2, 1000).reshape(1000, 1)
arr2 = rng.normal(-1, 0.5, 1000).reshape(1000, 1)
arr1[:5]
array([[-2.8428678 ],
[ 2.52213097],
[-0.98329961],
[-0.87854616],
[ 0.65674208]])
arr2[:5]
array([[-0.85321735],
[-1.59748405],
[-1.77794019],
[-1.02239036],
[-0.57849622]])
After that, you can concatenate.
np.concatenate([arr1, arr2], axis = 1)
# output
array([[-2.8428678 , -0.85321735],
[ 2.52213097, -1.59748405],
[-0.98329961, -1.77794019],
...,
[ 0.84249042, -0.26451526],
[ 0.6950764 , -0.86348222],
[ 3.53885426, -0.95546126]])
A:
Use np.random.normal directly:
import numpy as np
np.random.normal([1, -1], [2, 0.5], (1000, 2))
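A quick sanity check of the broadcasting (a sketch; the printed values vary with the seed):
import numpy as np

rng = np.random.default_rng(0)
arr = rng.normal([1, -1], [2, 0.5], (1000, 2))  # per-column mean and std via broadcasting
print(arr.mean(axis=0))  # roughly [ 1., -1. ]
print(arr.std(axis=0))   # roughly [ 2.,  0.5]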
A:
You can just create two normal distributions with the mean and std for each and stack them.
np.hstack((np.random.normal(1, 2, size=(1000,1)), np.random.normal(-1, 0.5, size=(1000,1))))
| Numpy array with different mean and standard deviation per column | I would like to get a numpy array of shape 1000 rows by 2 columns.
The 1st column will contain Gaussian distributed variables with standard deviation 2 and mean 1.
The 2nd column will contain Gaussian distributed variables with mean -1 and standard deviation 0.5.
How do I create the array using the given values of mean and std?
| [
"You can use numpy's random generators.\nimport numpy as np\n\n# as per kwinkunks suggestion\nrng = np.random.default_rng()\n\narr1 = rng.normal(1, 2, 1000).reshape(1000, 1)\narr2 = rng.normal(-1, 0.5, 1000).reshape(1000, 1)\n\narr1[:5]\n\narray([[-2.8428678 ],\n [ 2.52213097],\n [-0.98329961],\n [-0.87854616],\n [ 0.65674208]])\n\narr2[:5]\n\narray([[-0.85321735],\n [-1.59748405],\n [-1.77794019],\n [-1.02239036],\n [-0.57849622]])\n\nAfter that, you can concatenate.\nnp.concatenate([arr1, arr2], axis = 1)\n\n# output\narray([[-2.8428678 , -0.85321735],\n [ 2.52213097, -1.59748405],\n [-0.98329961, -1.77794019],\n ...,\n [ 0.84249042, -0.26451526],\n [ 0.6950764 , -0.86348222],\n [ 3.53885426, -0.95546126]])\n\n",
"Use np.random.normal directly:\nimport numpy as np\nnp.random.normal([1, -1], [2, 0.5], (1000, 2))\n\n",
"You can just create two normal distributions with the mean and std for each and stack them.\nnp.hstack((np.random.normal(1, 2, size=(1000,1)), np.random.normal(-1, 0.5, size=(1000,1))))\n\n"
] | [
1,
1,
0
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074646236_numpy_python.txt |
Q:
Pandas KeyError: value not in index
I have the following code,
df = pd.read_csv(CsvFileName)
p = df.pivot_table(index=['Hour'], columns='DOW', values='Changes', aggfunc=np.mean).round(0)
p.fillna(0, inplace=True)
p[["1Sun", "2Mon", "3Tue", "4Wed", "5Thu", "6Fri", "7Sat"]] = p[["1Sun", "2Mon", "3Tue", "4Wed", "5Thu", "6Fri", "7Sat"]].astype(int)
It has always been working until the csv file doesn't have enough coverage (of all week days). For example, with the following .csv file:
DOW,Hour,Changes
4Wed,01,237
3Tue,07,2533
1Sun,01,240
3Tue,12,4407
1Sun,09,2204
1Sun,01,240
1Sun,01,241
1Sun,01,241
3Tue,11,662
4Wed,01,4
2Mon,18,4737
1Sun,15,240
2Mon,02,4
6Fri,01,1
1Sun,01,240
2Mon,19,2300
2Mon,19,2532
I'll get the following error:
KeyError: "['5Thu' '7Sat'] not in index"
It seems to have a very easy fix, but I'm just too new to Python to know how to fix it.
A:
Use reindex to get all columns you need. It'll preserve the ones that are already there and put in empty columns otherwise.
p = p.reindex(columns=['1Sun', '2Mon', '3Tue', '4Wed', '5Thu', '6Fri', '7Sat'])
So, your entire code example should look like this:
df = pd.read_csv(CsvFileName)
p = df.pivot_table(index=['Hour'], columns='DOW', values='Changes', aggfunc=np.mean).round(0)
p.fillna(0, inplace=True)
columns = ["1Sun", "2Mon", "3Tue", "4Wed", "5Thu", "6Fri", "7Sat"]
p = p.reindex(columns=columns)
p[columns] = p[columns].astype(int)
A:
I had a very similar issue. I got the same error because the csv contained spaces in the header. My csv contained a header "Gender " and I had it listed as:
[['Gender']]
If it's easy enough for you to access your csv, you can use the Excel formula trim() to trim any spaces from the cells,
or remove them in pandas like this:
df.columns = df.columns.to_series().apply(lambda x: x.strip())
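A more direct equivalent uses the string methods available on the columns Index itself (a minimal sketch; the file name is hypothetical):
import pandas as pd

df = pd.read_csv('myfile.csv')
# Index objects expose .str accessors, so the detour through to_series() is unnecessary
df.columns = df.columns.str.strip()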
A:
please try this to clean and format your column names:
df.columns = (df.columns.str.strip().str.upper()
.str.replace(' ', '_')
.str.replace('(', '')
.str.replace(')', ''))
A:
I had the same issue.
During initial development I used a .csv file (comma as separator) that I modified a bit before saving it.
After saving, the commas became semicolons.
On Windows this depends on the "Regional and Language Options" customize screen, where you find a List separator. This is the character Windows applications expect to be the CSV separator.
When testing from a brand-new file I encountered that issue.
I removed the 'sep' argument in the read_csv method.
Before:
df1 = pd.read_csv('myfile.csv', sep=',');
After:
df1 = pd.read_csv('myfile.csv');
That way, the issue disappeared.
| Pandas KeyError: value not in index | I have the following code,
df = pd.read_csv(CsvFileName)
p = df.pivot_table(index=['Hour'], columns='DOW', values='Changes', aggfunc=np.mean).round(0)
p.fillna(0, inplace=True)
p[["1Sun", "2Mon", "3Tue", "4Wed", "5Thu", "6Fri", "7Sat"]] = p[["1Sun", "2Mon", "3Tue", "4Wed", "5Thu", "6Fri", "7Sat"]].astype(int)
It has always been working until the csv file doesn't have enough coverage (of all week days). For e.g., with the following .csv file,
DOW,Hour,Changes
4Wed,01,237
3Tue,07,2533
1Sun,01,240
3Tue,12,4407
1Sun,09,2204
1Sun,01,240
1Sun,01,241
1Sun,01,241
3Tue,11,662
4Wed,01,4
2Mon,18,4737
1Sun,15,240
2Mon,02,4
6Fri,01,1
1Sun,01,240
2Mon,19,2300
2Mon,19,2532
I'll get the following error:
KeyError: "['5Thu' '7Sat'] not in index"
It seems to have a very easy fix, but I'm just too new to Python to know how to fix it.
| [
"Use reindex to get all columns you need. It'll preserve the ones that are already there and put in empty columns otherwise.\np = p.reindex(columns=['1Sun', '2Mon', '3Tue', '4Wed', '5Thu', '6Fri', '7Sat'])\n\nSo, your entire code example should look like this:\ndf = pd.read_csv(CsvFileName)\n\np = df.pivot_table(index=['Hour'], columns='DOW', values='Changes', aggfunc=np.mean).round(0)\np.fillna(0, inplace=True)\n\ncolumns = [\"1Sun\", \"2Mon\", \"3Tue\", \"4Wed\", \"5Thu\", \"6Fri\", \"7Sat\"]\np = p.reindex(columns=columns)\np[columns] = p[columns].astype(int)\n\n",
"I had a very similar issue. I got the same error because the csv contained spaces in the header. My csv contained a header \"Gender \" and I had it listed as: \n[['Gender']]\n\nIf it's easy enough for you to access your csv, you can use the excel formula trim() to clip any spaces of the cells.\nor remove it like this \ndf.columns = df.columns.to_series().apply(lambda x: x.strip())\n",
"please try this to clean and format your column names:\ndf.columns = (df.columns.str.strip().str.upper()\n .str.replace(' ', '_')\n .str.replace('(', '')\n .str.replace(')', ''))\n\n",
"I had the same issue. \nDuring the 1st development I used a .csv file (comma as separator) that I've modified a bit before saving it. \nAfter saving the commas became semicolon.\nOn Windows it is dependent on the \"Regional and Language Options\" customize screen where you find a List separator. This is the char Windows applications expect to be the CSV separator.\nWhen testing from a brand new file I encountered that issue.\nI've removed the 'sep' argument in read_csv method\nbefore: \ndf1 = pd.read_csv('myfile.csv', sep=',');\n\nafter: \ndf1 = pd.read_csv('myfile.csv');\n\nThat way, the issue disappeared.\n"
] | [
46,
28,
2,
0
] | [
"I had a some extra space in csv file ahead of Thermal_Rating\nso i just removed the space and saved the csv file, rerun the df, and it worked\n"
] | [
-1
] | [
"dataframe",
"indexing",
"pandas",
"python"
] | stackoverflow_0038462920_dataframe_indexing_pandas_python.txt |
Q:
How to add/append new lists to an already existing zip?
I am programming a grocery list using dictionaries and functions (I am a beginner), and not all of my code is shown here. I have made a zip of 2 lists (an integer asking for the price and a string asking for the item the user wants to add). I am using a while loop that allows the user to add items and their prices to those 2 lists. Now I have asked the user to add another item and its price to those 2 lists in the zip, and then I should print the new lists. The problem I need help with is that I'm only able to print the first item and its price, and when I add new ones nothing is printed. Another thing I need help with is to move the "selection_1" block into the function "def alternativ_1()" and have it return summ and add_item.
I have also tried making a new zip like:
flatlist = zip(*data)
flatlist.append(add_item)
But it's not helping.
I am using "
from tabulate import tabulate
to print those two lists in a table form.
I have made 2 lists, one for the item (new_item) and one for the item's price (price_item), and then added them to a zip.
new_item= []
price_item= []
data = zip(new_item, price_item)
# def selection_1():
while True:
    if selection == 1:
        add_item = input('name? ')
        summ = int(input('sum: '))
        new_item.append(add_item)  # only prints the first item
        price_item.append(summ)    # only prints the first price
    elif selection == 2:
        selection_2()
    # bla bla bla
    # etc
A:
zip isn't supposed to be used like that. zip returns a one-off iterator, not designed to be useful beyond the context of a single loop. If you want to loop over the items of two lists (or other iterables) together, you call zip and iterate over the iterator it gives you. If you want to iterate again, you call zip again.
It looks like you're recording item names and prices in two lists, new_item and price_item. In that case, when you need to record a new item, append the name to new_item and append the price to price_item. Don't do anything with a zip iterator at this point - you shouldn't even have a zip iterator at this point.
When you need to iterate over these two lists together, then you call zip.
(Another approach would be to just keep a list of (item, price) tuples, or a list of instances of some class that records price and other item info, but the above is how you'd do things with separate lists and zip.)
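As a minimal sketch of that flow (the names new_item and price_item are taken from the question; add_entry is a helper of mine):
new_item = []
price_item = []

def add_entry(name, price):
    # record both halves of the entry in the two parallel lists
    new_item.append(name)
    price_item.append(price)

add_entry('milk', 3)
add_entry('bread', 2)

# call zip only at the moment you iterate; it is a one-off iterator
for name, price in zip(new_item, price_item):
    print(name, price)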
| How to add/append new lists to an already existing zip? | I am programming a grocery list by using dictionaries and functions (i am a beginner) and all of my codes are not here. I have made a zip av 2 lists (an integer asking about the price and one string asking about the item the user want to add). I am using a while loop that allow the user to add items and their prices to those 2 lists. Now i have asked the user to add another item and its price to those 2 lists in the zip and then i should print the new lists. The problem i need help with is that im only able to print first item and its price and when i add new ones then it prints nothing. Another thing i need help with is to move the "selection_1' to the function "def alternativ_1()" and expect it to return summ and add_item.
I have also tried making new zip like:
flatlist = zip(*data)
flatlist.append(add_item)
But its not helping.
I am using "
from tabulate import tabulate
to print those two lists in a table form.
I have made 2 lists, one for item (new_item) and one for the items price (price_item) and then added them to a zip.
new_item= []
price_item= []
data = zip(new_item, price_item)
# def selection_1():
While true
if selectio1n == 1:
add_item = input('name? ')
summ = int(input('sum: '))
new_item.append(add_item) #only print the first item
price_item.append(summ) #only print the first price
elif selection == 2:
selection_2()
bla bla bla
etc
| [
"zip isn't supposed to be used like that. zip returns a one-off iterator, not designed to be useful beyond the context of a single loop. If you want to loop over the items of two lists (or other iterables) together, you call zip and iterate over the iterator it gives you. If you want to iterate again, you call zip again.\nIt looks like you're recording item names and prices in two lists, new_item and price_item. In that case, when you need to record a new item, append the name to new_item and append the price to price_item. Don't do anything with a zip iterator at this point - you shouldn't even have a zip iterator at this point.\nWhen you need to iterate over these two lists together, then you call zip.\n(Another approach would be to just keep a list of (item, price) tuples, or a list of instances of some class that records price and other item info, but the above is how you'd do things with separate lists and zip.)\n"
] | [
2
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074649191_python_python_3.x.txt |
Q:
Error happend when import torch (pytorch)
Trying to use PyTorch; when I do
import torch
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-2-eb42ca6e4af3> in <module>
----> 1 import torch
C:\Big_Data_app\Anaconda3\lib\site-packages\torch\__init__.py in <module>
124 err = ctypes.WinError(last_error)
125 err.strerror += f' Error loading "{dll}" or one of its dependencies.'
--> 126 raise err
127 elif res is not None:
128 is_loaded = True
OSError: [WinError 182] <no description> Error loading "C:\Big_Data_app\Anaconda3\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
Not sure what happened.
The truth is, I installed PyTorch around the end of last year (I think).
I don't remember how; I installed it because I wanted to try it out.
But I guess I never used it after I installed it, and I don't remember running any script with PyTorch.
And now that I start to use it, I get this error and have no clue what I should check first.
That means I don't know which modules, frameworks, drivers, or apps are related to PyTorch.
So I don't know where to begin checking whether any other module might cause this error.
If anyone knows how to solve this, or where to start troubleshooting, please let me know.
Thank you.
My PyTorch version is 1.11.0.
OS: Windows 10
A:
I solved the problem.
Just reinstall your Anaconda.
!!Warning!!: you will lose your installed libraries.
Referring solution:
Problem with Torch 1.11
A:
The version may not be exactly the same as yours, but maybe this question asked on 2020-09-04 helps.
A:
My problem was solved by creating a new conda environment and installing PyTorch there. Everything worked perfectly on the first try in the new environment. Reinstalling Anaconda will work too, but this solution is less costly.
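For reference, a minimal sketch of the commands involved (the environment name torch-env is just an example):
conda create -n torch-env python=3.9
conda activate torch-env
conda install pytorch -c pytorch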
A:
In my case PyTorch broke after playing around with TensorFlow (installing different CPU and CUDA versions). I've just run Anaconda Prompt (with administrative privileges) and ordered Anaconda to update all possible packages by running following command:
conda update --all
After that the problem with PyTorch disappeared.
| Error happend when import torch (pytorch) | Try to use pytorch, when I do
import torch
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-2-eb42ca6e4af3> in <module>
----> 1 import torch
C:\Big_Data_app\Anaconda3\lib\site-packages\torch\__init__.py in <module>
124 err = ctypes.WinError(last_error)
125 err.strerror += f' Error loading "{dll}" or one of its dependencies.'
--> 126 raise err
127 elif res is not None:
128 is_loaded = True
OSError: [WinError 182] <no description> Error loading "C:\Big_Data_app\Anaconda3\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
Not sure what happend.
The truth is, I installed pytorch around the end of last year.(I think)
I don't remember how, I installed it because I want try to use it.
But I guess I never used it after I installed. don't remember running some script with pytorch.
And now I start to use it, get this error. Have no clue What I should check first.
That means, I don't know what module, framework, driver, app are related with pytorch.
So I don't know where to beginning, check is there any other module might cause this Error.
If anyone knows how to solve this. Or where to start the problem checking. Please let me know.
Thank you.
My pytorch version is 1.11.0
OS: window 10
| [
"I solved the problem.\nJust reinstall your Anaconda.\n!!Warning!!: you will lose your lib.\nReferring solution:\nProblem with Torch 1.11\n",
"The version may not be exactly as same as yours, but maybe this question asked on 2020-09-04 helps.\n",
"My problem was solved by creating a new conda environment and installing PyTorch there. Everything worked perfectly on the first try in the new environment. Reinstalling Anaconda will work too, but this solution is less costly.\n",
"In my case PyTorch broke after playing around with TensorFlow (installing different CPU and CUDA versions). I've just run Anaconda Prompt (with administrative privileges) and ordered Anaconda to update all possible packages by running following command:\nconda update --all\n\nAfter that the problem with PyTorch disappeared.\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"deep_learning",
"python",
"pytorch",
"windows"
] | stackoverflow_0073098560_deep_learning_python_pytorch_windows.txt |
Q:
Execute dynamically created function in another scope
I have a dynamically created function which accesses a variable from another scope. For example:
def dynamically_generated_function():
    print(x)  # x is not defined in a scope visible to this function
I would like to execute that function, but the variable x comes from a different place. I have access to the scope of that place via the proper frame from inspect.stack().
I see 2 solutions to achieve that:
How do I inject f_locals and f_globals from that frame into the current frame?
How do I execute dynamically_generated_function within the frame that has the variable x defined?
A:
I have managed to solve the problem via a different approach (executing the code directly instead of creating a function from it). Thanks @chepner for the inspiration!
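For anyone curious, a minimal sketch of that direct-execution idea (run_in_caller is a hypothetical helper of mine; note that writes to f_locals inside a function do not reliably persist, so this is mainly useful for reading):
import inspect

def run_in_caller(code_string):
    # execute the code in the caller's frame so names like x resolve there
    frame = inspect.stack()[1].frame
    exec(code_string, frame.f_globals, frame.f_locals)

x = 42
run_in_caller('print(x)')  # prints 42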
| Execute dynamically created function in another scope | I have got dynamically created funcion which accesses variable from another scope. For example:
def dynamically_generated_function():
print(x) # x is not defined in scope visible to this function
I would like to execute that funcion, but variable x comes from diffrent place. I have got access to scope of that place via proper frame from inspect.stack()
I see 2 solutions to achieve that:
How to inject f_locals and f_globals from that frame to current frame?
How to execute dynamically_generated_function within frame that has variable x defined?
| [
"I have managed to solve the problem via different approach (executing code directLy instead of function creation from it). Thanks @chepner for inspiration!\n"
] | [
0
] | [] | [] | [
"code_generation",
"code_inspection",
"python"
] | stackoverflow_0074631814_code_generation_code_inspection_python.txt |
Q:
How can i get refreshed value of flask variable with JS and show in HTML template every 30 seconds?
This is a parking app which refreshes the available parking slots every 30 seconds WITHOUT refreshing the page.
This is my .py with the route and render template
@views.route('/')
def home():
while True:
try:
token=getToken()
if(token!='null' or token!=''):
plazas=getInfo(token,parkingID)
except:
print('Error en la conexion')
time.sleep(secs)
return render_template("home.html", plazas=plazas)
My HTML is:
<script src="{{ url_for('static', filename='js/main.js') }}"></script>
<script type="text/javascript">
myVar = setInterval(refresh,30000,{{plazas}});
</script>
</head>
<title>Home</title>
</head>
<body>
<table>
{% for parking in parkings %}
<tr>
<td class="par"><img src={{parking.image}} alt="img"></td>
<td class="nombre">{{parking.nombre}}</td>
{% if plazas|int >= (totalplazas*30)/100 %}
<td class="num" style="color:#39FF00">
{{plazas}}</td>
{% elif plazas|int < 1%}
<td class="num" style="color:red"><p class="an">COMPLETO</p></td>
{% elif plazas|int <= (totalplazas*10)/100%}
<td class="num" style="color:red">
{{plazas}}</td>
{% else %}
<td class="num" style="color:yellow">
{{plazas}}</td>
{% endif %}
<td class="dir"><img src={{parking.direccion}} alt="img"></td>
</tr>
{% endfor %}
</table>
</body>
</html>
And my JS:
var elements = document.getElementsByClassName("num");
function refresh(pl){
elements.innerHTML = pl;
}
My problem is that the {{plazas}} variable always takes the initial value and is not updated every 30 seconds even if i use while true: loop in my .py.
Any help?
A:
Your problem seems to be updating the rendered DOM periodically.
You'll probably need JavaScript to handle that.
There are plenty of frameworks that can enable this efficiently.
Consider looking for popular solutions, like ReactJS or VueJS.
A:
A simple solution would be to get the data by fetching it from javascript using await fetch(). You can then set up an updater by using setInterval().
Flask
@app.route('/refresh')
def refresh():
    try:
        token = getToken()
        plazas = 0
        if token != 'null' and token != '':
            plazas = getInfo(token, parkingID)
        # return the fresh value so the JavaScript below can display it
        return str(plazas)
    except:
        return "Error refreshing!"
Javascript
// getElementsByClassName returns a collection, so use querySelector to get a single element
var element = document.querySelector(".num")
async function reload() {
// make a request to reload
const promise = await fetch("/refresh")
// get the text from the request, alternatively you can also use .json() to
// retrieve json
const text = await promise.text()
element.innerHTML = text
}
// refresh every 30 seconds
window.onload = setInterval(reload, 30000)
| How can i get refreshed value of flask variable with JS and show in HTML template every 30 seconds? | This is a parking app which refresh the available parking slots every 30 seconds WITHOUT refreshing page.
This is my .py with the route and render template
@views.route('/')
def home():
while True:
try:
token=getToken()
if(token!='null' or token!=''):
plazas=getInfo(token,parkingID)
except:
print('Error en la conexion')
time.sleep(secs)
return render_template("home.html", plazas=plazas)
My HTML is:
<script src="{{ url_for('static', filename='js/main.js') }}"></script>
<script type="text/javascript">
myVar = setInterval(refresh,30000,{{plazas}});
</script>
</head>
<title>Home</title>
</head>
<body>
<table>
{% for parking in parkings %}
<tr>
<td class="par"><img src={{parking.image}} alt="img"></td>
<td class="nombre">{{parking.nombre}}</td>
{% if plazas|int >= (totalplazas*30)/100 %}
<td class="num" style="color:#39FF00">
{{plazas}}</td>
{% elif plazas|int < 1%}
<td class="num" style="color:red"><p class="an">COMPLETO</p></td>
{% elif plazas|int <= (totalplazas*10)/100%}
<td class="num" style="color:red">
{{plazas}}</td>
{% else %}
<td class="num" style="color:yellow">
{{plazas}}</td>
{% endif %}
<td class="dir"><img src={{parking.direccion}} alt="img"></td>
</tr>
{% endfor %}
</table>
</body>
</html>
And my JS:
var elements = document.getElementsByClassName("num");
function refresh(pl){
elements.innerHTML = pl;
}
My problem is that the {{plazas}} variable always takes the initial value and is not updated every 30 seconds, even if I use a while True: loop in my .py.
Any help?
| [
"Your problem seems to be updating the rendered DOM periodically.\nYou'll probably need JavaScript to handle that.\nThere are plenty of frameworks that can enable this efficiently.\nConsider looking for popular solutions, like ReactJS or VueJS.\n",
"A simple solution would be to get the data by fetching it from javascript using await fetch(). You can then set up an updater by using setInterval().\nFlask\[email protected]('/refresh')\ndef refresh();\n try:\n token=getToken()\n if(token!='null' or token!=''):\n plazas=getInfo(token,parkingID)\n except:\n return \"Error refreshing!\"\n\nJavascript\nvar element = document.getElementsByClassName(\"num\")\n\nasync function reload() {\n // make a request to reload\n const promise = await fetch(\"/refresh\")\n // get the text from the request, alternatively you can also use .json() to \n // retrieve json\n const text = await promise.text()\n element.innerHTML = text\n}\n\n// refresh every 30 seconds\nwindow.onload = setInterval(reload, 30000)\n\n"
] | [
0,
0
] | [] | [] | [
"flask",
"html",
"javascript",
"jinja2",
"python"
] | stackoverflow_0074641706_flask_html_javascript_jinja2_python.txt |
Q:
"import tensorflow" results in error: No module named 'tensorflow.python.eager.polymorphic_function' (Python in Jupyter Lab)
Python 3.9.12.
Windows 10.
jupyterlab 3.3.2.
Import tensorflow
When I try to import Tensorflow, I get the following 'tensorflow.python.eager.polymorphic_function' error.
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In [44], line 1
----> 1 import tensorflow
File ~\OD13\TFODCourse\tfod13\lib\site-packages\tensorflow\__init__.py:45
42 from tensorflow.python import tf2 as _tf2
43 _tf2.enable()
---> 45 from ._api.v2 import __internal__
46 from ._api.v2 import __operators__
47 from ._api.v2 import audio
File ~\OD13\TFODCourse\tfod13\lib\site-packages\tensorflow\_api\v2\__internal__\__init__.py:14
12 from . import eager_context
13 from . import feature_column
---> 14 from . import function
15 from . import graph_util
16 from . import mixed_precision
File ~\OD13\TFODCourse\tfod13\lib\site-packages\tensorflow\_api\v2\__internal__\function\__init__.py:8
3 """Public API for tf.__internal__.function namespace.
4 """
6 import sys as _sys
----> 8 from tensorflow.python.eager.polymorphic_function.polymorphic_function import Function
9 from tensorflow.python.eager.polymorphic_function.quarantine import defun_with_attributes
ModuleNotFoundError: No module named 'tensorflow.python.eager.polymorphic_function'
My workflow is based on this tutorial: https://www.youtube.com/watch?v=yqkISICHH-U
I found the following answer, but I'm not understanding how to implement the TFLite Authoring Tool to solve this problem:
https://stackoverflow.com/questions/74177865/tensorflow-python-eager-polymorphic-function-no-module-error-on-imports
A:
To answer my own question:
I created a conda environment with an older version of Python (3.7) installed in it, and that seems to have fixed the problem (a minimal sketch of the commands follows the links below).
I found these links to be helpful:
How to downgrade the Python Version from 3.8 to 3.7 on windows?
conda install downgrade python version
Jupyter Notebook - Cannot Connect to Kernel
How to find the version of jupyter notebook from within the notebook
pip command to downgrade jupyter notebook
How to check python anaconda version installed on Windows 10 PC?
https://towardsdatascience.com/get-your-conda-environment-to-show-in-jupyter-notebooks-the-easy-way-17010b76e874
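A minimal sketch of the approach, assuming the environment name tf37 (pin TensorFlow to a version that still supports Python 3.7):
conda create -n tf37 python=3.7
conda activate tf37
pip install tensorflow jupyterlab
python -m ipykernel install --user --name tf37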
| "import tensorflow" results in error: No module named 'tensorflow.python.eager.polymorphic_function' (Python in Jupyter Lab) | Python 3.9.12.
Windows 10.
jupyterlab 3.3.2.
Import tensorflow
When I try to import Tensorflow, I get the following 'tensorflow.python.eager.polymorphic_function' error.
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In [44], line 1
----> 1 import tensorflow
File ~\OD13\TFODCourse\tfod13\lib\site-packages\tensorflow\__init__.py:45
42 from tensorflow.python import tf2 as _tf2
43 _tf2.enable()
---> 45 from ._api.v2 import __internal__
46 from ._api.v2 import __operators__
47 from ._api.v2 import audio
File ~\OD13\TFODCourse\tfod13\lib\site-packages\tensorflow\_api\v2\__internal__\__init__.py:14
12 from . import eager_context
13 from . import feature_column
---> 14 from . import function
15 from . import graph_util
16 from . import mixed_precision
File ~\OD13\TFODCourse\tfod13\lib\site-packages\tensorflow\_api\v2\__internal__\function\__init__.py:8
3 """Public API for tf.__internal__.function namespace.
4 """
6 import sys as _sys
----> 8 from tensorflow.python.eager.polymorphic_function.polymorphic_function import Function
9 from tensorflow.python.eager.polymorphic_function.quarantine import defun_with_attributes
ModuleNotFoundError: No module named 'tensorflow.python.eager.polymorphic_function'
My workflow is based on this tutorial: https://www.youtube.com/watch?v=yqkISICHH-U
I found the following answer, but I'm not understanding how to implement the TFLite Authoring Tool to solve this problem:
https://stackoverflow.com/questions/74177865/tensorflow-python-eager-polymorphic-function-no-module-error-on-imports
| [
"To answer my own question:\nI created a conda environment and installed an older version of Python (3.7) in it and that seems to have fixed the problem.\nI found these links to be helpful:\nHow to downgrade the Python Version from 3.8 to 3.7 on windows?\nconda install downgrade python version\nJupyter Notebook - Cannot Connect to Kernel\nHow to find the version of jupyter notebook from within the notebook\npip command to downgrade jupyter notebook\nHow to check python anaconda version installed on Windows 10 PC?\nhttps://towardsdatascience.com/get-your-conda-environment-to-show-in-jupyter-notebooks-the-easy-way-17010b76e874\n"
] | [
0
] | [] | [] | [
"jupyter",
"object_detection_api",
"python",
"python_3.x",
"tensorflow"
] | stackoverflow_0074635830_jupyter_object_detection_api_python_python_3.x_tensorflow.txt |
Q:
Why are attributes of a tk object being 'retroactively' changed?
Personal project: I'm thinking it would be cool to be able to create a one-to-many relationship between windows, so that when a "parent" window is closed all of its "children" are also closed.
So here is the window class that creates new windows via the Tk() function:
from tkinter import *
class Window:
def __init__(self, title):
self.create(title)
def create(self,title):
self.window = Tk()
self.window.title(title)
self.window.protocol("WM_DELETE_WINDOW",self.delete)
def child(self, title):
self.create(title)
def delete(self):
print(f'Destroying: {self.window.title()}')
self.window.destroy()
parentclass1 = Window("ParentClass1")
parentclass2 = Window("ParentClass2")
parentclass3 = Window("ParentClass3")
print(parentclass1.window.title())
print(parentclass2.window.title())
print(parentclass3.window.title())
mainloop()
This works fine. Each window opens, and when its title is queried each instance returns the correct title:
print(parentclass1.window.title()) #=> "ParentClass1"
print(parentclass2.window.title()) #=> "ParentClass2"
print(parentclass3.window.title()) #=> "ParentClass3"
What I want to be able to do is call the child method on the parentclass2 instance and instantly set up a relationship between parentclass2 and the newly created instance, i.e. parentclass2 is the parent and the newly created instance is the child of parentclass2.
However, before I even get to setting up this relationship via an array, a very weird thing happens when I use the child method:
parentclass2.child("ChildOfParentClass2")
print(parentclass1.window.title()) #=> "ParentClass1"
print(parentclass2.window.title()) #=> "ChildOfParentClass2"
print(parentclass3.window.title()) #=> "ParentClass3"
parentclass2.window.title() now returns the string "ChildOfParentClass2".
This is odd. self.window = Tk() is clearly being called twice, separately, and yet somehow setting the title of "ChildOfParentClass2" is "going up the stack" and is renaming ParentClass2 to ChildOfParentClass2?
I don't think its the .title method that's doing this. I think parentclass2.window is literally being turned into childofparentclass2.window.
I am aware that tkinter is behaving weirdly because I'm trying to force it into my object-oriented approach... but it would be cool to use it this way, so I would appreciate an answer.
Can anyone explain this weird behaviour, and maybe how it could be solved so that I'll be able to call parentclass2.child("ChildOfParentClass2") and have it work as expected?
I've tried using Toplevel() in child and Tk() in init but exactly the same weird behavior occurs:
def __init__(self, title):
self.window = Tk()
self.create(title)
def create(self,title):
self.window.title(title)
self.window.protocol("WM_DELETE_WINDOW",self.delete)
def child(self, title):
self.window = Toplevel() # thought this would work tbh
self.create(title)
A:
The reason for the odd behavior is that in create you're redefining self.window to be the newly created window. It no longer represents the original window. So, when you print the title of what you think is the main window you actually are printing the title of the child window.
If you want to create a child of a root window, you need to create instances of Toplevel. You can then pass the root window in as the master of the Toplevel to create the parent/child relationship.
def child(self, title):
new_window = Toplevel(master=self.window)
new_window.title(title)
When you do this, child windows will automatically be deleted when the parent dies. You don't have to do anything at all to make that happen, that's the default behavior of tkinter widgets.
Bear in mind that if you create more than one instance of Tk, each is isolated from the other. Images, variables, fonts, and other widgets created in one cannot communicate with or be moved to another. Each gets their own separate internal tcl interpreter.
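Putting that together, a minimal sketch of how the Window class could be reworked so that only the first window is a Tk root and children are Toplevels (the master parameter is my addition, not part of the original code):
import tkinter as tk

class Window:
    def __init__(self, title, master=None):
        # root window when no master is given, otherwise a Toplevel child of it
        self.window = tk.Tk() if master is None else tk.Toplevel(master)
        self.window.title(title)
        self.window.protocol("WM_DELETE_WINDOW", self.delete)

    def child(self, title):
        # the child is a separate Window object; self.window is left untouched
        return Window(title, master=self.window)

    def delete(self):
        print(f"Destroying: {self.window.title()}")
        self.window.destroy()

parent = Window("ParentClass2")
child = parent.child("ChildOfParentClass2")  # closing parent also closes the child
parent.window.mainloop()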
| Why are attributes of a tk object being 'retroactively' changed? | Personal project, I'm thinking it would be cool to be able to create a one to has many relationship between windows, so when a "parent" window is closed all of its "children" are also also closed.
So here is the window class that creates new windows via the Tk() function:
from tkinter import *
class Window:
def __init__(self, title):
self.create(title)
def create(self,title):
self.window = Tk()
self.window.title(title)
self.window.protocol("WM_DELETE_WINDOW",self.delete)
def child(self, title):
self.create(title)
def delete(self):
print(f'Destroying: {self.window.title()}')
self.window.destroy()
parentclass1 = Window("ParentClass1")
parentclass2 = Window("ParentClass2")
parentclass3 = Window("ParentClass3")
print(parentclass1.window.title())
print(parentclass2.window.title())
print(parentclass3.window.title())
mainloop()
This works fine. Each window opens, and when its title is queried each instance returns the correct title:
print(parentclass1.window.title()) #=\> "ParentClass1"
print(parentclass2.window.title()) #=\> "ParentClass2"
print(parentclass3.window.title()) #=\> "ParentClass3"
What I want to be able to do is call the child method on the parentclass2 instance and instantly set up a relationship between parentclass2 and the newly created instance. I.e parentclass2 is the parent and the newly created instance is the child of parentclass2.
However before I get even to setting up this relationship via an array, a very weird thing happens when I use the child method:
parentclass2.child("ChildOfParentClass2")
print(parentclass1.window.title()) #=> "ParentClass1"
print(parentclass2.window.title()) #=> "ChildOfParentClass2"
print(parentclass3.window.title()) #=> "ParentClass1"
parentclass2.window.title() now returns the string "ChildOfParentClass2".
This is odd. self.window = Tk() is clearly being called twice, separately, and yet somehow setting the title of "ChildOfParentClass2" is "going up the stack" and is renaming ParentClass2 to ChildOfParentClass2?
I don't think its the .title method that's doing this. I think parentclass2.window is literally being turned into childofparentclass2.window.
I am aware that tkinter is behaving weirdly because I'm trying to force it into my object orientated approach...but it would be cool to use it this way so would appreciate an answer.
Can any one explain this weird behaviour, and maybe how it could be solved and I'll be able to call parentclass2.child("ChildOfParentClass2") and have it work as expected?
I've tried using Toplevel() in child and Tk() in init but exactly the same weird behavior occurs:
def __init__(self, title):
self.window = Tk()
self.create(title)
def create(self,title):
self.window.title(title)
self.window.protocol("WM_DELETE_WINDOW",self.delete)
def child(self, title):
self.window = Toplevel() # thought this would work tbh
self.create(title)
| [
"The reason for the odd behavior is that in create you're redefining self.window to be the newly created window. It no longer represents the original window. So, when you print the title of what you think is the main window you actually are printing the title of the child window.\nIf you want to create a child of a root window, you need to create instances of Toplevel. You can then pass the root window in as the master of the Toplevel to create the parent/child relationship.\ndef child(self, title):\n new_window = Toplevel(master=self.window)\n new_window.title(title)\n\nWhen you do this, child windows will automatically be deleted when the parent dies. You don't have to do anything at all to make that happen, that's the default behavior of tkinter widgets.\nBear in mind that if you create more than one instance of Tk, each is isolated from the other. Images, variables, fonts, and other widgets created in one cannot communicate with or be moved to another. Each gets their own separate internal tcl interpreter.\n"
] | [
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0074648032_python_tkinter.txt |
Q:
Recursion on odd to be front, even in the back
I am new to python.
I am writing a recusion to returns a COPY of the list with odds at front, evens in the back.
For example: [3,4,5,6] returns [3,5,6,4].
How should I break the problem into small pieces.
def oddsevens(thelist):
if thelist == []:
return []
if thelist[0] % 2 == 0:
A:
try this:
if thelist == []:
return []
if thelist[0] % 2 == 0:
return oddsevens(thelist[1:]) + thelist[:1]
else:
return thelist[:1] + oddsevens(thelist[1:])
Let me know if you have any further questions!
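Putting the body back into the original function signature, it reproduces the example from the question:
def oddsevens(thelist):
    if thelist == []:
        return []
    if thelist[0] % 2 == 0:
        # even: recurse on the rest, then append this element at the back
        return oddsevens(thelist[1:]) + thelist[:1]
    else:
        # odd: keep this element at the front
        return thelist[:1] + oddsevens(thelist[1:])

print(oddsevens([3, 4, 5, 6]))  # [3, 5, 6, 4]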
| Recursion on odd to be front, even in the back | I am new to python.
I am writing a recusion to returns a COPY of the list with odds at front, evens in the back.
For example: [3,4,5,6] returns [3,5,6,4].
How should I break the problem into small pieces.
def oddsevens(thelist):
if thelist == []:
return []
if thelist[0] % 2 == 0:
| [
"try this:\nif thelist == []:\n return []\nif thelist[0] % 2 == 0:\n return oddsevens(thelist[1:]) + thelist[:1]\nelse:\n return thelist[:1] + oddsevens(thelist[1:])\n\nLet me know if you have any further questions!\n"
] | [
2
] | [] | [] | [
"python",
"recursion"
] | stackoverflow_0074649317_python_recursion.txt |
Q:
Tkinter: get value from entry with button and store in variable
Say that we have an entry_object, a button button_object and a global variable called score.
I want to update the score when the button is clicked and the entry has a value. I tried looking at this answer but I need something slightly different. The main difference is the storing of the value in a variable.
I have a function to generate the elements:
def generate_gui():
root = tk.Tk()
root.geometry('900x700+50+50')
entry_object = tk.Entry(root, width=40)
entry_object.pack()
button_object = tk.Button(root, text='submit')
button_object.pack()
return root, entry_object, button_object
I now want to update score with entry_object.get() when the button is clicked.
generate_gui()
while accumulated_score < 10:
for player in players:
** #### pseudocode: if button_object clicked then player.accumulated_score += score#### **
How do I replace the pseudocode with the right code? Please help :)
A:
#If score is a global variable then:
score = 0
root = tk.Tk()
root.geometry('900x700+50+50')
entry_object = tk.Entry(root, width=40)
entry_object.pack()

def increment():
    global score
    # read the value typed into the entry and add it to the score
    score += int(entry_object.get())

# pass the function itself (no parentheses), otherwise it runs immediately
button_object = tk.Button(root, text='submit', command=increment)
button_object.pack()
root.mainloop()

#If it's an attribute of a "player" object then:
root = tk.Tk()
root.geometry('900x700+50+50')
entry_object = tk.Entry(root, width=40)
entry_object.pack()

def increment():
    score = int(entry_object.get())
    for player in players:
        if player.accumulated_score <= 10:
            player.accumulated_score += score

button_object = tk.Button(root, text='submit', command=increment)
button_object.pack()
root.mainloop()
| Tkinter: get value from entry with button and store in variable | Say that we have an entry_object, a button button_object and a global variable called score.
I want to update the score when the button is clicked and the entry has a value. I tried looking at this answer but I need something slightly different. The main difference is the storing of the value in a variable.
I have a function to generate the elements:
def generate_gui():
root = tk.Tk()
root.geometry('900x700+50+50')
entry_object = tk.Entry(root, width=40)
entry_object.pack()
button_object = tk.Button(root, text='submit')
button_object.pack()
return root, entry_object, button_object
I now want to update score with `entry_object.get()' when the button is clicked.
generate_gui()
while accumulated_score < 10:
for player in players:
** #### pseudocode: if button_object clicked then player.accumulated_score += score#### **
How do I replace the pseudocode with the right code? Please help :)
| [
"#If score is a global variable then:\nscore = 0\nroot = tk.Tk()\nroot.geometry('900x700+50+50')\nentry_object = tk.Entry(root, width=40)\nentry_object.pack()\n\ndef increment():\n global score\n score += 1\n\nbutton_object = tk.Button(root, text='submit', command=increment() )\nroot.mainloop()\n\n#If it's an atribute from an object \"player\" then:\nroot = tk.Tk()\nroot.geometry('900x700+50+50')\nentry_object = tk.Entry(root, width=40)\nentry_object.pack()\n\ndef increment():\n for player in players:\n if player.accumulated_score <= 10:\n player.accumulated_score += score\nbutton_object = tk.Button(root, text='submit', command=increment() )\nroot.mainloop()\n\n"
] | [
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0071945773_python_tkinter.txt |
Q:
Python pip Install failing with ModuleNotFoundError: No module named 'pyexpat' error
Complete Docker file: https://github.com/docker-library/python/blob/master/3.8/bullseye/Dockerfile
Docker file :
'''
ENV PYTHON_PIP_VERSION 22.0.4
#https://github.com/docker-library/python/issues/365
ENV PYTHON_SETUPTOOLS_VERSION 57.5.0
#https://github.com/pypa/get-pip
ENV PYTHON_GET_PIP_URL https://github.com/pypa/get-pip/raw/aeca83c7ba7f9cdfd681103c4dcbf0214f6d742e/public/get-pip.py
ENV PYTHON_GET_PIP_SHA256 d0b5909f3ab32dae9d115aa68a4b763529823ad5589c56af15cf816fca2773d6
RUN set -eux; \
    wget -O get-pip.py "$PYTHON_GET_PIP_URL"; \
    echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; \
    export PYTHONDONTWRITEBYTECODE=1; \
    python get-pip.py \
        --disable-pip-version-check \
        --no-cache-dir \
        --no-compile \
        "pip==$PYTHON_PIP_VERSION" \
        "setuptools==$PYTHON_SETUPTOOLS_VERSION" ; \
    rm -f get-pip.py; \
    pip --version
'''
Error:
'''
> python get-pip.py --disable-pip-version-check --no-cache-dir --no-compile pip==22.0.4 setuptools==57.5.0
> Traceback (most recent call last):
> File "get-pip.py", line 32098, in <module>
> main()
> File "get-pip.py", line 135, in main
> bootstrap(tmpdir=tmpdir)
> File "get-pip.py", line 111, in bootstrap
> monkeypatch_for_cert(tmpdir)
> File "get-pip.py", line 92, in monkeypatch_for_cert
> from pip._internal.commands.install import InstallCommand
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/commands/install.py", line 16, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/cli/req_command.py", line 21, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/index/package_finder.py", line
> 33, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/req/__init__.py", line 8, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/req/req_install.py", line 42, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/operations/install/wheel.py",
> line 39, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_vendor/distlib/scripts.py", line 16, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_vendor/distlib/compat.py", line 83, in <module>
> File "/usr/local/lib/python3.8/xmlrpc/client.py", line 138, in <module>
> from xml.parsers import expat
> File "/usr/local/lib/python3.8/xml/parsers/expat.py", line 4, in <module>
> from pyexpat import *
> ModuleNotFoundError: No module named 'pyexpat'
'''
Trying to add Python 3.8 to my custom image. I could install Python 3.8.13. I am adding the above lines to the Dockerfile to install pip but I get the above error about the 'pyexpat' module missing.
How do I add this module? Do I have to install some package at the OS level?
A:
You need to install the expat-dev package before running get-pip.py:
RUN \
set -eu; \
apk update --no-cache; \
apk add --no-cache \
expat-dev \
;
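Note that apk is Alpine's package manager; since the Dockerfile in the question is based on Debian bullseye, the equivalent step would presumably use apt-get with the libexpat1-dev package:
RUN apt-get update \
 && apt-get install -y --no-install-recommends libexpat1-dev \
 && rm -rf /var/lib/apt/lists/*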
| Python pip Install failing with ModuleNotFoundError: No module named 'pyexpat' error | Complete Docker file: https://github.com/docker-library/python/blob/master/3.8/bullseye/Dockerfile
Docker file :
'''
ENV PYTHON_PIP_VERSION 22.0.4
#https://github.com/docker-library/python/issues/365
ENV PYTHON_SETUPTOOLS_VERSION 57.5.0
#https://github.com/pypa/get-pip
ENV PYTHON_GET_PIP_URL https://github.com/pypa/get-pip/raw/aeca83c7ba7f9cdfd681103c4dcbf0214f6d742e/public/get-pip.py
ENV PYTHON_GET_PIP_SHA256 d0b5909f3ab32dae9d115aa68a4b763529823ad5589c56af15cf816fca2773d6
RUN set -eux;
wget -O get-pip.py "$PYTHON_GET_PIP_URL";
echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; \
export PYTHONDONTWRITEBYTECODE=1; \
python get-pip.py \
--disable-pip-version-check \
--no-cache-dir \
--no-compile \
"pip==$PYTHON_PIP_VERSION" \
"setuptools==$PYTHON_SETUPTOOLS_VERSION" ;\
rm -f get-pip.py; \
pip --version
'''
Error:
'''
> python get-pip.py --disable-pip-version-check --no-cache-dir --no-compile pip==22.0.4 setuptools==57.5.0
> Traceback (most recent call last):
> File "get-pip.py", line 32098, in <module>
> main()
> File "get-pip.py", line 135, in main
> bootstrap(tmpdir=tmpdir)
> File "get-pip.py", line 111, in bootstrap
> monkeypatch_for_cert(tmpdir)
> File "get-pip.py", line 92, in monkeypatch_for_cert
> from pip._internal.commands.install import InstallCommand
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/commands/install.py", line 16, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/cli/req_command.py", line 21, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/index/package_finder.py", line
> 33, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/req/__init__.py", line 8, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/req/req_install.py", line 42, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_internal/operations/install/wheel.py",
> line 39, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_vendor/distlib/scripts.py", line 16, in <module>
> File "<frozen zipimport>", line 259, in load_module
> File "/tmp/tmpiqi24qri/pip.zip/pip/_vendor/distlib/compat.py", line 83, in <module>
> File "/usr/local/lib/python3.8/xmlrpc/client.py", line 138, in <module>
> from xml.parsers import expat
> File "/usr/local/lib/python3.8/xml/parsers/expat.py", line 4, in <module>
> from pyexpat import *
> ModuleNotFoundError: No module named 'pyexpat'
'''
Trying to add python 3.8 to my custom image. I could install python 3.8.13. I am adding above lines to docker file to install pip but getting above error of 'pyexpat' module missing.
How do I add this module, do I have to install some package at OS level?
| [
"You need to install expat-dev package before running get-pip.py\nRUN \\\n set -eu; \\\n apk update --no-cache; \\\n apk add --no-cache \\\n expat-dev \\\n ;\n\n"
] | [
0
] | [] | [] | [
"pip",
"python",
"python_3.x"
] | stackoverflow_0073332749_pip_python_python_3.x.txt |
Q:
How to write to a file from various places in the code in a pythonic and performant way
When writing to a file in python, you should typically use the using structure in order to have the file closed after writing, like so:
with open("myfile.txt", "a") as file1:
file1.write("Hello \n")
But if I, during the execution of my script, wants to write to the same file from different places in the code, I might be tempted to encapsulate the above structure in some kind of method like so:
def write_to_file(self, string_to_write):
with open(self.myfile_path, "w") as file1:
file1.write(f"{string_to_write}\n")
The above would likely give me a pretty bad performance hit since the file is opened every time I call the method.
The alternative that I can see is opening the file early in the program and having a file.close() call in some finally clause somewhere and hope for the best. But I understand this to be associated with some risk.
So given the above, how should one approach this task in a pythonic as well as a performant way?
A:
You can have your with statement early in your code and have all the other functions that use the file (and probably many that don't use the file but are called in between the ones that do) indented from it and pass the file to them.
This may not be wonderful to refactor things to this given the current code and complexity...
def main():
with open('file.txt','w') as file:
my_func_1()
my_func_2(file)
my_func_3
my_func_4(file)
...
def my_func_1():
...
def my_func_2(file):
...
file.write('thing to write')
...
def my_func_3():
...
def my_func_4(file):
...
file.write('thing to write')
...
A:
Unless you can put everything into with block like in scotscotmcc's answer, I'd probably just do it without with. Just open the file, write to it, then .close() it.
But here's an idea for using with and not putting the code into the with block. A generator that writes to the file what you send it:
def printer(filename):
with open(filename, 'w') as f:
while True:
print((yield), file=f)
# Demo usage
p = printer('test.txt')
next(p)
p.send('foo')
p.send('bar')
p.close()
# Check the resulting file
with open('test.txt') as f:
print(repr(f.read()))
Output (Try it online!):
'foo\nbar\n'
I don't think it's any better than just open+close a file, though. I just tried it for fun / out of curiosity.
A:
Based on @CharlieBONS comment and some Singleton patterns for python, I created this solution. I still suspect there is something better, closer to the file loggers in logging, but I have not had the time to dive into that.
The previous solution was based on logging to a file with a specific logging level, but since the python library needs to hook into other solutions with their own logging configurations, like AirFlow, i had to abandon this.
import json
import logging
import os
from pathlib import Path
from typing import List
class MyWriter:
__instance = None
__inited = False
def __new__(cls, path_to_file: Path) -> "MyWriter":
if cls.__instance is None:
cls.__instance = super().__new__(cls)
return cls.__instance
def __init__(self, path_to_file: Path) -> None:
if type(self).__inited:
return
self.cache: List[str] = []
self.path_to_file: Path = path_to_file
if self.path_to_file.is_file():
os.remove(self.path_to_file)
type(self).__inited = True
def write(self, string_to_write: str, flush=False):
try:
if string_to_write:
self.cache.append(f"{string_to_write)}\n")
if len(self.cache) > 1000 or flush:
with open(self.path_to_file, "a") as my_file:
extradata_file.writelines(self.cache)
self.cache = []
logging.debug("My Writer flushing the cache")
except Exception as ee:
error_message = "Something went wrong in My Writer"
logging.error(error_message)
raise ee
def flush(self):
self.write("", True)
| How to write to a file from various places in the code in a pythonic and performant way | When writing to a file in python, you should typically use the using structure in order to have the file closed after writing, like so:
with open("myfile.txt", "a") as file1:
file1.write("Hello \n")
But if I, during the execution of my script, wants to write to the same file from different places in the code, I might be tempted to encapsulate the above structure in some kind of method like so:
def write_to_file(self, string_to_write):
with open(self.myfile_path, "w") as file1:
file1.write(f"{string_to_write}\n")
The above would likely give me a pretty bad performance hit since the file is opened every time I call the method.
The alternative that I can see is opening the file early in the program and having a file.close() call in some finally clause somewhere and hope for the best. But I understand this to be associated with some risk.
So given the above, how should one approach this task in a pythonic as well as a performant way?
| [
"You can have your with statement early in your code and have all the other functions that use the file (and probably many that don't use the file but are called in between the ones that do) indented from it and pass the file to them.\nThis may not be wonderful to refactor things to this given the current code and complexity...\ndef main():\n with open('file.txt','w') as file:\n my_func_1()\n my_func_2(file)\n my_func_3\n my_func_4(file)\n ...\n\ndef my_func_1():\n ...\n\ndef my_func_2(file):\n ...\n file.write('thing to write')\n ...\n\ndef my_func_3():\n ...\n\ndef my_func_4(file):\n ...\n file.write('thing to write')\n ...\n\n",
"Unless you can put everything into with block like in scotscotmcc's answer, I'd probably just do it without with. Just open the file, write to it, then .close() it.\nBut here's an idea for using with and not putting the code into the with block. A generator that writes to the file what you send it:\ndef printer(filename):\n with open(filename, 'w') as f:\n while True:\n print((yield), file=f)\n\n# Demo usage\np = printer('test.txt')\nnext(p)\np.send('foo')\np.send('bar')\np.close()\n\n# Check the resulting file\nwith open('test.txt') as f:\n print(repr(f.read()))\n\nOutput (Try it online!):\n'foo\\nbar\\n'\n\nI don't think it's any better than just open+close a file, though. I just tried it for fun / out of curiosity.\n",
"Based on @CharlieBONS comment and some Singleton patterns for python, I created this solution. I still suspect there is something better, closer to the file loggers in logging, but I have not had the time to dive into that.\nThe previous solution was based on logging to a file with a specific logging level, but since the python library needs to hook into other solutions with their own logging configurations, like AirFlow, i had to abandon this.\nimport json\nimport logging\nimport os\nfrom pathlib import Path\nfrom typing import List\n\n\nclass MyWriter:\n\n __instance = None\n __inited = False\n\n def __new__(cls, path_to_file: Path) -> \"MyWriter\":\n if cls.__instance is None:\n cls.__instance = super().__new__(cls)\n return cls.__instance\n\n def __init__(self, path_to_file: Path) -> None:\n if type(self).__inited:\n return\n self.cache: List[str] = []\n self.path_to_file: Path = path_to_file\n if self.path_to_file.is_file():\n os.remove(self.path_to_file)\n type(self).__inited = True\n\n def write(self, string_to_write: str, flush=False):\n try:\n if string_to_write:\n self.cache.append(f\"{string_to_write)}\\n\")\n if len(self.cache) > 1000 or flush:\n with open(self.path_to_file, \"a\") as my_file:\n extradata_file.writelines(self.cache)\n self.cache = []\n logging.debug(\"My Writer flushing the cache\")\n except Exception as ee:\n error_message = \"Something went wrong in My Writer\"\n logging.error(error_message)\n raise ee\n\n def flush(self):\n self.write(\"\", True)\n\n"
] | [
2,
1,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074510331_python_python_3.x.txt |
Q:
Retry if status_code is 503
def verify_app_log_cur_day2(self, anypoint_monitoring, organization_id, int, applist, access_token):
headers = {"Authorization": f"Bearer {access_token}"}
payload = {}
log_list = []
for item in applist:
url = f"{anypoint_monitoring}/organizations/{organization_id}/environments/{int}/applications/{item}/logs}"
response = requests.request("GET", url, headers=headers, data=payload)
if response.status_code == 503:
continue
else:
log_list.append([response.json()])
return log_list
How to add functionality to my code?
If response.status_code == 503 then try again
If response.status_code again == 503 then continue
A:
Simply do it this way
def verify_app_log_cur_day2(self, anypoint_monitoring, organization_id, int, applist, access_token):
headers = {"Authorization": f"Bearer {access_token}"}
payload = {}
log_list = []
for item in applist:
url = f"{anypoint_monitoring}/organizations/{organization_id}/environments/{int}/applications/{item}/logs}"
response = requests.request("GET", url, headers=headers, data=payload)
# Functionality:
if response.status_code == 503:
response = requests.request("GET", url, headers=headers, data=payload)
if response.status_code == 503:
continue
else:
log_list.append(response.json())
return log_list
| Retry if status_code is 503 | def verify_app_log_cur_day2(self, anypoint_monitoring, organization_id, int, applist, access_token):
headers = {"Authorization": f"Bearer {access_token}"}
payload = {}
log_list = []
for item in applist:
url = f"{anypoint_monitoring}/organizations/{organization_id}/environments/{int}/applications/{item}/logs}"
response = requests.request("GET", url, headers=headers, data=payload)
if response.status_code == 503:
continue
else:
log_list.append([response.json()])
return log_list
How to add functionality to my code?
If response.status_code == 503 then try again
If response.status_code again == 503 then continue
| [
"Simply do it this way\ndef verify_app_log_cur_day2(self, anypoint_monitoring, organization_id, int, applist, access_token):\n headers = {\"Authorization\": f\"Bearer {access_token}\"}\n payload = {}\n log_list = []\n for item in applist:\n url = f\"{anypoint_monitoring}/organizations/{organization_id}/environments/{int}/applications/{item}/logs}\"\n response = requests.request(\"GET\", url, headers=headers, data=payload)\n\n # Functionality:\n if response.status_code == 503:\n response = requests.request(\"GET\", url, headers=headers, data=payload)\n if response.status_code == 503:\n continue \n else:\n log_list.append(response.json())\n\n return log_list\n\n"
] | [
0
] | [] | [] | [
"http_status_code_503",
"python",
"python_3.x",
"rest"
] | stackoverflow_0074648886_http_status_code_503_python_python_3.x_rest.txt |
Q:
Python: Pandas pd.read_excel giving ImportError: Install xlrd >= 0.9.0 for Excel support
I am trying to read a .xlsx with pandas, but get the following error:
data = pd.read_excel(low_memory=False, io="DataAnalysis1/temp1.xlsx").fillna(value=0)
Traceback (most recent call last):
File "/Users/Vineeth/PycharmProjects/DataAnalysis1/try1.py", line 9, in <module>
data = pd.read_excel(low_memory=False, io="DataAnalysis1/temp1.xlsx").fillna(value=0)
File "/Users/Vineeth/venv/lib/python2.7/site-packages/pandas/util/_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "/Users/Vineeth/venv/lib/python2.7/site-packages/pandas/io/excel.py", line 230, in read_excel
io = ExcelFile(io, engine=engine)
File "/Users/Vineeth/venv/lib/python2.7/site-packages/pandas/io/excel.py", line 263, in __init__
raise ImportError(err_msg)
ImportError: Install xlrd >= 0.9.0 for Excel support
I've also tried
data = pd.read_excel("DataAnalysis1/temp1.xlsx", low_memory=False).fillna(value=0)
And I Still get the same error.
Background: I'm trying to extract an excel file with multiple worksheets as a dict of data frames. I installed xlrd version 0.9.0 and the latest version (1.1.0) but I still get the same error. Thanks!
A:
As @COLDSPEED so eloquently pointed out, the error explicitly tells you to install xlrd.
pip install xlrd
And you will be good to go.
A:
Since December 2020, xlrd no longer supports xlsx files, as explained in the official changelog. You can use openpyxl instead:
pip install openpyxl
And in your python-file:
import pandas as pd
pd.read_excel('path/to/file.xlsx', engine='openpyxl')
A:
Either use:
pip install xlrd
And if you are using conda, use
conda install -c anaconda xlrd
That's it. good luck.
A:
If you are on Ubuntu, this worked for me:
python3 -m pip install openpyxl
python3 -m pip install xlrd
A:
This happened to me after I ran a script with cProfile a la python3 -m cProfile script.py even though xlrd was already installed and had never thrown this error before. It persisted even under python3 script.py. (Granted, I agree this wasn't what happened to OP, given the obvious import error)
However, for cases like mine, the following fixed the issue, despite being told "requirement already met" in every case.
pip install --upgrade pandas
pip install --upgrade xlrd
Pretty confounding stuff; not sure if cProfile was the cause or just a coincidence
The following should work, assuming your pip install operated on python2.
python3 -m pip install xlrd
A:
I was getting an error
"ImportError: Install xlrd >= 1.0.0 for Excel support"
in PyCharm for the below code
import pandas as pd
df2 = pd.read_excel("data.xlsx")
print(df2.head(3))
print(df2.tail(3))
Solution : pip install xlrd
It resolved the error after using this.
Also no need to use "import xlrd"
A:
This works for me:
For Python 3
pip3 install xlrd --user
For Python2
pip install xlrd --user
A:
I don't know if this will be helpful for someone, but I had the same problem.
I wrote pip install xlrd in the anaconda prompt while in the specific environment and it said it was installed, but when I looked at the installed packages it wasn't there.
What solved the problem was "moving" (I don't know the terminology for it) into the Scripts folder of the specific environment and do the pip install xlrd there.
Hope this is useful for someone :D
A:
Was getting the error while I was using jupyter.
ModuleNotFoundError: No module named 'xlrd'
...
ImportError: Install xlrd >= 0.9.0 for Excel support
it was resolved for me after using.
!pip install xlrd
A:
I encountered the same problem and took 2 hours to figure it out.
pip install xlrd (latest)
pip install pandas (latest)
Go to C:\Python27\Lib\site-packages and check for the xlrd folder (if there are 2 of them, delete the old version)
open a new terminal and use pandas to read excel. It should work.
A:
I had the same problem and none of the above answers worked. If you go into the settings (CTRL + ALT + s) and search for project interpreter you will see all of the installed packages. Click the + button at the top right and search for xlrd, then click install package at the bottom left.
I had already done the "pip install xlrd" command from the file location of my python.exe before this, so you may need to do that as well. (you can find the file location by searching it in windows search bar and right click -> open file location, then type cmd into the file explorer address bar)
A:
This can happen because your required libraries have been installed in the Python environment instead of Spyder's.
https://github.com/spyder-ide/spyder/wiki/Working-with-packages-and-environments-in-Spyder
A:
I had the same problem. Actually, the problem is that even after installing packages/libraries using pip, these packages are not integrated with the IDE. So, you need to add the libraries specifically to the IDE.
A:
First of all you need to install xlrd & pandas packages. Then try below code.
import xlrd
import pandas as pd
xl = pd.ExcelFile("fileName.xlsx")
print(xl.parse(xl.sheet_names[0]))
A:
You need to install the "xlrd" lib
For Linux (Ubuntu and Derivates):
Installing via pip:
python -m pip install --user xlrd
Install system-wide via a Linux package manager:
*sudo apt-get install python-xlrd
Windows:
Installing via pip:
*pip install xlrd
Download the files:
https://pypi.org/project/xlrd/
A:
Another possibility is that the machine has an older version of xlrd installed separately, and it's not in the "..:\Python27\Scripts.." folder.
In other words, there are 2 different versions of xlrd on the machine.
When you check the version below, it reads the one not in the "..:\Python27\Scripts.." folder, no matter how much updating you have done with pip.
print xlrd.__version__
Delete the whole redundant sub-folder, and it works. (in addition to xlrd, I had another library that encountered the same issue)
A:
I encountered a similar issue trying to use xlrd in jupyter notebook. I notice you are using a virtual environment and that was the key to my issue as well. I had xlrd installed in my venv, but I had not properly installed a kernel for that virtual environment in my notebook.
To get it to work, I created my virtual environment and activated it.
Then... pip install ipykernel
And then... ipython kernel install --user --name=myproject
Finally, start jupyter notebooks and when you create a new notebook, select the name you created (in this example, 'myproject')
Hope that helps.
A:
Please make sure your python or python3 can see the xlrd installation. I had a situation where python3.5 and python3.7 were installed in two different locations. While xlrd was installed with python3.5, I was using python3 (from the python3.7 dir) to run my script and got the same error reported above. When I used the correct python (viz. the python3.5 dir) to run my script, I was able to read the excel spreadsheet without a problem.
A:
As @WojciechJakubas mentioned, installing openpyxl instead of xlrd works; I used openpyxl and it worked.
pip install openpyxl
import openpyxl
path = "path to file.xlxs"
wb_obj = openpyxl.load_workbook(path)
sheet_obj = wb_obj.active
length_col = sheet_obj.max_row
print("Length of rows : ", length_col)
I hope it will help a lot of people in 2023.
| Python: Pandas pd.read_excel giving ImportError: Install xlrd >= 0.9.0 for Excel support | I am trying to read a .xlsx with pandas, but get the following error:
data = pd.read_excel(low_memory=False, io="DataAnalysis1/temp1.xlsx").fillna(value=0)
Traceback (most recent call last):
File "/Users/Vineeth/PycharmProjects/DataAnalysis1/try1.py", line 9, in <module>
data = pd.read_excel(low_memory=False, io="DataAnalysis1/temp1.xlsx").fillna(value=0)
File "/Users/Vineeth/venv/lib/python2.7/site-packages/pandas/util/_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "/Users/Vineeth/venv/lib/python2.7/site-packages/pandas/io/excel.py", line 230, in read_excel
io = ExcelFile(io, engine=engine)
File "/Users/Vineeth/venv/lib/python2.7/site-packages/pandas/io/excel.py", line 263, in __init__
raise ImportError(err_msg)
ImportError: Install xlrd >= 0.9.0 for Excel support
I've also tried
data = pd.read_excel("DataAnalysis1/temp1.xlsx", low_memory=False).fillna(value=0)
And I Still get the same error.
Background: I'm trying to extract an excel file with multiple worksheets as a dict of data frames. I installed xlrd version 0.9.0 and the latest version (1.1.0) but I still get the same error. Thanks!
| [
"As @COLDSPEED so eloquently pointed out the error explicitly tells you to install xlrd.\npip install xlrd\n\nAnd you will be good to go.\n",
"Since December 2020 xlrd no longer supports xlsx-Files as explained in the official changelog. You can use openpyxl instead:\npip install openpyxl\n\nAnd in your python-file:\nimport pandas as pd\npd.read_excel('path/to/file.xlsx', engine='openpyxl')\n\n",
"Either use:\npip install xlrd\n\nAnd if you are using conda, use \nconda install -c anaconda xlrd\n\nThat's it. good luck.\n",
"If you are in ubuntu this work for me:\npython3 -m pip install openpyxl\npython3 -m pip install xlrd\n\n",
"This happened to me after I ran a script with cProfile a la python3 -m cProfile script.py even though xlrd was already installed and had never thrown this error before. it persisted even under python3 script.py. (Granted, I agree this wasn't what happened to OP, given the obvious import error)\nHowever, for cases like mine, the following fixed the issue, despite being told \"requirement already met\" in every case.\npip install --upgrade pandas\npip install --upgrade xlrd\n\nPretty confounding stuff; not sure if cProfile was the cause or just a coincidence\nThe following should work, assuming your pip install operated on python2.\npython3 -m pip install xlrd\n\n",
"I was getting an error \n\n\"ImportError: Install xlrd >= 1.0.0 for Excel support\"\n\non Pycharm for below code\nimport pandas as pd\ndf2 = pd.read_excel(\"data.xlsx\")\nprint(df2.head(3))\nprint(df2.tail(3))\n\nSolution : pip install xlrd\nIt resolved error after using this.\nAlso no need to use \"import xlrd\"\n",
"This works for me:\nFor Python 3\n\npip3 install xlrd --user\n\nFor Python2\n\npip install xlrd --user\n\n",
"I don't know if this will be helpful for someone, but I had the same problem.\nI wrote pip install xlrd in the anaconda prompt while in the specific environment and it said it was installed, but when I looked at the installed packages it wasn't there.\nWhat solved the problem was \"moving\" (I don't know the terminology for it) into the Scripts folder of the specific environment and do the pip install xlrd there.\nHope this is useful for someone :D\n",
"Was getting the error while I was using jupyter.\nModuleNotFoundError: No module named 'xlrd'\n...\nImportError: Install xlrd >= 0.9.0 for Excel support\n\nit was resolved for me after using.\n!pip install xlrd\n\n",
"I encountered same problem and took 2 hours to figure it out.\n\npip install xlrd (latest)\npip install pandas (latest)\nGo to C:\\Python27\\Lib\\site-packages and check for xlrd folder (if there are 2 of them) delete the old version\nopen a new terminal and use pandas to read excel. It should work.\n\n",
"I had the same problem and none of the above answers worked. If you go into the settings (CTRL + ALT + s) and search for project interpreter you will see all of the installed packages. Click the + button at the top right and search for xlrd, then click install package at the bottom left. \nI had already done the \"pip install xlrd\" command from the file location of my python.exe before this, so you may need to do that as well. (you can find the file location by searching it in windows search bar and right click -> open file location, then type cmd into the file explorer address bar)\n",
"This can be because your required libraries are been installed in Python environment instead of Spyder.\nhttps://github.com/spyder-ide/spyder/wiki/Working-with-packages-and-environments-in-Spyder\n",
"I had the same problem. Actually, the problem is that even after installing packages/libraries using pip these packages are not integrated with IDE. So, need to add libraries specifically to the ide.\n",
"First of all you need to install xlrd & pandas packages. Then try below code.\nimport xlrd\nimport pandas as pd\n\nxl = pd.ExcelFile(\"fileName.xlsx\")\nprint(xl.parse(xl.sheet_names[0]))\n\n",
"You need to install the \"xlrd\" lib\nFor Linux (Ubuntu and Derivates):\nInstalling via pip:\npython -m pip install --user xlrd\nInstall system-wide via a Linux package manager:\n*sudo apt-get install python-xlrd\nWindows:\nInstalling via pip:\n*pip install xlrd\nDownload the files:\nhttps://pypi.org/project/xlrd/\n",
"Another possibility, is the machine has an older version of xlrd installed separately, and it's not in the \"..:\\Python27\\Scripts..\" folder.\nIn another word, there are 2 different versions of xlrd in the machine.\n\nwhen you check the version below, it reads the one not in the \"..:\\Python27\\Scripts..\" folder, no matter how updated you done with pip.\nprint xlrd.__version__\n\nDelete the whole redundant sub-folder, and it works. (in addition to xlrd, I had another library encountered the same)\n",
"I encountered a similar issue trying to use xlrd in jupyter notebook. I notice you are using a virtual environment and that was the key to my issue as well. I had xlrd installed in my venv, but I had not properly installed a kernel for that virtual environment in my notebook.\nTo get it to work, I created my virtual environment and activated it.\nThen... pip install ipykernel\nAnd then... ipython kernel install --user --name=myproject\nFinally, start jupyter notebooks and when you create a new notebook, select the name you created (in this example, 'myproject')\nHope that helps.\n",
"Please make sure your python or python3 can see xlrd installation. I had a situation where python3.5 and python3.7 were installed in two different locations. While xlrd was installed with python3.5, I was using python3 (from python3.7 dir) to run my script and got the same error reported above. When I used the correct python (viz. python3.5 dir) to run my script, I was able to read the excel spread sheet without a problem. \n",
"As @WojciechJakubas mentioned to install openpyxl instead of xlrd, I used openpyxl and it worked.\n\npip install openpyxl\n\nimport openpyxl\npath = \"path to file.xlxs\"\nwb_obj = openpyxl.load_workbook(path)\nsheet_obj = wb_obj.active\nlength_col = sheet_obj.max_row\nprint(\"Length of rows : \", length_col)\n\nI hope it will help lot of people in 2023.\n"
] | [
150,
110,
37,
9,
7,
6,
3,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"excel",
"pandas",
"python",
"python_2.7"
] | stackoverflow_0048066517_excel_pandas_python_python_2.7.txt |
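Since the question's goal is every worksheet as a dict of DataFrames, a hedged sketch tying the accepted answers together (file name taken from the question; assumes pandas and openpyxl are installed):
import pandas as pd

# sheet_name=None returns {sheet_name: DataFrame} for all worksheets
sheets = pd.read_excel("DataAnalysis1/temp1.xlsx", sheet_name=None, engine="openpyxl")
for name, frame in sheets.items():
    print(name, frame.fillna(value=0).shape)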
Q:
Load data from MySQL to BigQuery using Dataflow
I want to load data from MySQL to BigQuery using Cloud Dataflow. Anyone can share article or work experience about load data from MySQL to BigQuery using Cloud Dataflow with Python language?
Thank you
A:
You can use apache_beam.io.jdbc to read from your MySQL database, and the BigQuery I/O to write on BigQuery.
Beam knowledge is expected, so I recommend looking at Apache Beam Programming Guide first.
If you are looking for something pre-built, we have the JDBC to BigQuery Google-provided template, which is open-source (here), but it is written in Java.
A:
If you only want to copy data from MySQL to BigQuery, you can first export your MySQL data to Cloud Storage, then load this file into a BigQuery table.
I don't think you need Dataflow in this case because you don't have complex transformations or business logic; it only corresponds to a copy.
Export the MySQL data to Cloud Storage via a sql query and gcloud cli :
gcloud sql export csv INSTANCE_NAME gs://BUCKET_NAME/FILE_NAME \
--database=DATABASE_NAME \
--offload \
--query=SELECT_QUERY \
--quote="22" \
--escape="5C" \
--fields-terminated-by="2C" \
--lines-terminated-by="0A"
Load the csv file to a BigQuery table via gcloud cli and bq :
bq load \
--source_format=CSV \
mydataset.mytable \
gs://mybucket/mydata.csv \
./myschema.json
./myschema.json is the BigQuery table schema.
| Load data from MySQL to BigQuery using Dataflow | I want to load data from MySQL to BigQuery using Cloud Dataflow. Anyone can share article or work experience about load data from MySQL to BigQuery using Cloud Dataflow with Python language?
Thank you
| [
"You can use apache_beam.io.jdbc to read from your MySQL database, and the BigQuery I/O to write on BigQuery.\nBeam knowledge is expected, so I recommend looking at Apache Beam Programming Guide first.\nIf you are looking for something pre-built, we have the JDBC to BigQuery Google-provided template, which is open-source (here), but it is written in Java.\n",
"If you only want to copy data from MySQL to BigQuery, you can firstly export your MySql data to Cloud Storage, then load this file to a BigQuery table.\nI think no need using Dataflow in this case because you don't have complex transformations and business logics. It only corresponds to a copy.\nExport the MySQL data to Cloud Storage via a sql query and gcloud cli :\ngcloud sql export csv INSTANCE_NAME gs://BUCKET_NAME/FILE_NAME \\\n--database=DATABASE_NAME \\\n--offload \\\n--query=SELECT_QUERY \\\n--quote=\"22\" \\\n--escape=\"5C\" \\\n--fields-terminated-by=\"2C\" \\\n--lines-terminated-by=\"0A\"\n\nLoad the csv file to a BigQuery table via gcloud cli and bq :\nbq load \\\n --source_format=CSV \\\n mydataset.mytable \\\n gs://mybucket/mydata.csv \\\n ./myschema.json\n\n./myschema.json is the BigQuery table schema.\n"
] | [
1,
0
] | [] | [] | [
"etl",
"google_bigquery",
"google_cloud_dataflow",
"mysql",
"python"
] | stackoverflow_0074611456_etl_google_bigquery_google_cloud_dataflow_mysql_python.txt |
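A minimal Beam pipeline sketch along the lines of the first answer (hedged: the connection details, table names, and row conversion are illustrative placeholders; assumes apache-beam[gcp], a MySQL JDBC driver available to the expansion service, and that the destination BigQuery table already exists):
import apache_beam as beam
from apache_beam.io.jdbc import ReadFromJdbc

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "ReadMySQL" >> ReadFromJdbc(
            table_name="my_table",
            driver_class_name="com.mysql.cj.jdbc.Driver",
            jdbc_url="jdbc:mysql://HOST:3306/my_db",
            username="USER",
            password="PASSWORD",
        )
        # Rows come back as named tuples; convert to dicts for BigQuery
        | "ToDict" >> beam.Map(lambda row: row._asdict())
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "my_project:my_dataset.my_table",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )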
Q:
What is the faster method?
Python:
I have to use the length of a list which is the value for a key in a dictionary. I have to use this value in a FOR loop. Is it better to fetch the length of the list associated with the key every time, or fetch the length from a different dictionary which has the same keys?
I am using len() in the for loop as of now.
A:
len() is very fast - it runs in constant time (see Cost of len() function) so I would not build a new data structure just to cache its answer. Just use it each time you need it.
Building a whole extra data structure would definitely use more resources and most likely be slower. Just make sure you write your loop over my_dict.items(), not over the keys, so you don't unnecessarily redo the key lookups inside the loop.
E.g., use something like this for efficient looping over your dict:
my_dict = <some dict where the values are lists>
for key, value in my_dict.items():
# use key, value (your list) and len(value) (its length) as needed
| What is the faster method? | Python:
I have to use the length of a list which is the value for a key in a dictionary. I have to use this value in a FOR loop. Is it better to fetch the length of the list associated with the key every time, or fetch the length from a different dictionary which has the same keys?
I am using len() in the for loop as of now.
| [
"len() is very fast - it runs in contant time (see Cost of len() function) so I would not build a new data structure just to cache its answer. Just use it each time you need it.\nBuilding a whole extra data structure, that would definitely be using more resources, and most likely slower. Just make sure you write your loop over my_dict.items(), not over the keys, so you don't unnecessarily redo the key lookups inside the loop.\nE.g., use something like this for efficient looping over your dict:\nmy_dict = <some dict where the values are lists>\n\nfor key, value in my_dict.items():\n # use key, value (your list) and len(value) (its length) as needed\n\n"
] | [
0
] | [] | [] | [
"built_in",
"dictionary",
"for_loop",
"list",
"python"
] | stackoverflow_0074649402_built_in_dictionary_for_loop_list_python.txt |
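A tiny sketch of the loop the answer recommends (hypothetical data):
my_dict = {"a": [1, 2, 3], "b": [4, 5]}

for key, value in my_dict.items():  # one lookup per key, not two
    n = len(value)                  # O(1): Python lists store their own length
    for i in range(n):
        print(key, i, value[i])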
Q:
TypeError: 'dict_keys' object is not subscriptable
I have this code that errors out in python3:
self.instance_id = get_instance_metadata(data='meta-data/instance-id').keys()[0]
TypeError: 'dict_keys' object is not subscriptable
I changed my code and I get different error (I guess I need more experience):
self.instance_id = get_instance_metadata(list(data='meta-data/instance-id').keys())[0]
TypeError: list() takes no keyword arguments
A:
.keys() is a set-like view, not a sequence, and you can only index sequences.
If you just want the first key, you can manually create an iterator for the dict (with iter) and advance it once (with next):
self.instance_id = next(iter(get_instance_metadata(data='meta-data/instance-id')))
Your second attempt was foiled by mistakes in where you performed the conversion to list, and should have been:
self.instance_id = list(get_instance_metadata(data='meta-data/instance-id').keys())[0] # The .keys() is unnecessary, but mostly harmless
but it would be less efficient than the solution I suggest, as your solution would have to make a shallow copy of the entire set of keys as a list just to get the first element (big-O O(n) work), where next(iter(thedict)) is O(1).
| TypeError: 'dict_keys' object is not subscriptable | I have this code that errors out in python3:
self.instance_id = get_instance_metadata(data='meta-data/instance-id').keys()[0]
TypeError: 'dict_keys' object is not subscriptable
I changed my code and I get different error (I guess I need more experience):
self.instance_id = get_instance_metadata(list(data='meta-data/instance-id').keys())[0]
TypeError: list() takes no keyword arguments
| [
".keys() is a set-like view, not a sequence, and you can only index sequences.\nIf you just want the first key, you can manually create an iterator for the dict (with iter) and advance it once (with next):\nself.instance_id = next(iter(get_instance_metadata(data='meta-data/instance-id')))\n\nYour second attempt was foiled by mistakes in where you performed the conversion to list, and should have been:\nself.instance_id = list(get_instance_metadata(data='meta-data/instance-id').keys())[0] # The .keys() is unnecessary, but mostly harmless\n\nbut it would be less efficient than the solution I suggest, as your solution would have to make a shallow copy of the entire set of keys as a list just to get the first element (big-O O(n) work), where next(iter(thedict)) is O(1).\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074649481_python_python_3.x.txt |
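A side-by-side sketch of the two approaches discussed in the answer (the metadata dict is hypothetical):
metadata = {"i-0abc1234": "value"}      # e.g. what get_instance_metadata returns

first_key = next(iter(metadata))        # O(1): just advances an iterator
first_key_copy = list(metadata)[0]      # O(n): copies every key into a new list
assert first_key == first_key_copy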
Q:
How to recreate tweepy OAuth2UserHandler across web requests
With OAuth2UserHandler included in the tweepy package, if you generate an authorization URL and later want to retrieve an OAuth2 bearer token, it only works if you reuse the exact OAuth2UserHandler() in memory.
Given an OAuth2UserHandler like this:
from tweepy import OAuth2UserHandler
def _oauth2_handler(callback_url: str) -> OAuth2UserHandler:
return OAuth2UserHandler(
client_id=MY_TWITTER_KEY,
redirect_uri=callback_url,
scope=["offline.access", "users.read", "tweet.read"],
consumer_secret=MY_TWITTER_SECRET,
)
This works:
handler = _oauth2_handler(callback_url)
authorize_url = handler.get_authorization_url()
# .. user does authorization flow and we have this in memory still somehow ..
token_data = handler.fetch_token(current_url_after_callback)
This does not work:
handler = _oauth2_handler(callback_url)
authorize_url = handler.get_authorization_url()
# .. user does authorization flow and we talk to a new instance later ..
handler = _oauth2_handler(callback_url)
token_data = handler.fetch_token(current_url_after_callback)
This is because the internal state of OAuth2UserHandler creates a code_verifier, which is not possible to pass into the class.
How can I resolve?
A:
My solution was to reimplement OAuth2UserHandler, exposing code_verifier and allowing the caller to store it and provide it back to the handler later.
Example implementation (fork of tweepy's implementation):
import tweepy
from oauthlib.oauth2 import OAuth2Error
from requests.auth import HTTPBasicAuth
from requests_oauthlib import OAuth2Session
class NotTweepyOAuth2UserHandler(OAuth2Session):
def __init__(
self, client_id: str, client_secret: str, redirect_uri: str, scope: list[str], code_verifier: str | None = None
):
super().__init__(client_id, redirect_uri=redirect_uri, scope=scope)
self.auth = HTTPBasicAuth(client_id, client_secret)
self.code_verifier = code_verifier or str(self._client.create_code_verifier(128))
def get_authorization_url(self) -> str:
url, state_seems_unnecessary = self.authorization_url(
"https://twitter.com/i/oauth2/authorize",
code_challenge=self._client.create_code_challenge(self.code_verifier, "S256"),
code_challenge_method="S256",
)
return url
def fetch_token(self, authorization_response):
return super().fetch_token(
"https://api.twitter.com/2/oauth2/token",
authorization_response=authorization_response,
auth=self.auth,
include_client_id=True,
code_verifier=self.code_verifier,
)
def _oauth2_handler(callback_url: str, code_verifier: str | None) -> NotTweepyOAuth2UserHandler:
return NotTweepyOAuth2UserHandler(
MY_TWITTER_KEY,
MY_TWITTER_SECRET,
callback_url,
["offline.access", "users.read", "tweet.read"],
code_verifier=code_verifier,
)
def get_twitter_authorize_url_and_verifier(callback_url: str) -> tuple[str, str]:
handler = _oauth2_handler(callback_url, None)
authorize_url = handler.get_authorization_url()
# the caller can now store code_verifier somehow
return authorize_url, handler.code_verifier
def get_twitter_token(callback_url: str, current_url: str, twitter_verifier: str) -> dict:
    # then pass twitter_verifier back in here
    handler = _oauth2_handler(callback_url, twitter_verifier)
    try:
        return handler.fetch_token(current_url)
    except OAuth2Error as e:
        raise TwitterAuthError(e.description) from e
Now we can store code_verifier somewhere and complete the authorization loop to get our twitter oauth2 bearer token.
| How to recreate tweepy OAuth2UserHandler across web requests | With OAuth2UserHandler included in the tweepy package, if you generate an authorization URL and later want to retrieve an OAuth2 bearer token, it only works if you reuse the exact OAuth2UserHandler() in memory.
Given an OAuth2UserHandler like this:
from tweepy import OAuth2UserHandler
def _oauth2_handler(callback_url: str) -> OAuth2UserHandler:
return OAuth2UserHandler(
client_id=MY_TWITTER_KEY,
redirect_uri=callback_url,
scope=["offline.access", "users.read", "tweet.read"],
consumer_secret=MY_TWITTER_SECRET,
)
This works:
handler = _oauth2_handler(callback_url)
authorize_url = handler.get_authorization_url()
# .. user does authorization flow and we have this in memory still somehow ..
token_data = handler.fetch_token(current_url_after_callback)
This does not work:
handler = _oauth2_handler(callback_url)
authorize_url = handler.get_authorization_url()
# .. user does authorization flow and we talk to a new instance later ..
handler = _oauth2_handler(callback_url)
token_data = handler.fetch_token(current_url_after_callback)
This is because the internal state of OAuth2UserHandler creates a code_verifier, which is not possible to pass into the class.
How can I resolve?
| [
"My solution was to reimplement OAuth2UserHandler, exposing code_verifier and allowing the caller to store it and provide it back to the handler later.\nExample implementation (fork of tweepy's implementation):\nimport tweepy\nfrom oauthlib.oauth2 import OAuth2Error\nfrom requests.auth import HTTPBasicAuth\nfrom requests_oauthlib import OAuth2Session\n\nclass NotTweepyOAuth2UserHandler(OAuth2Session):\n def __init__(\n self, client_id: str, client_secret: str, redirect_uri: str, scope: list[str], code_verifier: str | None = None\n ):\n super().__init__(client_id, redirect_uri=redirect_uri, scope=scope)\n self.auth = HTTPBasicAuth(client_id, client_secret)\n self.code_verifier = code_verifier or str(self._client.create_code_verifier(128))\n\n def get_authorization_url(self) -> str:\n url, state_seems_unnecessary = self.authorization_url(\n \"https://twitter.com/i/oauth2/authorize\",\n code_challenge=self._client.create_code_challenge(self.code_verifier, \"S256\"),\n code_challenge_method=\"S256\",\n )\n return url\n\n def fetch_token(self, authorization_response):\n return super().fetch_token(\n \"https://api.twitter.com/2/oauth2/token\",\n authorization_response=authorization_response,\n auth=self.auth,\n include_client_id=True,\n code_verifier=self.code_verifier,\n )\n\ndef _oauth2_handler(callback_url: str, code_verifier: str | None) -> OAuth2UserHandler:\n return NotTweepyOAuth2UserHandler(\n MY_TWITTER_KEY,\n MY_TWITTER_SECRET,\n callback_url,\n [\"offline.access\", \"users.read\", \"tweet.read\"],\n code_verifier=code_verifier,\n )\n\n\ndef get_twitter_authorize_url_and_verifier(callback_url: str) -> tuple[str, str]:\n handler = _oauth2_handler(callback_url, None)\n authorize_url = handler.get_authorization_url()\n # the caller can now store code_verifier somehow\n return authorize_url, handler.code_verifier\n\ndef get_twitter_token(callback_url: str, current_url: str, twitter_verifier: str) -> dict:\n# then pass twitter_verifier back in here\nhandler = _oauth2_handler(callback_url, twitter_verifier)\n try:\n return handler.fetch_token(current_url)\n except OAuth2Error as e:\n raise TwitterAuthError(e.description) from e\n\nNow we can store code_verifier somewhere and complete the authorization loop to get our twitter oauth2 bearer token.\n"
] | [
0
] | [] | [] | [
"oauth_2.0",
"python",
"tweepy",
"twitter_oauth"
] | stackoverflow_0074649514_oauth_2.0_python_tweepy_twitter_oauth.txt |
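One way to persist code_verifier between the two web requests is a server-side session. A hedged Flask sketch (the callback URL and routes are illustrative placeholders; the two helper functions are the ones defined in the answer above):
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for the signed session cookie
CALLBACK_URL = "https://example.com/callback"

@app.route("/login")
def login():
    url, verifier = get_twitter_authorize_url_and_verifier(CALLBACK_URL)
    session["code_verifier"] = verifier  # survives until the callback request
    return redirect(url)

@app.route("/callback")
def callback():
    token_data = get_twitter_token(CALLBACK_URL, request.url, session.pop("code_verifier"))
    return {"authorized": True}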
Q:
6.13 LAB: Filter and sort a list
Write a program that gets a list of integers from input, and outputs non-negative integers in ascending order (lowest to highest).
Example: If the input is:
10 -7 4 39 -6 12 2
the output is:
2 4 10 12 39
My code that I came up with looks like this:
user_input = input()
numbers = user_input.split()
nums = []
for number in numbers:
nums.append(int(number))
for item in nums:
if int(item) < 0:
nums.remove(item)
list.sort(nums)
for x in nums:
print(x, end=' ')
It gives me an 8/10 for my score but on one of the input/outputs it gives me
input is -1 -7 -2 -88 5 -6
my output is -88 -7 5
Why is it only removing some of the negative numbers and not all of them?
A:
for item in nums:
if int(item) < 0:
nums.remove(item)
The problem is probably here: you remove elements from the list while you are iterating over it in the for loop, so some items get skipped.
You should just use a list comprehension to copy the non-negative integers to a new list and return it.
So the return list would be:
nonNegativeIntegers = [x for x in nums if x >= 0]
| 6.13 LAB: Filter and sort a list | Write a program that gets a list of integers from input, and outputs non-negative integers in ascending order (lowest to highest).
Example: If the input is:
10 -7 4 39 -6 12 2
the output is:
2 4 10 12 39
My code that I came up with looks like this:
user_input = input()
numbers = user_input.split()
nums = []
for number in numbers:
nums.append(int(number))
for item in nums:
if int(item) < 0:
nums.remove(item)
list.sort(nums)
for x in nums:
print(x, end=' ')
It gives me an 8/10 for my score but on one of the input/outputs it gives me
input is -1 -7 -2 -88 5 -6
my output is -88 -7 5
Why is it only removing some of the negative numbers and not all of them?
| [
"for item in nums:\n if int(item) < 0:\n nums.remove(item)\n\nproblem is probbably here, since you iterate thru a list for which you remove elements in the iterations of the for el in list loop.\nYou should just use list comprenhension to copy positive integers to a new list and return it.\nSo the return list would be:\nonlyPositiveIntegers = [x for x in nums if x >= 0]\n\n"
] | [
0
] | [
"nums.sort() \nfor item in nums:\n if int(item) >= 0:\n break\n nums.remove(item)\nprint(*nums)\n\niterate only through -ve no. and remove them.\n"
] | [
-1
] | [
"filter",
"list",
"python",
"python_3.x",
"sorting"
] | stackoverflow_0074649068_filter_list_python_python_3.x_sorting.txt |
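Putting the accepted fix back into the original program gives a complete sketch:
user_input = input()
nums = [int(token) for token in user_input.split()]

# Build a new list instead of mutating nums while iterating over it
non_negative = sorted(x for x in nums if x >= 0)
print(*non_negative)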
Q:
list of all docker status types
Where can I get a list of all the docker status types? e.g. Up, Exited, Created.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0771636c8ab registry:2 "/entrypoint.sh /etc…" 25 hours ago Up 3 hours 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp registry
This next part may be unrelated since it could be a completely different "status", but in the docker python api I've also seen status values of the following. I looked through the Python docker code and it doesn't appear to include these strings so I think they're originating within docker itself and not the python api.
preparing
downloading
pushing
restarting
running
waiting
verifying checksum
etc.
A:
In the Docker HTTP API, the Inspect a Container API call (GET /containers/{id}/json) includes a State field with OpenAPI type ContainerState. That contains a field Status. Its possible values are "created", "running", "paused", "restarting", "removing", "exited", and "dead".
The higher-level Docker SDKs and CLI tools all ultimately wrap this API, so any container status from docker-py or docker ps will be derived from one of these values. The Up 3 hours output, for example, looks like a combination of a running state and a calculated container uptime. The list you quote largely doesn't look like container statuses ("push" is not a valid action on a container) and it might go with some other object.
| list of all docker status types | Where can I get a list of all the docker status types? e.g. Up, Exited, Created.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0771636c8ab registry:2 "/entrypoint.sh /etc…" 25 hours ago Up 3 hours 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp registry
This next part may be unrelated since it could be a completely different "status", but in the docker python api I've also seen status values of the following. I looked through the Python docker code and it doesn't appear to include these strings so I think they're originating within docker itself and not the python api.
preparing
downloading
pushing
restarting
running
waiting
verifying checksum
etc.
| [
"In the Docker HTTP API, the Inspect a Container API call (GET /containers/{id}/json) includes a Stats field with OpenAPI type ContainerState. That contains a field Status. Its possible values are \"created\" \"running\" \"paused\" \"restarting\" \"removing\" \"exited\" \"dead\"\nThe higher-level Docker SDKs and CLI tools all ultimately wrap this API, so any container status from docker-py or docker ps will be derived from one of these values. The Up 3 hours output, for example, looks like a combination of a running state and a calculated container uptime. The list you quote largely doesn't look like container statuses (\"push\" is not a valid action on a container) and it might go with some other object.\n"
] | [
1
] | [] | [] | [
"docker",
"python"
] | stackoverflow_0074648983_docker_python.txt |
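The same seven values can be confirmed from docker-py, the library mentioned in the question, because they are the only statuses its list filter accepts (hedged sketch; assumes a reachable Docker daemon):
import docker

client = docker.from_env()
for status in ("created", "running", "paused", "restarting", "removing", "exited", "dead"):
    for container in client.containers.list(all=True, filters={"status": status}):
        print(status, container.name)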
Q:
Retrying connection Paramiko - Python
Wrote a function that tries to reconnect to SSH when a disconnect happens. Basically expanded my existing function that simply saved the images, which works fine. The code runs but does not work to re-establish connectivity. Any help would be appreciated.
def get_image_id_and_upload_folder_of_images(db_name, table_name, selector_list,
condition_label, condition_val, img_address):
"""
get image id using project name from db
:param db_name: str - name of the data-base (usually 'server')
:param table_name: str - name of the table (usually 'images')
:param selector_list: list of str - list of selectors for the query (usually ["id", "path"])
:param condition_label: str - condition for the sql statement (like 'path')
:param condition_val: list of str - value for the condition of the condition_label (like ['name_of_file.png'])
:param img_address: str - address of the images to send them to the ftp server
:return: returns image or project id
"""
client = paramiko.client.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
retry_interval = 1
retry_interval = float(retry_interval)
timeout = int(20)
timeout_start = time.time()
while time.time() < timeout_start + timeout:
time.sleep(retry_interval)
try:
db, cur = db_info(db_name)
cond_list = ["'" + str(x) + "'" for x in condition_val]
condition = ",".join(cond_list)
selector = ",".join(selector_list)
# make a query for sql
query_stmt = "SELECT " + selector + " FROM " + table_name + " WHERE `" + \
condition_label + "` IN (" + str(condition) + ");"
image_ids = get_image_ids(cur, db, query_stmt, condition_val)
for idx, ids in enumerate(image_ids):
print(ids)
save_img_new_server(img_address + '/' + condition_val[idx], str(ids))
save_img_new_server(img_address + '/' + condition_val[idx], str(ids), hst="site.com",
folder_path='images/')
except paramiko.ssh_exception.NoValidConnectionsError as e:
print('SSH transport is not ready...')
continue
# print(img_address + '/' + condition_val[idx], str(ids))
return image_ids
A:
Your code never calls client.connect(). In fact, it doesn't interact with paramiko at all inside the while loop, so there is no SSH connection to retry.
| Retrying connection Paramiko - Python | Wrote a function that tries to reconnect to SSH when a disconnect happens. Basically expanded my existing function that simply saved the images, which works fine. The code runs but does not work to re-establish connectivity. Any help would be appreciated.
def get_image_id_and_upload_folder_of_images(db_name, table_name, selector_list,
condition_label, condition_val, img_address):
"""
get image id using project name from db
:param db_name: str - name of the data-base (usually 'server')
:param table_name: str - name of the table (usually 'images')
:param selector_list: list of str - list of selectors for the query (usually ["id", "path"])
:param condition_label: str - condition for the sql statement (like 'path')
:param condition_val: list of str - value for the condition of the condition_label (like ['name_of_file.png'])
:param img_address: str - address of the images to send them to the ftp server
:return: returns image or project id
"""
client = paramiko.client.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
retry_interval = 1
retry_interval = float(retry_interval)
timeout = int(20)
timeout_start = time.time()
while time.time() < timeout_start + timeout:
time.sleep(retry_interval)
try:
db, cur = db_info(db_name)
cond_list = ["'" + str(x) + "'" for x in condition_val]
condition = ",".join(cond_list)
selector = ",".join(selector_list)
# make a query for sql
query_stmt = "SELECT " + selector + " FROM " + table_name + " WHERE `" + \
condition_label + "` IN (" + str(condition) + ");"
image_ids = get_image_ids(cur, db, query_stmt, condition_val)
for idx, ids in enumerate(image_ids):
print(ids)
save_img_new_server(img_address + '/' + condition_val[idx], str(ids))
save_img_new_server(img_address + '/' + condition_val[idx], str(ids), hst="site.com",
folder_path='images/')
except paramiko.ssh_exception.NoValidConnectionsError as e:
print('SSH transport is not ready...')
continue
# print(img_address + '/' + condition_val[idx], str(ids))
return image_ids
| [
"Your code never calls client.connect(). In fact it doesn't interact with any paramiko module at all inside the while loop.\n"
] | [
0
] | [] | [] | [
"paramiko",
"python",
"python_3.x"
] | stackoverflow_0071135803_paramiko_python_python_3.x.txt |
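A hedged sketch of what the retry loop could look like once the missing connect call is added (hostname and credentials are placeholders; the database and upload logic from the question would run after a successful connect):
import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

deadline = time.time() + 20  # overall timeout, as in the question
while time.time() < deadline:
    try:
        client.connect(hostname="site.com", username="USER", password="PASSWORD", timeout=5)
        break  # connected: proceed with the queries and uploads
    except paramiko.ssh_exception.NoValidConnectionsError:
        print("SSH transport is not ready...")
        time.sleep(1)  # retry_interval from the question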
Q:
How to get Spyder to open python scripts (.py files) directly from Windows Explorer
I have recently installed the Anaconda distribution on Windows 7 (Anaconda 3-2.4.0-Windows-x86_64). Unlike IDLE, I can't right-click and open a py file in the Spyder IDE. I will have to open Spyder first and then navigate to the file or drag and drop it in the editor. Is there any way to open the file in the editor directly from Widows Explorer?
A:
With the current version of Anaconda (4.1.0) you can simply right-click on a python script in Windows File Explorer and choose "Open with". The first time you do this you need to select "Choose default program" and then browse to spyder.exe in the Script directory in your Anaconda installation. Also make sure that the "Always use the selected program to open this kind of file" is unchecked and then click OK. From now on spyder.exe will always be listed as one of the options when you select "Open with" from the right-click menu in Windows File Explorer.
A:
(Spyder maintainer here) This functionality is available as part of our Windows installer. In other words, if you install Spyder with it, then you'll see a new entry in the Open with menu of the Windows Explorer that allows you to open Python files directly on Spyder.
Unfortunately, it's not possible for us to do the same for other installation methods (i.e. when using Anaconda or pip). That's why we need to implement hacks like the ones mentioned in other answers here.
A:
I have had a similar problem with another piece of software that I use.
My workaround for this problem is to set the file association for .py files to C:\Anaconda\Scripts\spyder-script.py via the Open with dialog. If you now try to open your File.py by double clicking you'll receive an error like
~\file.py is not a valid Win32 application.
This can be resolved by editing the spyder-script.py registry key:
HKEY_USERS\S-1-5-21-3559708500-1520960832-86631148-1002\Software\Classes\Applications\spyder-script.py\shell\open\command
and replacing the default value "C:\Anaconda\Scripts\spyder-script.py" %1 with "C:\Anaconda\python.exe" "C:\Anaconda\Scripts\spyder-script.py" %1. Use the search function for this key if the path isn't the same for your machine, and of course use the appropriate path for your python installation. spyder-script.py should now execute in a python shell.
From the docstring of ftype,
...Within an open command string, %0 or %1 are substituted with the file name being launched through the association.
A:
What is working very well for me in Windows (10), is associating the *.py files with a batch file (let's say "SpyderBATCH.bat") containing this line :
[ANACONDA_FOLDER_PATH]\pythonw.exe" "[ANACONDA_FOLDER_PATH]\cwp.py" "[ANACONDA_FOLDER_PATH]" "[ANACONDA_FOLDER_PATH]/pythonw.exe" "[ANACONDA_FOLDER_PATH]/Scripts/spyder-script.py" %1
Where [ANACONDA_FOLDER_PATH] has to be replaced with the full path to the Anaconda folder (usually under "Program Files").
What Windows does, when double-clicking on a python script (let's say "file.py"), is pass to SpyderBATCH, as parameter number %1, the full path to "file.py".
Then Spyder is launched and displays the script "file.py" in the editor view.
A:
I figured I would post my solution for this as well.
I have Spyder installed in multiple different environments. You can't simply call the spyder-script.py script without errors, because the environment must be activated.
@echo off
call [YOUR_CONDA_PATH]\Scripts\activate.bat [YOUR_CONDA_PATH]
call conda activate [YOUR ENVIRONMENT]
call start [YOUR_CONDA_PATH]\envs\[YOUR ENVIRONMENT]\pythonw.exe "[YOUR_CONDA_PATH]\envs\[YOUR ENVIRONMENT]\Scripts\spyder-script.py" %1
You can remove the second line and remove the environment extension from the third line if you have Spyder installed in your base environment.
Hopefully for anyone experiencing any weirdness with the other solution, this one will do the trick by activating the environment correctly.
A:
The solution from JoeB152 worked perfectly for me!
If you are interested in adding the spyder icon (or any other) to the .py-files and if you would like to avoid the cmd-pop-up, I found out the following workaround which is feasible without admin rights:
Download the portable version of Bat To Exe Converter (I used v3.0.10).
Open your custom .bat-file in the Bat to Exe Converter.
In the options settings, activate "Icon" and give the path to the respective icon (for me it's in: .../AppData/Local/Continuum/anaconda3/Scripts/spyder.ico).
Set Exe-Format to Invisible (no empty cmd window would pop up anymore)
Convert your .bat-file to an .exe-file.
As usual, set to open .py-files with the newly created .exe.
Enjoy!
Environment:
Windows 10,
Conda 4.8.2,
Spyder 4.0.1,
Python 3.7
A:
This problem is related to Anaconda's installation defaults - it does not register itself in PATH by default and discourages users from doing so. After properly registering all directories in PATH, spyder.exe works as expected.
How to know what to register?
Locate activate.bat and run it in cmd, then run echo %PATH% and manually register all directories mentioning anaconda.
Alternatively, reinstall Anaconda with PATH registration enabled.
Then you can associate .py files with spyder.exe and the association will work.
A:
System: Windows 11, Python 3.9.7 (Installed through Anaconda3)
This solution will allow you to double click .py files and have them open in the Spyder in the environment of your choice, but does NOT associate .py files with the icon for Spyder.
I'm answering because it took me over an hour to understand & piecemeal together all the (great) solutions that are provided in this thread to get something that works (thanks Martin Sorgel, JoeB152 and Max-K).
NOTE: Some commentors above say to use a Bat-to-Exe converter & that will let you get the icon too.. but, DON'T! All of the ones you're finding via google have got some bad malware in them and my computer ended up deleting the .exe's I was making using that method because they had a Trojan in them.
Full Solution:
STEP 1: Make a .bat file that will launch Spyder in a specific environment.
1.1 Open a plain text editor (e.g. Notepad, etc.) and copy/paste the text below into it.
@echo off
call [YOUR_CONDA_PATH]\Scripts\activate.bat [YOUR_CONDA_PATH]
call conda activate [YOUR ENVIRONMENT]
call start [YOUR_CONDA_PATH]\envs\[YOUR ENVIRONMENT]\pythonw.exe "[YOUR_CONDA_PATH]\envs\[YOUR ENVIRONMENT]\Scripts\spyder-script.py" %1
Update [YOUR_CONDA_PATH] in the text above with the path to Anaconda3 on your computer. Mine was C:\Users\myusername\Anaconda3\ & yours is likely similar.
1.2 Save this new file as spyderlaunch.bat and place it on your computer somewhere that you'll NEVER move it (otherwise you'll have to do STEP 2 each time you move this file. I save mine in a python_env_settings folder where I keep info on what packages I installed manually in my different environments).
NOTE: JoeB152 says you can remove the second line and remove the environment extension from the third line of the text above if you have Spyder installed in your base environment. I'm not sure if this works...
1.3 Make sure your new .bat files works! It works if when you double click on spyderlaunch.bat, that it launches and opens Spyder in the environment you want it to! (Spyder will show the environment it opens in on the bottom right hand side: ).
STEP 2: Tell your computer to associate (i.e. open) all .py files with the spyderlaunch.bat file you just created.
2.1 Open an Anaconda Terminal with "run as an administrator" (by right clicking on the application) and run the following 2, separate commands. Update[PATH_TO_YOUR.batfile] to wherever you saved spyderlaunch.bat in 1.2.
assoc .py=Python
ftype Python="[PATH_TO_YOUR.batfile]" "%1" %*
Errors?
If you don't run the Anaconda Terminal application as an administrator you will be denied access to associate .py=Python. If that's not your issue, then check that the spaces and quotation marks are exactly where they appear above. In particular, you may want to make sure there is a space in between the quotation marks around [PATH_TO_YOUR.batfile] and those around %1.
A:
I was unable to find a spyder.exe on my installation of conda. However in my users/.anaconda/navigator/scripts I found a spyder.bat file. Using this to open the file opens an anaconda prompt and shortly after spyder will open the file. The file icon is broken but it works for me. Hope this might help.
A:
(Comment in relation to the responses by JoeB152 and Jessica Haskins - I am new, so I cannot leave comments)
I found that their suggested .bat file works once you copy-paste the following file from A to B:
A) C:\Users\USERNAME\Anaconda3\Scripts\spyder-script.py
B) C:\Users\USERNAME\Anaconda3\envs\ENVIRONMENT_NAME\Scripts\
...where ENVIRONMENT_NAME is the name of your environment, such as main or test.
The .bat file contains:
@echo off
call C:\Users\bloggsj\Anaconda3\Scripts\activate.bat C:\Users\bloggsj\Anaconda3\
call conda activate C:\Users\bloggsj\Anaconda3\
call start C:\Users\bloggsj\Anaconda3\envs\main\pythonw.exe "C:\Users\bloggsj\Anaconda3\envs\main\Scripts\spyder-script.py" %1
Then associate .py files with that .bat file (e.g., via the 'Open with...' dialogue).
Alternatively, you could try using in the last line of the .bat file the file path: "C:\Users\bloggsj\Anaconda3\Scripts\spyder-script.py"
A:
Get Spyder by itself:
https://docs.spyder-ide.org/current/installation.html
Set your default file opener to your newly installed spyder
To be able to add packages:
Make sure Anaconda is installed.
Go to Spyder preferences
Go to Python interpreter
Select: "Use the following Python interpreter"
Select file path with Anaconda and hit apply
Now you should be able to open files directed using Spyder and update your environment using Anaconda.
| How to get Spyder to open python scripts (.py files) directly from Windows Explorer | I have recently installed the Anaconda distribution on Windows 7 (Anaconda 3-2.4.0-Windows-x86_64). Unlike IDLE, I can't right-click and open a py file in the Spyder IDE. I will have to open Spyder first and then navigate to the file or drag and drop it in the editor. Is there any way to open the file in the editor directly from Widows Explorer?
| [
"With the current version of Anaconda (4.1.0) you can simply right-click on a python script in Windows File Explorer and choose \"Open with\". The first time you do this you need to select \"Choose default program\" and then browse to spyder.exe in the Script directory in your Anaconda installation. Also make sure that the \"Always use the selected program to open this kind of file\" is unchecked and then click OK. From now on spyder.exe will always be listed as one of the options when you select \"Open with\" from the right-click menu in Windows File Explorer.\n",
"(Spyder maintainer here) This functionality is available as part of our Windows installer. In other words, if you install Spyder with it, then you'll see a new entry in the Open with menu of the Windows Explorer that allows you to open Python files directly on Spyder.\nUnfortunately, it's not possible for us to do the same for other installation methods (i.e. when using Anaconda or pip). That's why need to implement hacks as the ones mentioned in other answers here.\n",
"I have had a similar problem with another piece of software that I use. \nMy work around for this problem is to set the file association for .py files to C:\\Anaconda\\Scripts\\spider-script.py via the Open with dialog. If you now try to open your File.py by double clicking you'll receive an error like\n\n~\\file.py is not a valid Win32 application.\n\nThis can be resolved by editing the spyder-script.py registry key:\nHKEY_USERS\\S-1-5-21-3559708500-1520960832-86631148-1002\\Software\\Classes\\Applications\\spyder-script.py\\shell\\open\\command\n\nand replacing the default value \"C:\\Anaconda\\Scripts\\spyder-script.py\" %1 with \"C:\\Anaconda\\python.exe\" \"C:\\Anaconda\\Scripts\\spyder-script.py\" %1. Use the search function for this key if the path isn't the same for your machine, and of course use the appropriate path for your python installation. spyder-script.py should now execute in a python shell.\nFrom the docstring of ftype, \n\n...Within an open command string, %0 or %1 are substituted with the file name being launched through the association.\n\n",
"What is working very well for me in Windows (10), is associating the *.py files with a batch file (let's say \"SpyderBATCH.bat\") containing this line :\n[ANACONDA_FOLDER_PATH]\\pythonw.exe\" \"[ANACONDA_FOLDER_PATH]\\cwp.py\" \"[ANACONDA_FOLDER_PATH]\" \"[ANACONDA_FOLDER_PATH]/pythonw.exe\" \"[ANACONDA_FOLDER_PATH]/Scripts/spyder-script.py\" %1 \n\nWhere [ANACONDA_FOLDER_PATH] has to be replaced with the full path to the Anaconda folder (usually under \"Program Files\"). \nWhat Windows does, when double-clicking on a python script (let's say \"file.py\"), is pass to SpyderBATCH, as parameter number %1, the full path to \"file.py\".\nThen Spyder is launched and displays the script \"file.py\" in the editor view.\n",
"I figured I would post my solution for this as well.\nI have Spyder installed in multiple different environments. You can't simply call the spyder-script.py script without errors, because the environment must be activated.\n@echo off\ncall [YOUR_CONDA_PATH]\\Scripts\\activate.bat [YOUR_CONDA_PATH]\ncall conda activate [YOUR ENVIRONMENT]\ncall start [YOUR_CONDA_PATH]\\envs\\[YOUR ENVIRONMENT]\\pythonw.exe \"[YOUR_CONDA_PATH]\\envs\\[YOUR ENVIRONMENT]\\Scripts\\spyder-script.py\" %1\n\nYou can remove the second line and remove the environment extension from the third line if you have Spyder installed in your base environment.\nHopefully for anyone experiencing any weirdness with the other solution, this one will do the trick by activating the environment correctly.\n",
"The solution from JoeB152 worked perfectly for me!\nIf you are interested in adding the spyder icon (or any other) to the .py-files and if you would like to avoid the cmd-pop-up, I found out the following workaround which is feasible without admin rights:\n\nDownload the portable version of Bat To Exe Converter (I used v3.0.10).\nOpen your custom .bat-file in the Bat to Exe Converter.\nIn the options settings, activate \"Icon\" and give the path to the respective icon (for me it's in: .../AppData/Local/Continuum/anaconda3/Scripts/spyder.ico).\nSet Exe-Format to Invisible (no empty cmd window would pop up anymore)\nConvert your .bat-file to an .exe-file.\nAs usual, set to open .py-files with the newly created .exe.\n\nEnjoy!\nEnvironment:\nWindows 10,\nConda 4.8.2,\nSpyder 4.0.1,\nPython 3.7\n",
"This problem is related to anaconda installation defaults - it does not register itself in PATH by default an dicourages users to do so. After proprly registering all directories in path, spyder.exe works as expected.\nHow to know, what to register?\nlocate and activate.bat an run it in cmd, then run echo %PATH% and manually register all directories mentioning anaconda.\nAlternatively, reinstall anaconda with PATH registratin enabled.\nThen you can associate .py files wit spyder.exe and association will work.\n",
"System: Windows 11, Python 3.9.7 (Installed through Anaconda3)\nThis solution will allow you to double click .py files and have them open in the Spyder in the environment of your choice, but does NOT associate .py files with the icon for Spyder.\nI'm answering because it took me over an hour to understand & piecemeal together all the (great) solutions that are provided in this thread to get something that works (thanks Martin Sorgel, JoeB152 and Max-K).\nNOTE: Some commentors above say to use a Bat-to-Exe converter & that will let you get the icon too.. but, DON'T! All of the ones you're finding via google have got some bad malware in them and my computer ended up deleting the .exe's I was making using that method because they had a Trojan in them.\nFull Solution:\nSTEP 1: Make a .bat file that will launch Spyder in a specific environment.\n1.1 Open a plain text editor (e.g. Notepad, etc.) and copy/paste the text below into it.\n@echo off\ncall [YOUR_CONDA_PATH]\\Scripts\\activate.bat [YOUR_CONDA_PATH]\ncall conda activate [YOUR ENVIRONMENT]\ncall start [YOUR_CONDA_PATH]\\envs\\[YOUR ENVIRONMENT]\\pythonw.exe \"[YOUR_CONDA_PATH]\\envs\\[YOUR ENVIRONMENT]\\Scripts\\spyder-script.py\" %1\n\nUpdate [YOUR_CONDA_PATH] in the text above with the path to Anaconda3 on your computer. Mine was C:\\Users\\myusername\\Anaconda3\\ & yours is likely similar.\n1.2 Save this new file as spyderlaunch.bat and place it on your computer somewhere that you'll NEVER move it (otherwise you'll have to do STEP 2 each time you move this file. I save mine in a python_env_settings folder where I keep info on what packages I installed manually in my different environments).\nNOTE: JoeB152 says you can remove the second line and remove the environment extension from the third line of the text above if you have Spyder installed in your base environment. I'm not sure if this works...\n1.3 Make sure your new .bat files works! It works if when you double click on spyderlaunch.bat, that it launches and opens Spyder in the environment you want it to! (Spyder will show the environment it opens in on the bottom right hand side: ).\nSTEP 2: Tell your computer to associate (i.e. open) all .py files with the spyderlaunch.bat file you just created.\n2.1 Open an Anaconda Terminal with \"run as an administrator\" (by right clicking on the application) and run the following 2, separate commands. Update[PATH_TO_YOUR.batfile] to wherever you saved spyderlaunch.bat in 1.2.\nassoc .py=Python \nftype Python=\"[PATH_TO_YOUR.batfile]\" \"%1\" %*\n\nErrors?\nIf you don't run the Anaconda Terminal application as an administrator you will be denied access to associate .py=Python. If that's not your issue, then check that the spaces and quotation marks are exactly where they appear above. In particular, you may want to make sure there is a space in between the quotation marks around [PATH_TO_YOUR.batfile] and those around %1.\n",
"I was unable to find a spyder.exe on my installation of conda. However in my users/.anaconda/navigator/scripts I found a spyder.bat file. Using this to open the file opens an anaconda prompt and shortly after spyder will open the file. The file icon is broken but it works for me. Hope this might help. \n",
"(Comment in relation to the responses by JoeB152 and Jessica Haskins - I am new, so I cannot leave comments)\nI found that their suggested .bat file works once you copy-paste the following file from A to B:\nA) C:\\Users\\USERNAME\\Anaconda3\\Scripts\\spyder-script.py\nB) C:\\Users\\USERNAME\\Anaconda3\\envs\\ENVRIONMENT_NAME\\Scripts\\\n...where ENVIRONMENT_NAME is the name of your environment, such as main or test.\nThe .bat file contains:\n@echo off\ncall C:\\Users\\bloggsj\\Anaconda3\\Scripts\\activate.bat C:\\Users\\bloggsj\\Anaconda3\\\ncall conda activate C:\\Users\\bloggsj\\Anaconda3\\\ncall start C:\\Users\\bloggsj\\Anaconda3\\envs\\main\\pythonw.exe \"C:\\Users\\bloggsj\\Anaconda3\\envs\\main\\Scripts\\spyder-script.py\" %1\n\nThen associate .py files with that .bat file (e.g., via the 'Open with...' dialogue).\nAlternatively, you could try using in the last line of the .bat file the file path: \"C:\\Users\\bloggsj\\Anaconda3\\Scripts\\spyder-script.py\"\n",
"\nGet Spyder by itself:\nhttps://docs.spyder-ide.org/current/installation.html\n\nSet your default file opener to your newly installed spyder\n\n\nTo be able to add packages:\n\nMake sure Anaconda is installed.\nGo to Spyder preferences\nGo to Python interpreter\nSelect: \"Use the following Python interpreter\"\nSelect file path with Anaconda and hit apply\n\nNow you should be able to open files directed using Spyder and update your environment using Anaconda.\n"
] | [
13,
6,
6,
2,
2,
1,
1,
1,
0,
0,
0
] | [] | [] | [
"ide",
"python",
"spyder",
"windows"
] | stackoverflow_0033817046_ide_python_spyder_windows.txt |
Q:
How do I get the position of my python flet window?
I've been working with the python flet package for a while and I'd like to know how to get my window's position. Does anyone know anything?
I googled but found nothing.
A:
I haven't used this package before, but looking at the docs it seems that window_top and window_left on the root Page instance are what you're after (assuming this is a desktop app). See relevant docs here: https://flet.dev/docs/controls/page#window_top.
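A minimal sketch of reading those properties in a desktop app (untested; it assumes the window_top/window_left API referenced above):
import flet as ft

def main(page: ft.Page):
    # window_top / window_left hold the window's current screen coordinates
    page.add(ft.Text(f"top={page.window_top}, left={page.window_left}"))

ft.app(target=main)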
| How do I get the position of my python flet window? | I've been working with the python flet package for a while and I'd like to know how to get my window's position. Does anyone know anything?
I googled but found nothing.
| [
"I haven't used this package before, but looking at the docs it seems that window_top and window_left on the root Page instance are what you're after (assuming this is a desktop app). See relevant docs here: https://flet.dev/docs/controls/page#window_top.\n"
] | [
0
] | [] | [] | [
"desktop_application",
"flutter",
"position",
"python"
] | stackoverflow_0074605877_desktop_application_flutter_position_python.txt |
Q:
python text conversion into Pig Latin
I need a python program for converting an input sentence into Pig Latin which has 2 rules:
If a word begins with a consonant all consonants before the first vowel are moved to the end of the word and the letters "ay" are then added to the end. e.g. "coin" becomes "oincay" and "flute" becomes "uteflay".
If a word begins with a vowel then "yay" is added to the end, e.g. "egg" becomes "eggyay" and "oak" becomes "oakyay".
I have written this program so far:
string = input('String: ')
if string[0].upper() in 'BCDFGJKLMNPQSTVXZHRWY':
print(string.replace(string[0],'') + string[0]+'ay')
if string[0].upper() in 'AEIOUY':
print(string + 'yay')
#vowels = [each for each in
but this only works for one word (whereas I need the whole sentence), and the first part only replaces the first consonant, not all of them (whereas I need to move all consonants before the first vowel)
A:
We can try using a regex replacement with a callback function:
inp = "coin flute egg oak"
output = re.sub(r'\w+', lambda m: re.sub(r'([b-df-hj-np-tv-z]+)(\w+)', r'\2\1ay', m.group()) if not re.search(r'^[AEIOUaeiou]', m.group()) else m.group() + 'yay', inp)
print(output) # oincay uteflay eggyay oakyay
A:
Your input fetches the user input, but string[0] refers only to the first char in the string. You need to use the .split() method to split the string into words and loop through those words. However, now that you do this, you want to print everything on one line, so change the end parameter in print to something like a space.
try something like this:
sentence = input('String: ').split()
for word in sentence:  # loop through each word
    if word[0].upper() in 'BCDFGJKLMNPQSTVXZHRWY':
        print(word.replace(word[0], '') + word[0] + 'ay', end=" ")
    elif word[0].upper() in 'AEIOUY':
        print(word + 'yay', end=" ")
print()  # just to skip to a new line once it's done
I modified the variable names for clarity. Hope this helps!
A:
You need to find the first vowel and then take case 1 or 2 depending on if that was past first char (0-indexed) or not.
def porcus(sin):
    vowels = set("aeiouy")

    for vowel1, ch in enumerate(sin.lower()):
        # identify first vowel - `ch in "aeiouy"` would work too.
        if ch in vowels:
            break
    else:
        return f"{sin}ay"

    if not vowel1:
        # starts with vowel
        return f"{sin}yay"

    # take from vowel position to end, add start before and ay
    res = sin[vowel1:] + sin[:vowel1] + "ay"

    return res


inps = "coin flute egg oak The"
exps = "oincay uteflay eggyay oakyay eThay"

dataexp = zip(
    inps.split(), exps.split()
)

for inp, exp in dataexp:
    got = porcus(inp)
    msg = f"{str(inp):100.100} \n exp :{exp}:\n got :{got}:\n"
    if exp == got:
        print(f"✅! {msg}")
    else:
        print(f"❌! {msg}")
output:
✅! coin
 exp :oincay:
 got :oincay:

✅! flute
 exp :uteflay:
 got :uteflay:

✅! egg
 exp :eggyay:
 got :eggyay:

✅! oak
 exp :oakyay:
 got :oakyay:

✅! The
 exp :eThay:
 got :eThay:
You can plug in xkcd too. I assumed you'd want xkcday.
re. whole sentence print(" ".join([porcus(w) for w in sentence.split()])) oughta work.
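For example, applying porcus to a whole sentence that way:
sentence = "coin flute egg oak The"
print(" ".join(porcus(w) for w in sentence.split()))
# oincay uteflay eggyay oakyay eThay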
| python text conversion into Pig Latin | I need a python program for converting an input sentence into Pig Latin which has 2 rules:
If a word begins with a consonant all consonants before the first vowel are moved to the end of the word and the letters "ay" are then added to the end. e.g. "coin" becomes "oincay" and "flute" becomes "uteflay".
If a word begins with a vowel then "yay" is added to the end, e.g. "egg" becomes "eggyay" and "oak" becomes "oakyay".
I have written this program so far:
string = input('String: ')
if string[0].upper() in 'BCDFGJKLMNPQSTVXZHRWY':
print(string.replace(string[0],'') + string[0]+'ay')
if string[0].upper() in 'AEIOUY':
print(string + 'yay')
#vowels = [each for each in
but this only works for one word (whereas I need the whole sentence), and the first part only replaces the first consonant, not all of them (whereas I need to move all consonants before the first vowel)
| [
"We can try using a regex replacement with a callback function:\ninp = \"coin flute egg oak\"\noutput = re.sub(r'\\w+', lambda m: re.sub(r'([b-df-hj-np-tv-z]+)(\\w+)', r'\\2\\1ay', m.group()) if not re.search(r'^[AEIOUaeiou]', m.group()) else m.group() + 'yay', inp)\nprint(output) # oincay uteflay eggyay oakyay\n\n",
"Your input fetches the user input but string[0] refers only to the first char in the string. You need to use the .split() method to split the string into words and loop thru those words. However, now that you do this, you want to print it onto 1 line. Therefore, change the end parameter in print to something like a space.\ntry something like this:\nsentance = input('String: ').split()\nfor word in sentance: # loop thru each word\n if word[0].upper() in 'BCDFGJKLMNPQSTVXZHRWY':\n print(word.replace(word[0],'') + word[0]+'ay', end = \" \")\n elif word[0].upper() in 'AEIOUY':\n print(word + 'yay', end = \" \")\nprint() # just to skip to a new line once its done\n\nI modified the variable names for clarity. Hope this helps!\n",
"You need to find the first vowel and then take case 1 or 2 depending on if that was past first char (0-indexed) or not.\ndef porcus(sin):\n vowels = set(\"aeiouy\")\n\n for vowel1, ch in enumerate(sin.lower()):\n #identify first vowel - `ch in \"aeiouy\"` would work too.\n if ch in vowels:\n break\n else:\n return f\"{sin}ay\"\n\n if not vowel1:\n #starts with vowel\n return f\"{sin}yay\"\n \n #take from vowel position to end, add start before and ay\n res = sin[vowel1:] + sin[:vowel1] + \"ay\"\n \n return res\n\n\ninps = \"coin flute egg oak The\" \nexps = \"oincay uteflay eggyay oakyay eThay\"\n\ndataexp = zip(\n inps.split(),exps.split()\n)\n\nfor inp, exp in dataexp:\n got = porcus(inp)\n msg = f\"{str(inp):100.100} \\n exp :{exp}:\\n got :{got}:\\n\"\n if exp == got:\n print(f\"β
! {msg}\")\n else:\n print(f\"β! {msg}\")\n\noutput:\nβ
! coin \n exp :oincay:\n got :oincay:\n\nβ
! flute \n exp :uteflay:\n got :uteflay:\n\nβ
! egg \n exp :eggyay:\n got :eggyay:\n\nβ
! oak \n exp :oakyay:\n got :oakyay:\n\nβ
! The \n exp :eThay:\n got :eThay:\n\n\nYou can plug in xkcd too. I assumed you'd want xkcday.\nre. whole sentence print(\" \".join([porcus(w) for w in sentence.split()])) oughta work.\n"
] | [
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074636955_python.txt |
Q:
How to exchange authorization code for access token Twitter API?
I am developing an app that will read some tweets stats of my company. I want to let all the employees to connect with their twitter accounts.
I am facing the following problem: I am stuck at the "Exchange authorization code for access token".
The response url after Authorize is: https://example/v1/browser-callback?state=state&code=all0UTY5TVVMYmctNjZEQVpYYYYYYYYZZZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
I suppose I have to change the code all0UTY5TVVMYmctNjZEQVpYYYYYYYYZZZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX for access_token and access_token_secret, but I did not find how to do that on the documentation that twitter offers to us.
A:
You first need to know the type of flow you are trying to implement.
First, find out the grant type of your client_id on the Twitter side. I see a code in your callback, which means you are using either the plain authorization code flow or the Authorization Code Flow with Proof Key for Code Exchange (PKCE). To tell which, check whether your first call to Twitter includes the params code_challenge and code_challenge_method; if yes, it's the PKCE flow.
Second, I see that you have successfully completed the first step of the flow. If you are using PKCE, your callback then needs to send another request to get the final token, with parameters like this:
client_id=your client_id&
code_verifier=the code generated by the application in the first step&
redirect_uri=http%3A%2F%2Flocalhost%3A8080%2Fcallback&
grant_type=authorization_code&
code=the code sent from twitter
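If it helps, a minimal sketch of how an application might generate the PKCE code_verifier and matching code_challenge in Python (illustrative only; it assumes the S256 challenge method):
import base64
import hashlib
import os

# 32 random bytes -> 43-char url-safe verifier (padding stripped)
code_verifier = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode()
# the challenge sent in the first call is BASE64URL(SHA256(verifier))
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode()).digest()
).rstrip(b"=").decode()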
A:
I'm not sure what the docs looked like back in March, but to do this now you simply need to build the request headers with the code argument from the redirect URL. From the example url you gave (https://example/v1/browser-callback), your code is:
all0UTY5TVVMYmctNjZEQVpYYYYYYYYZZZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
And the curl request you would make for a private client to retrieve the user's bearer and refresh token would be:
curl --location --request POST 'https://api.twitter.com/2/oauth2/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Authorization: Basic YOUR_BASE64_ENCODED_ClientID:ClientSecret_HERE'\
--data-urlencode 'code=all0UTY5TVVMYmctNjZEQVpYYYYYYYYZZZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' \
--data-urlencode 'grant_type=authorization_code' \
--data-urlencode 'redirect_uri=https://example/v1/browser-callback' \
--data-urlencode 'code_verifier=challenge'
where YOUR_BASE64_ENCODED_ClientID:ClientSecret_HERE is (per the docs):
To create the basic authorization header you will need to base64 encoding on your Client ID and Client Secret which can be obtained from your Appβs βKeys and Tokensβ page inside of the developer portal.
You'll need to make this request to get the initial bearer token/refresh token for private clients within 30 seconds of receiving the code at your callback URL after the user has authorized your app.
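As a rough Python equivalent of that curl call using the requests library (a sketch; the client credentials, code and verifier values are placeholders):
import base64
import requests

basic = base64.b64encode(b"YOUR_CLIENT_ID:YOUR_CLIENT_SECRET").decode()
resp = requests.post(
    "https://api.twitter.com/2/oauth2/token",
    headers={"Authorization": f"Basic {basic}"},
    data={
        "code": "all0UTY5TVVMYmctNjZEQVpY...",  # code from your callback URL
        "grant_type": "authorization_code",
        "redirect_uri": "https://example/v1/browser-callback",
        "code_verifier": "challenge",
    },
)
print(resp.json())  # contains the bearer and refresh tokens on success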
| How to exchange authorization code for access token Twitter API? | I am developing an app that will read some tweets stats of my company. I want to let all the employees to connect with their twitter accounts.
I am facing the following problem: I am stuck at the "Exchange authorization code for access token".
The response url after Authorize is: https://example/v1/browser-callback?state=state&code=all0UTY5TVVMYmctNjZEQVpYYYYYYYYZZZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
I suppose I have to change the code all0UTY5TVVMYmctNjZEQVpYYYYYYYYZZZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX for access_token and access_token_secret, but I did not find how to do that on the documentation that twitter offers to us.
| [
"You need first to know the type of flow you are trying to implement\nFirst you need to know what is the grant type of your client_id in the twitter side, i see in the callback there is code that means you are in normal authorization code or Authorization Code Flow with Proof Key for Code (PKCE), to know that check in your first call to twitter if you see in the params code_challenge and code_challenge_method if yes It's PKCE flow;\nSecond, I see that you have successfully done the first step of flow, then if you are in the PKCE, you need in your callback to send another request to get a final token like this:\n client_id=your client_id&\n code_verifier=the code generated by the application in the first step&\n redirect_uri=http%3A%2F%2Flocalhost%3A8080%2Fcallback&\n grant_type=authorization_code&\n code=the code sent from twitter\n\n",
"I'm not sure what the docs looked like back in March, but to do this now you simply need to build the request headers with the code argument from the redirect URL. From the example url you gave (https://example/v1/browser-callback), your code is:\nall0UTY5TVVMYmctNjZEQVpYYYYYYYYZZZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nAnd the curl request you would make for a private client to retrieve the user's bearer and refresh token would be:\ncurl --location --request POST 'https://api.twitter.com/2/oauth2/token' \\\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--header 'Authorization: Basic YOUR_BASE64_ENCODED_ClientID:ClientSecret_HERE'\\\n--data-urlencode 'code=all0UTY5TVVMYmctNjZEQVpYYYYYYYYZZZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' \\\n--data-urlencode 'grant_type=authorization_code' \\\n--data-urlencode 'redirect_uri=https://example/v1/browser-callback' \\\n--data-urlencode 'code_verifier=challenge'\n\nwhere YOUR_BASE64_ENCODED_ClientID:ClientSecret_HERE is (per the docs):\n\nTo create the basic authorization header you will need to base64 encoding on your Client ID and Client Secret which can be obtained from your Appβs βKeys and Tokensβ page inside of the developer portal.\n\nYou'll need to make this request to get the initial bearer token/refresh token for private clients within 30 seconds of receiving the code at your callback URL after the user has authorized your app.\n"
] | [
0,
0
] | [] | [] | [
"oauth_2.0",
"python",
"twitter",
"twitter_oauth",
"twitterapi_python"
] | stackoverflow_0071465525_oauth_2.0_python_twitter_twitter_oauth_twitterapi_python.txt |
Q:
Add Python Script as a file to AWS SSM Document (YAML)
I'm trying to write a script for a SystemsManager Automation document and would like to keep the Python code in a separate file so it's easy to invoke on my local machine. For complex scripts they can also be tested using a tool such as unittest.
Example YAML syntax from my SSM Automation:
mainSteps:
- name: RunTestScript
action: aws:executeScript
inputs:
Runtime: python3.8
Handler: handler
InputPayload:
Action: "Create" # This is just a random payload for now I'm only testing
Script: |-
"${script}" # My script should be injected here :)
Note: Yes, I've written the script directly in YAML and it works fine. But I'd like to achieve something similar to:
locals {
script = file("${path.module}/automations/test.py")
}
resource "aws_ssm_document" "aws_test_script" {
name = "test-script"
document_format = "YAML"
document_type = "Automation"
content = templatefile("${path.module}/automations/test.yaml", {
ssm_automation_assume_role = aws_iam_role.ssm_automation.arn
script = local.script
})
}
My console shows that yes the file is being read correctly.
Terraform plan...:
+ "def handler(event, context):
+ print(event)
+ import boto3
+ iam = boto3.client('iam')
+ response = iam.get_role(
+ RoleName='test-role'
+ )
+ print(response)"
Notice how the indentation is broken? My .py file has the correct indentation.
I suspect one of the terraform functions or YAML operators I'm using is breaking the indentation - which is very important for a language such as Python.
If I go ahead and Terraform apply I receive:
Error: Error updating SSM document: InvalidDocumentContent: YAML not well-formed. at Line: 30, Column: 1
I tried changing the last line in my YAML to be Script: "${script}" and I can Terraform Plan and Apply fine, but the Python script is on a single line and fails when executing the automation in AWS.
I've also tried using indent(4, local.script) without success.
Keen to hear/see what ideas and solutions you may have.
Thanks
A:
I noticed my plan output had "'s around the code. So I tried a multiline string in Python """ and it continued to fail. Bearing in mind I assumed SSM was smart enough to strip quotes if it doesn't want them.
Anyway, the mistake was adding quotes around the template variable:
# Mistake
Script: |-
"${script}" # My script should be injected here :)
Solution
Script: |-
${script}
After that, since it was working, I decided to try removing the indent() method I was using, and I got the YAML not well-formed error back.
So using:
locals {
script = indent(8, file("${path.module}/automations/test.py"))
}
resource "aws_ssm_document" "test_script" {
name = "test-script"
document_format = "YAML"
document_type = "Automation"
content = templatefile("${path.module}/automations/test.yaml", {
ssm_automation_assume_role = aws_iam_role.ssm_automation.arn
script = local.script
})
}
Works great with:
mainSteps:
- name: RunDMSRolesScript
action: aws:executeScript
inputs:
Runtime: python3.8
Handler: handler
InputPayload:
Action: "create"
Script: |-
${script}
If it helps anyone, this is what my SSM Script looks like in the AWS UI when it runs without errors. It is formatted correctly, and AWS seems to append " around it; when I provided quotes in my YAML template they turned into \", which would've broken the script since it becomes a string literal.
"def handler(event, context):
print(event)
import boto3
iam = boto3.client('iam')
response = iam.get_role(
RoleName='test-role'
)
print(response)"
That's one way to lose 3 hours!
| Add Python Script as a file to AWS SSM Document (YAML) | I'm trying to write a script for a SystemsManager Automation document and would like to keep the Python code in a separate file so it's easy to invoke on my local machine. For complex scripts they can also be tested using a tool such as unittest.
Example YAML syntax from my SSM Automation:
mainSteps:
- name: RunTestScript
action: aws:executeScript
inputs:
Runtime: python3.8
Handler: handler
InputPayload:
Action: "Create" # This is just a random payload for now I'm only testing
Script: |-
"${script}" # My script should be injected here :)
Note: Yes, I've written the script directly in YAML and it works fine. But I'd like to achieve something similar to:
locals {
script = file("${path.module}/automations/test.py")
}
resource "aws_ssm_document" "aws_test_script" {
name = "test-script"
document_format = "YAML"
document_type = "Automation"
content = templatefile("${path.module}/automations/test.yaml", {
ssm_automation_assume_role = aws_iam_role.ssm_automation.arn
script = local.script
})
}
My console shows that yes the file is being read correctly.
Terraform plan...:
+ "def handler(event, context):
+ print(event)
+ import boto3
+ iam = boto3.client('iam')
+ response = iam.get_role(
+ RoleName='test-role'
+ )
+ print(response)"
Notice how the indentation is broken? My .py file has the correct indentation.
I suspect one of the terraform functions or YAML operators I'm using is breaking the indentation - which is very important for a language such as Python.
If I go ahead and Terraform apply I receive:
Error: Error updating SSM document: InvalidDocumentContent: YAML not well-formed. at Line: 30, Column: 1
I tried changing the last line in my YAML to be Script: "${script}" and I can Terraform Plan and Apply fine, but the Python script is on a single line and fails when executing the automation in AWS.
I've also tried using indent(4, local.script) without success.
Keen to hear/see what ideas and solutions you may have.
Thanks
| [
"I noticed my plan output had \"'s around the code. So I tried a multiline string in Python \"\"\" and it continued to fail. Bearing in mind I assumed SSM was smart enough to strip quotes if it doesn't want them.\nAnyway, the mistake was adding quotes around the template variable:\n# Mistake\n Script: |-\n \"${script}\" # My script should be injected here :) \n\nSolution\n Script: |-\n ${script} \n\nAfter that I decided okay now that it works, can I remove the indent() method I'm using and I got the YAML not well-formed error back.\nSo using:\nlocals {\n script = indent(8, file(\"${path.module}/automations/test.py\"))\n}\n\nresource \"aws_ssm_document\" \"test_script\" {\n name = \"test-script\"\n document_format = \"YAML\"\n document_type = \"Automation\"\n content = templatefile(\"${path.module}/automations/test.yaml\", {\n ssm_automation_assume_role = aws_iam_role.ssm_automation.arn\n script = local.script\n })\n}\n\nWorks great with:\nmainSteps:\n - name: RunDMSRolesScript\n action: aws:executeScript\n inputs:\n Runtime: python3.8\n Handler: handler\n InputPayload:\n Action: \"create\"\n Script: |-\n ${script}\n\nIf it helps anyone, this is what my SSM Script looks like in the AWS UI when it runs without errors. Formatted correctly and AWS seems to append \" around it but turned into \"\\\" when I provided quotes in my YAML template which would've broken the script as it's now a string literal.\n\"def handler(event, context):\n print(event)\n import boto3\n\n iam = boto3.client('iam')\n response = iam.get_role(\n RoleName='test-role'\n )\n print(response)\"\n\nThat's one way to lose 3 hours!\n"
] | [
0
] | [] | [] | [
"aws_ssm",
"python",
"terraform"
] | stackoverflow_0074648609_aws_ssm_python_terraform.txt |
Q:
Python RE, problem with optional match groups
My apologies if this has been asked before. I am parsing some law numbers from the California penal code so they can be run through an existing database to return a plain-language title of the law. For example:
PC 182(A)(1); PC 25400(A)(1); PC 25850(C)(6); PC 32310; VC 12500(A); VC 22517; VC 23103(A)
Each would be split at ';' and parsed into:
{'lawType': 'PC', 'lawNumber': '182', 'subsection': 'A', 'subsubsection': '1'}
Returns: Conspiracy to commit a crime
Here's my RE search:
(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)\((?P<subsection>[A-Z])\)?\((?P<subsubsection>[0-9])\)?
Every law should have at least the type and number (i.e. PC 182), but sometimes they also have the subsection and subsubsection (i.e. (A)(1)). Those two subgroups need to be optional, but the search above isn't picking them up using the '?'. This code works, but I'd like to make it more compact with just one search:
lineValue = 'PC 182(A)(1); PC 25400(A)(1); PC 25850(C)(6); PC 32310; VC 12500(A); VC 22517; VC 23103(A)'
#lineValue = 'PC 148(A)(1); PC 369I; PC 587C; MC 8.80.060(F)'
chargeList = map(lambda x: x.strip(), lineValue.split(';'))
for thisCharge in chargeList:
m = re.match(r'(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)\((?P<subsection>[A-Z])\)\((?P<subsubsection>[0-9])\)', thisCharge)
if m:
detail = m.groupdict()
print(detail)
else:
m = re.match(r'(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)\((?P<subsection>[A-Z])\)', thisCharge)
if m:
detail = m.groupdict()
print(detail)
else:
m = re.match(r'(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)', thisCharge)
if m:
detail = m.groupdict()
print(detail)
else:
print('NO MATCH: ' + str(thisCharge))
I have three different searches, which shouldn't be necessary if the '?' optional group marker were working as expected. Can anyone offer a thought?
A:
The problem is in how you are applying the ? to make each of the subsections optional. A ? applies to just the term immediately preceding it. In your case, that is just the closing parenthesis of each subsection. Because of this, you are requiring the opening parenthesis and the number or letter unconditionally for each subsection term. To fix this, just wrap the complete subsection terms in an extra set of parentheses, and apply the ? to those groups. This code:
import re
data = "PC 182(A)(1); PC 25400(A)(1); PC 25850(C)(6); PC 32310; VC 12500(A); VC 22517; VC 23103(A)"
exp = re.compile(r"(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)(?:\((?P<subsection>[A-Z])\))?(?:\((?P<subsubsection>[0-9])\))?")
def main():
r = exp.findall(data)
print(r)
main()
produces:
[('PC', '182', 'A', '1'), ('PC', '25400', 'A', '1'), ('PC', '25850', 'C', '6'), ('PC', '32310', '', ''), ('VC', '12500', 'A', ''), ('VC', '22517', '', ''), ('VC', '23103', 'A', '')]
Here's an example of how to use your expression to pick out the information for each law individually, making use of your group labels:
def main():
p = 0
while True:
m = exp.search(data[p:])
if not m:
break
print('Type:', m.group('lawType'))
print('Number:', m.group('lawNumber'))
if m.group('subsection'):
print('Subsection:', m.group('subsection'))
if m.group('subsubsection'):
print('Subsubsection:', m.group('subsubsection'))
print()
p += m.end()
Result
Type: PC
Number: 182
Subsection: A
Subsubsection: 1
Type: PC
Number: 25400
Subsection: A
Subsubsection: 1
Type: PC
Number: 25850
Subsection: C
Subsubsection: 6
Type: PC
Number: 32310
Type: VC
Number: 12500
Subsection: A
Type: VC
Number: 22517
Type: VC
Number: 23103
Subsection: A
Noticing how you were pre-splitting your data before applying your regex, I thought you might want to see how I would process each term using only regex matching.
| Python RE, problem with optional match groups | My apologies if this has been asked before. I am parsing some law numbers from the California penal code so they can be run through an existing database to return a plain-language title of the law. For example:
PC 182(A)(1); PC 25400(A)(1); PC 25850(C)(6); PC 32310; VC 12500(A); VC 22517; VC 23103(A)
Each would be split at ';' and parsed into:
{'lawType': 'PC', 'lawNumber': '182', 'subsection': 'A', 'subsubsection': '1'}
Returns: Conspiracy to commit a crime
Here's my RE search:
(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)\((?P<subsection>[A-Z])\)?\((?P<subsubsection>[0-9])\)?
Every law should have at least the type and number (i.e. PC 182), but sometimes they also have the subsection and subsubsection (i.e. (A)(1)). Those two subgroups need to be optional, but the search above isn't picking them up using the '?'. This code works, but I'd like to make it more compact with just one search:
lineValue = 'PC 182(A)(1); PC 25400(A)(1); PC 25850(C)(6); PC 32310; VC 12500(A); VC 22517; VC 23103(A)'
#lineValue = 'PC 148(A)(1); PC 369I; PC 587C; MC 8.80.060(F)'
chargeList = map(lambda x: x.strip(), lineValue.split(';'))
for thisCharge in chargeList:
m = re.match(r'(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)\((?P<subsection>[A-Z])\)\((?P<subsubsection>[0-9])\)', thisCharge)
if m:
detail = m.groupdict()
print(detail)
else:
m = re.match(r'(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)\((?P<subsection>[A-Z])\)', thisCharge)
if m:
detail = m.groupdict()
print(detail)
else:
m = re.match(r'(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)', thisCharge)
if m:
detail = m.groupdict()
print(detail)
else:
print('NO MATCH: ' + str(thisCharge))
I have three different searches, which shouldn't be necessary if the '?' optional group marker were working as expected. Can anyone offer a thought?
| [
"The problem is in how you are applying the ? to make each of the subsections optional. A ? applies to just the term immediately preceding it. In your case, this is just the closing parentheses for each subsection Because of this, you are requiring the opening parentheses and the number or letter unconditionally for each subsection term. To fix this, just wrap the complete subsection terms in an extra set of parentheses, and apply the ? to those groups. This code:\nimport re\n\ndata = \"PC 182(A)(1); PC 25400(A)(1); PC 25850(C)(6); PC 32310; VC 12500(A); VC 22517; VC 23103(A)\"\n\nexp = re.compile(r\"(?P<lawType>[A-Z]{2})[ ](?P<lawNumber>[0-9.]*[A-Z]?)(?:\\((?P<subsection>[A-Z])\\))?(?:\\((?P<subsubsection>[0-9])\\))?\")\n\ndef main():\n r = exp.findall(data)\n print(r)\n\nmain()\n\nproduces:\n[('PC', '182', 'A', '1'), ('PC', '25400', 'A', '1'), ('PC', '25850', 'C', '6'), ('PC', '32310', '', ''), ('VC', '12500', 'A', ''), ('VC', '22517', '', ''), ('VC', '23103', 'A', '')]\n\nHere's an example of how to use your expression to pick out the information for each law individually, making use of your group labels:\ndef main():\n p = 0\n while True:\n m = exp.search(data[p:])\n if not m:\n break\n print('Type:', m.group('lawType'))\n print('Number:', m.group('lawNumber'))\n if m.group('subsection'):\n print('Subsection:', m.group('subsection'))\n if m.group('subsubsection'):\n print('Subsubsection:', m.group('subsubsection'))\n print()\n p += m.end()\n\nResult\nType: PC\nNumber: 182\nSubsection: A\nSubsubsection: 1\n\nType: PC\nNumber: 25400\nSubsection: A\nSubsubsection: 1\n\nType: PC\nNumber: 25850\nSubsection: C\nSubsubsection: 6\n\nType: PC\nNumber: 32310\n\nType: VC\nNumber: 12500\nSubsection: A\n\nType: VC\nNumber: 22517\n\nType: VC\nNumber: 23103\nSubsection: A\n\nNoticing how you were pre-splitting your data before applying your regex, I thought you might want to see how I would process each term using only regex matching.\n"
] | [
0
] | [] | [] | [
"match",
"option_type",
"python",
"python_re"
] | stackoverflow_0074649500_match_option_type_python_python_re.txt |
Q:
2d convolution using python and numpy
I am trying to perform a 2d convolution in python using numpy
I have a 2d array as follows with kernel H_r for the rows and H_c for the columns
data = np.zeros((nr, nc), dtype=np.float32)
#fill array with some data here then convolve
for r in range(nr):
data[r,:] = np.convolve(data[r,:], H_r, 'same')
for c in range(nc):
data[:,c] = np.convolve(data[:,c], H_c, 'same')
data = data.astype(np.uint8);
It does not produce the output that I was expecting. Does this code look OK? I think the problem is with the casting from float32 to 8-bit. What's the best way to do this?
Thanks
A:
Maybe it is not the most optimized solution, but this is an implementation I used before with numpy library for Python:
def convolution2d(image, kernel, bias):
m, n = kernel.shape
if (m == n):
y, x = image.shape
y = y - m + 1
x = x - m + 1
new_image = np.zeros((y,x))
for i in range(y):
for j in range(x):
new_image[i][j] = np.sum(image[i:i+m, j:j+m]*kernel) + bias
return new_image
I hope this code helps other guys with the same doubt.
Regards.
A:
Edit [Jan 2019]
@Tashus's comment below is correct, and @dudemeister's answer is thus probably more on the mark. The function he suggested is also more efficient, by avoiding a direct 2D convolution and the number of operations that would entail.
Possible Problem
I believe you are doing two 1d convolutions, the first per columns and the second per rows, and replacing the results from the first with the results of the second.
Notice that numpy.convolve with the 'same' argument returns an array of equal shape to the largest one provided, so when you make the first convolution you already populated the entire data array.
One good way to visualize your arrays during these steps is to use Hinton diagrams, so you can check which elements already have a value.
Possible Solution
You can try to add the results of the two convolutions (use data[:,c] += .. instead of data[:,c] = on the second for loop), if your convolution matrix is the result of using the one dimensional H_r and H_c matrices like so:
Another way to do that would be to use scipy.signal.convolve2d with a 2d convolution array, which is probably what you wanted to do in the first place.
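For instance, a minimal sketch of that approach, assuming H_r and H_c are the 1D kernels from the question:
from scipy.signal import convolve2d
import numpy as np

# the separable 2D kernel is the outer product of the two 1D kernels
kernel = np.outer(H_c, H_r)  # H_c acts along rows, H_r along columns
convolved = convolve2d(data, kernel, mode='same')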
A:
Since you already have your kernel separated you should simply use the sepfir2d function from scipy:
from scipy.signal import sepfir2d
convolved = sepfir2d(data, H_r, H_c)
On the other hand, the code you have there looks all right ...
A:
It might not be the most optimized solution either, but it is approximately ten times faster than the one proposed by @omotto and it only uses basic numpy functions (such as reshape, expand_dims, tile...) and no 'for' loops:
def gen_idx_conv1d(in_size, ker_size):
"""
Generates a list of indices. This indices correspond to the indices
of a 1D input tensor on which we would like to apply a 1D convolution.
For instance, with a 1D input array of size 5 and a kernel of size 3, the
1D convolution product will successively looks at elements of indices [0,1,2],
[1,2,3] and [2,3,4] in the input array. In this case, the function idx_conv1d(5,3)
outputs the following array: array([0,1,2,1,2,3,2,3,4]).
args:
in_size: (type: int) size of the input 1d array.
ker_size: (type: int) kernel size.
return:
idx_list: (type: np.array) list of the successive indices of the 1D input array
access to the 1D convolution algorithm.
example:
>>> gen_idx_conv1d(in_size=5, ker_size=3)
array([0, 1, 2, 1, 2, 3, 2, 3, 4])
"""
f = lambda dim1, dim2, axis: np.reshape(np.tile(np.expand_dims(np.arange(dim1),axis),dim2),-1)
out_size = in_size-ker_size+1
return f(ker_size, out_size, 0)+f(out_size, ker_size, 1)
def repeat_idx_2d(idx_list, nbof_rep, axis):
"""
Repeats an array of indices (idx_list) a number of time (nbof_rep) "along" an axis
(axis). This function helps to browse through a 2d array of size
(len(idx_list),nbof_rep).
args:
idx_list: (type: np.array or list) a 1D array of indices.
nbof_rep: (type: int) number of repetition.
axis: (type: int) axis "along" which the repetition will be applied.
return
idx_list: (type: np.array) a 1D array of indices of size len(idx_list)*nbof_rep.
example:
>>> a = np.array([0, 1, 2])
>>> repeat_idx_2d(a, 3, 0) # repeats array 'a' 3 times along 'axis' 0
array([0, 0, 0, 1, 1, 1, 2, 2, 2])
>>> repeat_idx_2d(a, 3, 1) # repeats array 'a' 3 times along 'axis' 1
array([0, 1, 2, 0, 1, 2, 0, 1, 2])
>>> b = np.reshape(np.arange(3*4), (3,4))
>>> b[repeat_idx_2d(np.arange(3), 4, 0), repeat_idx_2d(np.arange(4), 3, 1)]
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
"""
assert axis in [0,1], "Axis should be equal to 0 or 1."
tile_axis = (nbof_rep,1) if axis else (1,nbof_rep)
return np.reshape(np.tile(np.expand_dims(idx_list, 1),tile_axis),-1)
def conv2d(im, ker):
"""
Performs a 'valid' 2D convolution on an image. The input image may be
a 2D or a 3D array.
The output image first two dimensions will be reduced depending on the
convolution size.
The kernel may be a 2D or 3D array. If 2D, it will be applied on every
channel of the input image. If 3D, its last dimension must match the
image one.
args:
im: (type: np.array) image (2D or 3D).
ker: (type: np.array) convolution kernel (2D or 3D).
returns:
im: (type: np.array) convolved image.
example:
>>> im = np.reshape(np.arange(10*10*3),(10,10,3))/(10*10*3) # 3D image
>>> ker = np.array([[0,1,0],[-1,0,1],[0,-1,0]]) # 2D kernel
>>> conv2d(im, ker) # 3D array of shape (8,8,3)
"""
if len(im.shape)==2: # if the image is a 2D array, it is reshaped by expanding the last dimension
im = np.expand_dims(im,-1)
im_x, im_y, im_w = im.shape
if len(ker.shape)==2: # if the kernel is a 2D array, it is reshaped so it will be applied to all of the image channels
ker = np.tile(np.expand_dims(ker,-1),[1,1,im_w]) # the same kernel will be applied to all of the channels
assert ker.shape[-1]==im.shape[-1], "Kernel and image last dimension must match."
ker_x = ker.shape[0]
ker_y = ker.shape[1]
# shape of the output image
out_x = im_x - ker_x + 1
out_y = im_y - ker_y + 1
# reshapes the image to (out_x, ker_x, out_y, ker_y, im_w)
idx_list_x = gen_idx_conv1d(im_x, ker_x) # computes the indices of a 1D conv (cf. idx_conv1d doc)
idx_list_y = gen_idx_conv1d(im_y, ker_y)
idx_reshaped_x = repeat_idx_2d(idx_list_x, len(idx_list_y), 0) # repeats the previous indices to be used in 2D (cf. repeat_idx_2d doc)
idx_reshaped_y = repeat_idx_2d(idx_list_y, len(idx_list_x), 1)
im_reshaped = np.reshape(im[idx_reshaped_x, idx_reshaped_y, :], [out_x, ker_x, out_y, ker_y, im_w]) # reshapes
# reshapes the 2D kernel
ker = np.reshape(ker,[1, ker_x, 1, ker_y, im_w])
# applies the kernel to the image and reduces the dimension back to the one of original input image
return np.squeeze(np.sum(im_reshaped*ker, axis=(1,3)))
I tried to add a lot of comments to explain the method, but the global idea is to reshape the 3D input image to a 5D one of shape (output_image_height, kernel_height, output_image_width, kernel_width, output_image_channel) and then to apply the kernel directly using basic array multiplication. Of course, this method then uses more memory (during the execution the size of the image is multiplied by kernel_height*kernel_width) but it is faster.
To do this reshape step, I 'over-used' the indexing methods of numpy arrays, especially, the possibility of giving a numpy array as indices into a numpy array.
This method could also be used to re-code the 2D convolution product in PyTorch or TensorFlow using the base math functions, but I have no doubt in saying that it will be slower than the existing nn.conv2d operator...
I really enjoyed coding this method by only using the numpy basic tools.
A:
I checked out many implementations and found none for my purpose, which should be really simple. So here is a dead-simple implementation with a for loop:
def convolution2d(image, kernel, stride, padding):
image = np.pad(image, [(padding, padding), (padding, padding)], mode='constant', constant_values=0)
kernel_height, kernel_width = kernel.shape
padded_height, padded_width = image.shape
output_height = (padded_height - kernel_height) // stride + 1
output_width = (padded_width - kernel_width) // stride + 1
new_image = np.zeros((output_height, output_width)).astype(np.float32)
for y in range(0, output_height):
for x in range(0, output_width):
new_image[y][x] = np.sum(image[y * stride:y * stride + kernel_height, x * stride:x * stride + kernel_width] * kernel).astype(np.float32)
return new_image
A:
One of the most obvious approaches is to hard code the kernel.
img = img.convert('L')
a = np.array(img)
out = np.zeros([a.shape[0]-2, a.shape[1]-2], dtype='float')
out += a[:-2, :-2]
out += a[1:-1, :-2]
out += a[2:, :-2]
out += a[:-2, 1:-1]
out += a[1:-1,1:-1]
out += a[2:, 1:-1]
out += a[:-2, 2:]
out += a[1:-1, 2:]
out += a[2:, 2:]
out /= 9.0
out = out.astype('uint8')
img = Image.fromarray(out)
This example does a box blur 3x3 completely unrolled. You can multiply the values where you want a different kernel weight and divide by a different amount. But, if you honestly want the quickest and dirtiest method, this is it. I think it beats Guillaume Mougeot's method by a factor of like 5, and his method beats the others by a factor of 10.
It may lose a few steps if you're doing something like a Gaussian blur and need to multiply some stuff.
A:
Try to first round and then cast to uint8:
data = data.round().astype(np.uint8);
A:
I wrote this convolve_stride which uses numpy.lib.stride_tricks.as_strided. Moreover it supports both strides and dilation. It is also compatible with tensors of order > 2.
import numpy as np
from numpy.lib.stride_tricks import as_strided
def conv_view(X, F_s, dr, std):
X_s = np.array(X.shape)
F_s = np.array(F_s)
dr = np.array(dr)
Fd_s = (F_s - 1) * dr + 1
if np.any(Fd_s > X_s):
raise ValueError('(Dilated) filter size must be smaller than X')
std = np.array(std)
X_ss = np.array(X.strides)
Xn_s = (X_s - Fd_s) // std + 1
Xv_s = np.append(Xn_s, F_s)
Xv_ss = np.tile(X_ss, 2) * np.append(std, dr)
return as_strided(X, Xv_s, Xv_ss, writeable=False)
def convolve_stride(X, F, dr=None, std=None):
if dr is None:
dr = np.ones(X.ndim, dtype=int)
if std is None:
std = np.ones(X.ndim, dtype=int)
if not (X.ndim == F.ndim == len(dr) == len(std)):
raise ValueError('X.ndim, F.ndim, len(dr), len(std) must be the same')
Xv = conv_view(X, F.shape, dr, std)
return np.tensordot(Xv, F, axes=X.ndim)
%timeit -n 100 -r 10 convolve_stride(A, F)
#31.2 ms Β± 1.31 ms per loop (mean Β± std. dev. of 10 runs, 100 loops each)
A:
Super simple and fast convolution using only basic numpy:
import numpy as np
def conv2d(image, kernel):
# apply kernel to image, return image of the same shape
# assume both image and kernel are 2D arrays
# kernel = np.flipud(np.fliplr(kernel)) # optionally flip the kernel
k = kernel.shape[0]
width = k//2
# place the image inside a frame to compensate for the kernel overlap
a = framed(image, width)
b = np.zeros(image.shape) # fill the output array with zeros; do not use np.empty()
# shift the image around each pixel, multiply by the corresponding kernel value and accumulate the results
for p, dp, r, dr in [(i, i + image.shape[0], j, j + image.shape[1]) for i in range(k) for j in range(k)]:
b += a[p:dp, r:dr] * kernel[p, r]
# or just write two nested for loops if you prefer
# np.clip(b, 0, 255, out=b) # optionally clip values exceeding the limits
return b
def framed(image, width):
a = np.zeros((image.shape[0]+2*width, image.shape[1]+2*width))
a[width:-width, width:-width] = image
# alternatively fill the frame with ones or copy border pixels
return a
Run it:
Image.fromarray(conv2d(image, kernel).astype('uint8'))
Instead of sliding the kernel along the image and computing the transformation pixel by pixel, create a series of shifted versions of the image corresponding to each element in the kernel and apply the corresponding kernel value to each of the shifted image versions.
This is probably the fastest you can get using just basic numpy; the speed is already comparable to the C implementation of scipy's convolve2d and better than fftconvolve. The idea is similar to @Tatarize's. This example works only for one color component; for RGB just repeat for each (or modify the algorithm accordingly).
A:
Typically, "Convolution 2D" is a misnomer. Ideally, under the hood,
what's being done is a correlation of 2 matrices.
pad == 'same'
returns the output with the same dimensions as the input.
It can also take asymmetric images. In order to perform correlation (convolution in deep learning lingo) on a batch of 2d matrices, one can iterate over all the channels, calculate the correlation for each of the channel slices with the respective filter slice.
For example: If image is (28,28,3) and filter size is (5,5,3) then take each of the 3 slices from the image channel and perform the cross correlation using the custom function above and stack the resulting matrix in the respective dimension of the output.
def get_cross_corr_2d(W, X, pad = 'valid'):
if(pad == 'same'):
pr = int((W.shape[0] - 1)/2)
pc = int((W.shape[1] - 1)/2)
conv_2d = np.zeros((X.shape[0], X.shape[1]))
X_pad = np.zeros((X.shape[0] + 2*pr, X.shape[1] + 2*pc))
X_pad[pr:pr+X.shape[0], pc:pc+X.shape[1]] = X
for r in range(conv_2d.shape[0]):
for c in range(conv_2d.shape[1]):
                conv_2d[r,c] = np.sum(np.multiply(W, X_pad[r:r+W.shape[0], c:c+W.shape[1]]))
return conv_2d
else:
pr = W.shape[0] - 1
pc = W.shape[1] - 1
conv_2d = np.zeros((X.shape[0] - W.shape[0] + 2*pr + 1,
X.shape[1] - W.shape[1] + 2*pc + 1))
X_pad = np.zeros((X.shape[0] + 2*pr, X.shape[1] + 2*pc))
X_pad[pr:pr+X.shape[0], pc:pc+X.shape[1]] = X
for r in range(conv_2d.shape[0]):
for c in range(conv_2d.shape[1]):
conv_2d[r,c] = np.sum(np.multiply(W, X_pad[r:r+W.shape[0], c:c+W.shape[1]]))
return conv_2d
| 2d convolution using python and numpy | I am trying to perform a 2d convolution in python using numpy
I have a 2d array as follows with kernel H_r for the rows and H_c for the columns
data = np.zeros((nr, nc), dtype=np.float32)
#fill array with some data here then convolve
for r in range(nr):
data[r,:] = np.convolve(data[r,:], H_r, 'same')
for c in range(nc):
data[:,c] = np.convolve(data[:,c], H_c, 'same')
data = data.astype(np.uint8);
It does not produce the output that I was expecting. Does this code look OK? I think the problem is with the casting from float32 to 8-bit. What's the best way to do this?
Thanks
| [
"Maybe it is not the most optimized solution, but this is an implementation I used before with numpy library for Python:\ndef convolution2d(image, kernel, bias):\n m, n = kernel.shape\n if (m == n):\n y, x = image.shape\n y = y - m + 1\n x = x - m + 1\n new_image = np.zeros((y,x))\n for i in range(y):\n for j in range(x):\n new_image[i][j] = np.sum(image[i:i+m, j:j+m]*kernel) + bias\n return new_image\n\nI hope this code helps other guys with the same doubt.\nRegards.\n",
"Edit [Jan 2019]\n@Tashus comment bellow is correct, and @dudemeister's answer is thus probably more on the mark. The function he suggested is also more efficient, by avoiding a direct 2D convolution and the number of operations that would entail.\nPossible Problem\nI believe you are doing two 1d convolutions, the first per columns and the second per rows, and replacing the results from the first with the results of the second. \nNotice that numpy.convolve with the 'same' argument returns an array of equal shape to the largest one provided, so when you make the first convolution you already populated the entire data array.\nOne good way to visualize your arrays during these steps is to use Hinton diagrams, so you can check which elements already have a value.\nPossible Solution\nYou can try to add the results of the two convolutions (use data[:,c] += .. instead of data[:,c] = on the second for loop), if your convolution matrix is the result of using the one dimensional H_r and H_c matrices like so:\n\nAnother way to do that would be to use scipy.signal.convolve2d with a 2d convolution array, which is probably what you wanted to do in the first place.\n",
"Since you already have your kernel separated you should simply use the sepfir2d function from scipy:\nfrom scipy.signal import sepfir2d\nconvolved = sepfir2d(data, H_r, H_c)\n\nOn the other hand, the code you have there looks all right ...\n",
"It might not be the most optimized solution either, but it is approximately ten times faster than the one proposed by @omotto and it only uses basic numpy function (as reshape, expand_dims, tile...) and no 'for' loops:\ndef gen_idx_conv1d(in_size, ker_size):\n \"\"\"\n Generates a list of indices. This indices correspond to the indices\n of a 1D input tensor on which we would like to apply a 1D convolution.\n\n For instance, with a 1D input array of size 5 and a kernel of size 3, the\n 1D convolution product will successively looks at elements of indices [0,1,2],\n [1,2,3] and [2,3,4] in the input array. In this case, the function idx_conv1d(5,3) \n outputs the following array: array([0,1,2,1,2,3,2,3,4]).\n\n args:\n in_size: (type: int) size of the input 1d array.\n ker_size: (type: int) kernel size.\n\n return:\n idx_list: (type: np.array) list of the successive indices of the 1D input array\n access to the 1D convolution algorithm.\n\n example:\n >>> gen_idx_conv1d(in_size=5, ker_size=3)\n array([0, 1, 2, 1, 2, 3, 2, 3, 4])\n \"\"\"\n f = lambda dim1, dim2, axis: np.reshape(np.tile(np.expand_dims(np.arange(dim1),axis),dim2),-1)\n out_size = in_size-ker_size+1\n return f(ker_size, out_size, 0)+f(out_size, ker_size, 1)\n\ndef repeat_idx_2d(idx_list, nbof_rep, axis):\n \"\"\"\n Repeats an array of indices (idx_list) a number of time (nbof_rep) \"along\" an axis\n (axis). This function helps to browse through a 2d array of size\n (len(idx_list),nbof_rep).\n\n args:\n idx_list: (type: np.array or list) a 1D array of indices.\n nbof_rep: (type: int) number of repetition.\n axis: (type: int) axis \"along\" which the repetition will be applied.\n\n return\n idx_list: (type: np.array) a 1D array of indices of size len(idx_list)*nbof_rep.\n\n example:\n >>> a = np.array([0, 1, 2])\n >>> repeat_idx_2d(a, 3, 0) # repeats array 'a' 3 times along 'axis' 0\n array([0, 0, 0, 1, 1, 1, 2, 2, 2])\n\n >>> repeat_idx_2d(a, 3, 1) # repeats array 'a' 3 times along 'axis' 1\n array([0, 1, 2, 0, 1, 2, 0, 1, 2])\n\n >>> b = np.reshape(np.arange(3*4), (3,4))\n >>> b[repeat_idx_2d(np.arange(3), 4, 0), repeat_idx_2d(np.arange(4), 3, 1)]\n array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])\n \"\"\"\n assert axis in [0,1], \"Axis should be equal to 0 or 1.\"\n tile_axis = (nbof_rep,1) if axis else (1,nbof_rep)\n return np.reshape(np.tile(np.expand_dims(idx_list, 1),tile_axis),-1)\n\ndef conv2d(im, ker):\n \"\"\"\n Performs a 'valid' 2D convolution on an image. The input image may be\n a 2D or a 3D array.\n\n The output image first two dimensions will be reduced depending on the \n convolution size. \n\n The kernel may be a 2D or 3D array. If 2D, it will be applied on every\n channel of the input image. 
If 3D, its last dimension must match the\n image one.\n\n args:\n im: (type: np.array) image (2D or 3D).\n ker: (type: np.array) convolution kernel (2D or 3D).\n\n returns:\n im: (type: np.array) convolved image.\n\n example:\n >>> im = np.reshape(np.arange(10*10*3),(10,10,3))/(10*10*3) # 3D image\n >>> ker = np.array([[0,1,0],[-1,0,1],[0,-1,0]]) # 2D kernel\n >>> conv2d(im, ker) # 3D array of shape (8,8,3)\n \"\"\"\n if len(im.shape)==2: # if the image is a 2D array, it is reshaped by expanding the last dimension\n im = np.expand_dims(im,-1)\n\n im_x, im_y, im_w = im.shape\n\n if len(ker.shape)==2: # if the kernel is a 2D array, it is reshaped so it will be applied to all of the image channels\n ker = np.tile(np.expand_dims(ker,-1),[1,1,im_w]) # the same kernel will be applied to all of the channels \n\n assert ker.shape[-1]==im.shape[-1], \"Kernel and image last dimension must match.\"\n\n ker_x = ker.shape[0]\n ker_y = ker.shape[1]\n\n # shape of the output image\n out_x = im_x - ker_x + 1 \n out_y = im_y - ker_y + 1\n\n # reshapes the image to (out_x, ker_x, out_y, ker_y, im_w)\n idx_list_x = gen_idx_conv1d(im_x, ker_x) # computes the indices of a 1D conv (cf. idx_conv1d doc)\n idx_list_y = gen_idx_conv1d(im_y, ker_y)\n\n idx_reshaped_x = repeat_idx_2d(idx_list_x, len(idx_list_y), 0) # repeats the previous indices to be used in 2D (cf. repeat_idx_2d doc)\n idx_reshaped_y = repeat_idx_2d(idx_list_y, len(idx_list_x), 1)\n\n im_reshaped = np.reshape(im[idx_reshaped_x, idx_reshaped_y, :], [out_x, ker_x, out_y, ker_y, im_w]) # reshapes\n\n # reshapes the 2D kernel\n ker = np.reshape(ker,[1, ker_x, 1, ker_y, im_w])\n\n # applies the kernel to the image and reduces the dimension back to the one of original input image\n return np.squeeze(np.sum(im_reshaped*ker, axis=(1,3)))\n\nI tried to add a lot of comments to explain the method but the global idea is to reshape the 3D input image to a 5D one of shape (output_image_height, kernel_height, output_image_width, kernel_width, output_image_channel) and then to apply the kernel directly using the basic array multiplication. Of course, this methods is then using more memory (during the execution the size of the image is thus multiply by kernel_height*kernel_width) but it is faster.\nTo do this reshape step, I 'over-used' the indexing methods of numpy arrays, especially, the possibility of giving a numpy array as indices into a numpy array.\nThis methods could also be used to re-code the 2D convolution product in Pytorch or Tensorflow using the base math functions but I have no doubt in saying that it will be slower than the existing nn.conv2d operator...\nI really enjoyed coding this method by only using the numpy basic tools.\n",
"I checked out many implementations and found none for my purpose, which should be really simple. So here is a dead-simple implementation with for loop\ndef convolution2d(image, kernel, stride, padding):\n image = np.pad(image, [(padding, padding), (padding, padding)], mode='constant', constant_values=0)\n\n kernel_height, kernel_width = kernel.shape\n padded_height, padded_width = image.shape\n\n output_height = (padded_height - kernel_height) // stride + 1\n output_width = (padded_width - kernel_width) // stride + 1\n\n new_image = np.zeros((output_height, output_width)).astype(np.float32)\n\n for y in range(0, output_height):\n for x in range(0, output_width):\n new_image[y][x] = np.sum(image[y * stride:y * stride + kernel_height, x * stride:x * stride + kernel_width] * kernel).astype(np.float32)\n return new_image\n\n",
"One of the most obvious is to hard code the kernel.\nimg = img.convert('L')\na = np.array(img)\nout = np.zeros([a.shape[0]-2, a.shape[1]-2], dtype='float')\nout += a[:-2, :-2]\nout += a[1:-1, :-2]\nout += a[2:, :-2]\nout += a[:-2, 1:-1]\nout += a[1:-1,1:-1]\nout += a[2:, 1:-1]\nout += a[:-2, 2:]\nout += a[1:-1, 2:]\nout += a[2:, 2:]\nout /= 9.0\nout = out.astype('uint8')\nimg = Image.fromarray(out)\n\nThis example does a box blur 3x3 completely unrolled. You can multiply the values where you have a different value and divide them by a different amount. But, if you honestly want the quickest and dirtiest method this is it. I think it beats Guillaume Mougeot's method by a factor of like 5. His method beating the others by a factor of 10.\nIt may lose a few steps if you're doing something like a gaussian blur. and need to multiply some stuff.\n",
"Try to first round and then cast to uint8:\ndata = data.round().astype(np.uint8);\n\n",
"I wrote this convolve_stride which uses numpy.lib.stride_tricks.as_strided. Moreover it supports both strides and dilation. It is also compatible to tensor with order > 2.\nimport numpy as np\nfrom numpy.lib.stride_tricks import as_strided\nfrom im2col import im2col\n\ndef conv_view(X, F_s, dr, std):\n X_s = np.array(X.shape)\n F_s = np.array(F_s)\n dr = np.array(dr)\n Fd_s = (F_s - 1) * dr + 1\n if np.any(Fd_s > X_s):\n raise ValueError('(Dilated) filter size must be smaller than X')\n std = np.array(std)\n X_ss = np.array(X.strides)\n Xn_s = (X_s - Fd_s) // std + 1\n Xv_s = np.append(Xn_s, F_s)\n Xv_ss = np.tile(X_ss, 2) * np.append(std, dr)\n return as_strided(X, Xv_s, Xv_ss, writeable=False)\n\ndef convolve_stride(X, F, dr=None, std=None):\n if dr is None:\n dr = np.ones(X.ndim, dtype=int)\n if std is None:\n std = np.ones(X.ndim, dtype=int)\n if not (X.ndim == F.ndim == len(dr) == len(std)):\n raise ValueError('X.ndim, F.ndim, len(dr), len(std) must be the same')\n Xv = conv_view(X, F.shape, dr, std)\n return np.tensordot(Xv, F, axes=X.ndim)\n\n%timeit -n 100 -r 10 convolve_stride(A, F)\n#31.2 ms Β± 1.31 ms per loop (mean Β± std. dev. of 10 runs, 100 loops each)\n\n",
"Super simple and fast convolution using only basic numpy:\nimport numpy as np\n\ndef conv2d(image, kernel):\n # apply kernel to image, return image of the same shape\n # assume both image and kernel are 2D arrays\n # kernel = np.flipud(np.fliplr(kernel)) # optionally flip the kernel\n k = kernel.shape[0]\n width = k//2\n # place the image inside a frame to compensate for the kernel overlap\n a = framed(image, width)\n b = np.zeros(image.shape) # fill the output array with zeros; do not use np.empty()\n # shift the image around each pixel, multiply by the corresponding kernel value and accumulate the results\n for p, dp, r, dr in [(i, i + image.shape[0], j, j + image.shape[1]) for i in range(k) for j in range(k)]:\n b += a[p:dp, r:dr] * kernel[p, r]\n # or just write two nested for loops if you prefer\n # np.clip(b, 0, 255, out=b) # optionally clip values exceeding the limits\n return b\n\ndef framed(image, width):\n a = np.zeros((image.shape[0]+2*width, image.shape[1]+2*width))\n a[width:-width, width:-width] = image\n # alternatively fill the frame with ones or copy border pixels\n return a\n\nRun it:\nImage.fromarray(conv2d(image, kernel).astype('uint8'))\n\nInstead of sliding the kernel along the image and computing the transformation pixel by pixel, create a series of shifted versions of the image corresponding to each element in the kernel and apply the corresponding kernel value to each of the shifted image versions.\nThis is probably the fastest you can get using just basic numpy; the speed is already comparable to C implementation of scipy convolve2d and better than fftconvolve. The idea is similar to @Tatarize. This example works only for one color component; for RGB just repeat for each (or modify the algorithm accordingly).\n",
"\nTypically, Convolution 2D is a misnomer. Ideally, under the hood,\nwhats being done is a correlation of 2 matrices.\npad == same\nreturns the output as the same as input dimension\n\nIt can also take asymmetric images. In order to perform correlation(convolution in deep learning lingo) on a batch of 2d matrices, one can iterate over all the channels, calculate the correlation for each of the channel slices with the respective filter slice.\nFor example: If image is (28,28,3) and filter size is (5,5,3) then take each of the 3 slices from the image channel and perform the cross correlation using the custom function above and stack the resulting matrix in the respective dimension of the output.\ndef get_cross_corr_2d(W, X, pad = 'valid'):\n\n if(pad == 'same'):\n pr = int((W.shape[0] - 1)/2)\n pc = int((W.shape[1] - 1)/2)\n conv_2d = np.zeros((X.shape[0], X.shape[1]))\n X_pad = np.zeros((X.shape[0] + 2*pr, X.shape[1] + 2*pc))\n X_pad[pr:pr+X.shape[0], pc:pc+X.shape[1]] = X\n for r in range(conv_2d.shape[0]):\n for c in range(conv_2d.shape[1]):\n conv_2d[r,c] = np.sum(np.inner(W, X_pad[r:r+W.shape[0], c:c+W.shape[1]]))\n return conv_2d\n \n else: \n pr = W.shape[0] - 1\n pc = W.shape[1] - 1\n conv_2d = np.zeros((X.shape[0] - W.shape[0] + 2*pr + 1,\n X.shape[1] - W.shape[1] + 2*pc + 1))\n X_pad = np.zeros((X.shape[0] + 2*pr, X.shape[1] + 2*pc))\n X_pad[pr:pr+X.shape[0], pc:pc+X.shape[1]] = X\n for r in range(conv_2d.shape[0]):\n for c in range(conv_2d.shape[1]):\n conv_2d[r,c] = np.sum(np.multiply(W, X_pad[r:r+W.shape[0], c:c+W.shape[1]]))\n return conv_2d\n\n"
] | [
25,
6,
5,
2,
2,
1,
0,
0,
0,
0
] | [
"This code incorrect:\nfor r in range(nr):\n data[r,:] = np.convolve(data[r,:], H_r, 'same')\n\nfor c in range(nc):\n data[:,c] = np.convolve(data[:,c], H_c, 'same')\n\nSee Nussbaumer transformation from multidimentional convolution to one dimentional.\n"
] | [
-2
] | [
"convolution",
"numpy",
"python"
] | stackoverflow_0002448015_convolution_numpy_python.txt |
Q:
Add data in the next empty row in python pandas
I'm making a small and simple program that puts one name under another in an Excel file, and I don't know how I can get the next empty row
I have this excel table:
Name
Carl
And i'm making a program to add new names. Here is the function:
def modifyexcel ():
book = openpyxl.load_workbook (r'C:\Users\usuario\Desktop\prueba.xlsx')
sheet = book ["a"]
sheet ["a3"] = str(entrada1.get())
book.save (r'C:\Users\usuario\Desktop\prueba.xlsx')
But instead of modifying the "a3" cell, I need to modify the next empty row, so every time I add a new name it gets placed on the next empty row
A:
You can just use Google Colab to modify your Excel file.
You can mount your CSV or Excel file on Google Drive, or just upload the CSV to the sidebar,
then copy its path and paste it into your pandas read_csv (read_excel works the same way):
https://colab.research.google.com/
from google.colab import files
import pandas as pd
#So first you read from your original excel
df=pd.read_csv('path')
list1=df.name.tolist()
##then create a variable to store your new name
name= "new name" #@param {type:"string"}
##append the new name to the list and return to pandas dataframe
list1.append(name)
df.name=list1
##output to csv and download
df.to_csv('newsheet.csv',index=False)
files.download('newsheet.csv')
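For reference, a minimal openpyxl sketch that stays closer to the question's code: a Worksheet's append() writes into the first empty row after the used range (the sheet name "a" and the file path are taken from the question):
import openpyxl

def modifyexcel(new_name):
    book = openpyxl.load_workbook(r'C:\Users\usuario\Desktop\prueba.xlsx')
    sheet = book["a"]
    sheet.append([new_name])  # one value per column; lands on the next empty row
    book.save(r'C:\Users\usuario\Desktop\prueba.xlsx')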
| Add data in the next empty row in python pandas | I'm making a small and simple program that puts one name under another in an Excel file, and I don't know how I can get the next empty row
I have this excel table:
Name
Carl
And i'm making a program to add new names. Here is the function:
def modifyexcel ():
book = openpyxl.load_workbook (r'C:\Users\usuario\Desktop\prueba.xlsx')
sheet = book ["a"]
sheet ["a3"] = str(entrada1.get())
book.save (r'C:\Users\usuario\Desktop\prueba.xlsx')
But instead of modifying the "a3" cell, I need to modify the next empty row, so every time I add a new name it gets placed on the next empty row
| [
"you can just use google colab to modify your excel !\nyou can mount your csv or excel to google drive or just load the csv to the side bar!\nand copy path and paste it to your pandas read_csv or (read_excel is the same thing)!\nhttps://colab.research.google.com/\nfrom google.colab import files\nimport pandas as pd\n#So first you read from your original excel\ndf=pd.read_csv('path')\nlist1=df.name.tolist()\n##then create a variable to store your new name\nname= \"new name\" #@param {type:\"string\"}\n##append the new name to the list and return to pandas dataframe\nlist1.append(name)\ndf.name=list1\n\n##output to csv and download\ndf.to_csv('newsheet.csv',index=False)\nfiles.download('newsheet.csv')\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074649333_pandas_python.txt |
Q:
How to plot a differentiable function using matplotlib?
I am trying to plot the solution of the differential equation y' = 3t - sqrt(y), but my code doesn't produce any graph output. Can someone point out my mistake, please?
import sympy.plotting as sym_plot
def func(y, t):
return 3*t - np.sqrt(y)
# time points
t = np.linspace(0,5)
# initial condition
y0 = 3
# solve ODE
y = odeint(func,y0,t)
plt.plot(t,y)
plt.xlabel('time')
plt.ylabel('y(t)')
plt.show()
A:
Are you getting any output at all? And did you give func arguments here?
y = odeint(func,y0,t)
Also, you are missing some important imports for your code in the question. Maybe adding them to the code snippet will help you get a better answer.
Edit: After adding the imports and trying the code out myself, a graph is outputted. This doesn't seem like a problem with your code.
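For reference, the imports the question's snippet relies on (a sketch; odeint lives in scipy.integrate):
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint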
| How to plot a differentiable function using matplotlib? | I am trying to plot the solution of the differential equation y' = 3t - sqrt(y), but my code doesn't produce any graph output. Can someone point out my mistake, please?
import sympy.plotting as sym_plot
def func(y, t):
return 3*t - np.sqrt(y)
# time points
t = np.linspace(0,5)
# initial condition
y0 = 3
# solve ODE
y = odeint(func,y0,t)
plt.plot(t,y)
plt.xlabel('time')
plt.ylabel('y(t)')
plt.show()
| [
"Are you getting any output at all? And did you give func arguments here?\ny = odeint(func,y0,t)\n\nAlso, you are missing some important imports for your code in the question. Maybe adding them to the code snippet will help you get a better answer.\nEdit: After adding the imports and trying the code out myself, a graph is outputted. This doesn't seem like a problem with your code.\n"
] | [
0
] | [] | [] | [
"matplotlib",
"python"
] | stackoverflow_0074649591_matplotlib_python.txt |
Q:
How to install python package on GitHub Codespaces without having to rebuild the container?
I copied a template codespace https://github.com/github/codespaces-flask and now whenever I need to add a new package pip install redis for example I have to add it to my requirements.txt and rebuild the entire codespace again.
What is the proper way of doing this?
Thank you in advance.
I tried searching GitHub codespaces documentation and following Docker tutorials. I didn't find anything specific enough to answer my question.
A:
Try pip install -t target_directory to install directly into a specified folder
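A usage sketch (the deps directory name is mine; a package installed with -t must be on sys.path or PYTHONPATH before you can import it):
pip install -t ./deps redis

import sys
sys.path.insert(0, "./deps")  # make the target directory importable
import redis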
| How to install python package on GitHub Codespaces without having to rebuild the container? | I copied a template codespace https://github.com/github/codespaces-flask and now whenever I need to add a new package pip install redis for example I have to add it to my requirements.txt and rebuild the entire codespace again.
What is the proper way of doing this?
Thank you in advance.
I tried searching GitHub codespaces documentation and following Docker tutorials. I didn't find anything specific enough to answer my question.
| [
"Try pip install -t target_directory to install directly into a specified folder\n"
] | [
0
] | [] | [] | [
"codespaces",
"flask",
"github_codespaces",
"pip",
"python"
] | stackoverflow_0074648852_codespaces_flask_github_codespaces_pip_python.txt |
Q:
How to solve 404 error of jupyter lab
I installed Anaconda on my Windows 10 machine and updated all packages.
Now I am trying to open JupyterLab from cmd.
When I type this command in cmd: jupyter lab
it just opens a tab in Google Chrome that shows:
"404 : Not Found You are requesting a page that does not exist!"
Could you please help me solve this problem so that I can open JupyterLab?
Thanks
A:
I did:
jupyter serverextension enable --py jupyterlab --user
and
conda install -c conda-forge nodejs
It's running now.
A:
If you are using Anaconda Navigator, install nodejs package within the Navigator. Once nodejs is installed, jupyterLab should be running without any error
A:
Running jupyter lab in debug mode suggests that you should first run:
jupyter lab build
A:
One reason this can happen is if you already have Jupyter running on the same port. In my case VS Code was automatically starting the jupyter daemon on the background so whenever I tried to spin up Jupyter outside of VS Code on port 8888 I would see the 404 page because my browser was actually navigating to the VS Code instance of Jupyter. One way to identify this is as follows:
In bash run lsof -i -P -n | grep "8888 (LISTEN)". You should see output like:
Python 2085 <user> 9u IPv4 0xfa77315aec2b468b 0t0 TCP 127.0.0.1:8888 (LISTEN)
Python 2085 <user> 10u IPv6 0xfa7731561f2fad13 0t0 TCP [::1]:8888 (LISTEN)
The second item is the process id, use ps u <pid> to inspect it:
Β» ps u 2085
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
<user> 2085 0.0 0.3 409501984 94656 ?? S 3:34PM 0:02.23 /<path-to-python>/Python -m vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.jupyter_daemon -v --
To fix this you can either try to kill the VS Code process (but this might mess up jupyter support in your VS Code environment) or you can simply start your external jupyter on a different port, e.g. jupyter lab --port=10000
| How to solve 404 error of jupyter lab | I installed Anaconda on my Windows 10 machine and updated all packages.
Now I am trying to open JupyterLab from cmd.
When I type this command in cmd: jupyter lab
it just opens a tab in Google Chrome that shows:
"404 : Not Found You are requesting a page that does not exist!"
Could you please help me solve this problem so that I can open JupyterLab?
Thanks
| [
"I did:\njupyter serverextension enable --py jupyterlab --user\n\nand\nconda install -c conda-forge nodejs\n\nIt's running now.\n",
"If you are using Anaconda Navigator, install nodejs package within the Navigator. Once nodejs is installed, jupyterLab should be running without any error\n",
"running jupyter lab in debug mode, suggests that you should first run:\njupyter lab build\n\n",
"One reason this can happen is if you already have Jupyter running on the same port. In my case VS Code was automatically starting the jupyter daemon on the background so whenever I tried to spin up Jupyter outside of VS Code on port 8888 I would see the 404 page because my browser was actually navigating to the VS Code instance of Jupyter. One way to identify this is as follows:\n\nIn bash run lsof -i -P -n | grep \"8888 (LISTEN)\". You should see output like:\n\nPython 2085 <user> 9u IPv4 0xfa77315aec2b468b 0t0 TCP 127.0.0.1:8888 (LISTEN)\nPython 2085 <user> 10u IPv6 0xfa7731561f2fad13 0t0 TCP [::1]:8888 (LISTEN)\n\n\nThe second item is the process id, use ps u <pid> to inspect it:\n\nΒ» ps u 2085\nUSER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\n<user> 2085 0.0 0.3 409501984 94656 ?? S 3:34PM 0:02.23 /<path-to-python>/Python -m vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.jupyter_daemon -v --\n\nTo fix this you can either try to kill the VS Code process (but this might mess up jupyter support in your VS Code environment) or you can simply start your external jupyter on a different port, e.g. jupyter lab --port=10000\n"
] | [
10,
1,
1,
0
] | [] | [] | [
"anaconda",
"conda",
"jupyter",
"jupyter_lab",
"python"
] | stackoverflow_0048948259_anaconda_conda_jupyter_jupyter_lab_python.txt |
Q:
Selenium loop the buttonClick
I am trying to scrape all the bikes from this page:
https://www.reconpowerbikes.com/recon-bikes/
but it only has the names without prices. Let's say I want to click the number and click the "Shop Now" button on this page, then go to each page to get the current price (the bikes switch periodically). How can I do it with Selenium?
!apt-get update
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
!pip install selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
#set up Chrome driver
options=webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
#Define web driver as a Chrome driver
driver=webdriver.Chrome('chromedriver',options=options)
driver.implicitly_wait(10)
URL='https://www.reconpowerbikes.com/recon-bikes/'
driver.get(URL)
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//div/div[@class="blaze-pagination"]/button[@class=""]'))).click()
A:
You can try Beautiful Soup to extract the URLs from the page source, or use JavaScript. The following is the JavaScript version.
result = driver.execute_script('''
allbikes=document.querySelectorAll(".blaze-slider__description")
result=[]
for (var i = 0; i < allbikes.length; i++) {
let bike=allbikes[i]
let bike_url=bike.getElementsByClassName("blaze-slider__button")[0]
//console.log(bike_url.getAttribute("href"))
result.push(bike_url.getAttribute("href"))
}
return result
''')
for bike in result:
print(bike)
#for each bike URL, access the page and then get the price
Javascript: get all the slider divs, and then for each div, get the href attribute.
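The same extraction can also be done with Selenium's Python API instead of injected JavaScript; a sketch reusing the class names from the JavaScript above:
links = driver.find_elements(By.CSS_SELECTOR, ".blaze-slider__description .blaze-slider__button")
urls = [link.get_attribute("href") for link in links]
for url in urls:
    driver.get(url)  # visit each bike page, then locate and read its price element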
| Selenium loop the buttonClick | I am trying to scrape all the bikes from this page:
https://www.reconpowerbikes.com/recon-bikes/
but it only has the names without prices. Let's say I want to click the number and click the "Shop Now" button on this page, then go to each page to get the current price (the bikes switch periodically). How can I do it with Selenium?
!apt-get update
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
!pip install selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
#set up Chrome driver
options=webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
#Define web driver as a Chrome driver
driver=webdriver.Chrome('chromedriver',options=options)
driver.implicitly_wait(10)
URL='https://www.reconpowerbikes.com/recon-bikes/'
driver.get(URL)
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//div/div[@class="blaze-pagination"]/button[@class=""]'))).click()
| [
"you can try beautiful soup to extract URLs from page source or java script. following is the javascript version.\nresult = driver.execute_script('''\nallbikes=document.querySelectorAll(\".blaze-slider__description\") \nresult=[]\nfor (var i = 0; i < allbikes.length; i++) {\n let bike=allbikes[i]\n let bike_url=bike.getElementsByClassName(\"blaze-slider__button\")[0]\n //console.log(bike_url.getAttribute(\"href\"))\n result.push(bike_url.getAttribute(\"href\"))\n}\nreturn result\n''')\nfor bike in result:\n print(bike)\n #for each bike URL, access the page and then get the price\n\nJavascript: get all the slider divs, and then for each div, get the href attribute.\n"
] | [
0
] | [] | [] | [
"python",
"selenium",
"web_crawler",
"web_scraping"
] | stackoverflow_0074649391_python_selenium_web_crawler_web_scraping.txt |
Q:
Generating csv files
I want to write a program that generates N CSV files using Python, and I want to add an option to supply a custom schema to generate the headers and values. Each CSV file should have 5 columns and N rows: Country, Capital city, population, Square meter, Continent, and each column could have a different datatype.
I used the faker Python package to generate some of this data but was not able to figure out how to add the custom schema and the datatype of each column.
A:
use "pandas" to make schema
from there you can make .csv
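A minimal sketch of that idea, combining pandas with the faker package the question mentions (fake.country() and fake.city() are standard Faker providers; the schema dict, value ranges, and file names are mine):
import random
import pandas as pd
from faker import Faker

fake = Faker()
n_files, n_rows = 3, 10
schema = {
    "Country": fake.country,
    "Capital city": fake.city,
    "population": lambda: random.randint(10_000, 50_000_000),
    "Square meter": lambda: random.randint(1_000, 1_000_000),
    "Continent": lambda: random.choice(["Africa", "Asia", "Europe", "America", "Oceania"]),
}
for i in range(n_files):
    # build one column per schema entry by calling its generator n_rows times
    df = pd.DataFrame({col: [gen() for _ in range(n_rows)] for col, gen in schema.items()})
    df.to_csv(f"data_{i}.csv", index=False)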
| Generating csv files | I want to write a program that generates N CSV files using Python, and I want to add an option to supply a custom schema to generate the headers and values. Each CSV file should have 5 columns and N rows: Country, Capital city, population, Square meter, Continent, and each column could have a different datatype.
I used the faker Python package to generate some of this data but was not able to figure out how to add the custom schema and the datatype of each column.
| [
"use \"pandas\" to make schema\nfrom there you can make .csv\n"
] | [
0
] | [] | [] | [
"csv",
"faker",
"python",
"python_3.x"
] | stackoverflow_0074646689_csv_faker_python_python_3.x.txt |
Q:
Find most common substring in a list of strings?
I have a Python list of string names where I would like to remove a common substring from all of the names.
And after reading this similar answer I could almost achieve the desired result using SequenceMatcher.
But only when all items have a common substring:
From List:
string 1 = myKey_apples
string 2 = myKey_appleses
string 3 = myKey_oranges
common substring = "myKey_"
To List:
string 1 = apples
string 2 = appleses
string 3 = oranges
However I have a slightly noisy list that contains a few scattered items that don't fit the same naming convention.
I would like to remove the "most common" substring from the majority:
From List:
string 1 = myKey_apples
string 2 = myKey_appleses
string 3 = myKey_oranges
string 4 = foo
string 5 = myKey_Banannas
common substring = ""
To List:
string 1 = apples
string 2 = appleses
string 3 = oranges
string 4 = foo
string 5 = Banannas
I need a way to match the "myKey_" substring so I can remove it from all names.
But when I use the SequenceMatcher the item "foo" causes the "longest match" to be equal to blank "".
I think the only way to solve this is to find the "most common substring". But how could that be accomplished?
Basic example code:
from difflib import SequenceMatcher
names = ["myKey_apples",
"myKey_appleses",
"myKey_oranges",
#"foo",
"myKey_Banannas"]
string2 = names[0]
for i in range(1, len(names)):
string1 = string2
string2 = names[i]
match = SequenceMatcher(None, string1, string2).find_longest_match(0, len(string1), 0, len(string2))
print(string1[match.a: match.a + match.size]) # -> myKey_
A:
Given names = ["myKey_apples", "myKey_appleses", "myKey_oranges", "foo", "myKey_Banannas"]
An O(n^2) solution I can think of is to find all possible substrings and store them in a dictionary with the number of times they occur:
substring_counts={}
for i in range(0, len(names)):
for j in range(i+1,len(names)):
string1 = names[i]
string2 = names[j]
match = SequenceMatcher(None, string1, string2).find_longest_match(0, len(string1), 0, len(string2))
matching_substring=string1[match.a:match.a+match.size]
if(matching_substring not in substring_counts):
substring_counts[matching_substring]=1
else:
substring_counts[matching_substring]+=1
print(substring_counts) #{'myKey_': 5, 'myKey_apples': 1, 'o': 1, '': 3}
And then pick the most frequently occurring substring:
import operator
max_occurring_substring=max(substring_counts.items(), key=operator.itemgetter(1))[0]  # .items(), not .iteritems(), in Python 3
print(max_occurring_substring) #myKey_
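To then strip it from every name, per the question's goal, a one-line sketch:
names = [name.replace(max_occurring_substring, '', 1) for name in names]
# ['apples', 'appleses', 'oranges', 'foo', 'Banannas']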
A:
Here's an overly verbose solution to your problem:
def find_matching_key(list_in, max_key_only = True):
"""
returns the longest matching key in the list * with the highest frequency
"""
keys = {}
curr_key = ''
# If n does not exceed max_n, don't bother adding
max_n = 0
for word in list(set(list_in)): #get unique values to speed up
for i in range(len(word)):
# Look up the whole word, then one less letter, sequentially
curr_key = word[0:len(word)-i]
# if not in, count occurance
if curr_key not in keys.keys() and curr_key!='':
n = 0
for word2 in list_in:
if curr_key in word2:
n+=1
# if large n, Add to dictionary
if n > max_n:
max_n = n
keys[curr_key] = n
# Finish the word
# Finish for loop
if max_key_only:
return max(keys, key=keys.get)
else:
return keys
# Create your "from list"
From_List = [
"myKey_apples",
"myKey_appleses",
"myKey_oranges",
"foo",
"myKey_Banannas"
]
# Use the function
key = find_matching_key(From_List, True)
# Iterate over your list, replacing values
new_From_List = [x.replace(key,'') for x in From_List]
print(new_From_List)
['apples', 'appleses', 'oranges', 'foo', 'Banannas']
Needless to say, this solution would look a lot neater with recursion. Thought I'd sketch out a rough dynamic programming solution for you though.
A:
I would first find the starting letter with the most occurrences. Then I would take each word having that starting letter and advance while all of these words have matching letters. In the end I would remove the prefix that was found from each word that starts with it:
from collections import Counter
strings = ["myKey_apples", "myKey_appleses", "myKey_oranges", "berries"]
def remove_mc_prefix(words):
cnt = Counter()
for word in words:
cnt[word[0]] += 1
    first_letter = cnt.most_common(1)[0][0]  # the letter with the most occurrences
filter_list = [word for word in words if word[0] == first_letter]
    filter_list.sort(key = lambda s: len(s)) # shortest word first, to avoid an index-out-of-bounds below
prefix = ""
length = len(filter_list[0])
for i in range(length):
test = filter_list[0][i]
if all([word[i] == test for word in filter_list]):
prefix += test
else: break
return [word[len(prefix):] if word.startswith(prefix) else word for word in words]
print(remove_mc_prefix(strings))
Out: ['apples', 'appleses', 'oranges', 'berries']
A:
To find the most common substring from a list of Python strings:
I tested this on Python 3.10.5; I hope it will work for you.
I have the same use case but a different kind of task: I just need to find one common pattern string from a list of more than 100 files, to use as a regular expression.
Your basic example code does not work in my case, because it checks the 1st string with the 2nd, the 2nd with the 3rd, the 3rd with the 4th, and so on. So I changed it to keep the most common substring so far and check it against each one.
The downside of this code is that if some string has nothing in common with the most common substring so far, the final most common substring will be empty.
But in my case, it is working.
from difflib import SequenceMatcher
for i in range(1, len(names)):
if i==1:
string1, string2 = names[0], names[i]
else:
string1, string2 = most_common_substring, names[i]
match = SequenceMatcher(None, string1, string2).find_longest_match(0, len(string1), 0, len(string2))
most_common_substring = string1[match.a: match.a + match.size]
print(f"most_common_substring : {most_common_substring}")
| Find most common substring in a list of strings? | I have a Python list of string names where I would like to remove a common substring from all of the names.
And after reading this similar answer I could almost achieve the desired result using SequenceMatcher.
But only when all items have a common substring:
From List:
string 1 = myKey_apples
string 2 = myKey_appleses
string 3 = myKey_oranges
common substring = "myKey_"
To List:
string 1 = apples
string 2 = appleses
string 3 = oranges
However I have a slightly noisy list that contains a few scattered items that don't fit the same naming convention.
I would like to remove the "most common" substring from the majority:
From List:
string 1 = myKey_apples
string 2 = myKey_appleses
string 3 = myKey_oranges
string 4 = foo
string 5 = myKey_Banannas
common substring = ""
To List:
string 1 = apples
string 2 = appleses
string 3 = oranges
string 4 = foo
string 5 = Banannas
I need a way to match the "myKey_" substring so I can remove it from all names.
But when I use the SequenceMatcher the item "foo" causes the "longest match" to be equal to blank "".
I think the only way to solve this is to find the "most common substring". But how could that be accomplished?
Basic example code:
from difflib import SequenceMatcher
names = ["myKey_apples",
"myKey_appleses",
"myKey_oranges",
#"foo",
"myKey_Banannas"]
string2 = names[0]
for i in range(1, len(names)):
string1 = string2
string2 = names[i]
match = SequenceMatcher(None, string1, string2).find_longest_match(0, len(string1), 0, len(string2))
print(string1[match.a: match.a + match.size]) # -> myKey_
| [
"Given names = [\"myKey_apples\", \"myKey_appleses\", \"myKey_oranges\", \"foo\", \"myKey_Banannas\"]\nAn O(n^2) solution I can think of is to find all possible substrings and storing them in a dictionary with the number of times they occur :\nsubstring_counts={}\n\nfor i in range(0, len(names)):\n for j in range(i+1,len(names)):\n string1 = names[i]\n string2 = names[j]\n match = SequenceMatcher(None, string1, string2).find_longest_match(0, len(string1), 0, len(string2))\n matching_substring=string1[match.a:match.a+match.size]\n if(matching_substring not in substring_counts):\n substring_counts[matching_substring]=1\n else:\n substring_counts[matching_substring]+=1\n\nprint(substring_counts) #{'myKey_': 5, 'myKey_apples': 1, 'o': 1, '': 3}\n\nAnd then picking the maximum occurring substring\nimport operator\nmax_occurring_substring=max(substring_counts.iteritems(), key=operator.itemgetter(1))[0]\nprint(max_occurring_substring) #myKey_\n\n",
"Here's a overly verbose solution to your problem:\ndef find_matching_key(list_in, max_key_only = True):\n \"\"\"\n returns the longest matching key in the list * with the highest frequency\n \"\"\"\n keys = {}\n curr_key = ''\n\n # If n does not exceed max_n, don't bother adding\n max_n = 0\n\n for word in list(set(list_in)): #get unique values to speed up\n for i in range(len(word)):\n # Look up the whole word, then one less letter, sequentially\n curr_key = word[0:len(word)-i]\n # if not in, count occurance\n if curr_key not in keys.keys() and curr_key!='':\n n = 0\n for word2 in list_in:\n if curr_key in word2:\n n+=1\n # if large n, Add to dictionary\n if n > max_n:\n max_n = n\n keys[curr_key] = n\n # Finish the word\n # Finish for loop \n if max_key_only:\n return max(keys, key=keys.get)\n else:\n return keys \n\n# Create your \"from list\"\nFrom_List = [\n \"myKey_apples\",\n \"myKey_appleses\",\n \"myKey_oranges\",\n \"foo\",\n \"myKey_Banannas\"\n]\n\n# Use the function\nkey = find_matching_key(From_List, True)\n\n# Iterate over your list, replacing values\nnew_From_List = [x.replace(key,'') for x in From_List]\n\nprint(new_From_List)\n['apples', 'appleses', 'oranges', 'foo', 'Banannas']\n\n\nNeedless to say, this solution would look a lot neater with recursion. Thought I'd sketch out a rough dynamic programming solution for you though.\n",
"I would first find the starting letter with the most occurrences. Then I would take each word having that starting letter, and take while all these words have matching letters. Then in the end I would remove the prefix that was found from each starting word:\nfrom collections import Counter\nfrom itertools import takewhile\n\nstrings = [\"myKey_apples\", \"myKey_appleses\", \"myKey_oranges\", \"berries\"]\n\ndef remove_mc_prefix(words):\n cnt = Counter()\n for word in words:\n cnt[word[0]] += 1\n first_letter = list(cnt)[0]\n\n filter_list = [word for word in words if word[0] == first_letter]\n filter_list.sort(key = lambda s: len(s)) # To avoid iob\n\n prefix = \"\"\n length = len(filter_list[0])\n for i in range(length):\n test = filter_list[0][i]\n if all([word[i] == test for word in filter_list]):\n prefix += test\n else: break\n return [word[len(prefix):] if word.startswith(prefix) else word for word in words]\n\nprint(remove_mc_prefix(strings))\n\n\nOut: ['apples', 'appleses', 'oranges', 'berries']\n\n",
"To find the most-common-substring from list of python-string\nI already tested on python-3.10.5 I hope it will work for you.\nI have the same use case but a different kind of task, I just need to find one common-pattern-string from a list of more than 100s files. To use as a regular-expression.\nYour Basic example code is not working in my case. because 1st checking with 2nd, 2nd with 3rd, 3rd with 4th and so on. So, I change it to the most common substring and will check with each one.\nThe downside of this code is that if something is not common with the most common substring, the final most common substring will be an empty one.\nBut in my case, it is working.\n\nfrom difflib import SequenceMatcher\nfor i in range(1, len(names)):\n if i==1:\n string1, string2 = names[0], names[i]\n else:\n string1, string2 = most_common_substring, names[i]\n match = SequenceMatcher(None, string1, string2).find_longest_match(0, len(string1), 0, len(string2))\n most_common_substring = string1[match.a: match.a + match.size]\n\nprint(f\"most_common_substring : {most_common_substring}\")\n\n\npython python-3python-difflib\n"
] | [
11,
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0058585052_python.txt |
Q:
Convert loop to recursive function
I have written a Python for-loop iteration, as shown below. I was wondering if it's possible to convert it into a recursive function.
a = int(input("Please enter the first number: "))
b = int(input("Please enter the second number: "))
res = 0
for i in range(a,b+1):
temp = 1
for j in range(1,i+1):
temp = temp * j
res = res + temp
print("Sum of products from 1 to each integer in the range ",a," to ",b," is: ",res)
I am expecting something like the below example:
def recursion(a,b):
res = 0
if condition or a while condition
....
return ....
a = int(input("Please enter the first number: "))
b = int(input("Please enter the second number: "))
print("Sum of products from 1 to each integer in the range ",a," to ",b," is: ",res)
Any idea ?
A:
It could be something like this: imagine dividing the problem into smaller sub-problems.
def recursive_sum(a, b, res=0):
if a > b:
return res
temp = 1
for j in range(1, a+1):
temp = temp * j
res = res + temp
return recursive_sum(a+1, b, res)
a = int(input("Please enter the first number: "))
b = int(input("Please enter the second number: "))
res = recursive_sum(a, b)
print(f"Sum of products from 1 to each integer in the range {a} to {b} is: {res}")
A:
a = int(input("Please enter the first number: "))
b = int(input("Please enter the second number: "))
res = sum([x for x in range(int(a), int(b)+1)])
print(res)
My question is: why do you need it to be recursive?
I ask that because this particular problem doesn't seem best approached through recursion.
if you need to use recursion
def recursive_sum(m,n):
if n <= m:
return n
else:
return n+recursive_sum(m, n-1)
EDIT:
The question asked for the sum of the factorials of a range; apologies for only including the sum portion.
def factorial(b):
if (b==1 or b==0):
return 1
else:
return (b * factorial(b - 1))
res = sum(map(factorial, range(a,b+1)))
In the original code, you're multiplying each value in the range by the product of the previous values and summing.
Take range(5,10) for instance:
the total, the way the problem is written,
comes to res = 4500198,
whereas the factorial sum = 4037880.
If you print out just the terms for value 5,
you get 1+3+9+33+153, whereas !5 = 120.
Which of the two are you trying to accomplish?
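If it is the factorial sum, here is a fully recursive variant with no loops (a sketch reusing the factorial function above):
def recursive_fact_sum(a, b):
    if a > b:  # base case: empty range
        return 0
    return factorial(a) + recursive_fact_sum(a + 1, b)

recursive_fact_sum(5, 9)  # 120 + 720 + 5040 + 40320 + 362880 = 409080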
| Convert loop to recursive function | I have written a Python for-loop iteration, as shown below. I was wondering if it's possible to convert it into a recursive function.
a = int(input("Please enter the first number: "))
b = int(input("Please enter the second number: "))
res = 0
for i in range(a,b+1):
temp = 1
for j in range(1,i+1):
temp = temp * j
res = res + temp
print("Sum of products from 1 to each integer in the range ",a," to ",b," is: ",res)
I am expecting something like the below example:
def recursion(a,b):
res = 0
if condition or a while condition
....
return ....
a = int(input("Please enter the first number: "))
b = int(input("Please enter the second number: "))
print("Sum of products from 1 to each integer in the range ",a," to ",b," is: ",res)
Any idea ?
| [
"Could be something like this, imagine dividing the problem into smaller sub-problems\ndef recursive_sum(a, b, res=0):\n if a > b:\n return res\n\n temp = 1\n for j in range(1, a+1):\n temp = temp * j\n res = res + temp\n\n return recursive_sum(a+1, b, res)\n\na = int(input(\"Please enter the first number: \")) \nb = int(input(\"Please enter the second number: \")) \n\nres = recursive_sum(a, b)\nprint(f\"Sum of products from 1 to each integer in the range {a} to {b} is: {res}\") \n\n\n",
"a = int(input(\"Please enter the first number: \")) \nb = int(input(\"Please enter the second number: \")) \n\nres = sum([x for x in range(int(a), int(b)+1)])\n\nprint(res)\n\nMy question is why do you need it to be recursive?\nI ask that because this particular answer doesn't seem best approached through recursion.\nif you need to use recursion\ndef recursive_sum(m,n):\n if n <= m:\n return n\n else:\n return n+recursive_sum(m, n-1)\n\nEDIT:\nthe question asked included the sum of the factorial of a range, apologies for only including the sum portion.\ndef factorial(b):\n if (b==1 or b==0):\n return 1\n else:\n return (b * factorial(b - 1))\n\nres = sum(map(factorial, range(a,b+1)))\n\nThe original code you're multiplying each value in the range by the product of the previous and summing.\nrange(5,10) for instance\ntotal for the way the problem is written\nres comes to 4500198\nwhere the factorial sum = 4037880\nIf you print out just for value 5\nyou get 1+3+9+33+153 where !5=120\nWhich is the way you're trying to accomplish it?\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074649471_python.txt |
Q:
len() shows invalid syntax when defining a function
def prodListePos_rec(l, len(l)):
if (len(l)>0):
if l[len(l)-1] > 0:
product = prodListePos_rec(l,len(l)) * l[len(l)-1]
else:
product = 1
return product
l = [1,-2, 5, 0, 6,-5]
prodListePos_rec(l,len(l))
I don't get why it shows invalid syntax, and what should I do if I want to call len() inside a recursive function?
A:
Function parameters must be identifiers, so l is fine, but len(l) is not.
Although, l is a bad variable name since it looks like 1 and I; you could use lst instead.
More importantly, you don't actually need to pass the len() around. You can simply get it inside the function.
Here's a fixed up version of your code. I added variables for values that are used more than once.
def prodListePos_rec(lst):
n = len(lst)
if n > 0:
x = lst[n-1]
if x > 0:
product = prodListePos_rec(lst) * x
else:
product = 1
return product
lst = [1, -2, 5, 0, 6, -5]
prodListePos_rec(lst)
However, the code still doesn't work, because it's possible for n > 0 and x <= 0, so product never gets defined. (Even when x > 0 there is a second bug: the recursive call passes the same list, so it would recurse forever.) Error:
Traceback (most recent call last):
File "/home/wja/testdir/tmp.py", line 13, in <module>
prodListePos_rec(lst)
File "/home/wja/testdir/tmp.py", line 10, in prodListePos_rec
return product
UnboundLocalError: local variable 'product' referenced before assignment
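For reference, a minimal working recursive sketch, assuming the goal is the product of the positive elements (my reading of the original code; the name prod_pos is mine):
def prod_pos(lst):
    if not lst:  # base case: empty list
        return 1
    head = lst[0] if lst[0] > 0 else 1  # skip non-positive values
    return head * prod_pos(lst[1:])  # recurse on a strictly smaller list

prod_pos([1, -2, 5, 0, 6, -5])  # 1 * 5 * 6 = 30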
A:
When you define a function, you write the names of its parameters in parentheses.
l is a name of parameter. When you call a function, python will "replace" l with it's value to calculate result of function.
len(l) is not a name, it's an expression. So you can't use it as a name.
You can change your function to this
def prodListePos_rec(l, len_l):
if (len_l>0):
if l[len_l-1] > 0:
product = prodListePos_rec(l,len_l) * l[len_l-1]
else:
product = 1
return product
p = [1,-2, 5, 0, 6,-5]
prodListePos_rec(p,len(p))
len_l is the name of a parameter. I changed l to p in the function call, to not confuse variables with parameters.
Or simply you can do that
def prodListePos_rec(l):
if (len(l)>0):
if l[len(l)-1] > 0:
            product = prodListePos_rec(l) * l[len(l)-1]
else:
product = 1
return product
p = [1,-2, 5, 0, 6,-5]
prodListePos_rec(p)
In this case len(l) is an expression, that calctulates length of a given list.
A:
In def prodListePos_rec(l, len(l)), len(l) is a call to the inbuilt function that
finds the length of a list, string, etc.,
so it cannot be used as a parameter name in the function definition
def prodListePos_rec(l, len(l)):
Instead use:
def prodListePos_rec(l, len_of_l):
    product = 1
    if len_of_l > 0:
        if l[len_of_l - 1] > 0:
            product = prodListePos_rec(l, len_of_l - 1) * l[len_of_l - 1]
    return product

l = [1,-2, 5, 0, 6,-5]
print(prodListePos_rec(l, len(l)))
Remember: this is a recursive function, and product is initialized to 1 up front, so no else block is needed.
| len() shows invalid syntax when defining a function | def prodListePos_rec(l, len(l)):
if (len(l)>0):
if l[len(l)-1] > 0:
product = prodListePos_rec(l,len(l)) * l[len(l)-1]
else:
product = 1
return product
l = [1,-2, 5, 0, 6,-5]
prodListePos_rec(l,len(l))
I don't get why it shows invalid syntax, and what should I do if I want to call len() inside a recursive function?
| [
"Function parameters must be identifiers, so l is fine, but len(l) is not.\nAlthough, l is a bad variable name since it looks like 1 and I; you could use lst instead.\nMore importantly, you don't actually need to pass the len() around. You can simply get it inside the function.\nHere's a fixed up version of your code. I added variables for values that are used more than once.\ndef prodListePos_rec(lst):\n n = len(lst)\n if n > 0:\n x = lst[n-1]\n if x > 0:\n product = prodListePos_rec(lst) * x\n else:\n product = 1\n return product\n\nlst = [1, -2, 5, 0, 6, -5]\nprodListePos_rec(lst)\n\nHowever, the code still doesn't work because it's possible for n > 0 and x <= 0, so product never gets defined. Error:\nTraceback (most recent call last):\n File \"/home/wja/testdir/tmp.py\", line 13, in <module>\n prodListePos_rec(lst)\n File \"/home/wja/testdir/tmp.py\", line 10, in prodListePos_rec\n return product\nUnboundLocalError: local variable 'product' referenced before assignment\n\n",
"When you define function, you write names of parameters in parethesis.\nl is a name of parameter. When you call a function, python will \"replace\" l with it's value to calculate result of function.\nlen(l) is not a name, it's an expression. So you can't use it as a name.\nYou can change your function to this\ndef prodListePos_rec(l, len_l):\n if (len_l>0):\n if l[len_l-1] > 0:\n product = prodListePos_rec(l,len_l) * l[len_l-1]\n else:\n product = 1 \n return product\n\np = [1,-2, 5, 0, 6,-5]\nprodListePos_rec(p,len(p)) \n\nlen_l is a name of parameter. I changed l to p in function calling, to not confuse variables with parameters\nOr simply you can do that\ndef prodListePos_rec(l):\n if (len(l)>0):\n if l[len(l)-1] > 0:\n product = prodListePos_rec(l,len(l)) * l[len(l)-1]\n else:\n product = 1 \n return product\n\np = [1,-2, 5, 0, 6,-5]\nprodListePos_rec(p) \n\nIn this case len(l) is an expression, that calctulates length of a given list.\n",
"\ndef prodListePos_rec(l, len(l)): ->> len(l) is inbuilt function to\nfind length of list, string, etc.\n\nwhich cannot be used as variable to define the attribute of defined\n\nfunction --> def prodListePos_rec(l, len(l)):\n\ninstead use\ndef prodListePos_rec(l, len_of_l):\n product = 1\n if (len_of_l)>0):\n if l[len_of_l)-1] > 0:\n product = prodListePos_rec(l,len_of_l)) * l[len_of_l)-1] \n return product\n\nl = [1,-2, 5, 0, 6,-5]\nprint(prodListePos_rec(l,len(l)))\n\nRemember: this is a recursive function. So, It will not enter else block.\n"
] | [
2,
0,
0
] | [] | [] | [
"function",
"python",
"recursion"
] | stackoverflow_0074649522_function_python_recursion.txt |
Q:
Python TypeError: 'NoneType' object does not support item assignment
I am trying to get data from the Yealink Management Cloud Service via its API with the Python script below, but I get the error "TypeError: 'NoneType' object does not support item assignment".
How can I fix this issue? I'm using Python 3.10 to run the script below.
# -*- coding: utf-8 -*-
import hmac
import hashlib
import base64
import time
import uuid
import requests
from hashlib import md5
import json
from urllib3 import encode_multipart_formdata
requests.packages.urllib3.disable_warnings()
accesskeyId = "f8517322d648ffa7ff758ccd17"
accesskeySecret = "11e47ef340e698eec4d5ea3"
base_url = 'https://api-dm.yealink.com:8445/'
def doRequest(method, uri, query, body, isForm):
url = base_url + uri
if not isForm:
headers = buildHeaders(method, uri, query, body)
headers['Content-Type'] = "application/json;charset=UTF-8"
else:
headers = buildHeaders(method, uri, {}, {})
print(headers)
if method == 'GET' and len(query) > 0:
parameter =getQueries(query)
r = requests.get(url, headers = headers, params = parameter, verify=False)
print (r.text, r.status_code)
if method =='POST':
if not isForm:
jsonBody = json.dumps(body)
r = requests.post(url, headers = headers, data = jsonBody, verify=False)
print (r.text, r.status_code)
else:
encode_data = encode_multipart_formdata(body)
body = encode_data[0]
headers['Content-Type'] = encode_data[1]
r = requests.post(url, headers = headers, data = body, verify=False)
print (r.text, r.status_code)
def buildHeaders(method, uri, query, body):
headers ={}
if len(body) > 0:
headers['Content-MD5'] = str(base64.b64encode(md5(json.dumps(body).encode("utf-8")).digest()).decode("utf-8"))
headers['X-Ca-Key'] = str(accesskeyId)
headers['X-Ca-Nonce'] = str(''.join(str(uuid.uuid4()).split('-')))
headers['X-Ca-Timestamp'] = str(int(round(time.time() * 1000)))
headers['X-Ca-Signature'] = str(base64.b64encode(hmac.new(accesskeySecret.encode("utf-8"), sign(method, uri, query, body , headers).encode("utf-8"), digestmod=hashlib.sha256).digest()).decode('utf8'))
return headers
def getQueries(query):
formattedQueryString = ""
paramIndex = 0
for p in query:
formattedQueryString += str(p) +"="+ str(query[p])
paramIndex += 1
if paramIndex != len(query):
formattedQueryString += "&"
return formattedQueryString
def buildHeaderString(headers):
header_string = ""
for key,value in headers.items():
if len(value) == 0:
continue
if len(header_string) > 0:
header_string += "\n"
header_string+=key
header_string+=":"
header_string+=value
return header_string
def sign(method, uri, query, body , header):
header_string = buildHeaderString(header)
stringToSign = method + "\n"+ str(header_string) + "\n" + uri
if len(query) > 0:
stringToSign += ("\n" +getQueries(query))
return str(stringToSign)
# Get data
print("GET the Details:")
doRequest('GET','api/open/v1/manager/mvc/MVCInfo/get', {"id": "805061D10004477"},{},False)
After I run the script, I get the error below:
Traceback (most recent call last):
File "C:\Users\...\PythonScripts\test.py", line 105, in <module>
doRequest('GET','api/open/v1/manager/mvc/MVCInfo/get', {"id": "803061D100004477"},{},False)
File "C:\Users\...\PythonScripts\test.py", line 24, in doRequest
headers['Content-Type'] = "application/json;charset=UTF-8"
TypeError: 'NoneType' object does not support item assignment
Example of a successful response (screenshot in the original post).
A:
Check that you are not at any point passing 'None' to doRequest()'s body parameter. You are trying to reassign a value to an object which is 'None', and that is what is causing your problem.
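You can also make the failure explicit right where the traceback points. A defensive sketch (the check itself is mine, not part of the original code):
headers = buildHeaders(method, uri, query, body)
if headers is None:
    raise ValueError("buildHeaders returned None - check its return paths and inputs")
headers['Content-Type'] = "application/json;charset=UTF-8"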
| Python TypeError: 'NoneType' object does not support item assignment | I am trying to get data from the Yealink Management Cloud Service via its API with the Python script below, but I get the error "TypeError: 'NoneType' object does not support item assignment".
How can I fix this issue? I'm using Python 3.10 to run the script below.
# -*- coding: utf-8 -*-
import hmac
import hashlib
import base64
import time
import uuid
import requests
from hashlib import md5
import json
from urllib3 import encode_multipart_formdata
requests.packages.urllib3.disable_warnings()
accesskeyId = "f8517322d648ffa7ff758ccd17"
accesskeySecret = "11e47ef340e698eec4d5ea3"
base_url = 'https://api-dm.yealink.com:8445/'
def doRequest(method, uri, query, body, isForm):
url = base_url + uri
if not isForm:
headers = buildHeaders(method, uri, query, body)
headers['Content-Type'] = "application/json;charset=UTF-8"
else:
headers = buildHeaders(method, uri, {}, {})
print(headers)
if method == 'GET' and len(query) > 0:
parameter =getQueries(query)
r = requests.get(url, headers = headers, params = parameter, verify=False)
print (r.text, r.status_code)
if method =='POST':
if not isForm:
jsonBody = json.dumps(body)
r = requests.post(url, headers = headers, data = jsonBody, verify=False)
print (r.text, r.status_code)
else:
encode_data = encode_multipart_formdata(body)
body = encode_data[0]
headers['Content-Type'] = encode_data[1]
r = requests.post(url, headers = headers, data = body, verify=False)
print (r.text, r.status_code)
def buildHeaders(method, uri, query, body):
headers ={}
if len(body) > 0:
headers['Content-MD5'] = str(base64.b64encode(md5(json.dumps(body).encode("utf-8")).digest()).decode("utf-8"))
headers['X-Ca-Key'] = str(accesskeyId)
headers['X-Ca-Nonce'] = str(''.join(str(uuid.uuid4()).split('-')))
headers['X-Ca-Timestamp'] = str(int(round(time.time() * 1000)))
headers['X-Ca-Signature'] = str(base64.b64encode(hmac.new(accesskeySecret.encode("utf-8"), sign(method, uri, query, body , headers).encode("utf-8"), digestmod=hashlib.sha256).digest()).decode('utf8'))
return headers
def getQueries(query):
formattedQueryString = ""
paramIndex = 0
for p in query:
formattedQueryString += str(p) +"="+ str(query[p])
paramIndex += 1
if paramIndex != len(query):
formattedQueryString += "&"
return formattedQueryString
def buildHeaderString(headers):
header_string = ""
for key,value in headers.items():
if len(value) == 0:
continue
if len(header_string) > 0:
header_string += "\n"
header_string+=key
header_string+=":"
header_string+=value
return header_string
def sign(method, uri, query, body , header):
header_string = buildHeaderString(header)
stringToSign = method + "\n"+ str(header_string) + "\n" + uri
if len(query) > 0:
stringToSign += ("\n" +getQueries(query))
return str(stringToSign)
# Get data
print("GET the Details:")
doRequest('GET','api/open/v1/manager/mvc/MVCInfo/get', {"id": "805061D10004477"},{},False)
After I run the script, I get the error below:
Traceback (most recent call last):
File "C:\Users\...\PythonScripts\test.py", line 105, in <module>
doRequest('GET','api/open/v1/manager/mvc/MVCInfo/get', {"id": "803061D100004477"},{},False)
File "C:\Users\...\PythonScripts\test.py", line 24, in doRequest
headers['Content-Type'] = "application/json;charset=UTF-8"
TypeError: 'NoneType' object does not support item assignment
Example of a successful response (screenshot in the original post).
| [
"Check that you are not at any point passing 'None' to doRequest()'s body parameter. You are trying to reassign a value to an object which is 'None', and that is what is causing your problem.\n"
] | [
0
] | [] | [] | [
"api",
"python",
"python_requests"
] | stackoverflow_0074649788_api_python_python_requests.txt |
Q:
How to update existing json file with Python?
I have data.json and read it into a data variable.
f = open("data.json", "r")
data = np.array(json.loads(f.read()))
Output of 'data' is below:
[{
"symbol" : "NZDCHF",
"timeframe" : [
{"tf":"H4","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"H1","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M30","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M15","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M5","x1":0,"y1":0,"x2":0,"y2":0}
]
},
{
"symbol" : "AUDCHF",
"timeframe" : [
{"tf":"H4","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"H1","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M30","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M15","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M5","x1":0,"y1":0,"x2":0,"y2":0}
]
}]
I can find some existing value, e.g. 'AUDCHF'.
selected = np.array([el for el in data if el["symbol"] == 'AUDCHF'])
rect = selected[0]['timeframe'][4]
and the output of 'rect' from above is:
{"tf":"M5","x1":0,"y1":0,"x2":0,"y2":0}
I would like to update x1, y1, x2 and y2, then dump the result to the file. Any advice or guidance on this would be greatly appreciated. Thanks.
A:
you don't need numpy here:
with open("data.json", "r") as f:
data = json.load(f)
for item in data:
if item['symbol'] == 'AUDCHF':
# same for x2 and y2
item['timeframe'][4]['x1'] = 1
item['timeframe'][4]['y1'] = 1
break
else: # this will only trigger if the loop finished without the break being called
print("symbol 'AUDCHF' not found)
exit(0)
with open("data_edited.json", "w") as f:
json.dump(data, f)
| How to update existing json file with Python? | I have data.json and read it into a data variable.
f = open("data.json", "r")
data = np.array(json.loads(f.read()))
Output of 'data' is below:
[{
"symbol" : "NZDCHF",
"timeframe" : [
{"tf":"H4","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"H1","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M30","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M15","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M5","x1":0,"y1":0,"x2":0,"y2":0}
]
},
{
"symbol" : "AUDCHF",
"timeframe" : [
{"tf":"H4","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"H1","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M30","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M15","x1":0,"y1":0,"x2":0,"y2":0},
{"tf":"M5","x1":0,"y1":0,"x2":0,"y2":0}
]
}]
I can find some existing value, e.g. 'AUDCHF'.
selected = np.array([el for el in data if el["symbol"] == 'AUDCHF'])
rect = selected[0]['timeframe'][4]
and the output of 'rect' from above is:
{"tf":"M5","x1":0,"y1":0,"x2":0,"y2":0}
I would like to update x1, y1, x2 and y2, then dump the result to the file. Any advice or guidance on this would be greatly appreciated. Thanks.
| [
"you don't need numpy here:\nwith open(\"data.json\", \"r\") as f:\n data = json.load(f)\nfor item in data:\n if item['symbol'] == 'AUDCHF':\n # same for x2 and y2\n item['timeframe'][4]['x1'] = 1\n item['timeframe'][4]['y1'] = 1\n break\nelse: # this will only trigger if the loop finished without the break being called\n print(\"symbol 'AUDCHF' not found)\n exit(0)\nwith open(\"data_edited.json\", \"w\") as f:\n json.dump(data, f)\n\n"
] | [
1
] | [] | [] | [
"json",
"python"
] | stackoverflow_0074649746_json_python.txt |
Q:
How to find & show a specific coordinate from a plot?
I have made a plot using plt.plot(xdata,ydata)
I would like to find the x-coordinate at which the y-coordinate = 125.94937644205281 + 1 (chi-squared + 1) on the plot.
I would also like to show that coordinate on the plot.
Is there any method to do that?
Example plot
I have tried a root-locating method, but it is taking ages to find the two parameters that give chi-squared + 1.
If matplotlib does not have any method for finding a specific point on the graph, I think I should use a solver.
But I do not know which solver to use or how to use one...
target = chisq_min + 1
def target_chisq_plus1 (gradient):
return chisq([gradient], xval, yval, yerr) - target
# Locating Roots
initials = [(a_soln, 0.0121320), (0.0122193, a_soln)] # <- Initial Boundaries for the two parameters causing chisquared + 1
tolerance = 0.01
chisquared_plus_one_solutions = []
while True:
for initial in initials:
if target_chisq_plus1(initial[0]) > 0:
xp = initial[0]
xn = initial[1]
else:
xp = initial[1]
xn = initial[0]
next_approximation = (xp + xn) / 2
if target_chisq_plus1(next_approximation) > 0:
xp = next_approximation
else:
xn = next_approximation
if abs(target_chisq_plus1(next_approximation)) < tolerance:
            chisquared_plus_one_solutions.append((next_approximation, chisq([next_approximation], xval, yval, yerr)))
break
But it seems i am trapped in an infinite loop.
A:
You are in an infinite loop because you never break out of the while True loop. The break statement only breaks out of one loop at a time. See also How to break out of nested loops in python?
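A minimal illustration of the usual fix: wrap the search in a function and return, which exits every enclosing loop at once (a generic sketch, not the question's code):
def first_match(grid, target):
    for row in grid:
        for value in row:
            if value == target:
                return value  # return exits both loops immediately
    return None

first_match([[1, 2], [3, 4]], 3)  # -> 3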
| How to find & show a specific coordinate from a plot? | I have made a plot using plt.plot(xdata,ydata)
I would like to find the x-coordinate at which the y-coordinate = 125.94937644205281 + 1 (chi-squared + 1) on the plot.
I would also like to show that coordinate on the plot.
Is there any method to do that?
Example plot
I have tried a root-locating method, but it is taking ages to find the two parameters that give chi-squared + 1.
If matplotlib does not have any method for finding a specific point on the graph, I think I should use a solver.
But I do not know which solver to use or how to use one...
target = chisq_min + 1
def target_chisq_plus1 (gradient):
return chisq([gradient], xval, yval, yerr) - target
# Locating Roots
initials = [(a_soln, 0.0121320), (0.0122193, a_soln)] # <- Initial Boundaries for the two parameters causing chisquared + 1
tolerance = 0.01
chisquared_plus_one_solutions = []
while True:
for initial in initials:
if target_chisq_plus1(initial[0]) > 0:
xp = initial[0]
xn = initial[1]
else:
xp = initial[1]
xn = initial[0]
next_approximation = (xp + xn) / 2
if target_chisq_plus1(next_approximation) > 0:
xp = next_approximation
else:
xn = next_approximation
if abs(target_chisq_plus1(next_approximation)) < tolerance:
            chisquared_plus_one_solutions.append((next_approximation, chisq([next_approximation], xval, yval, yerr)))
break
But it seems i am trapped in an infinite loop.
| [
"You are in an infinite loop because you never break out of the while True loop. The break statement only breaks out of one loop at a time. See also How to break out of nested loops in python?\n"
] | [
0
] | [] | [] | [
"matplotlib",
"physics",
"python",
"statistics"
] | stackoverflow_0074649825_matplotlib_physics_python_statistics.txt |
Q:
Python: String to CamelCase
This is a question from Codewars:
Complete the method/function so that it converts dash/underscore delimited words into camel casing. The first word within the output should be capitalized only if the original word was capitalized (known as Upper Camel Case, also often referred to as Pascal case).
The input test cases are as follows:
test.describe("Testing function to_camel_case")
test.it("Basic tests")
test.assert_equals(to_camel_case(''), '', "An empty string was provided but not returned")
test.assert_equals(to_camel_case("the_stealth_warrior"), "theStealthWarrior", "to_camel_case('the_stealth_warrior') did not return correct value")
test.assert_equals(to_camel_case("The-Stealth-Warrior"), "TheStealthWarrior", "to_camel_case('The-Stealth-Warrior') did not return correct value")
test.assert_equals(to_camel_case("A-B-C"), "ABC", "to_camel_case('A-B-C') did not return correct value")
This is what I've tried so far:
def to_camel_case(text):
str=text
str=str.replace(' ','')
for i in str:
if ( str[i] == '-'):
str[i]=str.replace('-','')
str[i+1].toUpperCase()
elif ( str[i] == '_'):
str[i]=str.replace('-','')
str[i+1].toUpperCase()
return str
It passes the first two tests but not the main ones:
test.assert_equals(to_camel_case("the_stealth_warrior"), "theStealthWarrior", "to_camel_case('the_stealth_warrior') did not return correct value")
test.assert_equals(to_camel_case("The-Stealth-Warrior"), "TheStealthWarrior", "to_camel_case('The-Stealth-Warrior') did not return correct value")
test.assert_equals(to_camel_case("A-B-C"), "ABC", "to_camel_case('A-B-C') did not return correct value")
What am I doing wrong?
A:
You may have a working implementation with slight errors as mentioned in your comments, but I propose that you:
split by the delimiters
apply a capitalization for all but the first of the tokens
rejoin the tokens
My implementation is:
def to_camel_case(text):
s = text.replace("-", " ").replace("_", " ")
s = s.split()
if len(text) == 0:
return text
return s[0] + ''.join(i.capitalize() for i in s[1:])
IMO it makes a bit more sense.
The output from running tests is:
>>> to_camel_case("the_stealth_warrior")
'theStealthWarrior'
>>> to_camel_case("The-Stealth-Warrior")
'TheStealthWarrior'
>>> to_camel_case("A-B-C")
'ABC'
A:
from re import sub
def to_camelcase(s):
s = sub(r"(_|-)+", " ", s).title().replace(" ", "").replace("*","")
return ''.join([s[0].lower(), s[1:]])
print(to_camelcase('some_string_with_underscore'))
print(to_camelcase('Some string with Spaces'))
print(to_camelcase('some-string-with-dashes'))
print(to_camelcase('some string-with dashes_underscores and spaces'))
print(to_camelcase('some*string*with*asterisks'))
A:
I think first of all you should change the syntax a little, because 'i' is a string, not an integer. It should be:
for i in str:
if (i == '-'):
...
A:
This is my simple way:
def to_camel_case(text):
s = text.replace("-", " ").replace("_", " ")
s = s.split()
if len(text) == 0:
return text
return s[0] + ' '.join(s[1:]).title().replace(" ", "")
A:
Potential solution/package that supports pascal/snake conversion to camel.
# pip install camelCasing
from camelCasing import camelCasing as cc
for s in ['the_stealth_warrior', 'The-Stealth-Warrior', 'A-B-C']:
print(cc.toCamelCase(s=s, user_acronyms=None))
theStealthWarrior
theStealthWarrior
ABC
| Python: String to CamelCase | This is a question from Codewars:
Complete the method/function so that it converts dash/underscore delimited words into camel casing. The first word within the output should be capitalized only if the original word was capitalized (known as Upper Camel Case, also often referred to as Pascal case).
The input test cases are as follows:
test.describe("Testing function to_camel_case")
test.it("Basic tests")
test.assert_equals(to_camel_case(''), '', "An empty string was provided but not returned")
test.assert_equals(to_camel_case("the_stealth_warrior"), "theStealthWarrior", "to_camel_case('the_stealth_warrior') did not return correct value")
test.assert_equals(to_camel_case("The-Stealth-Warrior"), "TheStealthWarrior", "to_camel_case('The-Stealth-Warrior') did not return correct value")
test.assert_equals(to_camel_case("A-B-C"), "ABC", "to_camel_case('A-B-C') did not return correct value")
This is what I've tried so far:
def to_camel_case(text):
str=text
str=str.replace(' ','')
for i in str:
if ( str[i] == '-'):
str[i]=str.replace('-','')
str[i+1].toUpperCase()
elif ( str[i] == '_'):
str[i]=str.replace('-','')
str[i+1].toUpperCase()
return str
It passes the first two tests but not the main ones :
test.assert_equals(to_camel_case("the_stealth_warrior"), "theStealthWarrior", "to_camel_case('the_stealth_warrior') did not return correct value")
test.assert_equals(to_camel_case("The-Stealth-Warrior"), "TheStealthWarrior", "to_camel_case('The-Stealth-Warrior') did not return correct value")
test.assert_equals(to_camel_case("A-B-C"), "ABC", "to_camel_case('A-B-C') did not return correct value")
What am I doing wrong?
| [
"You may have a working implementation with slight errors as mentioned in your comments, but I propose that you:\n\nsplit by the delimiters\napply a capitalization for all but the first of the tokens\nrejoin the tokens\n\nMy implementation is:\ndef to_camel_case(text):\n s = text.replace(\"-\", \" \").replace(\"_\", \" \")\n s = s.split()\n if len(text) == 0:\n return text\n return s[0] + ''.join(i.capitalize() for i in s[1:])\n\nIMO it makes a bit more sense.\nThe output from running tests is:\n>>> to_camel_case(\"the_stealth_warrior\")\n'theStealthWarrior'\n>>> to_camel_case(\"The-Stealth-Warrior\")\n'TheStealthWarrior'\n>>> to_camel_case(\"A-B-C\")\n'ABC'\n\n",
"from re import sub\n\ndef to_camelcase(s):\n s = sub(r\"(_|-)+\", \" \", s).title().replace(\" \", \"\").replace(\"*\",\"\")\n return ''.join([s[0].lower(), s[1:]])\n\nprint(to_camelcase('some_string_with_underscore'))\nprint(to_camelcase('Some string with Spaces'))\nprint(to_camelcase('some-string-with-dashes'))\nprint(to_camelcase('some string-with dashes_underscores and spaces'))\nprint(to_camelcase('some*string*with*asterisks'))\n\n",
"I think first of you should change the syntax a little because 'i' is a string not an integer. It should be\nfor i in str:\n if (i == '-'):\n ...\n\n",
"this is my simple way\ndef to_camel_case(text):\n #your code herdlfldfde\n s = text.replace(\"-\", \" \").replace(\"_\", \" \")\n s = s.split()\n if len(text) == 0:\n return text\n return s[0] + ' '.join(s[1:]).title().replace(\" \", \"\")\n\n",
"Potential solution/package that supports pascal/snake conversion to camel.\n# pip install camelCasing\nfrom camelCasing import camelCasing as cc\n\nfor s in ['the_stealth_warrior', 'The-Stealth-Warrior', 'A-B-C']:\n print(cc.toCamelCase(s=s, user_acronyms=None))\n\ntheStealthWarrior\ntheStealthWarrior\nABC\n\n"
] | [
13,
4,
1,
1,
0
] | [] | [] | [
"python",
"string"
] | stackoverflow_0060978672_python_string.txt |
Q:
Expand pandas dataframe column of dict into dataframe columns
I have a Pandas DataFrame where one column is a Series of dicts, like this:
colA colB colC
0 7 7 {'foo': 185, 'bar': 182, 'baz': 148}
1 2 8 {'foo': 117, 'bar': 103, 'baz': 155}
2 5 10 {'foo': 165, 'bar': 184, 'baz': 170}
3 3 2 {'foo': 121, 'bar': 151, 'baz': 187}
4 5 5 {'foo': 137, 'bar': 199, 'baz': 108}
I want the foo, bar and baz key-value pairs from the dicts to be columns in my dataframe, such that I end up with this:
colA colB foo bar baz
0 7 7 185 182 148
1 2 8 117 103 155
2 5 10 165 184 170
3 3 2 121 151 187
4 5 5 137 199 108
How do I do that?
A:
TL;DR
Based on Carlos Horn's comment, pd.json_normalize is perfect for this:
df_fixed = df.join(pd.json_normalize(df['colC'])).drop('colC', axis='columns')
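A side benefit of json_normalize over the list-based approach below: it also flattens nested dicts into dotted column names. A quick sketch with a hypothetical nested key:
import pandas as pd

print(pd.json_normalize([{'foo': 1, 'extra': {'bar': 2}}]))
#    foo  extra.bar
# 0    1          2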
Old answer
df = df.drop('colC', axis=1).join(pd.DataFrame(df.colC.values.tolist()))
Elaborate (old) answer
We start by defining the DataFrame to work with, as well as importing Pandas:
import pandas as pd
df = pd.DataFrame(
{
'colA': {0: 7, 1: 2, 2: 5, 3: 3, 4: 5},
'colB': {0: 7, 1: 8, 2: 10, 3: 2, 4: 5},
'colC': {
0: {'foo': 185, 'bar': 182, 'baz': 148},
1: {'foo': 117, 'bar': 103, 'baz': 155},
2: {'foo': 165, 'bar': 184, 'baz': 170},
3: {'foo': 121, 'bar': 151, 'baz': 187},
4: {'foo': 137, 'bar': 199, 'baz': 108},
},
}
)
The column colC is a pd.Series of dicts, and we can turn it into a pd.DataFrame by building one from the list of dicts:
pd.DataFrame(df.colC.values.tolist())
# df.colC.apply(pd.Series)  # this also works, but it is slow
which gives the pd.DataFrame:
   foo  bar  baz
0  185  182  148
1  117  103  155
2  165  184  170
3  121  151  187
4  137  199  108
So all we need to do is:
Turn colC into a pd.DataFrame
Delete the original colC from df
Join the converted colC with df
That can be done in a one-liner:
df = df.drop('colC', axis=1).join(pd.DataFrame(df.colC.values.tolist()))
With the contents of df now being the pd.DataFrame:
   colA  colB  foo  bar  baz
0     7     7  185  182  148
1     2     8  117  103  155
2     5    10  165  184  170
3     3     2  121  151  187
4     5     5  137  199  108
A:
I faced the same challenge recently and I managed to do it manually using apply and join.
import pandas as pd
def expand_dict_column(df: pd.DataFrame, column) -> pd.DataFrame:
    return df.drop(columns=[column], inplace=False).join(
df.apply(lambda x: pd.Series(x[column].values(), index=x[column].keys()), axis=1))
In the case of the columns of the question it would look like this:
df.drop(columns=["colC"], inplace=False).join(
df.apply(lambda x: pd.Series(x["colC"].values(), index=x["colC"].keys()), axis=1))
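As a side note, pd.Series(x[column]) alone would produce the same result here, since constructing a Series from a dict already uses the dict's keys as the index.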
| Expand pandas dataframe column of dict into dataframe columns | I have a Pandas DataFrame where one column is a Series of dicts, like this:
colA colB colC
0 7 7 {'foo': 185, 'bar': 182, 'baz': 148}
1 2 8 {'foo': 117, 'bar': 103, 'baz': 155}
2 5 10 {'foo': 165, 'bar': 184, 'baz': 170}
3 3 2 {'foo': 121, 'bar': 151, 'baz': 187}
4 5 5 {'foo': 137, 'bar': 199, 'baz': 108}
I want the foo, bar and baz key-value pairs from the dicts to be columns in my dataframe, such that I end up with this:
colA colB foo bar baz
0 7 7 185 182 148
1 2 8 117 103 155
2 5 10 165 184 170
3 3 2 121 151 187
4 5 5 137 199 108
How do I do that?
| [
"TL;DR\nBased on Carlos Horn's comment pd.json_normalize are perfect for this:\ndf_fixed = df.join(pd.json_normalize(df['colC'])).drop('colC', axis='columns')\n\nOld answer\ndf = df.drop('colC', axis=1).join(pd.DataFrame(df.colC.values.tolist()))\n\nElaborate (old) answer\nWe start by defining the DataFrame to work with, as well as importing Pandas:\nimport pandas as pd\n\n\ndf = pd.DataFrame(\n {\n 'colA': {0: 7, 1: 2, 2: 5, 3: 3, 4: 5},\n 'colB': {0: 7, 1: 8, 2: 10, 3: 2, 4: 5},\n 'colC': {\n 0: {'foo': 185, 'bar': 182, 'baz': 148},\n 1: {'foo': 117, 'bar': 103, 'baz': 155},\n 2: {'foo': 165, 'bar': 184, 'baz': 170},\n 3: {'foo': 121, 'bar': 151, 'baz': 187},\n 4: {'foo': 137, 'bar': 199, 'baz': 108},\n },\n }\n)\n\nThe column colC is a pd.Series of dicts, and we can turn it into a pd.DataFrame by turning each dict into a pd.Series:\npd.DataFrame(df.colC.values.tolist())\n# df.colC.apply(pd.Series). # this also works, but it is slow\n\nwhich gives the pd.DataFrame:\n foo bar baz\n0 154 190 171\n1 152 130 164\n2 165 125 109\n3 153 128 174\n4 135 157 188\n\nSo all we need to do is:\n\nTurn colC into a pd.DataFrame\nDelete the original colC from df\nJoin the convert colC with df\n\nThat can be done in a one-liner:\ndf = df.drop('colC', axis=1).join(pd.DataFrame(df.colC.values.tolist()))\n\nWith the contents of df now being the pd.DataFrame:\n colA colB foo bar baz\n0 2 4 154 190 171\n1 4 10 152 130 164\n2 4 10 165 125 109\n3 3 8 153 128 174\n4 10 9 135 157 188\n\n",
"I faced the same challenge recently and I managed to do it manually using apply and join.\nimport pandas as pd\n\ndef expand_dict_column(df: pd.DataFrame, column) -> pd.DataFrame:\n df.drop(columns=[column], inplace=False).join(\n df.apply(lambda x: pd.Series(x[column].values(), index=x[column].keys()), axis=1))\n\nIn the case of the columns of the question it would look like this:\ndf.drop(columns=[\"colC\"], inplace=False).join(\n df.apply(lambda x: pd.Series(x[\"colC\"].values(), index=x[\"colC\"].keys()), axis=1))\n\n"
] | [
20,
0
] | [] | [] | [
"dataframe",
"dictionary",
"pandas",
"python",
"series"
] | stackoverflow_0054344114_dataframe_dictionary_pandas_python_series.txt |
Q:
Python Move between rooms Key Error message
rooms = {
'Great Hall': {'South': 'Bedroom'},
'Bedroom': {'North': 'Great Hall', 'East': 'Cellar'},
'Cellar': {'West': 'Bedroom'}
}
current_room = 'Great Hall'
user_move = ''
directions = ['North', 'South', 'East', 'West']
while user_move != 'exit':
print("You are in the", current_room)
user_move = input("Choose a direction ")
current_room = rooms[current_room][user_move]
if user_move in current_room:
print(current_room)
else:
print("Invalid move. Try again")
Hi all,
I am an extreme newbie to python and am having an issue with my if/else statement. I get a KeyError when I purposely put in an invalid direction to see the output. I am sure I am almost there but am brain dead at this point trying to figure it out.
Thanks!
A:
You need to verify that the chosen direction is valid before moving rooms.
while user_move != 'exit':
print("You are in the", current_room)
user_move = input("Choose a direction ")
# is the move a valid choice?
if user_move in rooms[current_room]:
# yes it is valid, so move there
current_room = rooms[current_room][user_move]
print(current_room)
else:
# no, it was not a valid move
print("Invalid move. Try again")
Your code was moving rooms before checking if the direction was valid.
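An equivalent sketch using dict.get(), which returns None for a missing key instead of raising the KeyError:
    # inside the while loop
    next_room = rooms[current_room].get(user_move)
    if next_room is not None:
        current_room = next_room
        print(current_room)
    else:
        print("Invalid move. Try again")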
| Python Move between rooms Key Error message | rooms = {
'Great Hall': {'South': 'Bedroom'},
'Bedroom': {'North': 'Great Hall', 'East': 'Cellar'},
'Cellar': {'West': 'Bedroom'}
}
current_room = 'Great Hall'
user_move = ''
directions = ['North', 'South', 'East', 'West']
while user_move != 'exit':
print("You are in the", current_room)
user_move = input("Choose a direction ")
current_room = rooms[current_room][user_move]
if user_move in current_room:
print(current_room)
else:
print("Invalid move. Try again")
Hi all,
I am an extreme newbie to python and am having an issue with my if/else statement. I get a KeyError when I purposely put in an invalid direction to see the output. I am sure I am almost there but am brain dead at this point trying to figure it out.
Thanks!
| [
"You need to verify that the chosen direction is valid before moving rooms.\nwhile user_move != 'exit':\n print(\"You are in the\", current_room)\n user_move = input(\"Choose a direction \")\n\n # is the move a valid choice?\n if user_move in rooms[current_room]:\n # yes it is valid, so move there\n current_room = rooms[current_room][user_move]\n print(current_room)\n else:\n # no, it was not a valid move\n print(\"Invalid move. Try again\")\n\nYour code was moving rooms before checking if the direction was valid.\n"
] | [
2
] | [] | [] | [
"if_statement",
"python"
] | stackoverflow_0074649771_if_statement_python.txt |
Q:
Python NumPy log2 - How to make it a negative log?
I just started working with NumPy because I want to use its log functions. I am trying to compute -log2(74/571) but can only see how to do log2(74/571), which outputs a negative value when it should be positive. I've read the docs but don't see how to make it a negative log.
How can I fix this?
print(np.log2(74/571))
Output
-2.947893569733893
Output I want
2.947893569733893
Tried searching through NumPy Docs
A:
As I posted in my comment, if you always want a positive value for your logarithm, you can use an absolute value:
np.abs(np.log2(74/571))
# 2.948
Also, as suggested above, you don't need the NumPy library if you only want to use logarithms. You can accomplish the same with the math standard module and even with built-in functions:
import math
abs(math.log2(74/571))
# 2.948
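As a side note, since -log2(x) == log2(1/x), inverting the fraction is another way to get the positive value:
import math

math.log2(571/74)
# 2.948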
A:
print(-np.log2(74/571))
2.947893569733893
if I correctly understand what you want.
| Python NumPy log2 - How to make it a negative log? | I just started working with Numpy because I want to use their log method. I am trying to do -log2(79/859) but can only see how to do log2(74/571) which outputs a negative value when it should be positive. Read the Doc but don't see how to make it a negative log?
How can I fix this?
print(np.log2(79/859))
Output
-2.947893569733893
Output I want
2.947893569733893
Tried searching through NumPy Docs
| [
"As I posted in my comment, if you always want a positive value for your logarithm, you can use an absolute value:\nnp.abs(np.log2(74/571))\n# 2.948\n\nAlso, as suggested above, you don't need the NumPy library if you only want to use logarithms. You can accomplish the same with the math standard module and even with built-in functions:\nimport math\n\nabs(math.log2(74/571))\n# 2.948\n\n",
"print(-np.log2(74/571))\n\n\n2.947893569733893\n\nif I correctly understand what you want.\n"
] | [
1,
0
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074649757_numpy_python.txt |
Q:
ipython: get the result of `??` (double question mark) magic command as string
The IPython builtin help system says:
Within IPython you have various way to access help:
? -> Introduction and overview of IPython's features (this screen).
object? -> Details about 'object'.
object?? -> More detailed, verbose information about 'object'.
The double question mark magic command (??) thereby prints the type, the docstring and, if available, also the source code of the respective object. I find this information really useful and I want it inside a str-variable (instead of printed directly).
I know that obj.__doc__ gives the docstring, but I guess there is a convenient way to get all the information that ?? produces at once. I'm looking for something like:
# pseudo code
from IPython import magic
report = magic.double_question_mark(obj)
Is this possible? If so, how?
A:
You can use the "pinfo2" magic; see https://ipython.readthedocs.io/en/stable/interactive/magics.html
for example
def test(a, b):
import numpy as np
cds = data.range(1000)
cds = cds.random_shuffle()
a = np.array([a])
return a, b
from IPython import get_ipython
ipython = get_ipython()
ipython.run_line_magic("pinfo2", "test")
it will print:
Signature: test(a, b)
Docstring: <no docstring>
Source:
def test(a, b):
import numpy as np
cds = data.range(1000)
cds = cds.random_shuffle()
a = np.array([a])
return a, b
File: ~/PycharmProjects/aib/<ipython-input-14-4adc6bbd758b>
Type: function
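If the goal is specifically to get that text into a str variable rather than printed output, a minimal sketch using the standard-library inspect module assembles most of the same pieces (signature, docstring, source):
import inspect

def build_report(obj):
    return (f"Signature: {inspect.signature(obj)}\n"
            f"Docstring: {inspect.getdoc(obj)}\n"
            f"Source:\n{inspect.getsource(obj)}")

report = build_report(test)  # a plain str instead of printed output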
| ipython: get the result of `??` (double question mark) magic command as string | The IPython builtin help system says:
Within IPython you have various way to access help:
? -> Introduction and overview of IPython's features (this screen).
object? -> Details about 'object'.
object?? -> More detailed, verbose information about 'object'.
The double question mark magic command (??) thereby prints the type, the docstring and, if available, also the source code of the respective object. I find this information really useful and I want it inside a str-variable (instead of printed directly).
I know thath obj.__doc__ gives the docstring but I guess there is a convenient way to get all the information that ?? produces at once. I look for something like:
# pseudo code
from IPython import magic
report = magic.double_question_mark(obj)
Is this possible? If so, how?
| [
"you can use \"pinfo2\", https://ipython.readthedocs.io/en/stable/interactive/magics.html\nfor example\ndef test(a, b):\n import numpy as np\n cds = data.range(1000)\n cds = cds.random_shuffle()\n a = np.array([a])\n return a, b\n\n\nfrom IPython import get_ipython\nipython = get_ipython()\nipython.run_line_magic(\"pinfo2\", \"test\")\n\nit will print:\nSignature: test(a, b)\nDocstring: <no docstring>\nSource: \ndef test(a, b):\n import numpy as np\n cds = data.range(1000)\n cds = cds.random_shuffle()\n a = np.array([a])\n return a, b\nFile: ~/PycharmProjects/aib/<ipython-input-14-4adc6bbd758b>\nType: function\n\n"
] | [
0
] | [] | [] | [
"ipython",
"python"
] | stackoverflow_0070833723_ipython_python.txt |
Q:
AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel' for PyQt5 5.15.0
A program I am trying to install requires the installation of PyQt5 5.15.0, which gives me this error. The odd thing is that the installation works fine for the latest version of PyQt5 (5.15.2), but this program requires 5.15.0 specifically.
Command Output:
Collecting PyQt5==5.15.0
Using cached PyQt5-5.15.0.tar.gz (3.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\mshal\appdata\local\programs\python\python39\python.exe' 'C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\mshal\AppData\Local\Temp\tmp41s11ev6'
cwd: C:\Users\mshal\AppData\Local\Temp\pip-install-sfw90hvc\pyqt5_e2cc46859b554da7b84798abae5378ba
Complete output (31 lines):
Traceback (most recent call last):
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 126, in prepare_metadata_for_build_wheel
hook = backend.prepare_metadata_for_build_wheel
AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 280, in <module>
main()
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 130, in prepare_metadata_for_build_wheel
return _get_wheel_metadata_from_wheel(backend, metadata_directory,
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 159, in _get_wheel_metadata_from_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\sipbuild\api.py", line 51, in build_wheel
project = AbstractProject.bootstrap('pep517')
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\sipbuild\abstract_project.py", line 83, in bootstrap
project.setup(pyproject, tool, tool_description)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\sipbuild\project.py", line 479, in setup
self.apply_user_defaults(tool)
File "project.py", line 62, in apply_user_defaults
super().apply_user_defaults(tool)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\pyqtbuild\project.py", line 79, in apply_user_defaults
super().apply_user_defaults(tool)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\sipbuild\project.py", line 225, in apply_user_defaults
self.builder.apply_user_defaults(tool)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\pyqtbuild\builder.py", line 66, in apply_user_defaults
raise PyProjectOptionException('qmake',
sipbuild.pyproject.PyProjectOptionException
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\mshal\appdata\local\programs\python\python39\python.exe' 'C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\mshal\AppData\Local\Temp\tmp41s11ev6' Check the logs for full command output.
I am on the latest version of pip. Any ideas on the root cause of this issue?
A:
What helped me is upgrading pip from 20.2.3 to the latest one (in my case 21.1.1)
A:
For Mac/Homebrew users.
The answer https://stackoverflow.com/a/72046110/5327611 points in the right direction. On a Mac with Qt5 installed via Homebrew, the qmake binary just needs to be added to the path. This can be achieved through
export PATH="/opt/homebrew/opt/qt5/bin:$PATH"
(of course depending on where the homebrew files are installed)
A:
Running on arm with python3.6 (ubuntu18 on nvidia Xavier):
sudo apt install qt5-default
A:
For MacOS users.
I am on Apple M1 silicon using Python 3.9.8. What worked for me was @Apaul's comment in the original question section. Install pyqt5-sip prior to pyqt5.
I also have an Intel Mac and on that machine, I do not need to do this.
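A sketch of that install order (both package names as published on PyPI):
pip install pyqt5-sip
pip install pyqt5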
A:
Checking the binaries that PyQt5 provides on PyPI for version 5.15.0, I see that it does not provide binaries for Python 3.9 on Windows, so pip is trying to compile from source, which is complicated and can generate several dependency problems (for example, you must have Qt 5.15 installed, etc.). So my recommendation is to install a more recent version of PyQt5, for example 5.15.2, since it provides the binaries for Python 3.9 on Windows and, being a wrapper of an LTS version of Qt, has several bugs fixed:
python -m pip install PyQt5==5.15.2

Another solution is to use Python 3.8 instead of Python 3.9, so that you can install PyQt5 5.15.0 from PyPI without problems.
A:
Upgrading your pip enables you to install PyQt5. Personally, I had the same issue while installing PyQt6 and I upgraded my pip, and everything installed perfectly. I think both python and pip versions play an important role in installing PyQt so make sure you have later versions.
This is the command I used in Linux:
pip install --upgrade pip
A:
Since qt5-default was not available, I installed qt5-default's dependencies
sudo apt-get install qtbase5-dev qtchooser qt5-qmake qtbase5-dev-tools
after that I installed pyqt5 via apt-get first and afterwards via pip
sudo apt-get install pyqt5-dev
pip install pyqt5
now wheel seems to work
side-note:
I am not sure if sudo apt-get install pyqt5-dev is even necessary
A:
The error message thrown here is misleading: it's not really an issue with a sipbuild.api attribute. In this case the program qmake is missing; see the last line of the Python traceback. Check whether it's installed on your system and add it to your PATH variable; otherwise, install it. On Linux this would be done with
sudo apt-get install qt5-qmake
A:
I had this problem on my M1 Mac using Python 3.9.12 when I was trying to install a library: pip install pixellib.
The first thing I did was: pip install pixellib --verbose to see the whole log, and there I noticed that PyQt5 was waiting for an input. So then I found someone else with that issue, and used pip install pyqt5 --config-settings --confirm-license= --verbose which took some time to compile, but worked!
A:
I could not get any of the above solutions to work but I managed to get it working using python3.9, PyQt5=5.15.2, pip=22.0.2 and sip=6.5.0 by using sudo apt-get install PyQt5. If you need it in a virtual environment, you can manually copy the PyQt5 folder from your default /usr/lib/python3/dist-packages to the site-packages folder in your virtual environment.
A:
To all those that are struggling with Apple M1 installation, here is a working solution, specifically addressing the problem of installing the pixellib library that depends on PyQt5 but you can apply it equally to other libs:
PyQt5 is not supported on Apple M1, it needs qt6: https://www.reddit.com/r/learnpython/comments/o4w1ut/comment/h2jele3/?utm_source=share&utm_medium=web2x&context=3 , https://www.qt.io/product/qt6
this means you need to install PyQt6: python3 -m pip install PyQt6
go to the lib you need, in my case pixellib: https://pypi.org/project/pixellib/#files and
download the wheel file
get the wheel tool: pip install wheel
unpack the wheel wheel unpack pixellib-0.7.1-py3-none-any.whl
Change its dependency of PyQt5 to PyQt6
edit pixellib-0.7.1/pixellib-0.7.1.dist-info/METADATA
pyQt5 => pyQt6
pack it back wheel pack pixellib-0.7.1
install it: pip install pixellib-0.7.1-py3-none-any.whl
test in python:
# should work
import pixellib
P.S. thanks to Terra and ChaOS for supporting work on the project underlying this report.
| AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel' for PyQt5 5.15.0 | A program I am trying to install requires the installation of PyQt5 5.15.0 , which gives me this error. The odd thing is that the installation works fine for the latest version of PyQt5 (5.15.2), but this program requires 5.15.0 specifically.
Command Output:
Collecting PyQt5==5.15.0
Using cached PyQt5-5.15.0.tar.gz (3.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\mshal\appdata\local\programs\python\python39\python.exe' 'C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\mshal\AppData\Local\Temp\tmp41s11ev6'
cwd: C:\Users\mshal\AppData\Local\Temp\pip-install-sfw90hvc\pyqt5_e2cc46859b554da7b84798abae5378ba
Complete output (31 lines):
Traceback (most recent call last):
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 126, in prepare_metadata_for_build_wheel
hook = backend.prepare_metadata_for_build_wheel
AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 280, in <module>
main()
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 130, in prepare_metadata_for_build_wheel
return _get_wheel_metadata_from_wheel(backend, metadata_directory,
File "C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py", line 159, in _get_wheel_metadata_from_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\sipbuild\api.py", line 51, in build_wheel
project = AbstractProject.bootstrap('pep517')
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\sipbuild\abstract_project.py", line 83, in bootstrap
project.setup(pyproject, tool, tool_description)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\sipbuild\project.py", line 479, in setup
self.apply_user_defaults(tool)
File "project.py", line 62, in apply_user_defaults
super().apply_user_defaults(tool)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\pyqtbuild\project.py", line 79, in apply_user_defaults
super().apply_user_defaults(tool)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\sipbuild\project.py", line 225, in apply_user_defaults
self.builder.apply_user_defaults(tool)
File "C:\Users\mshal\AppData\Local\Temp\pip-build-env-nnx_yu09\overlay\Lib\site-packages\pyqtbuild\builder.py", line 66, in apply_user_defaults
raise PyProjectOptionException('qmake',
sipbuild.pyproject.PyProjectOptionException
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\mshal\appdata\local\programs\python\python39\python.exe' 'C:\Users\mshal\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\mshal\AppData\Local\Temp\tmp41s11ev6' Check the logs for full command output.
I am on the latest version of pip. Any ideas on the root cause of this issue?
| [
"What helped me is upgrading pip from 20.2.3 to the latest one (in my case 21.1.1)\n",
"For Mac/Homebrew users.\nThe answer https://stackoverflow.com/a/72046110/5327611 is leading in the right direction. On a Mac with QT5 installed via Homebrew the qmake binary just needs to be added to the path. This can be achieved through\nexport PATH=\"/opt/homebrew/opt/qt5/bin:$PATH\"\n\n(of course depending on where the homebrew files are installed)\n",
"Running on arm with python3.6 (ubuntu18 on nvidia Xavier):\nsudo apt install qt5-default\n\n",
"For MacOS users.\nI am on Apple M1 silicon using Python 3.9.8. What worked for me was @Apaul's comment in the original question section. Install pyqt5-sip prior to pyqt5.\nI also have an Intel Mac and on that machine, I do not need to do this.\n",
"Checking the binaries that PyQt5 provides in pypi for version 5.15.0 I see that it does not provide the binaries for python3.9 in windows, so pip is trying to compile using the source code which is complicated and can generate several dependency problems (for example you must have Qt 5.15 installed, etc). So my recommendation is to install a more updated version of PyQt5, for example 5.15.2 since if it provides the binaries for python3.9 on windows, in addition to being a wrapper of an LTS version of Qt then it will have solved several bugs.\npython -m pip install PyQt5==5.15.2\n\nAnother solution is to use python3.8 instead of python3.9 so that you can install pyqt5 5.15.0 from pypi without problems.\n",
"Upgrading your pip enables you to install PyQt5. Personally, I had the same issue while installing PyQt6 and I upgraded my pip, and everything installed perfectly. I think both python and pip versions play an important role in installing PyQt so make sure you have later versions.\nThis is the command I used in Linux:\npip install --upgrade pip\n\n",
"Since qt5-default was not available, I installed qt5-default's dependencies\nsudo apt-get install qtbase5-dev qtchooser qt5-qmake qtbase5-dev-tools\n\nafter that I installed pyqt5 via apt-get first and afterwards via pip\nsudo apt-get install pyqt5-dev\npip install pyqt5\n\nnow wheel seems to work\n\nside-note:\n\nI am not sure if sudo apt-get install pyqt5-dev is even necessary\n\n",
"The error message thrown here is misleading - it's not an issue with a sipbuild.api attribute. Indeed, in this case program qmake is missing, see last line of the Python traceback. Have a look if it's installed on your system and add it to your PATH variable. Otherwise, install it. On Linux this would be done with\nsudo apt-get install qt5-qmake\n\n",
"I had this problem on my M1 Mac using Python 3.9.12 when I was trying to install a library: pip install pixellib.\nThe first thing I did was: pip install pixellib --verbose to see the whole log, and there I noticed that PyQt5 was waiting for an input. So then I found someone else with that issue, and used pip install pyqt5 --config-settings --confirm-license= --verbose which took some time to compile, but worked!\n",
"I could not get any of the above solutions to work but I managed to get it working using python3.9, PyQt5=5.15.2, pip=22.0.2 and sip=6.5.0 by using sudo apt-get install PyQt5. If you need it in a virtual environment, you can manually copy the PyQt5 folder from your default /usr/lib/python3/dist-packages to the site-packages folder in your virtual environment.\n",
"To all those that are struggling with Apple M1 installation, here is a working solution, specifically addressing the problem of installing the pixellib library that depends on PyQt5 but you can apply it equally to other libs:\n\nPyQt5 is not supported on Apple M1, it needs qt6: https://www.reddit.com/r/learnpython/comments/o4w1ut/comment/h2jele3/?utm_source=share&utm_medium=web2x&context=3 , https://www.qt.io/product/qt6\nthis means you need to install PyQt6: python3 -m pip install PyQt6\ngo to the lib you need, in my case pixellib: https://pypi.org/project/pixellib/#files and\ndownload the wheel file\nget the wheel tool: pip install wheel\nunpack the wheel wheel unpack pixellib-0.7.1-py3-none-any.whl\nChange its dependency of PyQt5 to PyQt6\n\nedit pixellib-0.7.1/pixellib-0.7.1.dist-info/METADATA\npyQt5 => pyQt6\n\n\npack it back wheel pack pixellib-0.7.1\ninstall it: pip install pixellib-0.7.1-py3-none-any.whl\ntest in python: `\n\n# should work\nimport pixellib\n\nP.S. thanks to Terra and ChaOS for supporting work on the project underlying this report.\n"
] | [
16,
11,
7,
7,
6,
2,
1,
1,
1,
0,
0
] | [
"This can be resolved by switching to an environment with Python >= 3.8\n"
] | [
-7
] | [
"pip",
"pyqt5",
"python",
"python_3.x"
] | stackoverflow_0065447314_pip_pyqt5_python_python_3.x.txt |
Q:
When trying to run a pyglet window, I get this error: "AttributeError: 'scipy.spatial.transform._rotation.Rotation' object has no attribute 'as_dcm'"
This is all my code
import pyglet
import ratcave as rc
window = pyglet.window.Window()
pyglet.app.run()
When running this, the following shows in terminal
Traceback (most recent call last):
File "c:\CODING\pyopengl\Mudge-David-Homework-8.py", line 14, in <module>
import ratcave as rc
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\__init__.py", line 5, in <module>
from . import resources
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\resources.py", line 40, in <module>
default_camera = Camera()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 260, in __init__
self.projection = PerspectiveProjection() if not projection else projection
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\utils\observers.py", line 56, in __setattr__
super(AutoRegisterObserver, self).__setattr__(key, value)
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 299, in projection
self.reset_uniforms()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 302, in reset_uniforms
self.uniforms['projection_matrix'] = self.projection_matrix.view()
PS C:\Users\David> & C:/Users/David/AppData/Local/Microsoft/WindowsApps/python3.10.exe c:/CODING/pyopengl/Mudge-David-Homework-8.py
Traceback (most recent call last):
File "c:\CODING\pyopengl\Mudge-David-Homework-8.py", line 14, in <module>
import ratcave as rc
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\__init__.py", line 5, in <module>
from . import resources
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\resources.py", line 40, in <module>
default_camera = Camera()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 260, in __init__
self.projection = PerspectiveProjection() if not projection else projection
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\utils\observers.py", line 56, in __setattr__
super(AutoRegisterObserver, self).__setattr__(key, value)
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 299, in projection
self.reset_uniforms()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 302, in reset_uniforms
self.uniforms['projection_matrix'] = self.projection_matrix.view()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\shader.py", line 139, in uniforms
self.update()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\utils\observers.py", line 47, in update
self.on_change()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\physical.py", line 186, in on_change
Physical.on_change(self)
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\physical.py", line 138, in on_change
self.model_matrix = np.dot(self.position.to_matrix(), self.rotation.to_matrix())
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\coordinates.py", line 126, in to_matrix
return self.to_radians().to_matrix()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\coordinates.py", line 95, in to_matrix
mat[:3, :3] = R.from_euler(self.axes[1:],self._array,degrees=False).as_dcm() # scipy as_matrix() not available
AttributeError: 'scipy.spatial.transform._rotation.Rotation' object has no attribute 'as_dcm'
The error seems to come from this final line:
AttributeError: 'scipy.spatial.transform._rotation.Rotation' object has no attribute 'as_dcm'
I'm trying to follow this tutorial.
The code should create a new window, which in turn means Pyglet is working.
From what I have researched, this has been solved through SciPy methods being changed, which I have attempted with no luck. Another thread of this issue resolved it by installing the correct version, which would correct these methods. However I have attempted to install different versions of SciPy and still get the same error.
A:
The as_dcm() method of the Rotation class was deprecated in SciPy version 1.4.0 and removed from SciPy version 1.6.0. You'll have to use an older version of SciPy, or find out if there is a version of ratcave that works with the latest version of SciPy.
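For reference, a minimal sketch of the two options: either pin an older release (pip install "scipy<1.6") so that ratcave's as_dcm() call still resolves, or, if you can patch the calling code yourself, switch to the replacement method:
from scipy.spatial.transform import Rotation as R

r = R.from_euler('xyz', [0.1, 0.2, 0.3])
mat = r.as_matrix()  # as_matrix() is the successor to the removed as_dcm()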
| When trying to run a pyglet window, I get this error: "AttributeError: 'scipy.spatial.transform._rotation.Rotation' object has no attribute 'as_dcm'" | This is all my code
import pyglet
import ratcave as rc
window = pyglet.window.Window()
pyglet.app.run()
When running this, the following shows in terminal
Traceback (most recent call last):
File "c:\CODING\pyopengl\Mudge-David-Homework-8.py", line 14, in <module>
import ratcave as rc
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\__init__.py", line 5, in <module>
from . import resources
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\resources.py", line 40, in <module>
default_camera = Camera()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 260, in __init__
self.projection = PerspectiveProjection() if not projection else projection
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\utils\observers.py", line 56, in __setattr__
super(AutoRegisterObserver, self).__setattr__(key, value)
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 299, in projection
self.reset_uniforms()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 302, in reset_uniforms
self.uniforms['projection_matrix'] = self.projection_matrix.view()
PS C:\Users\David> & C:/Users/David/AppData/Local/Microsoft/WindowsApps/python3.10.exe c:/CODING/pyopengl/Mudge-David-Homework-8.py
Traceback (most recent call last):
File "c:\CODING\pyopengl\Mudge-David-Homework-8.py", line 14, in <module>
import ratcave as rc
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\__init__.py", line 5, in <module>
from . import resources
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\resources.py", line 40, in <module>
default_camera = Camera()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 260, in __init__
self.projection = PerspectiveProjection() if not projection else projection
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\utils\observers.py", line 56, in __setattr__
super(AutoRegisterObserver, self).__setattr__(key, value)
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 299, in projection
self.reset_uniforms()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\camera.py", line 302, in reset_uniforms
self.uniforms['projection_matrix'] = self.projection_matrix.view()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\shader.py", line 139, in uniforms
self.update()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\utils\observers.py", line 47, in update
self.on_change()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\physical.py", line 186, in on_change
Physical.on_change(self)
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\physical.py", line 138, in on_change
self.model_matrix = np.dot(self.position.to_matrix(), self.rotation.to_matrix())
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\coordinates.py", line 126, in to_matrix
return self.to_radians().to_matrix()
File "C:\Users\David\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ratcave\coordinates.py", line 95, in to_matrix
mat[:3, :3] = R.from_euler(self.axes[1:],self._array,degrees=False).as_dcm() # scipy as_matrix() not available
AttributeError: 'scipy.spatial.transform._rotation.Rotation' object has no attribute 'as_dcm'
With seemingly the error being that final line
AttributeError: 'scipy.spatial.transform._rotation.Rotation' object has no attribute 'as_dcm'
Im trying to follow this tutorial
The code should create a new window, which in turn means Pyglet is working.
From what I have researched, this has been solved through SciPy methods being changed, which I have attempted with no luck. Another thread of this issue resolved it by installing the correct version, which would correct these methods. However I have attempted to install different versions of SciPy and still get the same error.
| [
"The as_dcm() method of the Rotation class was deprecated in SciPy version 1.4.0 and removed from SciPy version 1.6.0. You'll have to use an older version of SciPy, or find out if there is a version of ratcave that works with the latest version of SciPy.\n"
] | [
0
] | [] | [] | [
"pyglet",
"python",
"ratcave",
"scipy"
] | stackoverflow_0074648836_pyglet_python_ratcave_scipy.txt |
Q:
VSCode/Jupyter interfering with Rich (log formatting library for Python) and I see an for each output
Here is what I am seeing: the <\> sign to the left of each output
How do I make that go away? Again I am using VSCode/Python and Jupyter Notebooks
The outputs are like log.info("some text")
From what I have read so far, it seems to be because Rich uses markup that is like HTML, and Jupyter then renders this as HTML or something.
import logging
import os
import sys
from rich.logging import RichHandler
def set_logging():
    FORMAT = "Func/Mod:%(funcName)s  %(message)s"
    logging.basicConfig(level=logging.INFO, format=FORMAT, datefmt="[%X]", handlers=[RichHandler(markup=True, show_path=False)])
    if sys.platform.lower() == "win32":
        os.system('color')
    log = logging.getLogger(__name__)
    #log = logging.getLogger("mylog")
    log.setLevel(logging.INFO)
    return log

if __name__=="__main__":
    log = set_logging()
    log.info("This is a test")
    log.info("This is a test")
    log.info("This is a test")
A:
According to the answer on GitHub, there is currently no way to remove the <\> in vscode-jupyter.
It is a button that appears on each output and lets you change the renderer type for that output.
You can use Jupyter Notebook instead if you like; the <\> symbol does not appear there.
| VSCode/Jupyter interfering with Rich (log formatting library for Python) and I see an for each output | Here is what I am seeing, the sign to the left of each output
How do I make that go away? Again I am using VSCode/Python and Jupyter Notebooks
The outputs or like log.info("some text")
From what I have read so far it seems to be because rich is using markup that is like HTML and then Jupyter renders this as HTLM or something
import logging
import os
import sys
from rich.logging import RichHandler
def set_logging():
    FORMAT = "Func/Mod:%(funcName)s  %(message)s"
    logging.basicConfig(level=logging.INFO, format=FORMAT, datefmt="[%X]", handlers=[RichHandler(markup=True, show_path=False)])
    if sys.platform.lower() == "win32":
        os.system('color')
    log = logging.getLogger(__name__)
    #log = logging.getLogger("mylog")
    log.setLevel(logging.INFO)
    return log

if __name__=="__main__":
    log = set_logging()
    log.info("This is a test")
    log.info("This is a test")
    log.info("This is a test")
| [
"According to the answer on github.\nUnfortunately, there is currently no way to remove <\\> on vscode-jupyter.\nBecause it is a button that appears on each output. It lets you change the renderer type for that output.\nYou can use jupyter notebook if you like. This symbol <\\> does not appear in my use of jupyter notebook.\n\n"
] | [
0
] | [] | [] | [
"jupyter_notebook",
"python",
"rich",
"visual_studio_code"
] | stackoverflow_0074632228_jupyter_notebook_python_rich_visual_studio_code.txt |
Q:
Python: Trying to extract a value from a list of dictionaries that is stored as a string
I am getting data from an API and storing it in json format. The data I pull is in a list of dictionaries. I am using Python. My task is to only grab the information from the dictionary that matches the ticker symbol.
This is the short version of my data printing using json dumps
[
{
"ticker": "BYDDF.US",
"name": "BYD Co Ltd-H",
"price": 25.635,
"change_1d_prc": 9.927101200686117
},
{
"ticker": "BYDDY.US",
"name": "BYD Co Ltd ADR",
"price": 51.22,
"change_1d_prc": 9.843448423761526
},
{
"ticker": "TSLA.US",
"name": "Tesla Inc",
"price": 194.7,
"change_1d_prc": 7.67018746889343
}
]
The task is to get only the dictionary whose ticker is TSLA.US and, if possible, only the price associated with this ticker.
I am unaware of how to reference "ticker" or loop through all of them to get the one I need.
I tried the following, but it says that it's a string, so it doesn't work:
if "ticker" == "TESLA.US":
print(i)
A:
This is a solution that I've seen divide the python community. Some say that it's a feature and "very pythonic"; others say that it's a bad design choice we're stuck with now, and bad practice. I'm personally not a fan, but it is a way to solve this problem, so do with it what you will. :)
Python loops don't create a new scope; the loop variable persists even after the loop. So, the following is a solution. Assuming that your list of dictionaries is stored as json_dict:
for target_dict in json_dict:
    if target_dict["ticker"] == "TSLA.US":  # note: the data uses TSLA.US, not TESLA.US
break
At this point, target_dict will be the dictionary you want.
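A more compact sketch of the same lookup uses next() with a generator expression, which sidesteps the loop-variable debate entirely (it falls back to None when no ticker matches):
target_dict = next((d for d in json_dict if d["ticker"] == "TSLA.US"), None)
price = target_dict["price"] if target_dict else None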
A:
Try (mylist is your list of dictionaries)
for entry in mylist:
print(entry['ticker'])
Then try this to get what you want:
for entry in mylist:
if entry['ticker'] == 'TSLA.US':
print(entry)
A:
It is possible to iterate through a list of dictionaries using a for loop.
 for stock in stock_list:  # "stock_list" avoids shadowing the built-in list
     if stock["ticker"] == "TSLA.US":
         return stock["price"]  # assumes this runs inside a function
This essentially loops through every item in the list, and for each item (which is a dictionary) looks for the key "ticker" and checks if its value is equal to "TSLA.US". If it is, then it returns the value associated with the "price" key.
| Python: Trying to extract a value from a list of dictionaries that is stored as a string | I am getting data from an API and storing it in json format. The data I pull is in a list of dictionaries. I am using Python. My task is to only grab the information from the dictionary that matches the ticker symbol.
This is the short version of my data printing using json dumps
[
{
"ticker": "BYDDF.US",
"name": "BYD Co Ltd-H",
"price": 25.635,
"change_1d_prc": 9.927101200686117
},
{
"ticker": "BYDDY.US",
"name": "BYD Co Ltd ADR",
"price": 51.22,
"change_1d_prc": 9.843448423761526
},
{
"ticker": "TSLA.US",
"name": "Tesla Inc",
"price": 194.7,
"change_1d_prc": 7.67018746889343
}
]
Task only gets the dictionary for ticker = TSLA.US. If possible, only get the price associated with this ticker.
I am unaware of how to reference "ticker" or loop through all of them to get the one I need.
I tried the following, but it says that its a string, so it doesn't work:
if "ticker" == "TESLA.US":
print(i)
| [
"This is a solution that I've seen divide the python community. Some say that it's a feature and \"very pythonic\"; others say that it's a bad design choice we're stuck with now, and bad practice. I'm personally not a fan, but it is a way to solve this problem, so do with it what you will. :)\nPython function loops aren't a new scope; the loop variable persists even after the loop. So, either of these are a solution. Assuming that your list of dictionaries is stored as json_dict:\nfor target_dict in json_dict:\n if target_dict[\"ticker\"] == \"TESLA.US\":\n break\n\nAt this point, target_dict will be the dictionary you want.\n",
"Try (mylist is your list of dictionaries)\nfor entry in mylist:\n print(entry['ticker'])\n\nThen try this to get what you want:\nfor entry in mylist:\n if entry['ticker'] == 'TSLA.US':\n print(entry)\n\n",
"It is possible to iterate through a list of dictionaries using a for loop.\n for stock in list:\n if stock[\"ticker\"] == \"TSLA.US\":\n return stock[\"price\"]\n\n\nThis essentially loops through every item in the list, and for each item (which is a dictionary) looks for the key \"ticker\" and checks if its value is equal to \"TSLA.US\". If it is, then it returns the value associated with the \"price\" key.\n"
] | [
1,
0,
0
] | [] | [] | [
"api",
"json",
"python"
] | stackoverflow_0074649029_api_json_python.txt |
Q:
How can I contour an image in opencv?
cv2.error: OpenCV(4.6.0) /io/opencv/modules/imgproc/src/contours.cpp:195: error: (-210:Unsupported format or combination of formats) [Start]FindContours supports only CV_8UC1 images when mode != CV_RETR_FLOODFILL otherwise supports CV_32SC1 images only in function 'cvStartFindContours_Impl'
Here's my code. What's wrong?
import cv2
import numpy as np
image = cv2.imread('j.jpg')
cv2.waitKey(0)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 30, 200)
cv2.waitKey(0)
contours, hierarchy = cv2.findContours(edged,
cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny Edges After Contouring', edged)
cv2.waitKey(0)
print("Number of Contours found = " + str(len(contours)))
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I'm trying to contour the lines in the image.
A:
I just tried your code in Python 3 and everything works normally; your code works.
Here is the code I tested; I just removed some waitKey calls.
import cv2
import numpy as np
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 30, 200)
contours, hierarchy = cv2.findContours(edged,
cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny Edges After Contouring', edged)
print("Number of Contours found = " + str(len(contours)))
cv2.drawContours(image, contours, -1, (0, 0, 255), 3)
cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Also, I changed the image to a jpg; here are the results (screenshot of the contoured image not reproduced here).
As you can see, the contours are drawn in red.
I have OpenCV version 4.2; maybe yours is a bit older.
| How can I contour an image in opencv? | cv2.error: OpenCV(4.6.0) /io/opencv/modules/imgproc/src/contours.cpp:195: error: (-210:Unsupported format or combination of formats) [Start]FindContours supports only CV_8UC1 images when mode != CV_RETR_FLOODFILL otherwise supports CV_32SC1 images only in function 'cvStartFindContours_Impl'
here's my code. whats wrong?
import cv2
import numpy as np
image = cv2.imread('j.jpg')
cv2.waitKey(0)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 30, 200)
cv2.waitKey(0)
contours, hierarchy = cv2.findContours(edged,
cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny Edges After Contouring', edged)
cv2.waitKey(0)
print("Number of Contours found = " + str(len(contours)))
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
i'm trying to countour the lines at the image
| [
"I just tried your code in python3 and everything works normal, your code works.\nhere is the coded I tested, just remove some waitkeys.\nimport cv2 \n import numpy as np \n image = cv2.imread('1.png') \n\n gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) \n edged = cv2.Canny(gray, 30, 200) \n\n \n contours, hierarchy = cv2.findContours(edged, \n cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) \n \n cv2.imshow('Canny Edges After Contouring', edged) \n\n \n print(\"Number of Contours found = \" + str(len(contours))) \n cv2.drawContours(image, contours, -1, (0, 0, 255), 3) \n \n cv2.imshow('Contours', image) \n cv2.waitKey(0) \n cv2.destroyAllWindows() \n\nALso I changed the image for a jpg image, here are the results.\nenter image description here\nAs you can see the countours are drawed in red color.\nI have opencv 4.2 version, maybe yours are a bit older.\n"
] | [
0
] | [] | [] | [
"image_processing",
"opencv",
"python"
] | stackoverflow_0074647732_image_processing_opencv_python.txt |
Q:
Autocomplete in vscode not showing some code
Some completions do not appear, and I have to type them manually. For example, if I want to add .upper(), it does not appear:
print(f'Hello {first.upper().capitalize()}')
It does not autocomplete here, but it completes in other places.
Nothing works for me.
A:
Because you have defined a variable named first but never given it a value, it can be of any type, so VS Code won't provide any completion prompt.
I guess that you want to define a string and first is its content.
You can define x = "first"; here the type of x is str, so x has the upper() method and it will be suggested.
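A sketch of the same idea using a type annotation, which also makes Pylance/IntelliSense offer the str methods (the input() call here is just a hypothetical placeholder for wherever the value really comes from):
first: str = input("First name: ")
print(f'Hello {first.upper()}')  # .upper() now appears in autocomplete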
| Autocomplete in vscode not showing some code | some code does not appear. I have to type it manually. For example, if I want to add an upper, it does not appear
print(f'Hello {first.upper().capitalize()})
It does not complete here, but in other things it completes
Nothing works for meenter image description here
| [
"Because you have defined a variable named first, but it has no content. It can be of any type, so vscode won't provide any prompt.\nI guess that you want to define the string and first is it's content.\n\nYou can define x=\"first\", here the type of x is string, then x has the upper() method.\n"
] | [
0
] | [] | [] | [
"autocomplete",
"python",
"visual_studio_code"
] | stackoverflow_0074644119_autocomplete_python_visual_studio_code.txt |
Q:
How to fuzzy match two lists in Python
I have two lists: ref_list and inp_list. How can one make use of FuzzyWuzzy to match the input list from the reference list?
inp_list = pd.DataFrame(['ADAMS SEBASTIAN', 'HAIMBILI SEUN', 'MUTESI JOHN',
                         'SHEETEKELA MATT', 'MUTESI JOHN KUTALIKA',
                         'ADAMS SEBASTIAN HAUSIKU', 'PETERS WILSON',
                         'PETERS MARIO', 'SHEETEKELA MATT NICKY'],
                        columns=['Names'])
ref_list = pd.DataFrame(['ADAMS SEBASTIAN HAUSIKU', 'HAIMBILI MIKE', 'HAIMBILI SEUN',
                         'MUTESI JOHN KUTALIKA', 'PETERS WILSON MARIO',
                         'SHEETEKELA MATT NICKY MBILI'],
                        columns=['Names'])
After some research, I modified some code I found on the internet. The problem is that while it works very well on small samples, my inp_list and ref_list are 29k and 18k entries long respectively, and it takes more than a day to run.
Below is the code; first, a helper function is defined.
from fuzzywuzzy import fuzz  # import needed for fuzz.token_sort_ratio below

def match_term(term, inp_list, min_score=0):
# -1 score in case I don't get any matches
max_score = -1
# return empty for no match
max_name = ''
# iterate over all names in the other
for term2 in inp_list:
# find the fuzzy match score
score = fuzz.token_sort_ratio(term, term2)
# checking if I am above my threshold and have a better score
if (score > min_score) & (score > max_score):
max_name = term2
max_score = score
return (max_name, max_score)
# list for dicts for easy dataframe creation
dict_list = []
#iterating over the sales file
for name in inp_list:
#use the defined function above to find the best match, also set the threshold to a chosen #
match = match_term(name, ref_list, 94)
#new dict for storing data
dict_ = {}
dict_.update({'passenger_name': name})
dict_.update({'match_name': match[0]})
dict_.update({'score': match[1]})
dict_list.append(dict_)
Where can this code be improved so that it runs faster, perhaps by avoiding re-evaluating items that have already been assessed?
A:
You can try to vectorize the operations instead of evaluating the scores in a Python loop.
Make a df where the first col ref is ref_list and the second col inp is each name in inp_list. Then call df.apply(lambda row: process.extractOne(row['inp'], row['ref']), axis=1). Finally you'll get the best match name and score in ref_list for each name in inp_list.
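A hedged sketch of the same idea without the intermediate df (process.extractOne scans the whole reference list for each input name, so it is tidier rather than asymptotically faster; column names follow the question):
from fuzzywuzzy import process

choices = ref_list['Names'].tolist()
inp_list[['match_name', 'score']] = inp_list['Names'].apply(
    lambda name: pd.Series(process.extractOne(name, choices))
)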
A:
The measures you are using are computationally demanding for this many pairs of strings. As an alternative to fuzzywuzzy, you could try a library called string-grouper, which exploits a faster TF-IDF method and the cosine similarity measure to find similar words. As an example:
import random, string, time
import pandas as pd
from string_grouper import match_strings
alphabet = list(string.ascii_lowercase)
from_r, to_r = 0, len(alphabet)-1
random_strings_1 = ["".join(alphabet[random.randint(from_r, to_r)]
for i in range(6)) for j in range(5000)]
random_strings_2 = ["".join(alphabet[random.randint(from_r, to_r)]
for i in range(6)) for j in range(5000)]
series_1 = pd.Series(random_strings_1)
series_2 = pd.Series(random_strings_2)
t_1 = time.time()
matches = match_strings(series_1, series_2,
min_similarity=0.6)
t_2 = time.time()
print(t_2 - t_1)
print(matches)
It takes less than one second to do 25,000,000 comparisons! For a more substantial test of the library look here: https://bergvca.github.io/2017/10/14/super-fast-string-matching.html where it is claimed that
"Using this approach made it possible to search for near duplicates in
a set of 663,000 company names in 42 minutes using only a dual-core
laptop".
To tune your matching algorithm further look at the **kwargs arguments you can give to the match_strings function above.
| How to fuzzy match two lists in Python | I have two lists: ref_list and inp_list. How can one make use of FuzzyWuzzy to match the input list from the reference list?
inp_list = pd.DataFrame(['ADAMS SEBASTIAN', 'HAIMBILI SEUN', 'MUTESI
JOHN', 'SHEETEKELA MATT', 'MUTESI JOHN KUTALIKA',
'ADAMS SEBASTIAN HAUSIKU', 'PETERS WILSON',
'PETERS MARIO', 'SHEETEKELA MATT NICKY'],
columns =['Names'])
ref_list = pd.DataFrame(['ADAMS SEBASTIAN HAUSIKU', 'HAIMBILI MIKE', 'HAIMBILI SEUN', 'MUTESI JOHN
KUTALIKA', 'PETERS WILSON MARIO', 'SHEETEKELA MATT NICKY MBILI'], columns =
['Names'])
After some research, I modified some codes I found on the internet. Problems with these codes - they work very well on small sample size. In my case the inp_list and ref_list are 29k and 18k respectively in length and it takes more than a day to run.
Below are the codes, first a helper function was defined.
def match_term(term, inp_list, min_score=0):
# -1 score in case I don't get any matches
max_score = -1
# return empty for no match
max_name = ''
# iterate over all names in the other
for term2 in inp_list:
# find the fuzzy match score
score = fuzz.token_sort_ratio(term, term2)
# checking if I am above my threshold and have a better score
if (score > min_score) & (score > max_score):
max_name = term2
max_score = score
return (max_name, max_score)
# list for dicts for easy dataframe creation
dict_list = []
#iterating over the sales file
for name in inp_list:
#use the defined function above to find the best match, also set the threshold to a chosen #
match = match_term(name, ref_list, 94)
#new dict for storing data
dict_ = {}
dict_.update({'passenger_name': name})
dict_.update({'match_name': match[0]})
dict_.update({'score': match[1]})
dict_list.append(dict_)
Where can these codes be improved to run smoothly and perhaps avoid evaluating items that have already been assessed?
| [
"You can try to vectorized the operations instead of evaluate the scores in a loop.\nMake a df where the firse col ref is ref_list and the second col inp is each name in inp_list. Then call df.apply(lambda row:process.extractOne(row['inp'], row['ref']), axis=1). Finally you'll get the best match name and score in ref_list for each name in inp_list.\n",
"The measures you are using are computationally demanding with a number of pairs of strings that high. Alternatively to fuzzywuzzy, you could try to use instead a library called string-grouper which exploits a faster Tf-idf method and the cosine similarity measure to find similar words. As an example:\nimport random, string, time\nimport pandas as pd\nfrom string_grouper import match_strings\n\nalphabet = list(string.ascii_lowercase)\nfrom_r, to_r = 0, len(alphabet)-1\n\nrandom_strings_1 = [\"\".join(alphabet[random.randint(from_r, to_r)]\n for i in range(6)) for j in range(5000)]\nrandom_strings_2 = [\"\".join(alphabet[random.randint(from_r, to_r)]\n for i in range(6)) for j in range(5000)]\n \nseries_1 = pd.Series(random_strings_1)\nseries_2 = pd.Series(random_strings_2)\n\nt_1 = time.time()\nmatches = match_strings(series_1, series_2,\n min_similarity=0.6)\nt_2 = time.time()\nprint(t_2 - t_1)\nprint(matches)\n\nIt takes less than one second to do 25.000.000 comparisons! For a surely more useful test of the library look here: https://bergvca.github.io/2017/10/14/super-fast-string-matching.html where it is claimed that\n\n\"Using this approach made it possible to search for near duplicates in\na set of 663,000 company names in 42 minutes using only a dual-core\nlaptop\".\n\nTo tune your matching algorithm further look at the **kwargs arguments you can give to the match_strings function above.\n"
] | [
1,
0
] | [] | [] | [
"fuzzywuzzy",
"matching",
"python"
] | stackoverflow_0062790165_fuzzywuzzy_matching_python.txt |
Q:
Installing Python 3.11
I want to try out Python 3.11 to find out how much faster this version is than what I'm currently using (3.7.3). I am using Anaconda and Spyder, but Anaconda does not yet support Python 3.11 and additionally I regularly have problems with updating in Anaconda.
Importantly, I want to keep my Anaconda and Spyder environments as they are and use Python 3.11 independently of them. Therefore, I was wondering whether simply downloading Python 3.11 from the python.org website would mess up my environment, since there would then be two versions of Python installed on my PC. Also, I would like to know whether I have to use a different IDE for this (or even work without an IDE).
Even though my question might be a bit vague, thanks in advance.
A:
Try creating a new 3.10 env using Anaconda if Anaconda still doesn't have 3.11. The speedup from 3.11 would be roughly +15% (no guarantee, just reports); it depends on the workload...
You can build and install your version from source :
build-python-from-source
This way you won't break anything and can delete Python 3.11 after your experiments (a hedged build sketch follows below).
You can google benchmark tests for an average performance comparison between <your.version> and <any.other.version> to get a general picture.
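For the build-from-source route mentioned above, a hedged sketch for Linux/macOS (version number and flags are illustrative; make altinstall deliberately avoids overwriting the python3 your system and Anaconda rely on):
wget https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tgz
tar xzf Python-3.11.0.tgz && cd Python-3.11.0
./configure --enable-optimizations
make && sudo make altinstall   # installs python3.11 alongside existing interpreters
python3.11 --version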
| Installing Python 3.11 | I want to try out Python 3.11 to find out how much faster this version is than what I'm currently using (3.7.3). I am using Anaconda and Spyder, but Anaconda does not yet support Python 3.11 and additionally I regularly have problems with updating in Anaconda.
Importantly, I want to maintain my Anaconda and Spyder environments as it is and use Python 3.11 independently from this. Therefore, I was wondering if simply downloading Python 3.11 from their website will mess up my environment, as then there will be two versions of Python insalled on my PC. Also I would like to know if I have to use a different IDE for this (or even without IDE).
Even though my question might be a bit vague, thanks in advance.
| [
"\nTry to create new env 3.10 using Anaconda, if Anaconda still doesn't have 3.11. The difference with 3.11 would be (I'm not guaranty, just a \"rumors\") ~+15%, depends...\n\nYou can build and install your version from source :\nbuild-python-from-source\n\n\nThis way you won't break anything and can to delete Python3.11 after experiments.\n\nYou can google the benchmark tests for overage performance comparison between <your.version> and <any.over.version> for very common understanding.\n\n"
] | [
0
] | [
"It's not recommended to have multiple versions of Python installed on the same system, as this can cause conflicts and problems with package compatibility. Instead of installing Python 3.11 directly, you can create a new virtual environment using conda and install Python 3.11 in that environment. This will allow you to use Python 3.11 without affecting your existing environment or installations.\nTo create a new conda environment with Python 3.11, first open a terminal or command prompt and run the following command:\nconda create -n py311 python=3.11\n\nThis will create a new environment called \"py311\" with Python 3.11 installed. To activate this environment, run the following command:\nconda activate py311\n\nOnce the environment is activated, you can use the python command to run Python 3.11, and any packages you install will be installed in this environment only. To deactivate the environment, run the following command:\nconda deactivate\n\nYou can also use the conda install command to install packages in the new environment, for example:\nconda install numpy scipy pandas\n\nThis will install the NumPy, SciPy, and pandas packages in the \"py311\" environment.\nAs for the IDE, you can use any IDE that supports Python 3.11, such as PyCharm or Visual Studio Code. These IDEs allow you to select the Python environment you want to use, so you can easily switch between your existing environment and the new \"py311\" environment.\n"
] | [
-4
] | [
"anaconda",
"ide",
"performance",
"python",
"python_3.11"
] | stackoverflow_0074646486_anaconda_ide_performance_python_python_3.11.txt |
Q:
How to pull player game logs from multiple seasons using nba_api?
I am trying to get familiar with the nba_api package for Python. I am attempting to pull player data from the past two seasons; however, I am only able to get either all seasons or just one season.
First, I saw I could collect the game logs from an individual season:
from nba_api.stats.static import players
player_dict = players.get_players()
luka = [player for player in player_dict if player['full_name'] == 'Luka Doncic'][0]
luka_id = luka['id']
from nba_api.stats.endpoints import playergamelog
gamelog_luka = playergamelog.PlayerGameLog(player_id=luka_id, season='2022')
Doing this gave me his game logs from this current season. Next, I tried to collect the logs from the past two seasons:
gamelog_luka = playergamelog.PlayerGameLog(player_id=luka_id, season=['2020', '2022'])
I noticed that doing this only gave me 65 games, from December 2020 to May 2021. I am trying to get the game logs from the beginning of the 2021 season to the present. Is there a syntax fix for this? Is there a way to achieve it using season IDs?
A:
You can use SeasonAll, concatenate the returned frames into a pandas DataFrame, convert the date column to datetime, and finally filter with query.
import pandas as pd
from nba_api.stats.endpoints import playergamelog
from nba_api.stats.library.parameters import SeasonAll
from nba_api.stats.static import players
luka_id = next((x for x in players.get_players() if x.get("full_name") == "Luka Doncic"), None).get("id")
gamelog_luka = pd.concat(playergamelog.PlayerGameLog(player_id=luka_id, season=SeasonAll.all).get_data_frames())
gamelog_luka["GAME_DATE"] = pd.to_datetime(gamelog_luka["GAME_DATE"], format="%b %d, %Y")
gamelog_luka = gamelog_luka.query("GAME_DATE.dt.year in [2021, 2022]")
print(gamelog_luka)
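If you specifically want to go through season IDs instead of calendar years, a hedged alternative: the frames returned by PlayerGameLog include a SEASON_ID column, and regular-season IDs in the stats API look like "22021" for 2021-22 (format assumed from the endpoint's conventions):
gamelog_luka = gamelog_luka[gamelog_luka["SEASON_ID"].isin(["22021", "22022"])]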
| How to pull player game logs from multiple seasons using nba_api? | I am trying to get familiar with nba_api package for python. I am attempting to pull player data from the past two seasons. However, I am only able to get all of the seasons or just one season.
First, I saw I could collect the game logs from an individual season:
from nba_api.stats.static import players
player_dict = players.get_players()
luka = [player for player in player_dict if player['full_name'] == 'Luka Doncic'][0]
luka_id = luka['id']
from nba_api.stats.endpoints import playergamelog
gamelog_luka = playergamelog.PlayerGameLog(player_id=luka_id, season='2022')
Doing this gave me his game logs from this current season. Next, I tried to collect the logs from the past two seasons:
gamelog_luka = playergamelog.PlayerGameLog(player_id=luka_id, season=['2020', '2022'])
I noticed that doing this only gave me 65 games starting from December 2020 to May 2021. I am trying to get the game logs from the beginning of the 2021 season to the present. Is there a syntax fix for this? Is there a way to achieve this through using season ID's?
| [
"You can use SeasonAll, convert to pandas df, convert datetime and finally query.\nimport pandas as pd\nfrom nba_api.stats.endpoints import playergamelog\nfrom nba_api.stats.library.parameters import SeasonAll\nfrom nba_api.stats.static import players\n\n\nluka_id = next((x for x in players.get_players() if x.get(\"full_name\") == \"Luka Doncic\"), None).get(\"id\")\n\ngamelog_luka = pd.concat(playergamelog.PlayerGameLog(player_id=luka_id, season=SeasonAll.all).get_data_frames())\ngamelog_luka[\"GAME_DATE\"] = pd.to_datetime(gamelog_luka[\"GAME_DATE\"], format=\"%b %d, %Y\")\ngamelog_luka = gamelog_luka.query(\"GAME_DATE.dt.year in [2021, 2022]\")\nprint(gamelog_luka)\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"nba_api",
"pandas",
"python",
"web_scraping"
] | stackoverflow_0074648245_dataframe_nba_api_pandas_python_web_scraping.txt |
Q:
Difficulties using matplotlib plot method
Very recently I have been tasked with plotting a derivative using Python and matplotlib. This is my code:
x=np.linspace(-100,100,num=50)
funcion=(56*(x**3))-(38.999*(x**2))+(4.196*x-0.15)
plt.plot(x, funcion)
The resulting plot is this:
Plot generated in Python
At first sight, the graph looks okay, but is not correct, given that the graph is suposed to look like this:
Correct plot
How can I fix this? I have tried changing the linspace a bunch of times, and the results are the same.
I've tried to plot a derivative in matplotlib and the graph looks incorrect.
A:
The problem is not with matplotlib but with the range of x values you chose. If you look at your own picture, the x values only range from around -2 to 2, so if I do the same and adjust the plotting bounds I get:
import matplotlib.pyplot as plt
import numpy as np
x=np.linspace(-2,2,101)
funcion=(56*(x**3))-(38.999*(x**2))+(4.196*x-0.15)
plt.plot(x, funcion)
plt.axvline(0, color = 'k')
plt.axhline(0, color = 'k')
plt.xlim([-0.8, 1.4])
plt.ylim([-3.5, 3])
which gives the expected plot.
| Difficulties using matplotlib plot method | Very recently I have been tasked with ploting a derivative using Python and matplotlib. This is my code:
x=np.linspace(-100,100,num=50)
funcion=(56*(x**3))-(38.999*(x**2))+(4.196*x-0.15)
plt.plot(x, funcion)
The resulting plot is this:
Plot generated in Python
At first sight, the graph looks okay, but is not correct, given that the graph is suposed to look like this:
Correct plot
How can I fix this? I have tried changing the linespace a bunch of times, and the results are the same.
I've tried to plot a derivate in matplotlib and the graph is incorrect.
| [
"The problem is not with matplotlib, but instead the range of x values you chose. If you look at your own picture, the xvalues are ranging from around -2 to 2, so if I do the same and play with the plotting bounds I get:\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\nx=np.linspace(-2,2,101) \nfuncion=(56*(x**3))-(38.999*(x**2))+(4.196*x-0.15) \nplt.plot(x, funcion)\n\nplt.axvline(0, color = 'k')\nplt.axhline(0, color = 'k')\n\nplt.xlim([-0.8, 1.4])\nplt.ylim([-3.5, 3])\n\nwhich gives\n\n"
] | [
1
] | [] | [] | [
"calculus",
"matplotlib",
"plot",
"python"
] | stackoverflow_0074649964_calculus_matplotlib_plot_python.txt |
Q:
Visual Studio Code Venv Used Wrong Python Version
I have Python 3.4 and 3.9 installed. I chose the former through the Command Palette and then (also through the Command Palette) created a virtual environment with Venv. I create a new terminal, enter python --version, and it says 3.9 instead of 3.4.
How do I go about fixing this?
A:
There are multiple Python environments on your machine. If you have created a virtual environment, you should run commands only after activating it: if you execute the python command in a terminal where the virtual environment is not activated, the version displayed will be the one configured in the system PATH environment variable.
The correct way is:
Create a virtual environment
Activate the virtual environment
Execute python commands
There are two ways to activate a virtual environment:
After creating the virtual environment use the following command to activate
.venv\scripts\activate
Or select the virtual environment interpreter in the select interpreter panel, and
then create a new terminal to automatically activate the environment.
In VS Code, the interpreter version you have selected for Python is always displayed in the lower right corner.
See the Environments documentation for more details.
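As a quick sanity check (not specific to VS Code), you can ask the interpreter itself which executable is running after activation:
python -c "import sys; print(sys.executable, sys.version)"
Inside an activated venv this should print a path under your .venv folder together with the expected version.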
| Visual Studio Code Venv Used Wrong Python Version | I have Python 3.4 and 3.9 installed. I chose the former through the Command Palette and then (also through the Command Palette) created a virtual environment with Venv. I create a new terminal, enter python --version, and it says 3.9 instead of 3.4.
How do I go about fixing this?
| [
"There are multiple python environments on your machine, if you have created a virtual environment, you should run these commands after activation. If you execute the python command in a terminal where the virtual environment is not activated, the displayed version will be the one configured in the system environment variable path.\nThe correct way is:\n\nCreate a virtual environment\nActivate the virtual environment\nExecute python commands\n\nThere are two ways to activate a virtual environment:\n\nAfter creating the virtual environment use the following command to activate\n.venv\\scripts\\activate\n\n\nOr select the virtual environment interpreter in the select interpreter panel, and\nthen create a new terminal to automatically activate the environment.\n\n\n\nIn vscode, the interpreter version you choose for python will always be displayed in the lower right corner.\n\nSee the Environments documentation for more details.\n"
] | [
1
] | [] | [] | [
"python",
"virtualenv",
"visual_studio_code"
] | stackoverflow_0074648847_python_virtualenv_visual_studio_code.txt |
Q:
Is there a decorator to simply cache function return values?
Consider the following:
@property
def name(self):
if not hasattr(self, '_name'):
# expensive calculation
self._name = 1 + 1
return self._name
I'm new, but I think the caching could be factored out into a decorator. Only I didn't find one like it ;)
PS the real calculation doesn't depend on mutable values
A:
Starting from Python 3.2 there is a built-in decorator:
@functools.lru_cache(maxsize=100, typed=False)
Decorator to wrap a function with a memoizing callable that saves up to the maxsize most recent calls. It can save time when an expensive or I/O bound function is periodically called with the same arguments.
Example of an LRU cache for computing Fibonacci numbers:
from functools import lru_cache
@lru_cache(maxsize=None)
def fib(n):
if n < 2:
return n
return fib(n-1) + fib(n-2)
>>> print([fib(n) for n in range(16)])
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]
>>> print(fib.cache_info())
CacheInfo(hits=28, misses=16, maxsize=None, currsize=16)
If you are stuck with Python 2.x, here's a list of other compatible memoization libraries:
functools32 | PyPI | Source code
repoze.lru | PyPI | Source code
pylru | PyPI | Source code
backports.functools_lru_cache | PyPI | Source code
A:
Python 3.8 functools.cached_property decorator
https://docs.python.org/dev/library/functools.html#functools.cached_property
cached_property from Werkzeug was mentioned at: https://stackoverflow.com/a/5295190/895245 but a supposedly derived version will be merged into 3.8, which is awesome.
This decorator can be seen as caching @property, or as a cleaner @functools.lru_cache for when you don't have any arguments.
The docs say:
@functools.cached_property(func)
Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to property(), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable.
Example:
class DataSet:
def __init__(self, sequence_of_numbers):
self._data = sequence_of_numbers
@cached_property
def stdev(self):
return statistics.stdev(self._data)
@cached_property
def variance(self):
return statistics.variance(self._data)
New in version 3.8.
Note: This decorator requires that the __dict__ attribute on each instance be a mutable mapping. This means it will not work with some types, such as metaclasses (since the __dict__ attributes on type instances are read-only proxies for the class namespace), and those that specify __slots__ without including __dict__ as one of the defined slots (as such classes don't provide a __dict__ attribute at all).
A:
It sounds like you're not asking for a general-purpose memoization decorator (i.e., you're not interested in the general case where you want to cache return values for different argument values). That is, you'd like to have this:
x = obj.name # expensive
y = obj.name # cheap
while a general-purpose memoization decorator would give you this:
x = obj.name() # expensive
y = obj.name() # cheap
I submit that the method-call syntax is better style, because it suggests the possibility of expensive computation while the property syntax suggests a quick lookup.
[Update: The class-based memoization decorator I had linked to and quoted here previously doesn't work for methods. I've replaced it with a decorator function.] If you're willing to use a general-purpose memoization decorator, here's a simple one:
def memoize(function):
memo = {}
def wrapper(*args):
if args in memo:
return memo[args]
else:
rv = function(*args)
memo[args] = rv
return rv
return wrapper
Example usage:
@memoize
def fibonacci(n):
if n < 2: return n
return fibonacci(n - 1) + fibonacci(n - 2)
Another memoization decorator with a limit on the cache size can be found here.
A:
functools.cache has been released in Python 3.9 (docs):
from functools import cache
@cache
def factorial(n):
return n * factorial(n-1) if n else 1
In previous Python versions, one of the early answers is still a valid solution: Using lru_cache as an ordinary cache without the limit and lru features. (docs)
If maxsize is set to None, the LRU feature is disabled and the cache
can grow without bound.
Here is a prettier version of it:
cache = lru_cache(maxsize=None)
@cache
def func(param1):
pass
A:
class memorize(dict):
def __init__(self, func):
self.func = func
def __call__(self, *args):
return self[args]
def __missing__(self, key):
result = self[key] = self.func(*key)
return result
Sample uses:
>>> @memorize
... def foo(a, b):
... return a * b
>>> foo(2, 4)
8
>>> foo
{(2, 4): 8}
>>> foo('hi', 3)
'hihihi'
>>> foo
{(2, 4): 8, ('hi', 3): 'hihihi'}
A:
Werkzeug has a cached_property decorator (docs, source)
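A minimal usage sketch (the import path follows Werkzeug's utils module; the class is illustrative):
from werkzeug.utils import cached_property

class Monster:
    @cached_property
    def name(self):
        # expensive calculation runs once; the result is then stored on the instance
        return 1 + 1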
A:
I coded this simple decorator class to cache function responses. I find it VERY useful for my projects:
from datetime import datetime, timedelta
class cached(object):
def __init__(self, *args, **kwargs):
self.cached_function_responses = {}
self.default_max_age = kwargs.get("default_cache_max_age", timedelta(seconds=0))
def __call__(self, func):
def inner(*args, **kwargs):
max_age = kwargs.get('max_age', self.default_max_age)
if not max_age or func not in self.cached_function_responses or (datetime.now() - self.cached_function_responses[func]['fetch_time'] > max_age):
if 'max_age' in kwargs: del kwargs['max_age']
res = func(*args, **kwargs)
self.cached_function_responses[func] = {'data': res, 'fetch_time': datetime.now()}
return self.cached_function_responses[func]['data']
return inner
The usage is straightforward:
import time
@cached()  # note: the parentheses are required because this decorator class takes kwargs
def myfunc(a):
print "in func"
return (a, datetime.now())
@cached(default_max_age = timedelta(seconds=6))
def cacheable_test(a):
print "in cacheable test: "
return (a, datetime.now())
print cacheable_test(1,max_age=timedelta(seconds=5))
print cacheable_test(2,max_age=timedelta(seconds=5))
time.sleep(7)
print cacheable_test(3,max_age=timedelta(seconds=5))
A:
DISCLAIMER: I'm the author of kids.cache.
You should check kids.cache; it provides a @cache decorator that works on Python 2 and Python 3. No dependencies, ~100 lines of code. It's very straightforward to use; for instance, with your code in mind, you could use it like this:
pip install kids.cache
Then
from kids.cache import cache
...
class MyClass(object):
...
@cache # <-- That's all you need to do
@property
def name(self):
return 1 + 1 # supposedly expensive calculation
Or you could put the @cache decorator after the @property (same result).
Using cache on a property is called lazy evaluation; kids.cache can do much more (it works on functions with any arguments, properties, any type of method, and even classes...). For advanced users, kids.cache supports cachetools, which provides fancy cache stores for Python 2 and Python 3 (LRU, LFU, TTL, RR caches).
IMPORTANT NOTE: the default cache store of kids.cache is a standard dict, which is not recommended for long-running programs with ever-different queries, as it would lead to an ever-growing cache. For such usage you can plug in other cache stores, for instance @cache(use=cachetools.LRUCache(maxsize=2)), to decorate your function/property/class/method...
A:
Ah, I just needed to find the right name for this: "lazy property evaluation".
I do this a lot too; maybe I'll use that recipe in my code sometime.
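For reference, a minimal sketch of that pattern (names and the attribute prefix are illustrative, not the recipe's exact code):
def lazy_property(fn):
    """Compute on first access, then reuse the value stored on the instance."""
    attr_name = '_lazy_' + fn.__name__

    @property
    def _lazy(self):
        if not hasattr(self, attr_name):
            setattr(self, attr_name, fn(self))
        return getattr(self, attr_name)
    return _lazy

class Circle:
    @lazy_property
    def area(self):
        print('computing...')  # printed only on the first access
        return 3.14159 * 2 ** 2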
A:
There is yet another example of a memoize decorator at Python Wiki:
http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize
That example is a bit smart, because it won't cache the results if the parameters are mutable. (check that code, it's very simple and interesting!)
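In the same spirit, a hedged sketch (not the wiki's exact code) that simply falls back to an uncached call when the arguments are unhashable:
class memoized:
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args):
        try:
            hash(args)
        except TypeError:  # mutable/unhashable arguments: call through without caching
            return self.func(*args)
        if args not in self.cache:
            self.cache[args] = self.func(*args)
        return self.cache[args]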
A:
If you are using Django Framework, it has such a property to cache a view or response of API's
using @cache_page(time) and there can be other options as well.
Example:
@cache_page(60 * 15, cache="special_cache")
def my_view(request):
...
More details can be found here.
A:
Try joblib
https://joblib.readthedocs.io/en/latest/memory.html
from joblib import Memory
memory = Memory(cachedir='./joblib_cache', verbose=0)  # cachedir must be a directory path; './joblib_cache' is illustrative

@memory.cache
def f(x):
    print('Running f(%s)' % x)
    return x
A:
There is fastcache, which is "C implementation of Python 3 functools.lru_cache. Provides speedup of 10-30x over standard library."
Same as chosen answer, just different import:
from fastcache import lru_cache
@lru_cache(maxsize=128, typed=False)
def f(a, b):
pass
Also, it comes preinstalled with Anaconda, unlike the functools32 backport, which needs to be installed separately on Python 2 (functools itself is in the standard library).
A:
Along with the Memoize Example I found the following python packages:
cachepy: it allows setting a TTL and/or a maximum number of calls for cached functions; one can also use an encrypted file-based cache...
percache
A:
@lru_cache is not good with default kwargs: as the test output below shows, f(1) and f(1, z=10) miss each other in its cache.
My @mem decorator normalizes the defaults before hashing:
import inspect
from copy import deepcopy
from functools import lru_cache, wraps
from typing import Any, Callable, Dict, Iterable
# helper
def get_all_kwargs_values(f: Callable, kwargs: Dict[str, Any]) -> Iterable[Any]:
default_kwargs = {
k: v.default
for k, v in inspect.signature(f).parameters.items()
if v.default is not inspect.Parameter.empty
}
all_kwargs = deepcopy(default_kwargs)
all_kwargs.update(kwargs)
for key in sorted(all_kwargs.keys()):
yield all_kwargs[key]
# the best decorator
def mem(func: Callable) -> Callable:
cache = dict()
@wraps(func)
def wrapper(*args, **kwargs) -> Any:
all_kwargs_values = get_all_kwargs_values(func, kwargs)
params = (*args, *all_kwargs_values)
_hash = hash(params)
if _hash not in cache:
cache[_hash] = func(*args, **kwargs)
return cache[_hash]
return wrapper
# some logic
def counter(*args) -> int:
print(f'* not_cached:', end='\t')
return sum(args)
@mem
def check_mem(a, *args, z=10) -> int:
return counter(a, *args, z)
@lru_cache
def check_lru(a, *args, z=10) -> int:
return counter(a, *args, z)
def test(func) -> None:
print(f'\nTest {func.__name__}:')
print('*', func(1, 2, 3, 4, 5))
print('*', func(1, 2, 3, 4, 5))
print('*', func(1, 2, 3, 4, 5, z=6))
print('*', func(1, 2, 3, 4, 5, z=6))
print('*', func(1))
print('*', func(1, z=10))
def main():
test(check_mem)
test(check_lru)
if __name__ == '__main__':
main()
output:
Test check_mem:
* not_cached: * 25
* 25
* not_cached: * 21
* 21
* not_cached: * 11
* 11
Test check_lru:
* not_cached: * 25
* 25
* not_cached: * 21
* 21
* not_cached: * 11
* not_cached: * 11
A:
I implemented something like this, using pickle for persistence and using SHA-1 for short, almost-certainly-unique IDs. Basically the cache hashed the code of the function and the hash of the arguments to get a SHA-1, then looked for a file with that SHA-1 in its name. If the file existed, it opened it and returned the result; if not, it called the function and saved the result (optionally only saving if it took a certain amount of time to compute).
That said, I'd swear I found an existing module that did this and find myself here trying to find that module... The closest I can find is this, which looks about right: http://chase-seibert.github.io/blog/2011/11/23/pythondjango-disk-based-caching-decorator.html
The only problem I see with that is it wouldn't work well for large inputs since it hashes str(arg), which isn't unique for giant arrays.
It would be nice if there were a unique_hash() protocol that had a class return a secure hash of its contents. I basically manually implemented that for the types I cared about.
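A hedged sketch of what such a content hash can look like (numpy stands in for the "giant array" case; the function name just mirrors the protocol idea above):
import hashlib
import numpy as np

def unique_hash(obj):
    """Content-based SHA-1 for cache keys; str() alone isn't unique for large arrays."""
    if isinstance(obj, np.ndarray):
        return hashlib.sha1(obj.tobytes()).hexdigest()
    return hashlib.sha1(repr(obj).encode('utf-8')).hexdigest()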
A:
If you are using Django and want to cache views, see Nikhil Kumar's answer.
But if you want to cache ANY function results, you can use django-cache-utils.
It reuses Django caches and provides easy to use cached decorator:
from cache_utils.decorators import cached
@cached(60)
def foo(x, y=0):
print 'foo is called'
return x+y
A:
Function cache: a simple solution
with TTL (time to live) and max_entries
doesn't work when the decorated function takes unhashable types as input (e.g. dicts)
optional parameter: ttl (time to live for every entry)
optional parameter: max_entries (caps the cache so that too many argument combinations don't clutter the storage)
make sure the function has no important side effects
Example use
import time
@cache(ttl=timedelta(minutes=3), max_entries=300)
def add(a, b):
time.sleep(2)
return a + b
@cache()
def substract(a, b):
time.sleep(2)
return a - b
a = 5
# function is called with argument combinations the first time -> it takes some time
for i in range(5):
print(add(a, i))
# function is called with same arguments again? -> will answer from cache
for i in range(5):
print(add(a, i))
Copy the decorator code
from datetime import datetime, timedelta
def cache(**kwargs):
def decorator(function):
# static function variable for cache, lazy initialization
try: function.cache
except: function.cache = {}
def wrapper(*args):
# if nothing valid in cache, insert something
if not args in function.cache or datetime.now() > function.cache[args]['expiry']:
if 'max_entries' in kwargs:
max_entries = kwargs['max_entries']
if max_entries != None and len(function.cache) >= max_entries:
now = datetime.now()
# delete the first expired entry that can be found (lazy deletion)
for key in function.cache:
if function.cache[key]['expiry'] < now:
del function.cache[key]
break
# if nothing is expired that is deletable, delete the first
if len(function.cache) >= max_entries:
del function.cache[next(iter(function.cache))]
function.cache[args] = {'result': function(*args), 'expiry': datetime.max if 'ttl' not in kwargs else datetime.now() + kwargs['ttl']}
# answer from cache
return function.cache[args]['result']
return wrapper
return decorator
A:
from functools import wraps
def cache(maxsize=128):
cache = {}
def decorator(func):
@wraps(func)
def inner(*args, no_cache=False, **kwargs):
if no_cache:
return func(*args, **kwargs)
key_base = "_".join(str(x) for x in args)
key_end = "_".join(f"{k}:{v}" for k, v in kwargs.items())
key = f"{key_base}-{key_end}"
if key in cache:
return cache[key]
res = func(*args, **kwargs)
if len(cache) > maxsize:
del cache[list(cache.keys())[0]]
cache[key] = res
return res
return inner
return decorator
def async_cache(maxsize=128):
cache = {}
def decorator(func):
@wraps(func)
async def inner(*args, no_cache=False, **kwargs):
if no_cache:
return await func(*args, **kwargs)
key_base = "_".join(str(x) for x in args)
key_end = "_".join(f"{k}:{v}" for k, v in kwargs.items())
key = f"{key_base}-{key_end}"
if key in cache:
return cache[key]
res = await func(*args, **kwargs)
if len(cache) > maxsize:
del cache[list(cache.keys())[0]]
cache[key] = res
return res
return inner
return decorator
Example use
import asyncio
import aiohttp
# Removes the aiohttp ClientSession instance warning.
class HTTPSession(aiohttp.ClientSession):
""" Abstract class for aiohttp. """
def __init__(self, loop=None) -> None:
super().__init__(loop=loop or asyncio.get_event_loop())
    def __del__(self) -> None:
        if not self.closed:
            self.loop.run_until_complete(self.close())
            self.loop.close()
session = HTTPSession()
@async_cache()
async def query(url, method="get", res_method="text", *args, **kwargs):
async with getattr(session, method.lower())(url, *args, **kwargs) as res:
return await getattr(res, res_method)()
async def get(url, *args, **kwargs):
return await query(url, "get", *args, **kwargs)
async def post(url, *args, **kwargs):
return await query(url, "post", *args, **kwargs)
async def delete(url, *args, **kwargs):
return await query(url, "delete", *args, **kwargs)
A:
Create your own decorator and use it
from django.core.cache import cache
import functools
def cache_returned_values(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
key = "choose a unique key here"
results = cache.get(key)
if not results:
results = func(*args, **kwargs)
cache.set(key, results)
return results
return wrapper
Now at the function side
@cache_returned_values
def get_some_values(args):
return x
| Is there a decorator to simply cache function return values? | Consider the following:
@property
def name(self):
if not hasattr(self, '_name'):
# expensive calculation
self._name = 1 + 1
return self._name
I'm new, but I think the caching could be factored out into a decorator. Only I didn't find one like it ;)
PS the real calculation doesn't depend on mutable values
| [
"Starting from Python 3.2 there is a built-in decorator:\[email protected]_cache(maxsize=100, typed=False)\n\nDecorator to wrap a function with a memoizing callable that saves up to the maxsize most recent calls. It can save time when an expensive or I/O bound function is periodically called with the same arguments.\n\nExample of an LRU cache for computing Fibonacci numbers:\nfrom functools import lru_cache\n\n@lru_cache(maxsize=None)\ndef fib(n):\n if n < 2:\n return n\n return fib(n-1) + fib(n-2)\n\n>>> print([fib(n) for n in range(16)])\n[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]\n\n>>> print(fib.cache_info())\nCacheInfo(hits=28, misses=16, maxsize=None, currsize=16)\n\n\nIf you are stuck with Python 2.x, here's a list of other compatible memoization libraries:\n\nfunctools32 | PyPI | Source code\nrepoze.lru | PyPI | Source code\npylru | PyPI | Source code\nbackports.functools_lru_cache | PyPI | Source code\n\n",
"Python 3.8 functools.cached_property decorator\nhttps://docs.python.org/dev/library/functools.html#functools.cached_property\ncached_property from Werkzeug was mentioned at: https://stackoverflow.com/a/5295190/895245 but a supposedly derived version will be merged into 3.8, which is awesome.\nThis decorator can be seen as caching @property, or as a cleaner @functools.lru_cache for when you don't have any arguments.\nThe docs say:\n\[email protected]_property(func)\n\nTransform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to property(), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable.\nExample:\nclass DataSet:\n def __init__(self, sequence_of_numbers):\n self._data = sequence_of_numbers\n\n @cached_property\n def stdev(self):\n return statistics.stdev(self._data)\n\n @cached_property\n def variance(self):\n return statistics.variance(self._data)\n\nNew in version 3.8.\nNote This decorator requires that the dict attribute on each instance be a mutable mapping. This means it will not work with some types, such as metaclasses (since the dict attributes on type instances are read-only proxies for the class namespace), and those that specify slots without including dict as one of the defined slots (as such classes donβt provide a dict attribute at all).\n\n",
"It sounds like you're not asking for a general-purpose memoization decorator (i.e., you're not interested in the general case where you want to cache return values for different argument values). That is, you'd like to have this:\nx = obj.name # expensive\ny = obj.name # cheap\n\nwhile a general-purpose memoization decorator would give you this:\nx = obj.name() # expensive\ny = obj.name() # cheap\n\nI submit that the method-call syntax is better style, because it suggests the possibility of expensive computation while the property syntax suggests a quick lookup.\n[Update: The class-based memoization decorator I had linked to and quoted here previously doesn't work for methods. I've replaced it with a decorator function.] If you're willing to use a general-purpose memoization decorator, here's a simple one:\ndef memoize(function):\n memo = {}\n def wrapper(*args):\n if args in memo:\n return memo[args]\n else:\n rv = function(*args)\n memo[args] = rv\n return rv\n return wrapper\n\nExample usage:\n@memoize\ndef fibonacci(n):\n if n < 2: return n\n return fibonacci(n - 1) + fibonacci(n - 2)\n\nAnother memoization decorator with a limit on the cache size can be found here.\n",
"functools.cache has been released in Python 3.9 (docs):\nfrom functools import cache\n\n@cache\ndef factorial(n):\n return n * factorial(n-1) if n else 1\n\nIn previous Python versions, one of the early answers is still a valid solution: Using lru_cache as an ordinary cache without the limit and lru features. (docs)\n\nIf maxsize is set to None, the LRU feature is disabled and the cache\ncan grow without bound.\n\nHere is a prettier version of it:\ncache = lru_cache(maxsize=None)\n\n@cache\ndef func(param1):\n pass\n\n",
"class memorize(dict):\n def __init__(self, func):\n self.func = func\n\n def __call__(self, *args):\n return self[args]\n\n def __missing__(self, key):\n result = self[key] = self.func(*key)\n return result\n\nSample uses:\n>>> @memorize\n... def foo(a, b):\n... return a * b\n>>> foo(2, 4)\n8\n>>> foo\n{(2, 4): 8}\n>>> foo('hi', 3)\n'hihihi'\n>>> foo\n{(2, 4): 8, ('hi', 3): 'hihihi'}\n\n",
"Werkzeug has a cached_property decorator (docs, source)\n",
"I coded this simple decorator class to cache function responses. I find it VERY useful for my projects:\nfrom datetime import datetime, timedelta \n\nclass cached(object):\n def __init__(self, *args, **kwargs):\n self.cached_function_responses = {}\n self.default_max_age = kwargs.get(\"default_cache_max_age\", timedelta(seconds=0))\n\n def __call__(self, func):\n def inner(*args, **kwargs):\n max_age = kwargs.get('max_age', self.default_max_age)\n if not max_age or func not in self.cached_function_responses or (datetime.now() - self.cached_function_responses[func]['fetch_time'] > max_age):\n if 'max_age' in kwargs: del kwargs['max_age']\n res = func(*args, **kwargs)\n self.cached_function_responses[func] = {'data': res, 'fetch_time': datetime.now()}\n return self.cached_function_responses[func]['data']\n return inner\n\nThe usage is straightforward:\nimport time\n\n@cached\ndef myfunc(a):\n print \"in func\"\n return (a, datetime.now())\n\n@cached(default_max_age = timedelta(seconds=6))\ndef cacheable_test(a):\n print \"in cacheable test: \"\n return (a, datetime.now())\n\n\nprint cacheable_test(1,max_age=timedelta(seconds=5))\nprint cacheable_test(2,max_age=timedelta(seconds=5))\ntime.sleep(7)\nprint cacheable_test(3,max_age=timedelta(seconds=5))\n\n",
"DISCLAIMER: I'm the author of kids.cache.\nYou should check kids.cache, it provides a @cache decorator that works on python 2 and python 3. No dependencies, ~100 lines of code. It's very straightforward to use, for instance, with your code in mind, you could use it like this:\npip install kids.cache\n\nThen\nfrom kids.cache import cache\n...\nclass MyClass(object):\n ...\n @cache # <-- That's all you need to do\n @property\n def name(self):\n return 1 + 1 # supposedly expensive calculation\n\nOr you could put the @cache decorator after the @property (same result).\nUsing cache on a property is called lazy evaluation, kids.cache can do much more (it works on function with any arguments, properties, any type of methods, and even classes...). For advanced users, kids.cache supports cachetools which provides fancy cache stores to python 2 and python 3 (LRU, LFU, TTL, RR cache).\nIMPORTANT NOTE: the default cache store of kids.cache is a standard dict, which is not recommended for long running program with ever different queries as it would lead to an ever growing caching store. For this usage you can plugin other cache stores using for instance (@cache(use=cachetools.LRUCache(maxsize=2)) to decorate your function/property/class/method...)\n",
"Ah, just needed to find the right name for this: \"Lazy property evaluation\".\nI do this a lot too; maybe I'll use that recipe in my code sometime.\n",
"There is yet another example of a memoize decorator at Python Wiki:\nhttp://wiki.python.org/moin/PythonDecoratorLibrary#Memoize\nThat example is a bit smart, because it won't cache the results if the parameters are mutable. (check that code, it's very simple and interesting!)\n",
"If you are using Django Framework, it has such a property to cache a view or response of API's\nusing @cache_page(time) and there can be other options as well.\nExample:\n@cache_page(60 * 15, cache=\"special_cache\")\ndef my_view(request):\n ...\n\nMore details can be found here.\n",
"Try joblib\nhttps://joblib.readthedocs.io/en/latest/memory.html\nfrom joblib import Memory\nmemory = Memory(cachedir=cachedir, verbose=0)\[email protected]\n def f(x):\n print('Running f(%s)' % x)\n return x\n\n",
"There is fastcache, which is \"C implementation of Python 3 functools.lru_cache. Provides speedup of 10-30x over standard library.\"\nSame as chosen answer, just different import:\nfrom fastcache import lru_cache\n@lru_cache(maxsize=128, typed=False)\ndef f(a, b):\n pass\n\nAlso, it comes installed in Anaconda, unlike functools which needs to be installed.\n",
"Along with the Memoize Example I found the following python packages:\n\ncachepy; It allows to set up ttl and\\or the number of calls for cached functions; Also, one can use encrypted file-based cache...\npercache \n\n",
"@lru_cache is not good with default attrs\nmy @mem decorator:\nimport inspect\nfrom copy import deepcopy\nfrom functools import lru_cache, wraps\nfrom typing import Any, Callable, Dict, Iterable\n\n\n# helper\ndef get_all_kwargs_values(f: Callable, kwargs: Dict[str, Any]) -> Iterable[Any]:\n default_kwargs = {\n k: v.default\n for k, v in inspect.signature(f).parameters.items()\n if v.default is not inspect.Parameter.empty\n }\n\n all_kwargs = deepcopy(default_kwargs)\n all_kwargs.update(kwargs)\n\n for key in sorted(all_kwargs.keys()):\n yield all_kwargs[key]\n\n\n# the best decorator\ndef mem(func: Callable) -> Callable:\n cache = dict()\n\n @wraps(func)\n def wrapper(*args, **kwargs) -> Any:\n all_kwargs_values = get_all_kwargs_values(func, kwargs)\n params = (*args, *all_kwargs_values)\n _hash = hash(params)\n\n if _hash not in cache:\n cache[_hash] = func(*args, **kwargs)\n\n return cache[_hash]\n\n return wrapper\n\n\n# some logic\ndef counter(*args) -> int:\n print(f'* not_cached:', end='\\t')\n return sum(args)\n\n\n@mem\ndef check_mem(a, *args, z=10) -> int:\n return counter(a, *args, z)\n\n\n@lru_cache\ndef check_lru(a, *args, z=10) -> int:\n return counter(a, *args, z)\n\n\ndef test(func) -> None:\n print(f'\\nTest {func.__name__}:')\n\n print('*', func(1, 2, 3, 4, 5))\n print('*', func(1, 2, 3, 4, 5))\n print('*', func(1, 2, 3, 4, 5, z=6))\n print('*', func(1, 2, 3, 4, 5, z=6))\n print('*', func(1))\n print('*', func(1, z=10))\n\n\ndef main():\n test(check_mem)\n test(check_lru)\n\n\nif __name__ == '__main__':\n main()\n\noutput:\nTest check_mem:\n* not_cached: * 25\n* 25\n* not_cached: * 21\n* 21\n* not_cached: * 11\n* 11\n\nTest check_lru:\n* not_cached: * 25\n* 25\n* not_cached: * 21\n* 21\n* not_cached: * 11\n* not_cached: * 11\n\n",
"I implemented something like this, using pickle for persistance and using sha1 for short almost-certainly-unique IDs. Basically the cache hashed the code of the function and the hist of arguments to get a sha1 then looked for a file with that sha1 in the name. If it existed, it opened it and returned the result; if not, it calls the function and saves the result (optionally only saving if it took a certain amount of time to process).\nThat said, I'd swear I found an existing module that did this and find myself here trying to find that module... The closest I can find is this, which looks about right: http://chase-seibert.github.io/blog/2011/11/23/pythondjango-disk-based-caching-decorator.html\nThe only problem I see with that is it wouldn't work well for large inputs since it hashes str(arg), which isn't unique for giant arrays.\nIt would be nice if there were a unique_hash() protocol that had a class return a secure hash of its contents. I basically manually implemented that for the types I cared about.\n",
"If you are using Django and want to cache views, see Nikhil Kumar's answer. \n\nBut if you want to cache ANY function results, you can use django-cache-utils.\nIt reuses Django caches and provides easy to use cached decorator:\nfrom cache_utils.decorators import cached\n\n@cached(60)\ndef foo(x, y=0):\n print 'foo is called'\n return x+y\n\n",
"Function cache simple solution\nwith ttl (time to life) and max_entries\n\ndoesnt work when the decorated function takes unhashable types as input (e.g. dicts)\noptional parameter: ttl (time to live for every entry)\noptional parameter: max_entries (if too many cache argument combination to no clutter the storage)\nmake sure the function has no important side effects\n\nExample use\nimport time\n\n@cache(ttl=timedelta(minutes=3), max_entries=300)\ndef add(a, b):\n time.sleep(2)\n return a + b\n\n@cache()\ndef substract(a, b):\n time.sleep(2)\n return a - b\n\na = 5\n# function is called with argument combinations the first time -> it takes some time\nfor i in range(5):\n print(add(a, i))\n\n# function is called with same arguments again? -> will answer from cache\nfor i in range(5):\n print(add(a, i))\n\nCopy the decorator code\nfrom datetime import datetime, timedelta\n\ndef cache(**kwargs):\n def decorator(function):\n # static function variable for cache, lazy initialization\n try: function.cache\n except: function.cache = {}\n def wrapper(*args):\n # if nothing valid in cache, insert something\n if not args in function.cache or datetime.now() > function.cache[args]['expiry']:\n if 'max_entries' in kwargs:\n max_entries = kwargs['max_entries']\n if max_entries != None and len(function.cache) >= max_entries:\n now = datetime.now()\n # delete the the first expired entry that can be found (lazy deletion)\n for key in function.cache:\n if function.cache[key]['expiry'] < now:\n del function.cache[key]\n break\n # if nothing is expired that is deletable, delete the first\n if len(function.cache) >= max_entries:\n del function.cache[next(iter(function.cache))]\n function.cache[args] = {'result': function(*args), 'expiry': datetime.max if 'ttl' not in kwargs else datetime.now() + kwargs['ttl']}\n\n # answer from cache\n return function.cache[args]['result']\n return wrapper\n return decorator\n\n",
"from functools import wraps\n\n\ndef cache(maxsize=128):\n cache = {}\n\n def decorator(func):\n @wraps(func)\n def inner(*args, no_cache=False, **kwargs):\n if no_cache:\n return func(*args, **kwargs)\n\n key_base = \"_\".join(str(x) for x in args)\n key_end = \"_\".join(f\"{k}:{v}\" for k, v in kwargs.items())\n key = f\"{key_base}-{key_end}\"\n\n if key in cache:\n return cache[key]\n\n res = func(*args, **kwargs)\n\n if len(cache) > maxsize:\n del cache[list(cache.keys())[0]]\n cache[key] = res\n\n return res\n\n return inner\n\n return decorator\n\n\ndef async_cache(maxsize=128):\n cache = {}\n\n def decorator(func):\n @wraps(func)\n async def inner(*args, no_cache=False, **kwargs):\n if no_cache:\n return await func(*args, **kwargs)\n\n key_base = \"_\".join(str(x) for x in args)\n key_end = \"_\".join(f\"{k}:{v}\" for k, v in kwargs.items())\n key = f\"{key_base}-{key_end}\"\n\n if key in cache:\n return cache[key]\n\n res = await func(*args, **kwargs)\n\n if len(cache) > maxsize:\n del cache[list(cache.keys())[0]]\n cache[key] = res\n\n return res\n\n return inner\n\n return decorator\n\nExample use\nimport asyncio\nimport aiohttp\n\n\n# Removes the aiohttp ClientSession instance warning.\nclass HTTPSession(aiohttp.ClientSession):\n \"\"\" Abstract class for aiohttp. \"\"\"\n \n def __init__(self, loop=None) -> None:\n super().__init__(loop=loop or asyncio.get_event_loop())\n\n def __del__(self) -> None:\n if not self.closed:\n self.loop.run_until_complete(self.close())\n self.loop.close()\n \n\n return \n \n\n \n\nsession = HTTPSession()\n\n@async_cache()\nasync def query(url, method=\"get\", res_method=\"text\", *args, **kwargs):\n async with getattr(session, method.lower())(url, *args, **kwargs) as res:\n return await getattr(res, res_method)()\n\n\nasync def get(url, *args, **kwargs):\n return await query(url, \"get\", *args, **kwargs)\n \n\nasync def post(url, *args, **kwargs):\n return await query(url, \"post\", *args, **kwargs)\n\nasync def delete(url, *args, **kwargs):\n return await query(url, \"delete\", *args, **kwargs)\n\n",
"Create your own decorator and use it\nfrom django.core.cache import cache\nimport functools\n\ndef cache_returned_values(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n key = \"choose a unique key here\"\n results = cache.get(key)\n if not results:\n results = func(*args, **kwargs)\n cache.set(key, results)\n return results\n\n return wrapper\n\nNow at the function side\n@cache_returned_values\ndef get_some_values(args):\n return x\n\n"
] | [
278,
48,
39,
30,
27,
12,
10,
8,
7,
4,
4,
4,
4,
3,
3,
2,
2,
2,
1,
0
] | [] | [] | [
"caching",
"decorator",
"memoization",
"python"
] | stackoverflow_0000815110_caching_decorator_memoization_python.txt |
Q:
Removing specific key value pairs from geoJSON object
The following is a subset of my GeoJSON object, which has a combination of MultiPolygons and GeometryCollections in its features. The GeometryCollections include multiple geometry types.
json_str = '{"type": "FeatureCollection",
"features": [
{"id": "0",
"type": "Feature",
"properties": {"Date": "2019/07/10", "PID": "P1"},
"geometry": {"type": "GeometryCollection",
"geometries": [
{"type": "MultiPolygon",
"coordinates": [[[[138.5765, -35.0101], [138.5764, -35.0113], [138.5776, -35.0119], [138.5757, -35.0123], [138.5744, -35.013], [138.5739, -35.0119], [138.574, -35.0115], [138.5746, -35.0101], [138.5757, -35.0097], [138.5773, -35.0088], [138.5765, -35.0101]]], [[[138.6124, -35.0016], [138.612, -35.0011], [138.613, -35.0006], [138.6134, -35.0008], [138.6143, -35.0011], [138.613, -35.0024], [138.6124, -35.0016]]]]},
{"type": "MultiLineString",
"coordinates": [[[138.5625, -34.9778], [138.5609, -34.9791]], [[138.6042, -34.9885], [138.6042, -34.9886]]]},
{"type": "Point",
"coordinates": [138.6656, -34.8842]}]}},
{"id": "1",
"type": "Feature",
"properties": {"Date": "2019/07/10", "PID": "P2"},
"geometry": {
"type": "MultiPolygon",
"coordinates": [[[[138.5731, -34.9273], [138.5741, -34.9281], [138.5752, -34.9273], [138.5763, -34.9266], [138.5779, -34.9269], [138.5785, -34.9268], [138.5797, -34.9272], [138.5807, -34.928], [138.5814, -34.9276], [138.5809, -34.9282], [138.5807, -34.9285], [138.5794, -34.9292], [138.5785, -34.9299], [138.5766, -34.93], [138.5783, -34.9302], [138.5785, -34.9303], [138.5793, -34.9312], [138.5796, -34.9318], [138.579, -34.9332], [138.5795, -34.9336], [138.5803, -34.934], [138.5807, -34.9341], [138.5816, -34.9347], [138.5821, -34.9354], [138.5822, -34.9359], [138.5829, -34.9368], [138.5831, -34.937], [138.5836, -34.9372], [138.5843, -34.9379], [138.5843, -34.939], [138.5829, -34.9394], [138.5823, -34.9395], [138.5817, -34.939], [138.5823, -34.9377], [138.5807, -34.9385], [138.5792, -34.939], [138.5785, -34.9396], [138.5771, -34.9408], [138.5769, -34.9421], [138.5769, -34.9426], [138.5768, -34.944], [138.5785, -34.9437], [138.5788, -34.9441], [138.5789, -34.9444], [138.579, -34.9458], [138.5797, -34.9462], [138.5798, -34.9469], [138.5797, -34.948], [138.5798, -34.9487], [138.5793, -34.9498], [138.5785, -34.9507], [138.577, -34.9516], [138.5763, -34.9523], [138.5752, -34.9534], [138.5741, -34.9543], [138.5727, -34.9545], [138.5719, -34.954], [138.5713, -34.9552], [138.5697, -34.9567], [138.5692, -34.957], [138.5675, -34.9579], [138.5664, -34.9588], [138.5653, -34.9604], [138.5646, -34.9606], [138.5631, -34.9608], [138.5612, -34.9624], [138.5609, -34.9629], [138.56, -34.9642], [138.5597, -34.9652], [138.5609, -34.9655], [138.5613, -34.9657], [138.5618, -34.966], [138.5609, -34.9664], [138.56, -34.9667], [138.5587, -34.967], [138.5565, -34.9678], [138.5565, -34.9678], [138.5565, -34.9678], [138.5565, -34.9678], [138.5565, -34.9678], [138.5579, -34.9685], [138.5587, -34.9688], [138.5592, -34.9692], [138.5609, -34.9695], [138.561, -34.9695], [138.5631, -34.9695], [138.5632, -34.9695], [138.5653, -34.9693], [138.5656, -34.9694], [138.566, -34.9696], [138.5667, -34.9702], [138.5675, -34.9705], [138.5688, -34.9703], [138.5697, -34.9709], [138.5701, -34.971], [138.5711, -34.9714], [138.5697, -34.972], [138.5692, -34.9732], [138.5675, -34.975], [138.5697, -34.9738], [138.5705, -34.9743], [138.5719, -34.975], [138.5732, -34.9732], [138.5734, -34.972], [138.5723, -34.9714], [138.5741, -34.9709], [138.5744, -34.9711], [138.5763, -34.9714], [138.5763, -34.9714], [138.5763, -34.9714], [138.5782, -34.9716], [138.5785, -34.9717], [138.5798, -34.9714], [138.5807, -34.9713], [138.581, -34.9711], [138.5829, -34.9708], [138.5837, -34.9707], [138.5835, -34.9714], [138.5836, -34.9726], [138.5838, -34.9732], [138.5841, -34.974], [138.5842, -34.975], [138.5844, -34.9755], [138.5847, -34.9768], [138.5847, -34.9771], [138.5851, -34.9785], [138.5851, -34.9786], [138.5851, -34.9786], [138.5856, -34.98], [138.5858, -34.9804], [138.5857, -34.9816], [138.5859, -34.9822], [138.5864, -34.9829], [138.5865, -34.984], [138.5864, -34.9847], [138.5869, -34.9858], [138.587, -34.986], [138.5872, -34.9876], [138.5856, -34.989], [138.5873, -34.988], [138.5875, -34.9892], [138.5874, -34.9894], [138.5873, -34.9897], [138.586, -34.9912], [138.5856, -34.9925], [138.5858, -34.993], [138.5854, -34.9945], [138.5873, -34.994], [138.5877, -34.9944], [138.5879, -34.9948], [138.5876, -34.9963], [138.5875, -34.9966], [138.5873, -34.9971], [138.5866, -34.9984], [138.5864, -34.999], [138.5863, -35.0002], [138.5851, -35.0007], [138.5837, -35.0013], [138.5829, -35.0002], [138.5829, -35.0001], [138.5829, -35.0001], [138.5828, -35.0002], [138.5828, 
-35.0002], [138.582, -35.002], [138.5823, -35.0024], [138.5825, -35.0038], [138.5807, -35.0053], [138.5789, -35.0056], [138.5802, -35.006], [138.5797, -35.0074], [138.5785, -35.0084], [138.5767, -35.0088], [138.5764, -35.0074], [138.5782, -35.0058], [138.5763, -35.0073], [138.5761, -35.0074], [138.5762, -35.0075], [138.5759, -35.0092], [138.5741, -35.0097], [138.5739, -35.011], [138.5719, -35.0126], [138.5705, -35.0121], [138.5697, -35.0119], [138.5695, -35.0128], [138.5693, -35.0131], [138.568, -35.0146], [138.5675, -35.0163], [138.5675, -35.0164], [138.5678, -35.0179], [138.5697, -35.0178], [138.5701, -35.0178], [138.5719, -35.0177], [138.5727, -35.0175], [138.5741, -35.0176], [138.5759, -35.0167], [138.5763, -35.0165], [138.5766, -35.0164], [138.5776, -35.0153], [138.5783, -35.0146], [138.5785, -35.0145], [138.5786, -35.0145], [138.58, -35.0146], [138.5806, -35.0146], [138.5807, -35.0158], [138.5809, -35.0146], [138.5829, -35.0142], [138.5834, -35.0141], [138.5851, -35.0137], [138.5868, -35.0132], [138.5873, -35.0133], [138.5878, -35.0128], [138.5894, -35.0122], [138.5904, -35.012], [138.5916, -35.0123], [138.5925, -35.011], [138.5936, -35.0093], [138.5937, -35.0092], [138.5938, -35.0091], [138.5939, -35.0091], [138.5942, -35.0092], [138.5955, -35.0096], [138.596, -35.0097], [138.5965, -35.0092], [138.5982, -35.0082], [138.5997, -35.0079], [138.6004, -35.0083], [138.6012, -35.0074], [138.6026, -35.0063], [138.6042, -35.0061], [138.6048, -35.007], [138.6069, -35.0056], [138.6053, -35.0051], [138.6048, -35.0045], [138.6042, -35.0043], [138.6045, -35.0038], [138.6048, -35.0034], [138.605, -35.0036], [138.607, -35.0032], [138.6075, -35.0034], [138.6092, -35.0022], [138.6106, -35.0026], [138.6114, -35.0033], [138.6119, -35.0033], [138.6126, -35.0038], [138.6134, -35.004], [138.6136, -35.004], [138.6148, -35.0046], [138.6158, -35.0051], [138.6175, -35.0041], [138.618, -35.004], [138.6181, -35.0038], [138.6202, -35.002], [138.6203, -35.002], [138.6202, -35.0019], [138.6202, -35.0017], [138.6195, -35.0008], [138.618, -35.0004], [138.6158, -35.002], [138.6158, -35.002], [138.6156, -35.0021], [138.6157, -35.002], [138.6143, -35.0013], [138.6136, -35.0008], [138.6128, -35.0008], [138.6118, -35.0002], [138.6115, -35.0001], [138.6114, -35.0], [138.61, -34.9995], [138.6092, -34.9988], [138.6087, -34.9988], [138.6085, -34.9984], [138.6092, -34.998], [138.6111, -34.9966], [138.6114, -34.9958], [138.612, -34.9961], [138.6136, -34.9957], [138.6143, -34.996], [138.6158, -34.9963], [138.617, -34.9948], [138.6167, -34.994], [138.6163, -34.993], [138.6163, -34.9925], [138.6165, -34.9912], [138.6172, -34.99], [138.6175, -34.9894], [138.6179, -34.9877], [138.6178, -34.9876], [138.6176, -34.9861], [138.6176, -34.9858], [138.618, -34.9855], [138.619, -34.9849], [138.6202, -34.9842], [138.6219, -34.984], [138.6224, -34.9839], [138.6225, -34.9839], [138.6226, -34.984], [138.6241, -34.9844], [138.6246, -34.9845], [138.6252, -34.984], [138.6268, -34.9824], [138.6273, -34.9822], [138.6289, -34.9809], [138.6293, -34.9804], [138.6311, -34.9789], [138.6316, -34.9786], [138.6316, -34.9782], [138.6316, -34.9768], [138.6317, -34.9763], [138.6321, -34.975], [138.633, -34.9735], [138.633, -34.9732], [138.6333, -34.973], [138.6345, -34.9714], [138.6342, -34.9707], [138.6333, -34.9704], [138.6323, -34.9704], [138.6314, -34.9696], [138.6332, -34.9679], [138.6311, -34.9695], [138.6292, -34.9694], [138.6289, -34.9692], [138.6279, -34.9686], [138.628, -34.9678], [138.6286, -34.9663], [138.6286, -34.966], [138.6289, -34.9656], 
[138.6293, -34.9657], [138.6311, -34.9656], [138.6321, -34.9652], [138.6317, -34.966], [138.6326, -34.9666], [138.6333, -34.9678], [138.6334, -34.9678], [138.6334, -34.9678], [138.6349, -34.9683], [138.6355, -34.9681], [138.6359, -34.9678], [138.636, -34.9674], [138.6363, -34.966], [138.6366, -34.9651], [138.6364, -34.9642], [138.636, -34.9638], [138.6355, -34.9637], [138.6338, -34.9638], [138.6333, -34.9635], [138.6326, -34.963], [138.6326, -34.9624], [138.6323, -34.9614], [138.6321, -34.9606], [138.6317, -34.9601], [138.6311, -34.9593], [138.6307, -34.9592], [138.6289, -34.9593], [138.6286, -34.9591], [138.6282, -34.9588], [138.6288, -34.9572], [138.6287, -34.957], [138.6281, -34.9559], [138.6283, -34.9552], [138.6289, -34.9544], [138.6298, -34.9534], [138.6311, -34.9527], [138.6317, -34.953], [138.632, -34.9534], [138.6325, -34.9541], [138.6333, -34.9546], [138.6339, -34.9547], [138.6341, -34.9552], [138.6347, -34.9559], [138.6355, -34.9561], [138.6365, -34.9562], [138.6368, -34.957], [138.6372, -34.9574], [138.6377, -34.9581], [138.6387, -34.958], [138.6383, -34.9588], [138.6391, -34.9595], [138.6398, -34.9606], [138.6398, -34.9607], [138.6399, -34.9616], [138.6401, -34.9623], [138.6403, -34.9624], [138.6399, -34.9625], [138.6385, -34.9642], [138.6388, -34.9651], [138.6391, -34.966], [138.6392, -34.9666], [138.6399, -34.9667], [138.642, -34.966], [138.6421, -34.966], [138.6422, -34.966], [138.6443, -34.9647], [138.6453, -34.9642], [138.6453, -34.9634], [138.6457, -34.9624], [138.6451, -34.9618], [138.6443, -34.9611], [138.6431, -34.9616], [138.6436, -34.9606], [138.6429, -34.9599], [138.6422, -34.9588], [138.6422, -34.9588], [138.6421, -34.9587], [138.6411, -34.9579], [138.6407, -34.957], [138.6404, -34.9566], [138.6399, -34.9559], [138.6391, -34.9558], [138.6394, -34.9552], [138.6388, -34.9543], [138.6385, -34.9534], [138.6385, -34.9528], [138.6384, -34.9516], [138.6381, -34.9513], [138.6377, -34.951], [138.6366, -34.9507], [138.636, -34.9498], [138.6358, -34.9496], [138.6355, -34.9494], [138.6343, -34.949], [138.6333, -34.948], [138.6333, -34.948], [138.6333, -34.948], [138.6321, -34.9472], [138.6319, -34.9462], [138.6315, -34.9459], [138.6311, -34.9455], [138.6301, -34.9453], [138.6289, -34.9449], [138.6284, -34.9449], [138.6284, -34.9444], [138.6278, -34.9435], [138.6275, -34.9426], [138.6283, -34.9413], [138.6285, -34.9408], [138.6283, -34.9395], [138.6278, -34.939], [138.6271, -34.9388], [138.6268, -34.9386], [138.6257, -34.9381], [138.6254, -34.9372], [138.6254, -34.9365], [138.6246, -34.9355], [138.6245, -34.9354], [138.6245, -34.9354], [138.6238, -34.9342], [138.623, -34.9336], [138.6233, -34.9329], [138.6237, -34.9318], [138.6229, -34.9314], [138.6224, -34.9311], [138.6219, -34.9304], [138.6216, -34.93], [138.6221, -34.9284], [138.6221, -34.9282], [138.6215, -34.9271], [138.6211, -34.9264], [138.6214, -34.9255], [138.6217, -34.9246], [138.6224, -34.9239], [138.6234, -34.9238], [138.6246, -34.9238], [138.6258, -34.9236], [138.6268, -34.9236], [138.6285, -34.9232], [138.6289, -34.9231], [138.6304, -34.9228], [138.6311, -34.9226], [138.6315, -34.9225], [138.6333, -34.922], [138.6343, -34.921], [138.6339, -34.9205], [138.6333, -34.9203], [138.6322, -34.9202], [138.6311, -34.9202], [138.6302, -34.92], [138.6289, -34.9197], [138.6284, -34.9197], [138.6268, -34.9196], [138.6264, -34.9196], [138.6246, -34.9196], [138.6239, -34.9198], [138.6224, -34.9199], [138.6205, -34.921], [138.6202, -34.9215], [138.6191, -34.9228], [138.618, -34.9231], [138.6175, -34.9232], [138.6163, -34.9228], 
[138.6159, -34.9227], [138.6159, -34.921], [138.6161, -34.9208], [138.6168, -34.9192], [138.6178, -34.9175], [138.6158, -34.9183], [138.614, -34.9192], [138.6136, -34.9199], [138.6135, -34.9193], [138.6114, -34.9196], [138.6111, -34.9195], [138.6099, -34.9192], [138.6095, -34.919], [138.6092, -34.9189], [138.6085, -34.918], [138.6079, -34.9174], [138.6081, -34.9166], [138.6081, -34.9156], [138.6078, -34.915], [138.6079, -34.9138], [138.6083, -34.9128], [138.6081, -34.912], [138.6085, -34.9108], [138.6085, -34.9102], [138.6092, -34.9097], [138.6099, -34.9097], [138.6114, -34.9094], [138.6135, -34.9084], [138.6136, -34.9084], [138.6145, -34.9066], [138.6156, -34.905], [138.6157, -34.9048], [138.6154, -34.9033], [138.6152, -34.903], [138.6147, -34.9021], [138.6146, -34.9012], [138.6156, -34.8996], [138.6156, -34.8994], [138.6158, -34.8993], [138.6173, -34.8976], [138.618, -34.8971], [138.619, -34.8958], [138.6188, -34.8952], [138.6184, -34.894], [138.6182, -34.8939], [138.618, -34.8937], [138.6162, -34.8937], [138.6158, -34.8937], [138.6147, -34.894], [138.6136, -34.8945], [138.613, -34.8945], [138.6114, -34.8956], [138.6103, -34.8949], [138.6092, -34.8942], [138.6087, -34.8945], [138.607, -34.8952], [138.6063, -34.8958], [138.6056, -34.897], [138.607, -34.8964], [138.608, -34.8969], [138.608, -34.8976], [138.6076, -34.8989], [138.6073, -34.8994], [138.607, -34.8998], [138.6051, -34.901], [138.6048, -34.9011], [138.6047, -34.9012], [138.6034, -34.9024], [138.6048, -34.9027], [138.6052, -34.9027], [138.6052, -34.903], [138.6048, -34.9034], [138.6042, -34.9035], [138.6026, -34.9038], [138.6019, -34.9036], [138.6004, -34.904], [138.5994, -34.9039], [138.5982, -34.9033], [138.598, -34.9032], [138.596, -34.9031], [138.5959, -34.9031], [138.5958, -34.903], [138.5951, -34.902], [138.5947, -34.9012], [138.5943, -34.9009], [138.5938, -34.9008], [138.5931, -34.9], [138.5927, -34.8994], [138.593, -34.8984], [138.5916, -34.8982], [138.5913, -34.8979], [138.5906, -34.8976], [138.5911, -34.8963], [138.5894, -34.8972], [138.5883, -34.8968], [138.5876, -34.8958], [138.5885, -34.8948], [138.5873, -34.8955], [138.5857, -34.8953], [138.5851, -34.895], [138.5829, -34.8958], [138.5841, -34.8967], [138.5834, -34.8976], [138.5829, -34.8992], [138.5815, -34.8994], [138.5815, -34.9005], [138.5829, -34.8996], [138.5838, -34.9004], [138.5851, -34.901], [138.5852, -34.9011], [138.5853, -34.9012], [138.5855, -34.9027], [138.5857, -34.903], [138.5851, -34.9047], [138.5849, -34.9048], [138.5829, -34.9056], [138.5819, -34.9056], [138.5807, -34.9058], [138.5796, -34.9066], [138.5798, -34.9074], [138.5807, -34.9083], [138.5808, -34.9083], [138.5829, -34.9076], [138.5848, -34.9066], [138.5851, -34.9054], [138.5852, -34.9065], [138.5852, -34.9066], [138.5851, -34.9068], [138.5837, -34.9084], [138.583, -34.9101], [138.5851, -34.9094], [138.5858, -34.9096], [138.5857, -34.9102], [138.5856, -34.9116], [138.5861, -34.912], [138.5866, -34.9126], [138.5873, -34.9129], [138.5882, -34.9131], [138.5894, -34.9132], [138.59, -34.9134], [138.5902, -34.9138], [138.5901, -34.9151], [138.5908, -34.9156], [138.5905, -34.9166], [138.5899, -34.9174], [138.5894, -34.9183], [138.5884, -34.9183], [138.5873, -34.9175], [138.5872, -34.9175], [138.5872, -34.9174], [138.5859, -34.9168], [138.5851, -34.9174], [138.5845, -34.9174], [138.5829, -34.9175], [138.5828, -34.9175], [138.5827, -34.9174], [138.582, -34.9164], [138.5809, -34.9156], [138.5827, -34.9139], [138.5828, -34.9138], [138.5823, -34.9125], [138.5807, -34.9136], [138.58, -34.9126], 
[138.5797, -34.912], [138.5797, -34.911], [138.5785, -34.9113], [138.5773, -34.9112], [138.5763, -34.9108], [138.5751, -34.9112], [138.5741, -34.9116], [138.5719, -34.912], [138.5719, -34.912], [138.5718, -34.912], [138.5697, -34.913], [138.568, -34.9134], [138.5675, -34.9137], [138.567, -34.9138], [138.5672, -34.9141], [138.5675, -34.9142], [138.5688, -34.9146], [138.5697, -34.9151], [138.5699, -34.9155], [138.5708, -34.9156], [138.5702, -34.9171], [138.5719, -34.9157], [138.5737, -34.916], [138.5741, -34.9161], [138.575, -34.9167], [138.575, -34.9174], [138.5747, -34.9187], [138.5755, -34.9192], [138.5741, -34.9196], [138.5736, -34.9196], [138.5719, -34.9196], [138.5703, -34.921], [138.5697, -34.9213], [138.5693, -34.9214], [138.568, -34.921], [138.5693, -34.9195], [138.5675, -34.9208], [138.5656, -34.9208], [138.5653, -34.9208], [138.5648, -34.921], [138.5631, -34.9217], [138.5624, -34.9216], [138.5609, -34.9214], [138.5605, -34.9214], [138.5587, -34.9215], [138.5579, -34.9217], [138.5565, -34.9222], [138.5546, -34.9226], [138.5543, -34.9227], [138.5538, -34.9228], [138.5521, -34.9232], [138.5503, -34.9243], [138.55, -34.9245], [138.548, -34.9246], [138.5487, -34.9256], [138.55, -34.9249], [138.5505, -34.926], [138.5509, -34.9264], [138.551, -34.9274], [138.5513, -34.9282], [138.5516, -34.9287], [138.5516, -34.93], [138.5517, -34.9304], [138.5521, -34.9314], [138.5529, -34.93], [138.5528, -34.9295], [138.5529, -34.9282], [138.5537, -34.9269], [138.554, -34.9264], [138.5543, -34.9263], [138.5546, -34.9262], [138.5565, -34.9262], [138.5568, -34.9262], [138.5573, -34.9264], [138.5584, -34.9267], [138.5587, -34.9269], [138.5603, -34.9269], [138.5609, -34.9274], [138.5623, -34.9271], [138.5631, -34.9269], [138.5646, -34.927], [138.5653, -34.927], [138.5662, -34.9275], [138.5675, -34.9282], [138.5664, -34.9291], [138.5675, -34.9294], [138.5694, -34.9282], [138.5677, -34.9281], [138.5693, -34.9264], [138.5697, -34.9263], [138.5699, -34.9263], [138.5719, -34.9261], [138.5722, -34.9262], [138.5724, -34.9264], [138.5731, -34.9273]], [[138.6201, -34.9678], [138.6201, -34.9678], [138.6195, -34.9665], [138.6194, -34.966], [138.6202, -34.9653], [138.6209, -34.9654], [138.622, -34.966], [138.6208, -34.9673], [138.6224, -34.9665], [138.6245, -34.966], [138.6246, -34.966], [138.6246, -34.966], [138.6246, -34.966], [138.6246, -34.966], [138.6227, -34.9678], [138.6224, -34.9681], [138.622, -34.9681], [138.6202, -34.9678], [138.6201, -34.9678]], [[138.642, -34.9643], [138.642, -34.9642], [138.6421, -34.964], [138.6422, -34.9641], [138.6423, -34.9642], [138.6421, -34.9652], [138.642, -34.9643]], [[138.5719, -34.9643], [138.5718, -34.9643], [138.5697, -34.9645], [138.5692, -34.9646], [138.5675, -34.9648], [138.5663, -34.9652], [138.5659, -34.9642], [138.5675, -34.963], [138.5686, -34.9633], [138.5697, -34.9637], [138.5706, -34.9624], [138.5719, -34.9613], [138.5731, -34.9606], [138.5741, -34.9602], [138.5745, -34.9603], [138.5753, -34.9606], [138.5745, -34.9621], [138.5763, -34.9624], [138.5763, -34.9624], [138.5763, -34.9624], [138.5763, -34.9624], [138.5745, -34.9642], [138.5741, -34.9644], [138.5739, -34.9643], [138.5719, -34.9643]], [[138.5824, -34.9628], [138.5822, -34.9624], [138.5818, -34.9614], [138.5812, -34.9606], [138.5814, -34.96], [138.5811, -34.9588], [138.5829, -34.9583], [138.5838, -34.958], [138.5838, -34.9588], [138.5839, -34.9597], [138.5838, -34.9606], [138.5834, -34.9619], [138.5832, -34.9624], [138.5829, -34.9627], [138.5824, -34.9628]], [[138.6047, -34.9985], [138.6047, -34.9984], 
[138.6048, -34.9982], [138.605, -34.9982], [138.6055, -34.9984], [138.6048, -34.9985], [138.6047, -34.9985]]], [[[138.6132, -34.916], [138.6136, -34.9163], [138.6156, -34.9156], [138.6143, -34.9151], [138.6144, -34.9138], [138.6146, -34.913], [138.615, -34.912], [138.6148, -34.911], [138.6142, -34.9102], [138.6139, -34.91], [138.6136, -34.9085], [138.6124, -34.9102], [138.6114, -34.9112], [138.6108, -34.912], [138.611, -34.9123], [138.6114, -34.9125], [138.6123, -34.9131], [138.6127, -34.9138], [138.6129, -34.9144], [138.6131, -34.9156], [138.6132, -34.916]]]]}}]}'
I'm trying to iterate over all my features and remove all geometries that are not MultiPolygons or Polygons from my GeometryCollections, and then convert the GeometryCollections themselves into MultiPolygons as well.
I have tried the following in Python, with no luck
import json
geojson_obj = json.loads(json_str)
features = geojson_obj['features']
for feature in features:
if feature['geometry']['type'] == 'GeometryCollection':
for geometry in feature['geometry']['geometries']:
if geometry['type'] != 'MultiPolygon':
geometry = None #maybe?
#print(geometry)
A:
It's generally easier to assemble a new list that has only the items you want, instead of individually deleting the unwanted items from the old list.
And then, if you like, you can replace the old list with the new list.
A simple example of this:
numbers = [0,1,2,3,4,5,6,7,8,9]
# replace numbers with a version of itself that only contains
# numbers that are odd (i.e. n%2 is nonzero)
numbers = [n for n in numbers if n%2]
# numbers is now [1,3,5,7,9]
Applying this concept to your code would look something like this:
for feature in geojson_obj["features"]:
    if feature["geometry"]["type"] == "GeometryCollection":
        # GeoJSON type names are case-sensitive, so match "MultiPolygon", not "Multipolygon"
        feature["geometry"]["geometries"] = [geom for geom in feature["geometry"]["geometries"] if geom["type"] in ("Polygon", "MultiPolygon")]
A:
You could create a new list of just the things you want to keep and substitute it into the original data structure. In the example case, this could be done with a list comprehension, but I've written a procedural example because of the other work you'd like to do.
import json
geojson_obj = json.loads(json_str)
features = geojson_obj['features']
for feature in features:
if feature['geometry']['type'] == 'GeometryCollection':
keep_geometries = []
for geometry in feature['geometry']['geometries']:
if geometry['type'] == 'MultiPolygon':
keep_geometries.append(geometry)
feature['geometry']['geometries'] = keep_geometries
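For the second half of the goal, turning each filtered GeometryCollection into a single MultiPolygon, here is a minimal sketch of one way to do it. It assumes the remaining geometries are Polygons or MultiPolygons and simply merges their coordinate arrays (in GeoJSON, a MultiPolygon's coordinates are just a list of Polygon coordinate arrays); it does not dissolve overlapping shapes, for which you would need a geometry library such as shapely.
for feature in features:
    geometry = feature['geometry']
    if geometry['type'] == 'GeometryCollection':
        coords = []
        for geom in geometry['geometries']:
            if geom['type'] == 'Polygon':
                # one Polygon contributes one entry of the MultiPolygon
                coords.append(geom['coordinates'])
            elif geom['type'] == 'MultiPolygon':
                coords.extend(geom['coordinates'])
        feature['geometry'] = {'type': 'MultiPolygon', 'coordinates': coords}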
| Removing specific key value pairs from geoJSON object | The following is a subset of my geoJSON object which has a combination of Multipolygons and GeometryCollections in its features. The GeometryCollections include multiple types of geometries.
json_str = '{"type": "FeatureCollection",
"features": [
{"id": "0",
"type": "Feature",
"properties": {"Date": "2019/07/10", "PID": "P1"},
"geometry": {"type": "GeometryCollection",
"geometries": [
{"type": "MultiPolygon",
"coordinates": [[[[138.5765, -35.0101], [138.5764, -35.0113], [138.5776, -35.0119], [138.5757, -35.0123], [138.5744, -35.013], [138.5739, -35.0119], [138.574, -35.0115], [138.5746, -35.0101], [138.5757, -35.0097], [138.5773, -35.0088], [138.5765, -35.0101]]], [[[138.6124, -35.0016], [138.612, -35.0011], [138.613, -35.0006], [138.6134, -35.0008], [138.6143, -35.0011], [138.613, -35.0024], [138.6124, -35.0016]]]]},
{"type": "MultiLineString",
"coordinates": [[[138.5625, -34.9778], [138.5609, -34.9791]], [[138.6042, -34.9885], [138.6042, -34.9886]]]},
{"type": "Point",
"coordinates": [138.6656, -34.8842]}]}},
{"id": "1",
"type": "Feature",
"properties": {"Date": "2019/07/10", "PID": "P2"},
"geometry": {
"type": "MultiPolygon",
"coordinates": [[[[138.5731, -34.9273], [138.5741, -34.9281], [138.5752, -34.9273], [138.5763, -34.9266], [138.5779, -34.9269], [138.5785, -34.9268], [138.5797, -34.9272], [138.5807, -34.928], [138.5814, -34.9276], [138.5809, -34.9282], [138.5807, -34.9285], [138.5794, -34.9292], [138.5785, -34.9299], [138.5766, -34.93], [138.5783, -34.9302], [138.5785, -34.9303], [138.5793, -34.9312], [138.5796, -34.9318], [138.579, -34.9332], [138.5795, -34.9336], [138.5803, -34.934], [138.5807, -34.9341], [138.5816, -34.9347], [138.5821, -34.9354], [138.5822, -34.9359], [138.5829, -34.9368], [138.5831, -34.937], [138.5836, -34.9372], [138.5843, -34.9379], [138.5843, -34.939], [138.5829, -34.9394], [138.5823, -34.9395], [138.5817, -34.939], [138.5823, -34.9377], [138.5807, -34.9385], [138.5792, -34.939], [138.5785, -34.9396], [138.5771, -34.9408], [138.5769, -34.9421], [138.5769, -34.9426], [138.5768, -34.944], [138.5785, -34.9437], [138.5788, -34.9441], [138.5789, -34.9444], [138.579, -34.9458], [138.5797, -34.9462], [138.5798, -34.9469], [138.5797, -34.948], [138.5798, -34.9487], [138.5793, -34.9498], [138.5785, -34.9507], [138.577, -34.9516], [138.5763, -34.9523], [138.5752, -34.9534], [138.5741, -34.9543], [138.5727, -34.9545], [138.5719, -34.954], [138.5713, -34.9552], [138.5697, -34.9567], [138.5692, -34.957], [138.5675, -34.9579], [138.5664, -34.9588], [138.5653, -34.9604], [138.5646, -34.9606], [138.5631, -34.9608], [138.5612, -34.9624], [138.5609, -34.9629], [138.56, -34.9642], [138.5597, -34.9652], [138.5609, -34.9655], [138.5613, -34.9657], [138.5618, -34.966], [138.5609, -34.9664], [138.56, -34.9667], [138.5587, -34.967], [138.5565, -34.9678], [138.5565, -34.9678], [138.5565, -34.9678], [138.5565, -34.9678], [138.5565, -34.9678], [138.5579, -34.9685], [138.5587, -34.9688], [138.5592, -34.9692], [138.5609, -34.9695], [138.561, -34.9695], [138.5631, -34.9695], [138.5632, -34.9695], [138.5653, -34.9693], [138.5656, -34.9694], [138.566, -34.9696], [138.5667, -34.9702], [138.5675, -34.9705], [138.5688, -34.9703], [138.5697, -34.9709], [138.5701, -34.971], [138.5711, -34.9714], [138.5697, -34.972], [138.5692, -34.9732], [138.5675, -34.975], [138.5697, -34.9738], [138.5705, -34.9743], [138.5719, -34.975], [138.5732, -34.9732], [138.5734, -34.972], [138.5723, -34.9714], [138.5741, -34.9709], [138.5744, -34.9711], [138.5763, -34.9714], [138.5763, -34.9714], [138.5763, -34.9714], [138.5782, -34.9716], [138.5785, -34.9717], [138.5798, -34.9714], [138.5807, -34.9713], [138.581, -34.9711], [138.5829, -34.9708], [138.5837, -34.9707], [138.5835, -34.9714], [138.5836, -34.9726], [138.5838, -34.9732], [138.5841, -34.974], [138.5842, -34.975], [138.5844, -34.9755], [138.5847, -34.9768], [138.5847, -34.9771], [138.5851, -34.9785], [138.5851, -34.9786], [138.5851, -34.9786], [138.5856, -34.98], [138.5858, -34.9804], [138.5857, -34.9816], [138.5859, -34.9822], [138.5864, -34.9829], [138.5865, -34.984], [138.5864, -34.9847], [138.5869, -34.9858], [138.587, -34.986], [138.5872, -34.9876], [138.5856, -34.989], [138.5873, -34.988], [138.5875, -34.9892], [138.5874, -34.9894], [138.5873, -34.9897], [138.586, -34.9912], [138.5856, -34.9925], [138.5858, -34.993], [138.5854, -34.9945], [138.5873, -34.994], [138.5877, -34.9944], [138.5879, -34.9948], [138.5876, -34.9963], [138.5875, -34.9966], [138.5873, -34.9971], [138.5866, -34.9984], [138.5864, -34.999], [138.5863, -35.0002], [138.5851, -35.0007], [138.5837, -35.0013], [138.5829, -35.0002], [138.5829, -35.0001], [138.5829, -35.0001], [138.5828, -35.0002], [138.5828, 
-35.0002], [138.582, -35.002], [138.5823, -35.0024], [138.5825, -35.0038], [138.5807, -35.0053], [138.5789, -35.0056], [138.5802, -35.006], [138.5797, -35.0074], [138.5785, -35.0084], [138.5767, -35.0088], [138.5764, -35.0074], [138.5782, -35.0058], [138.5763, -35.0073], [138.5761, -35.0074], [138.5762, -35.0075], [138.5759, -35.0092], [138.5741, -35.0097], [138.5739, -35.011], [138.5719, -35.0126], [138.5705, -35.0121], [138.5697, -35.0119], [138.5695, -35.0128], [138.5693, -35.0131], [138.568, -35.0146], [138.5675, -35.0163], [138.5675, -35.0164], [138.5678, -35.0179], [138.5697, -35.0178], [138.5701, -35.0178], [138.5719, -35.0177], [138.5727, -35.0175], [138.5741, -35.0176], [138.5759, -35.0167], [138.5763, -35.0165], [138.5766, -35.0164], [138.5776, -35.0153], [138.5783, -35.0146], [138.5785, -35.0145], [138.5786, -35.0145], [138.58, -35.0146], [138.5806, -35.0146], [138.5807, -35.0158], [138.5809, -35.0146], [138.5829, -35.0142], [138.5834, -35.0141], [138.5851, -35.0137], [138.5868, -35.0132], [138.5873, -35.0133], [138.5878, -35.0128], [138.5894, -35.0122], [138.5904, -35.012], [138.5916, -35.0123], [138.5925, -35.011], [138.5936, -35.0093], [138.5937, -35.0092], [138.5938, -35.0091], [138.5939, -35.0091], [138.5942, -35.0092], [138.5955, -35.0096], [138.596, -35.0097], [138.5965, -35.0092], [138.5982, -35.0082], [138.5997, -35.0079], [138.6004, -35.0083], [138.6012, -35.0074], [138.6026, -35.0063], [138.6042, -35.0061], [138.6048, -35.007], [138.6069, -35.0056], [138.6053, -35.0051], [138.6048, -35.0045], [138.6042, -35.0043], [138.6045, -35.0038], [138.6048, -35.0034], [138.605, -35.0036], [138.607, -35.0032], [138.6075, -35.0034], [138.6092, -35.0022], [138.6106, -35.0026], [138.6114, -35.0033], [138.6119, -35.0033], [138.6126, -35.0038], [138.6134, -35.004], [138.6136, -35.004], [138.6148, -35.0046], [138.6158, -35.0051], [138.6175, -35.0041], [138.618, -35.004], [138.6181, -35.0038], [138.6202, -35.002], [138.6203, -35.002], [138.6202, -35.0019], [138.6202, -35.0017], [138.6195, -35.0008], [138.618, -35.0004], [138.6158, -35.002], [138.6158, -35.002], [138.6156, -35.0021], [138.6157, -35.002], [138.6143, -35.0013], [138.6136, -35.0008], [138.6128, -35.0008], [138.6118, -35.0002], [138.6115, -35.0001], [138.6114, -35.0], [138.61, -34.9995], [138.6092, -34.9988], [138.6087, -34.9988], [138.6085, -34.9984], [138.6092, -34.998], [138.6111, -34.9966], [138.6114, -34.9958], [138.612, -34.9961], [138.6136, -34.9957], [138.6143, -34.996], [138.6158, -34.9963], [138.617, -34.9948], [138.6167, -34.994], [138.6163, -34.993], [138.6163, -34.9925], [138.6165, -34.9912], [138.6172, -34.99], [138.6175, -34.9894], [138.6179, -34.9877], [138.6178, -34.9876], [138.6176, -34.9861], [138.6176, -34.9858], [138.618, -34.9855], [138.619, -34.9849], [138.6202, -34.9842], [138.6219, -34.984], [138.6224, -34.9839], [138.6225, -34.9839], [138.6226, -34.984], [138.6241, -34.9844], [138.6246, -34.9845], [138.6252, -34.984], [138.6268, -34.9824], [138.6273, -34.9822], [138.6289, -34.9809], [138.6293, -34.9804], [138.6311, -34.9789], [138.6316, -34.9786], [138.6316, -34.9782], [138.6316, -34.9768], [138.6317, -34.9763], [138.6321, -34.975], [138.633, -34.9735], [138.633, -34.9732], [138.6333, -34.973], [138.6345, -34.9714], [138.6342, -34.9707], [138.6333, -34.9704], [138.6323, -34.9704], [138.6314, -34.9696], [138.6332, -34.9679], [138.6311, -34.9695], [138.6292, -34.9694], [138.6289, -34.9692], [138.6279, -34.9686], [138.628, -34.9678], [138.6286, -34.9663], [138.6286, -34.966], [138.6289, -34.9656], 
[138.6293, -34.9657], [138.6311, -34.9656], [138.6321, -34.9652], [138.6317, -34.966], [138.6326, -34.9666], [138.6333, -34.9678], [138.6334, -34.9678], [138.6334, -34.9678], [138.6349, -34.9683], [138.6355, -34.9681], [138.6359, -34.9678], [138.636, -34.9674], [138.6363, -34.966], [138.6366, -34.9651], [138.6364, -34.9642], [138.636, -34.9638], [138.6355, -34.9637], [138.6338, -34.9638], [138.6333, -34.9635], [138.6326, -34.963], [138.6326, -34.9624], [138.6323, -34.9614], [138.6321, -34.9606], [138.6317, -34.9601], [138.6311, -34.9593], [138.6307, -34.9592], [138.6289, -34.9593], [138.6286, -34.9591], [138.6282, -34.9588], [138.6288, -34.9572], [138.6287, -34.957], [138.6281, -34.9559], [138.6283, -34.9552], [138.6289, -34.9544], [138.6298, -34.9534], [138.6311, -34.9527], [138.6317, -34.953], [138.632, -34.9534], [138.6325, -34.9541], [138.6333, -34.9546], [138.6339, -34.9547], [138.6341, -34.9552], [138.6347, -34.9559], [138.6355, -34.9561], [138.6365, -34.9562], [138.6368, -34.957], [138.6372, -34.9574], [138.6377, -34.9581], [138.6387, -34.958], [138.6383, -34.9588], [138.6391, -34.9595], [138.6398, -34.9606], [138.6398, -34.9607], [138.6399, -34.9616], [138.6401, -34.9623], [138.6403, -34.9624], [138.6399, -34.9625], [138.6385, -34.9642], [138.6388, -34.9651], [138.6391, -34.966], [138.6392, -34.9666], [138.6399, -34.9667], [138.642, -34.966], [138.6421, -34.966], [138.6422, -34.966], [138.6443, -34.9647], [138.6453, -34.9642], [138.6453, -34.9634], [138.6457, -34.9624], [138.6451, -34.9618], [138.6443, -34.9611], [138.6431, -34.9616], [138.6436, -34.9606], [138.6429, -34.9599], [138.6422, -34.9588], [138.6422, -34.9588], [138.6421, -34.9587], [138.6411, -34.9579], [138.6407, -34.957], [138.6404, -34.9566], [138.6399, -34.9559], [138.6391, -34.9558], [138.6394, -34.9552], [138.6388, -34.9543], [138.6385, -34.9534], [138.6385, -34.9528], [138.6384, -34.9516], [138.6381, -34.9513], [138.6377, -34.951], [138.6366, -34.9507], [138.636, -34.9498], [138.6358, -34.9496], [138.6355, -34.9494], [138.6343, -34.949], [138.6333, -34.948], [138.6333, -34.948], [138.6333, -34.948], [138.6321, -34.9472], [138.6319, -34.9462], [138.6315, -34.9459], [138.6311, -34.9455], [138.6301, -34.9453], [138.6289, -34.9449], [138.6284, -34.9449], [138.6284, -34.9444], [138.6278, -34.9435], [138.6275, -34.9426], [138.6283, -34.9413], [138.6285, -34.9408], [138.6283, -34.9395], [138.6278, -34.939], [138.6271, -34.9388], [138.6268, -34.9386], [138.6257, -34.9381], [138.6254, -34.9372], [138.6254, -34.9365], [138.6246, -34.9355], [138.6245, -34.9354], [138.6245, -34.9354], [138.6238, -34.9342], [138.623, -34.9336], [138.6233, -34.9329], [138.6237, -34.9318], [138.6229, -34.9314], [138.6224, -34.9311], [138.6219, -34.9304], [138.6216, -34.93], [138.6221, -34.9284], [138.6221, -34.9282], [138.6215, -34.9271], [138.6211, -34.9264], [138.6214, -34.9255], [138.6217, -34.9246], [138.6224, -34.9239], [138.6234, -34.9238], [138.6246, -34.9238], [138.6258, -34.9236], [138.6268, -34.9236], [138.6285, -34.9232], [138.6289, -34.9231], [138.6304, -34.9228], [138.6311, -34.9226], [138.6315, -34.9225], [138.6333, -34.922], [138.6343, -34.921], [138.6339, -34.9205], [138.6333, -34.9203], [138.6322, -34.9202], [138.6311, -34.9202], [138.6302, -34.92], [138.6289, -34.9197], [138.6284, -34.9197], [138.6268, -34.9196], [138.6264, -34.9196], [138.6246, -34.9196], [138.6239, -34.9198], [138.6224, -34.9199], [138.6205, -34.921], [138.6202, -34.9215], [138.6191, -34.9228], [138.618, -34.9231], [138.6175, -34.9232], [138.6163, -34.9228], 
[138.6159, -34.9227], [138.6159, -34.921], [138.6161, -34.9208], [138.6168, -34.9192], [138.6178, -34.9175], [138.6158, -34.9183], [138.614, -34.9192], [138.6136, -34.9199], [138.6135, -34.9193], [138.6114, -34.9196], [138.6111, -34.9195], [138.6099, -34.9192], [138.6095, -34.919], [138.6092, -34.9189], [138.6085, -34.918], [138.6079, -34.9174], [138.6081, -34.9166], [138.6081, -34.9156], [138.6078, -34.915], [138.6079, -34.9138], [138.6083, -34.9128], [138.6081, -34.912], [138.6085, -34.9108], [138.6085, -34.9102], [138.6092, -34.9097], [138.6099, -34.9097], [138.6114, -34.9094], [138.6135, -34.9084], [138.6136, -34.9084], [138.6145, -34.9066], [138.6156, -34.905], [138.6157, -34.9048], [138.6154, -34.9033], [138.6152, -34.903], [138.6147, -34.9021], [138.6146, -34.9012], [138.6156, -34.8996], [138.6156, -34.8994], [138.6158, -34.8993], [138.6173, -34.8976], [138.618, -34.8971], [138.619, -34.8958], [138.6188, -34.8952], [138.6184, -34.894], [138.6182, -34.8939], [138.618, -34.8937], [138.6162, -34.8937], [138.6158, -34.8937], [138.6147, -34.894], [138.6136, -34.8945], [138.613, -34.8945], [138.6114, -34.8956], [138.6103, -34.8949], [138.6092, -34.8942], [138.6087, -34.8945], [138.607, -34.8952], [138.6063, -34.8958], [138.6056, -34.897], [138.607, -34.8964], [138.608, -34.8969], [138.608, -34.8976], [138.6076, -34.8989], [138.6073, -34.8994], [138.607, -34.8998], [138.6051, -34.901], [138.6048, -34.9011], [138.6047, -34.9012], [138.6034, -34.9024], [138.6048, -34.9027], [138.6052, -34.9027], [138.6052, -34.903], [138.6048, -34.9034], [138.6042, -34.9035], [138.6026, -34.9038], [138.6019, -34.9036], [138.6004, -34.904], [138.5994, -34.9039], [138.5982, -34.9033], [138.598, -34.9032], [138.596, -34.9031], [138.5959, -34.9031], [138.5958, -34.903], [138.5951, -34.902], [138.5947, -34.9012], [138.5943, -34.9009], [138.5938, -34.9008], [138.5931, -34.9], [138.5927, -34.8994], [138.593, -34.8984], [138.5916, -34.8982], [138.5913, -34.8979], [138.5906, -34.8976], [138.5911, -34.8963], [138.5894, -34.8972], [138.5883, -34.8968], [138.5876, -34.8958], [138.5885, -34.8948], [138.5873, -34.8955], [138.5857, -34.8953], [138.5851, -34.895], [138.5829, -34.8958], [138.5841, -34.8967], [138.5834, -34.8976], [138.5829, -34.8992], [138.5815, -34.8994], [138.5815, -34.9005], [138.5829, -34.8996], [138.5838, -34.9004], [138.5851, -34.901], [138.5852, -34.9011], [138.5853, -34.9012], [138.5855, -34.9027], [138.5857, -34.903], [138.5851, -34.9047], [138.5849, -34.9048], [138.5829, -34.9056], [138.5819, -34.9056], [138.5807, -34.9058], [138.5796, -34.9066], [138.5798, -34.9074], [138.5807, -34.9083], [138.5808, -34.9083], [138.5829, -34.9076], [138.5848, -34.9066], [138.5851, -34.9054], [138.5852, -34.9065], [138.5852, -34.9066], [138.5851, -34.9068], [138.5837, -34.9084], [138.583, -34.9101], [138.5851, -34.9094], [138.5858, -34.9096], [138.5857, -34.9102], [138.5856, -34.9116], [138.5861, -34.912], [138.5866, -34.9126], [138.5873, -34.9129], [138.5882, -34.9131], [138.5894, -34.9132], [138.59, -34.9134], [138.5902, -34.9138], [138.5901, -34.9151], [138.5908, -34.9156], [138.5905, -34.9166], [138.5899, -34.9174], [138.5894, -34.9183], [138.5884, -34.9183], [138.5873, -34.9175], [138.5872, -34.9175], [138.5872, -34.9174], [138.5859, -34.9168], [138.5851, -34.9174], [138.5845, -34.9174], [138.5829, -34.9175], [138.5828, -34.9175], [138.5827, -34.9174], [138.582, -34.9164], [138.5809, -34.9156], [138.5827, -34.9139], [138.5828, -34.9138], [138.5823, -34.9125], [138.5807, -34.9136], [138.58, -34.9126], 
[138.5797, -34.912], [138.5797, -34.911], [138.5785, -34.9113], [138.5773, -34.9112], [138.5763, -34.9108], [138.5751, -34.9112], [138.5741, -34.9116], [138.5719, -34.912], [138.5719, -34.912], [138.5718, -34.912], [138.5697, -34.913], [138.568, -34.9134], [138.5675, -34.9137], [138.567, -34.9138], [138.5672, -34.9141], [138.5675, -34.9142], [138.5688, -34.9146], [138.5697, -34.9151], [138.5699, -34.9155], [138.5708, -34.9156], [138.5702, -34.9171], [138.5719, -34.9157], [138.5737, -34.916], [138.5741, -34.9161], [138.575, -34.9167], [138.575, -34.9174], [138.5747, -34.9187], [138.5755, -34.9192], [138.5741, -34.9196], [138.5736, -34.9196], [138.5719, -34.9196], [138.5703, -34.921], [138.5697, -34.9213], [138.5693, -34.9214], [138.568, -34.921], [138.5693, -34.9195], [138.5675, -34.9208], [138.5656, -34.9208], [138.5653, -34.9208], [138.5648, -34.921], [138.5631, -34.9217], [138.5624, -34.9216], [138.5609, -34.9214], [138.5605, -34.9214], [138.5587, -34.9215], [138.5579, -34.9217], [138.5565, -34.9222], [138.5546, -34.9226], [138.5543, -34.9227], [138.5538, -34.9228], [138.5521, -34.9232], [138.5503, -34.9243], [138.55, -34.9245], [138.548, -34.9246], [138.5487, -34.9256], [138.55, -34.9249], [138.5505, -34.926], [138.5509, -34.9264], [138.551, -34.9274], [138.5513, -34.9282], [138.5516, -34.9287], [138.5516, -34.93], [138.5517, -34.9304], [138.5521, -34.9314], [138.5529, -34.93], [138.5528, -34.9295], [138.5529, -34.9282], [138.5537, -34.9269], [138.554, -34.9264], [138.5543, -34.9263], [138.5546, -34.9262], [138.5565, -34.9262], [138.5568, -34.9262], [138.5573, -34.9264], [138.5584, -34.9267], [138.5587, -34.9269], [138.5603, -34.9269], [138.5609, -34.9274], [138.5623, -34.9271], [138.5631, -34.9269], [138.5646, -34.927], [138.5653, -34.927], [138.5662, -34.9275], [138.5675, -34.9282], [138.5664, -34.9291], [138.5675, -34.9294], [138.5694, -34.9282], [138.5677, -34.9281], [138.5693, -34.9264], [138.5697, -34.9263], [138.5699, -34.9263], [138.5719, -34.9261], [138.5722, -34.9262], [138.5724, -34.9264], [138.5731, -34.9273]], [[138.6201, -34.9678], [138.6201, -34.9678], [138.6195, -34.9665], [138.6194, -34.966], [138.6202, -34.9653], [138.6209, -34.9654], [138.622, -34.966], [138.6208, -34.9673], [138.6224, -34.9665], [138.6245, -34.966], [138.6246, -34.966], [138.6246, -34.966], [138.6246, -34.966], [138.6246, -34.966], [138.6227, -34.9678], [138.6224, -34.9681], [138.622, -34.9681], [138.6202, -34.9678], [138.6201, -34.9678]], [[138.642, -34.9643], [138.642, -34.9642], [138.6421, -34.964], [138.6422, -34.9641], [138.6423, -34.9642], [138.6421, -34.9652], [138.642, -34.9643]], [[138.5719, -34.9643], [138.5718, -34.9643], [138.5697, -34.9645], [138.5692, -34.9646], [138.5675, -34.9648], [138.5663, -34.9652], [138.5659, -34.9642], [138.5675, -34.963], [138.5686, -34.9633], [138.5697, -34.9637], [138.5706, -34.9624], [138.5719, -34.9613], [138.5731, -34.9606], [138.5741, -34.9602], [138.5745, -34.9603], [138.5753, -34.9606], [138.5745, -34.9621], [138.5763, -34.9624], [138.5763, -34.9624], [138.5763, -34.9624], [138.5763, -34.9624], [138.5745, -34.9642], [138.5741, -34.9644], [138.5739, -34.9643], [138.5719, -34.9643]], [[138.5824, -34.9628], [138.5822, -34.9624], [138.5818, -34.9614], [138.5812, -34.9606], [138.5814, -34.96], [138.5811, -34.9588], [138.5829, -34.9583], [138.5838, -34.958], [138.5838, -34.9588], [138.5839, -34.9597], [138.5838, -34.9606], [138.5834, -34.9619], [138.5832, -34.9624], [138.5829, -34.9627], [138.5824, -34.9628]], [[138.6047, -34.9985], [138.6047, -34.9984], 
[138.6048, -34.9982], [138.605, -34.9982], [138.6055, -34.9984], [138.6048, -34.9985], [138.6047, -34.9985]]], [[[138.6132, -34.916], [138.6136, -34.9163], [138.6156, -34.9156], [138.6143, -34.9151], [138.6144, -34.9138], [138.6146, -34.913], [138.615, -34.912], [138.6148, -34.911], [138.6142, -34.9102], [138.6139, -34.91], [138.6136, -34.9085], [138.6124, -34.9102], [138.6114, -34.9112], [138.6108, -34.912], [138.611, -34.9123], [138.6114, -34.9125], [138.6123, -34.9131], [138.6127, -34.9138], [138.6129, -34.9144], [138.6131, -34.9156], [138.6132, -34.916]]]]}}]}'
I'm trying to iterate over all my features and remove all geometries that are not MultiPolygons or Polygons from my GeometryCollections, and then convert the GeometryCollections themselves into MultiPolygons as well.
I have tried the following in Python, with no luck
import json
geojson_obj = json.loads(json_str)
features = geojson_obj['features']
for feature in features:
if feature['geometry']['type'] == 'GeometryCollection':
for geometry in feature['geometry']['geometries']:
if geometry['type'] != 'MultiPolygon':
geometry = None #maybe?
#print(geometry)
| [
"It's generally easier to assemble a new list that has only the items you want, instead of individually deleting the unwanted items from the old list.\nAnd then, if you like, you can replace the old list with the new list.\nA simple example of this:\nnumbers = [0,1,2,3,4,5,6,7,8,9]\n\n# replace numbers with a version of itself that only contains\n# numbers that are odd (i.e. n%2 is nonzero)\nnumbers = [n for n in numbers if n%2]\n\n# numbers is now [1,3,5,7,9]\n\nApplying this concept to your code would look something like this:\nfor feature in mydict[\"features\"]:\n if feature[\"geometry\"][\"type\"] == \"GeometryCollection\":\n feature[\"geometry\"][\"geometries\"] = [geom for geom in feature[\"geometry\"][\"geometries\"] if geom[\"type\"] == \"Polygon\" or geom[\"type\"] == \"Multipolygon\"]\n\n",
"You could create a new list of just the things you want to keep and substitute it into the original data structure. In the example case, this could be done with a list comprehension, but I've written a procedural example because of the other work you'd like to do.\nimport json\n\ngeojson_obj = json.loads(json_str)\nfeatures = geojson_obj ['features']\n\nfor feature in features:\n if feature['geometry']['type'] == 'GeometryCollection':\n keep_geometries = []\n for geometry in feature['geometry']['geometries']:\n if geometry['type'] == 'MultiPolygon':\n keep_geometries.append(geometry)\n feature['geometry']['geometries'] = keep_geometries\n\n \n\n"
] | [
1,
1
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0074649960_dictionary_python.txt |
Q:
How can I add features from different images and merge them into a final image
I have some images, each of which may contain one or more blobs. I know how to load the images and convert them to binary, but I want to be able to add all found blobs from any number of images and paste them into a final image (which will start out blank).
I don't know if opencv or pillow is better for this as I have very little experience or knowledge in feature extraction.
Code
import cv2
# use cv2 imread method to load image
img1 = cv2.imread("im1.jpg")
img2 = cv2.imread("im2.jpg")
# make bw image
im1_gray = cv2.imread("im1.jpg", cv2.IMREAD_GRAYSCALE)
im2_gray = cv2.imread("im2.jpg", cv2.IMREAD_GRAYSCALE)
# get threshold and binary image
(thresh, im_bw1) = cv2.threshold(im1_gray, 128, 255,
cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# save binary image 1
im_out1 = "bw_image_1"
ext = ".png"
im_name = im_out1 + "_" + str(thresh) + ext
cv2.imwrite(im_name, im_bw1)
# get threshold and binary image
(thresh, im_bw2) = cv2.threshold(im2_gray, 128, 255,
cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# save binary image 2
im_out2 = "bw_image_2"
ext = ".png"
im_name = im_out2 + "_" + str(thresh) + ext
cv2.imwrite(im_name, im_bw2)
Images
Desired output
I don't know how to do this manually, but the output for this would be either a white or grey background with two black blobs in it.
If either of the input images had two blobs in it and the other image had three blobs the output image would have five blobs, with positions equal to that of their position in the original image(s), it does not matter if they overlap.
A:
This has been asked before for OpenCV in C++, and the same function is available in Python 3: hconcat, which places two images side by side. See Placing two images side by side, opencv 2.3, c++.
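If the goal is to overlay the blobs at their original positions rather than concatenate the images, combining the binary images pixel-wise may be closer to what the question describes. A minimal sketch, reusing im_bw1 and im_bw2 from the question's code and assuming both images have the same dimensions with black (0) blobs on a white (255) background:
import cv2
import numpy as np

# start from a blank (all-white) canvas and fold in each mask;
# bitwise_and keeps a pixel black wherever any input is black
merged = np.full(im_bw1.shape, 255, dtype=np.uint8)
for mask in (im_bw1, im_bw2):
    merged = cv2.bitwise_and(merged, mask)

cv2.imwrite("merged_blobs.png", merged)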
| How can I add features from different images and merge them into a final image | I have some images, each of which may contain one or more blobs. I know how to load the images and convert them to binary, but I want to be able to add all found blobs from any number of images and paste them into a final image (which will start out blank).
I don't know if opencv or pillow is better for this as I have very little experience or knowledge in feature extraction.
Code
import cv2
# use cv2 imread method to load image
img1 = cv2.imread("im1.jpg")
img2 = cv2.imread("im2.jpg")
# make bw image
im1_gray = cv2.imread("im1.jpg", cv2.IMREAD_GRAYSCALE)
im2_gray = cv2.imread("im2.jpg", cv2.IMREAD_GRAYSCALE)
# get threshold and binary image
(thresh, im_bw1) = cv2.threshold(im1_gray, 128, 255,
cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# save binary image 1
im_out1 = "bw_image_1"
ext = ".png"
im_name = im_out1 + "_" + str(thresh) + ext
cv2.imwrite(im_name, im_bw1)
# get threshold and binary image
(thresh, im_bw2) = cv2.threshold(im2_gray, 128, 255,
cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# save binary image 2
im_out2 = "bw_image_2"
ext = ".png"
im_name = im_out2 + "_" + str(thresh) + ext
cv2.imwrite(im_name, im_bw2)
Images
Desired output
I don't know how to do this manually, but the output for this would be either a white or grey background with two black blobs in it.
If either of the input images had two blobs in it and the other image had three blobs the output image would have five blobs, with positions equal to that of their position in the original image(s), it does not matter if they overlap.
| [
"This has been asked before on opencv c++, there should be the same function on python3, hconcat, placing two images side by side, opencv 2.3, c++\n"
] | [
0
] | [] | [] | [
"image",
"opencv",
"python"
] | stackoverflow_0074641915_image_opencv_python.txt |
Q:
How to avoid Selenium being detected when answering captcha?
I have a Python script which logs in on a page (sso.acesso.gov.br) using some credentials and then usually answers a captcha using the 2Captcha API.
The problem is that recently it gets an error after the captcha answer, even when I answer it manually.
By the way, the error message received is different from the one I get when I deliberately answer wrong, which makes me believe that my script is now being detected somehow by the website.
If I open a Chrome browser as a user and just do the same steps, I can log in, sometimes even without a captcha, and every time without an error.
Here is my code:
from selenium import webdriver
from fake_useragent import UserAgent
import undetected_chromedriver as uc
from fp.fp import FreeProxy
user_path = 'C:\\PythonProjects\\User Data'
driver_path = 'C:\\PythonProjects\\107\\chromedriver.exe'
options = webdriver.ChromeOptions()
## Tactics to avoid being detected as automation
options.add_argument("--start-maximized")
options.add_argument('--disable-blink-features=AutomationControlled')
## User profile
options.add_argument(f"--user-data-dir={user_path}")
## User agent
ua = UserAgent()
options.add_argument(f'--user-agent={ua.random}')
## Proxy
proxy = FreeProxy().get()
options.add_argument(f'--proxy-server={proxy}')
## Set browser
driver = uc.Chrome(
driver_executable_path = driver_path,
options = options
)
## Set device memory info
driver.execute_script("Object.defineProperty(navigator, 'deviceMemory', {get: () => 8});")
## Set navigator webdriver to undefined
driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined});")
## Open page
driver.get('https://sso.acesso.gov.br/login')
## From this point I insert CPF (user) and password, then answer captcha using 2Captcha
## I have also tried just setting up the browser and navigating manually, inserting the data and answering the captcha, but with no success
Do you have any suggestions to bypass this block?
I have no idea what is detecting and blocking my browser.
If I use my script on bot.sannysoft.com, I get the following results:
Intoli tests
Fingerprint Scanner 1/2
Fingerprint Scanner 2/2
A:
Add a couple of seconds' wait before you enter the correct captcha the first time; that might work unless the site is designed otherwise.
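A minimal sketch of that idea; the randomised bounds are an assumption to tune for the target site, since a fixed delay is itself a detectable pattern:
import random
import time

# pause for a human-like, slightly random interval before submitting the captcha answer
time.sleep(random.uniform(2.0, 5.0))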
| How to avoid Selenium being detected when answering captcha? | I have a Python script which logs in on a page (sso.acesso.gov.br) using some credentials and then usually answers a captcha using the 2Captcha API.
The problem is that recently it gets an error after the captcha answer, even when I answer it manually.
By the way, the error message received is different from the one I get when I deliberately answer wrong, which makes me believe that my script is now being detected somehow by the website.
If I open a Chrome browser as a user and just do the same steps, I can log in, sometimes even without a captcha, and every time without an error.
Here is my code:
from selenium import webdriver
from fake_useragent import UserAgent
import undetected_chromedriver as uc
from fp.fp import FreeProxy
user_path = 'C:\\PythonProjects\\User Data'
driver_path = 'C:\\PythonProjects\\107\\chromedriver.exe'
options = webdriver.ChromeOptions()
## Tactics to avoid being detected as automation
options.add_argument("--start-maximized")
options.add_argument('--disable-blink-features=AutomationControlled')
## User profile
options.add_argument(f"--user-data-dir={user_path}")
## User agent
ua = UserAgent()
options.add_argument(f'--user-agent={ua.random}')
## Proxy
proxy = FreeProxy().get()
options.add_argument(f'--proxy-server={proxy}')
## Set browser
driver = uc.Chrome(
driver_executable_path = driver_path,
options = options
)
## Set device memory info
driver.execute_script("Object.defineProperty(navigator, 'deviceMemory', {get: () => 8});")
## Set navigator webdriver to undefined
driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined});")
## Open page
driver.get('https://sso.acesso.gov.br/login')
## From this point I insert CPF (user) and password, then answer captcha using 2Captcha
## I have also tried just setting up the browser and navigating manually, inserting the data and answering the captcha, but with no success
Do you have any suggestions to bypass this block?
I have no idea what is detecting and blocking my browser.
If I use my script on bot.sannysoft.com, I get the following results:
Intoli tests
Fingerprint Scanner 1/2
Fingerprint Scanner 2/2
| [
"Add couple of seconds wait before you enter correct captcha first time, that might work unless its designed otherwise.\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"selenium",
"selenium_chromedriver",
"selenium_webdriver"
] | stackoverflow_0074606892_python_python_3.x_selenium_selenium_chromedriver_selenium_webdriver.txt |
Q:
Position bars between tick marks (and not on tick marks) in plotly
I am using Plotly and Python to chart a bar plot. On the x-axis, Plotly arranges the values from each trace around the centre of the tick mark.
This is what I am getting now:
I would like to have the data points (and labels) in between the tick marks. In the example chart, this would mean all the bars centered around 0-2kw would move left of the first tick and the label centered, all the bars around 2-4kw would move between the first and second ticks and the label centered, etc..
I am using tickmode=array, ticktext as an array and also set tickson=boundaries, but it's still the same.
Is there a way to do this?
(Not sure if this makes any difference but there are multiple charts in subplots)
A:
Answering my own question.
Normally setting tickson=boundaries should do the trick, but it doesn't seem to work in conjunction with tickmode=array and ticktext.
The solution for me was to create the labels array and provide it to the bar chart as the x parameter, something similar to this:
fig = go.Figure(data=go.Bar(name='Trace1', x=['0-2kw', '2-4kw', '4-6kw', '6-8kw'], y=[0.2, 0.2, 0.2, 0.4]))
fig.update_xaxes(showgrid=True, tickson='boundaries')
Doing this in my code, the data bars are now in between the grid lines.
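For reference, the same fix with several traces (as in the original grouped chart) would look something like this minimal sketch; the second trace's name and values are made up:
import plotly.graph_objects as go

labels = ['0-2kw', '2-4kw', '4-6kw', '6-8kw']
fig = go.Figure(data=[
    go.Bar(name='Trace1', x=labels, y=[0.2, 0.2, 0.2, 0.4]),
    go.Bar(name='Trace2', x=labels, y=[0.1, 0.3, 0.2, 0.4]),
])
fig.update_layout(barmode='group')
fig.update_xaxes(showgrid=True, tickson='boundaries')
fig.show()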
| Position bars between tick marks (and not on tick marks) in plotly | I am using Plotly and Python to chart a bar plot. On the x-axis, Plotly arranges the values from each trace around the centre of the tick mark.
This is what I am getting now:
I would like to have the data points (and labels) in between the tick marks. In the example chart, this would mean all the bars centered around 0-2kw would move left of the first tick and the label centered, all the bars around 2-4kw would move between the first and second ticks and the label centered, etc..
I am using tickmode=array, ticktext as an array and also set tickson=boundaries, but it's still the same.
Is there a way to do this?
(Not sure if this makes any difference but there are multiple charts in subplots)
| [
"Answering my own question.\nNormally setting tickon=boundaries should do the trick, but it doesn't seem to work in conjunction with tickmode=array and ticktext.\nThe solution for me was to create the labels array and provide it to the bar chart as the x parameter, something similar to this:\nfig = go.Figure(data=go.Bar(name='Trace1', x=['0-2kw', '2-4kw', '4-6kw', '6-8kw'], y=[0.2, 0.2, 0.2, 0.4]))\nfig.update_xaxes(showgrid=True, tickson='boundaries')\n\nDoing this in my code now the data bars are in between the grid lines:\n\n"
] | [
0
] | [] | [] | [
"plotly",
"python"
] | stackoverflow_0074644983_plotly_python.txt |
Q:
Convert a dataframe to dictionary
I have this data like this
import pandas as pd

technologies = [
("a","2","3"),
("4","5","6"),
("7","8","9")
]
df = pd.DataFrame(technologies,columns = ['C1','C2','C3'])
print(df)
and I convert it to this df
C1 C2 C3
0 a 2 3
1 4 5 6
2 7 8 9
then I convert the DataFrame to a dictionary of records
df2 = df.to_dict('records')
print(df2)
and I got this
[{'C1': 'a', 'C2': '2', 'C3': '3'}, {'C1': '4', 'C2': '5', 'C3': '6'}, {'C1': '7', 'C2': '8', 'C3': '9'}]
Now I want my output to be like this
[{'C1': 'a',
'C2': '2',
'C3': '3'},
{'C1': '4',
'C2': '5',
'C3': '6'},
{'C1': '7',
'C2': '8',
'C3': '9'}]
Is there any chance to get my expected output? I'm just a beginner, please help me figure this out.
A:
You can use pprint.pprint() for that. Something like this:
>>> from pprint import pprint
>>> d = [{'C1': 'a', 'C2': '2', 'C3': '3'}, {'C1': '4', 'C2': '5', 'C3': '6'}, {'C1': '7', 'C2': '8', 'C3': '9'}]
>>> pprint(d, width=10)
[{'C1': 'a',
'C2': '2',
'C3': '3'},
{'C1': '4',
'C2': '5',
'C3': '6'},
{'C1': '7',
'C2': '8',
'C3': '9'}]
>>>
And with the default width, the output is the following:
>>> pprint(d)
[{'C1': 'a', 'C2': '2', 'C3': '3'},
{'C1': '4', 'C2': '5', 'C3': '6'},
{'C1': '7', 'C2': '8', 'C3': '9'}]
>>>
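Another option, if you would rather not rely on pprint's width heuristics, is json.dumps, which puts every key on its own line once indent is set (note that it prints JSON double quotes rather than Python single quotes):
>>> import json
>>> print(json.dumps(d, indent=1))
[
 {
  "C1": "a",
  "C2": "2",
  "C3": "3"
 },
 {
  "C1": "4",
  "C2": "5",
  "C3": "6"
 },
 {
  "C1": "7",
  "C2": "8",
  "C3": "9"
 }
]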
| Convert a dataframe to dictionary | I have this data like this
import pandas as pd

technologies = [
("a","2","3"),
("4","5","6"),
("7","8","9")
]
df = pd.DataFrame(technologies,columns = ['C1','C2','C3'])
print(df)
and I convert it to this df
C1 C2 C3
0 a 2 3
1 4 5 6
2 7 8 9
then I convert the DataFrame to a dictionary of records
df2 = df.to_dict('records')
print(df2)
and I got this
[{'C1': 'a', 'C2': '2', 'C3': '3'}, {'C1': '4', 'C2': '5', 'C3': '6'}, {'C1': '7', 'C2': '8', 'C3': '9'}]
Now I want my output to be like this
[{'C1': 'a',
'C2': '2',
'C3': '3'},
{'C1': '4',
'C2': '5',
'C3': '6'},
{'C1': '7',
'C2': '8',
'C3': '9'}]
Is there any chance to get my expected output? I'm just a beginner, please help me figure this out.
| [
"You can use pprint.pprint() for that. Something like this:\n>>> from pprint import pprint\n>>> d = [{'C1': 'a', 'C2': '2', 'C3': '3'}, {'C1': '4', 'C2': '5', 'C3': '6'}, {'C1': '7', 'C2': '8', 'C3': '9'}]\n>>> pprint(d, width=10)\n[{'C1': 'a',\n 'C2': '2',\n 'C3': '3'},\n {'C1': '4',\n 'C2': '5',\n 'C3': '6'},\n {'C1': '7',\n 'C2': '8',\n 'C3': '9'}]\n>>> \n\nAnd with the default width, the output is the following:\n>>> pprint(d)\n[{'C1': 'a', 'C2': '2', 'C3': '3'},\n {'C1': '4', 'C2': '5', 'C3': '6'},\n {'C1': '7', 'C2': '8', 'C3': '9'}]\n>>> \n\n"
] | [
1
] | [] | [] | [
"dataframe",
"dictionary",
"pandas",
"python"
] | stackoverflow_0074650104_dataframe_dictionary_pandas_python.txt |
Q:
How do I call a function "x" number of times in Python? Using a for loop?
I have a cap.read() function where I am reading in frames from a video. The first call of the function is the zeroth frame, the second call is the 1st frame, etc... I am trying to call the function 1200 times because I need to start my read-in at the 1200th frame.
Right now this is what I have, but I know it is incorrect.
A:
The direct answer to your question was answered by @Shmack in the comments. The code is simply
for i in range(1200):
cap.read()
Given that you're using the variable cap, I suspect that you're using the OpenCV module. If that is the case, then you can simply set the frame you want to start at by using cap.set(1, 1200)
The .set function takes its first parameter as the property identifier; in your case you can simply leave that as 1, which corresponds to cv2.CAP_PROP_POS_FRAMES. The second parameter is the frame you wish to start at. You can read more about cap.set() here
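A minimal sketch of the seek approach, with a hypothetical video file name; the named constant cv2.CAP_PROP_POS_FRAMES (whose value is 1) is usually clearer than the bare 1:
import cv2

cap = cv2.VideoCapture('video.mp4')
cap.set(cv2.CAP_PROP_POS_FRAMES, 1200)  # jump straight to frame 1200
ret, frame = cap.read()                 # this read now returns frame 1200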
| How do I call a function "x" number of times in Python? Using a for loop? | I have a cap.read() function where I am reading in frames from a video. The first call of the function is the zeroth frame, the second call is the 1st frame, etc... I am trying to call the function 1200 times because I need to start my read-in at the 1200th frame.
Right now this is what I have, but I know it is incorrect.
| [
"The direct answer to your question was answered by @Shmack in the comments. The code is simply\nfor i in range(1200):\n cap.read()\n\nGiven that your using the variable cap, I suspect that your using the OpenCV module. If that is the case, then you can simply set the frame you want to start at by using cap.set(1, 1200)\nThe .set function takes in its first parameter as the identifier, and for your case, you can simply leave that as 1. The second parameter is the frame you wish to start at. You can read more about cap.set() here\n"
] | [
0
] | [] | [] | [
"continuous",
"for_loop",
"function",
"python",
"repeat"
] | stackoverflow_0074650021_continuous_for_loop_function_python_repeat.txt |
Q:
How can I import all of sklearn's regressors
I'm doing some predictive modeling and would like to benchmark different kinds of regressors in scikit-learn, just to see what's out there and how they perform on a given prediction task.
I got inspired to do this by this kaggle kernel in which the author essentially manually imports a bunch of classifiers (about 10) and benchmarks them.
I'm having trouble finding a comprehensive list of imports for the regressors in sklearn, so I'm trying to automate the import statements to return a list of classes that I can use.
I tried to dynamically import the classes:
from importlib import import_module
import sklearn
def all_regressors():
regressors=[]
for module in sklearn.__all__:
try:
regressors.extend([cls for cls in import_module(f'sklearn.{module}').__all__ if 'Regress' in cls ])
except:
pass
regressors.append(sklearn.svm.SVR)
return regressors
print(all_regressors())
But I only get back the names as strings, rather than the classes:
['RandomForestRegressor', 'ExtraTreesRegressor', 'BaggingRegressor',
'GradientBoostingRegressor', 'AdaBoostRegressor',
'GaussianProcessRegressor', 'IsotonicRegression', 'ARDRegression',
'HuberRegressor', 'LinearRegression', 'LogisticRegression',
'LogisticRegressionCV', 'PassiveAggressiveRegressor',
'RandomizedLogisticRegression', 'SGDRegressor', 'TheilSenRegressor',
'RANSACRegressor', 'MultiOutputRegressor', 'KNeighborsRegressor',
'RadiusNeighborsRegressor', 'MLPRegressor', 'DecisionTreeRegressor',
'ExtraTreeRegressor', <class 'sklearn.svm.classes.SVR'>]
How can I get the actual classes?
A:
I figured out I had to use getattr on the module object:
from importlib import import_module
import sklearn

def all_regressors():
    regressors = []
    for module in sklearn.__all__:
        try:
            module = import_module(f'sklearn.{module}')
            regressors.extend([getattr(module, cls) for cls in module.__all__ if 'Regress' in cls])
        except Exception:
            pass
    regressors.append(sklearn.svm.SVR)
    return regressors

print(all_regressors())
[<class 'sklearn.ensemble.forest.RandomForestRegressor'>, <class
'sklearn.ensemble.forest.ExtraTreesRegressor'>, <class
'sklearn.ensemble.bagging.BaggingRegressor'>, <class
'sklearn.ensemble.gradient_boosting.GradientBoostingRegressor'>,
<class 'sklearn.ensemble.weight_boosting.AdaBoostRegressor'>, <class
'sklearn.gaussian_process.gpr.GaussianProcessRegressor'>, <class
'sklearn.isotonic.IsotonicRegression'>, <class
'sklearn.linear_model.bayes.ARDRegression'>, <class
'sklearn.linear_model.huber.HuberRegressor'>, <class
'sklearn.linear_model.base.LinearRegression'>, <class
'sklearn.linear_model.logistic.LogisticRegression'>, <class
'sklearn.linear_model.logistic.LogisticRegressionCV'>, <class
'sklearn.linear_model.passive_aggressive.PassiveAggressiveRegressor'>,
<class 'sklearn.linear_model.randomized_l1.RandomizedLogisticRegression'>, <class
'sklearn.linear_model.stochastic_gradient.SGDRegressor'>, <class
'sklearn.linear_model.theil_sen.TheilSenRegressor'>, <class
'sklearn.linear_model.ransac.RANSACRegressor'>, <class
'sklearn.multioutput.MultiOutputRegressor'>, <class
'sklearn.neighbors.regression.KNeighborsRegressor'>, <class
'sklearn.neighbors.regression.RadiusNeighborsRegressor'>, <class
'sklearn.neural_network.multilayer_perceptron.MLPRegressor'>, <class
'sklearn.tree.tree.DecisionTreeRegressor'>, <class
'sklearn.tree.tree.ExtraTreeRegressor'>, <class
'sklearn.svm.classes.SVR'>]
A:
You can use all_estimators from sklearn.utils
from sklearn.utils import all_estimators
def get_all_regressors_sklearn():
from sklearn.utils import all_estimators
estimators = all_estimators(type_filter='regressor')
all_regs = []
for name, RegClass in estimators:
print('Appending', name)
try:
reg = RegClass()
all_regs.append(reg)
except Exception as e:
pass
return all_regs
all_regs = get_all_regressors_sklearn()
print(all_regs)
Gives:
[ARDRegression(), AdaBoostRegressor(), BaggingRegressor(), BayesianRidge(), CCA(), DecisionTreeRegressor(), DummyRegressor(), ElasticNet(), ElasticNetCV(), ExtraTreeRegressor(), ExtraTreesRegressor(), GammaRegressor(), GaussianProcessRegressor(), GradientBoostingRegressor(), HistGradientBoostingRegressor(), HuberRegressor(), IsotonicRegression(), KNeighborsRegressor(), KernelRidge(), Lars(), LarsCV(), Lasso(), LassoCV(), LassoLars(), LassoLarsCV(), LassoLarsIC(), LinearRegression(), LinearSVR(), MLPRegressor(), MultiTaskElasticNet(), MultiTaskElasticNetCV(), MultiTaskLasso(), MultiTaskLassoCV(), NuSVR(), OrthogonalMatchingPursuit(), OrthogonalMatchingPursuitCV(), PLSCanonical(), PLSRegression(), PassiveAggressiveRegressor(), PoissonRegressor(), QuantileRegressor(), RANSACRegressor(), RadiusNeighborsRegressor(), RandomForestRegressor(), Ridge(), RidgeCV(), SGDRegressor(), SVR(), TheilSenRegressor(), TransformedTargetRegressor(), TweedieRegressor()]
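With either helper, the returned list can be looped over to do the benchmarking the question describes. A minimal sketch (the toy data from make_regression is an illustrative assumption, not part of the question; cross_val_score and make_regression are standard scikit-learn utilities):
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score

# Toy regression problem just to exercise each estimator
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

for reg in all_regs:
    try:
        scores = cross_val_score(reg, X, y, cv=3, scoring="r2")
        print(f"{type(reg).__name__}: mean R^2 = {scores.mean():.3f}")
    except Exception as ex:
        # Some estimators need special input shapes (e.g. multi-task models), so skip them
        print(f"{type(reg).__name__} failed: {ex}")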
| How can I import all of sklearns regressors | I'm doing some predictive modeling and would like to benchmark different kinds of regressors in scikit-learn, just to see what's out there and how they perform on a given prediction task.
I got inspired to do this by this kaggle kernel in which the author essentially manually imports a bunch of classifiers (about 10) and benchmarks them.
I'm having trouble finding a comprehensive list of imports for the regressors in sklearn, so I'm trying to automate the imports to get back a list of classes that I can use.
I tried to dynamically import the classes:
from importlib import import_module
import sklearn
def all_regressors():
regressors=[]
for module in sklearn.__all__:
try:
regressors.extend([cls for cls in import_module(f'sklearn.{module}').__all__ if 'Regress' in cls ])
except:
pass
regressors.append(sklearn.svm.SVR)
return regressors
print(all_regressors())
But I only get back the names as strings, rather than the classes:
['RandomForestRegressor', 'ExtraTreesRegressor', 'BaggingRegressor',
'GradientBoostingRegressor', 'AdaBoostRegressor',
'GaussianProcessRegressor', 'IsotonicRegression', 'ARDRegression',
'HuberRegressor', 'LinearRegression', 'LogisticRegression',
'LogisticRegressionCV', 'PassiveAggressiveRegressor',
'RandomizedLogisticRegression', 'SGDRegressor', 'TheilSenRegressor',
'RANSACRegressor', 'MultiOutputRegressor', 'KNeighborsRegressor',
'RadiusNeighborsRegressor', 'MLPRegressor', 'DecisionTreeRegressor',
'ExtraTreeRegressor', <class 'sklearn.svm.classes.SVR'>]
How can I get the actual classes?
| [
"I figured out i had to use getattr on the module object:\nfrom importlib import import_module\nimport sklearn\n\ndef all_regressors():\n regressors=[]\n for module in sklearn.__all__:\n try:\n module = import_module(f'sklearn.{module}')\n regressors.extend([getattr(module,cls) for cls in module.__all__ if 'Regress' in cls ])\n except:\n pass\n regressors.append(sklearn.svm.SVR)\n return regressors\nprint(all_regressors())\n\n[<class 'sklearn.ensemble.forest.RandomForestRegressor'>, <class \n'sklearn.ensemble.forest.ExtraTreesRegressor'>, <class \n'sklearn.ensemble.bagging.BaggingRegressor'>, <class \n'sklearn.ensemble.gradient_boosting.GradientBoostingRegressor'>,\n<class 'sklearn.ensemble.weight_boosting.AdaBoostRegressor'>, <class \n'sklearn.gaussian_process.gpr.GaussianProcessRegressor'>, <class \n'sklearn.isotonic.IsotonicRegression'>, <class \n'sklearn.linear_model.bayes.ARDRegression'>, <class \n'sklearn.linear_model.huber.HuberRegressor'>, <class \n'sklearn.linear_model.base.LinearRegression'>, <class \n'sklearn.linear_model.logistic.LogisticRegression'>, <class \n'sklearn.linear_model.logistic.LogisticRegressionCV'>, <class \n'sklearn.linear_model.passive_aggressive.PassiveAggressiveRegressor'>, \n<class 'sklearn.linear_model.randomized_l1.RandomizedLogisticRegression'>, <class \n'sklearn.linear_model.stochastic_gradient.SGDRegressor'>, <class \n'sklearn.linear_model.theil_sen.TheilSenRegressor'>, <class \n'sklearn.linear_model.ransac.RANSACRegressor'>, <class \n'sklearn.multioutput.MultiOutputRegressor'>, <class \n'sklearn.neighbors.regression.KNeighborsRegressor'>, <class \n'sklearn.neighbors.regression.RadiusNeighborsRegressor'>, <class \n'sklearn.neural_network.multilayer_perceptron.MLPRegressor'>, <class \n'sklearn.tree.tree.DecisionTreeRegressor'>, <class \n'sklearn.tree.tree.ExtraTreeRegressor'>, <class \n'sklearn.svm.classes.SVR'>]\n\n",
"You can use all_estimators from sklearn.utils\nfrom sklearn.utils import all_estimators\n\ndef get_all_regressors_sklearn():\n from sklearn.utils import all_estimators\n\n estimators = all_estimators(type_filter='regressor')\n\n all_regs = []\n for name, RegClass in estimators:\n print('Appending', name)\n try:\n reg = RegClass()\n all_regs.append(reg)\n except Exception as e:\n pass\n return all_regs\n\nall_regs = get_all_regressors_sklearn()\nprint(all_regs)\n\nGives:\n[ARDRegression(), AdaBoostRegressor(), BaggingRegressor(), BayesianRidge(), CCA(), DecisionTreeRegressor(), DummyRegressor(), ElasticNet(), ElasticNetCV(), ExtraTreeRegressor(), ExtraTreesRegressor(), GammaRegressor(), GaussianProcessRegressor(), GradientBoostingRegressor(), HistGradientBoostingRegressor(), HuberRegressor(), IsotonicRegression(), KNeighborsRegressor(), KernelRidge(), Lars(), LarsCV(), Lasso(), LassoCV(), LassoLars(), LassoLarsCV(), LassoLarsIC(), LinearRegression(), LinearSVR(), MLPRegressor(), MultiTaskElasticNet(), MultiTaskElasticNetCV(), MultiTaskLasso(), MultiTaskLassoCV(), NuSVR(), OrthogonalMatchingPursuit(), OrthogonalMatchingPursuitCV(), PLSCanonical(), PLSRegression(), PassiveAggressiveRegressor(), PoissonRegressor(), QuantileRegressor(), RANSACRegressor(), RadiusNeighborsRegressor(), RandomForestRegressor(), Ridge(), RidgeCV(), SGDRegressor(), SVR(), TheilSenRegressor(), TransformedTargetRegressor(), TweedieRegressor()]\n\n"
] | [
2,
0
] | [] | [] | [
"python",
"python_import",
"scikit_learn"
] | stackoverflow_0046852222_python_python_import_scikit_learn.txt |
Q:
Why would a MySQL DELETE statement fail when the corresponding SELECT statement works?
I have a MySQL database instance hosted on GCP, and I am connecting to it using the pymysql python package. I would like to delete some rows from a database table called Basic.
The code I have written to do this is included below. The variable conf contains the connection details to the database instance.
import pymysql
import pandas as pd
# Establish connection.
connection = pymysql.connect(**conf)
cursor = connection.cursor()
# Delete rows of table.
try:
cursor.execute("DELETE FROM Basic WHERE Date = '2022-11-25 04:00:00';")
except Exception as ex:
print("Exception occured: %s" %ex)
finally:
connection.close()
# Check the table to see if deletion has occurred.
connection = pymysql.connect(**conf)
cursor = connection.cursor()
cursor.execute("SELECT * FROM Basic WHERE Date = '2022-11-25 04:00:00'")
connection.close()
df = pd.DataFrame(cursor.fetchall(), columns = ["Date", "State", "Price", "Demand"])
Clearly one would expect the dataframe defined here to have no rows, but this is not the case. This shows that the SELECT statement included in the code above produces the expected result, but the corresponding DELETE statement does not.
Why is this the case?
A:
The code above requires the addition of the following line, in order to commit the DELETE statement.
connection.commit()
The commit method should be called after every transaction that modifies data, such as this one.
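Putting it together, a minimal corrected version of the deletion block might look like this (a sketch reusing the question's conf connection details; commit, rollback and close are standard pymysql connection methods):
connection = pymysql.connect(**conf)
cursor = connection.cursor()
try:
    cursor.execute("DELETE FROM Basic WHERE Date = '2022-11-25 04:00:00';")
    connection.commit()  # persist the deletion; without this the change is discarded
except Exception as ex:
    connection.rollback()  # undo the partial transaction on error
    print("Exception occurred: %s" % ex)
finally:
    connection.close()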
| Why would a MySQL DELETE statement fail when the corresponding SELECT statement works? | I have a MySQL database instance hosted on GCP, and I am connecting to it using the pymysql python package. I would like to delete some rows from a database table called Basic.
The code I have written to do this is included below. The variable conf contains the connection details to the database instance.
import pymysql
import pandas as pd
# Establish connection.
connection = pymysql.connect(**conf)
cursor = connection.cursor()
# Delete rows of table.
try:
cursor.execute("DELETE FROM Basic WHERE Date = '2022-11-25 04:00:00';")
except Exception as ex:
print("Exception occured: %s" %ex)
finally:
connection.close()
# Check the table to see if deletion has occurred.
connection = pymysql.connect(**conf)
cursor = connection.cursor()
cursor.execute("SELECT * FROM Basic WHERE Date = '2022-11-25 04:00:00'")
connection.close()
df = pd.DataFrame(cursor.fetchall(), columns = ["Date", "State", "Price", "Demand"])
Clearly one would expect the dataframe defined here to have no rows, but this is not the case. This shows that the SELECT statement included in the code above produces the expected result, but the corresponding DELETE statement does not.
Why is this the case?
| [
"The code above requires the addition of the following line, in order to commit the DELETE statement.\nconnection.commit()\n\nThe commit method should be called after every transaction that modifies data, such as this one.\n"
] | [
0
] | [] | [] | [
"google_cloud_platform",
"mysql",
"pymysql",
"python"
] | stackoverflow_0074649302_google_cloud_platform_mysql_pymysql_python.txt |
Q:
The input keeps repeating itself
Hello, I'm new to programming. I was coding Newton's method for a uni class, and the part of the code where the user inputs f(x) keeps repeating.
This is the code I wrote; it works, but the prompt inside def f(x) repeats 2 or 3 times before the while loop starts
import math
import sympy as smp
from sympy import *
x = smp.symbols('x')
x0=float(input("Initial Value:"))
k=1
n=int(input("Number of interactions:"))
def f(x):
return eval(input("f(x):"))
f_prime= smp.diff(f(x), x)
f_prime = lambdify(x, f_prime)
while(k<=n):
r=x0-(f(x0)/f_prime(x0))
print("root:",r,"interaction:",k)
k=k+1
x0=r
A:
The prompt repeats because input() sits inside f(x), so every call to f (both when the derivative is built and on every loop iteration) asks for input again. Read the expression once, convert it with sympify, and reuse it.
Code
from sympy import Symbol, diff, sympify
def newton_method(func, sym, x0, n=10):
    diff_func = diff(func, sym)  # derivative of the function with respect to sym

    root = x0
    for i in range(n):
        root = root - float(func.subs(sym, root)/diff_func.subs(sym, root))

    return root
# Setup
expression = input('input function: ') # String expression corresponding to function
f = sympify(expression) # string expression to sympy function
x0 = float(input("Initial Value:"))
n = int(input("Number of interactions:"))
# Find root
root = newton_method(f, Symbol('x'), x0, n) # newton method with x as symbol
print('root:', root)
Test
input function: x**2 - 2*x + 1
Initial Value: 0
Number of interactions: 10
root: 0.9990234375
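As a cross-check, sympy also ships a numeric root finder (nsolve), so the hand-rolled loop can be verified against it. A sketch assuming the same f and x0 as above; note that a repeated root like this example's converges slowly for Newton-type methods, so a closer starting point may be needed:
from sympy import nsolve
print(nsolve(f, Symbol('x'), x0))  # sympy's built-in Newton-type solver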
| The input keeps repeating itself | Hello, I'm new to programming. I was coding Newton's method for a uni class, and the part of the code where the user inputs f(x) keeps repeating.
This is the code I wrote; it works, but the prompt inside def f(x) repeats 2 or 3 times before the while loop starts
import math
import sympy as smp
from sympy import *
x = smp.symbols('x')
x0=float(input("Initial Value:"))
k=1
n=int(input("Number of interactions:"))
def f(x):
return eval(input("f(x):"))
f_prime= smp.diff(f(x), x)
f_prime = lambdify(x, f_prime)
while(k<=n):
r=x0-(f(x0)/f_prime(x0))
print("root:",r,"interaction:",k)
k=k+1
x0=r
| [
"You can perform the code as follows.\nCode\nfrom sympy import Symbol, diff, sympify\n\ndef newton_method(func, sym, x0, n = 10):\n diff_func = diff(func, x) # derivative of function\n \n root = x0\n for i in range(n):\n root = root - float(func.subs(x, root)/diff_func.subs(x, root))\n \n return root\n \n# Setup\nexpression = input('input function: ') # String expression corresponding to function\nf = sympify(expression) # string expression to sympy function\nx0 = float(input(\"Initial Value:\")) \nn = int(input(\"Number of interactions:\")) \n\n# Find root\nroot = newton_method(f, Symbol('x'), x0, n) # newton method with x as symbol\n\nprint('root:', root)\n\nTest\ninput function: x**2 - 2*x + 1\nInitial Value: 0\nNumber of interactions: 10\nroot: 0.9990234375\n\n"
] | [
0
] | [] | [] | [
"math",
"python"
] | stackoverflow_0074647327_math_python.txt |
Q:
Clearing decorator ipywidgets
I have a function which plots a graph with a couple ipywidgets as inputs:
from IPython.display import clear_output
def on_clicker(button):
clear_output()
@widgets.interact(dropdown=widgets.Dropdown(...),
                  datepicker=widgets.DatePicker(...))
def grapher(dropdown, datepicker):
global recalculate
... some graphing stuff
display(recalculate)
recalculate.on_click(on_clicker)
The idea is that clicking recalculate calls the graph again and clears the past output. However, when I try this, the widgets created by the decorator remain on screen and stack up with repeated use of recalculate:
Stacked widgets
Is there any way to clear these widgets as clear_output() does not seem to work?
A:
You can use the clear_output() function to clear the output of the current cell in a Jupyter notebook. However, this will not clear the widgets that you have defined in your code. In order to clear the widgets, you will need to re-initialize them each time the on_clicker() function is called.
One way to do this would be to move the code that initializes the widgets inside the on_clicker() function, so that they are re-initialized each time the function is called.
Here is an example of how you could modify your code to achieve this:
from IPython.display import clear_output
def on_clicker(button):
clear_output()
# Define the widgets inside the on_clicker function
dropdown = widgets.Dropdown(...)
datepicker = widgets.DatePicker(...)
@widgets.interact(dropdown=dropdown, datepicker=datepicker)
def grapher(dropdown, datepicker):
global recalculate
... some graphing stuff
display(recalculate)
recalculate.on_click(on_clicker)
This way, when the on_clicker() function is called, it will clear the output and re-initialize the widgets, which should solve your problem.
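An alternative that avoids re-creating the widgets on every click is to route the plot into a dedicated Output widget and clear only that. A sketch (the dropdown options are placeholders and the graphing body is assumed from the question; Output and interact are standard ipywidgets APIs):
import ipywidgets as widgets
from IPython.display import display

out = widgets.Output()

def grapher(dropdown, datepicker):
    with out:
        out.clear_output(wait=True)  # clear only the previous plot, not the controls
        # ... some graphing stuff ...

widgets.interact(grapher,
                 dropdown=widgets.Dropdown(options=['a', 'b']),  # placeholder options
                 datepicker=widgets.DatePicker())
display(out)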
| Clearing decorator ipywidgets | I have a function which plots a graph with a couple ipywidgets as inputs:
from IPython.display import clear_output
def on_clicker(button):
clear_output()
@widgets.interact(dropdown=widgets.Dropdown(...),
                  datepicker=widgets.DatePicker(...))
def grapher(dropdown, datepicker):
global recalculate
... some graphing stuff
display(recalculate)
recalculate.on_click(on_clicker)
The idea is that clicking recalculate calls the graph again and clears the past output. However, when I try this, the widgets created by the decorator remain on screen and stack up with repeated use of recalculate:
Stacked widgets
Is there any way to clear these widgets as clear_output() does not seem to work?
| [
"You can use the clear_output() function to clear the output of the current cell in a Jupyter notebook. However, this will not clear the widgets that you have defined in your code. In order to clear the widgets, you will need to re-initialize them each time the on_clicker() function is called.\nOne way to do this would be to move the code that initializes the widgets inside the on_clicker() function, so that they are re-initialized each time the function is called.\nHere is an example of how you could modify your code to achieve this:\nfrom IPython.display import clear_output\n\ndef on_clicker(button):\n clear_output()\n # Define the widgets inside the on_clicker function\n dropdown = widgets.Dropdown(...)\n datepicker = widgets.DatePicker(...)\n @widgets.interact(dropdown=dropdown, datepicker=datepicker)\n def grapher(dropdown, datepicker):\n global recalculate\n ... some graphing stuff\n display(recalculate)\n recalculate.on_click(on_clicker)\n\nThis way, when the on_clicker() function is called, it will clear the output and re-initialize the widgets, which should solve your problem.\n"
] | [
0
] | [] | [] | [
"ipywidgets",
"jupyter",
"python"
] | stackoverflow_0074650205_ipywidgets_jupyter_python.txt |
Q:
How to add a function with print in print
When I'm doing this:
def pencil():
print("pencil")
print("A", pencil())
Output showing:
pencil
A None
I tried some things but nothing worked.
A:
def pencil():
return "pencil"
print("A", pencil()) # A pencil
Or
def pencil():
print("pencil")
print("A") # A
pencil() # pencil
A:
When you do
print("A", pencil())
you are basically asking python to print "A" and the return value of the function named pencil.
Because you do not specify a return statement in your function definition, python returns None, hence the unwanted result.
As stated by other people, you just need to return "pencil" from the function so that the desired outcome can be achieved
A:
The default return value of a function is None
Your function doesn't have a return statement, so it will return None
Furthermore, print itself returns None too, as it writes its input to the sys.stdout stream. So returning print("pencil") would also give you None
In order for you to get a value from a function you need to use the return statement and return a value. In your case "pencil":
def pencil():
return "pencil"
print("A", pencil())
| How to add a function with print in print | When I'm doing this:
def pencil():
print("pencil")
print("A", pencil())
Output showing:
pencil
A None
I tried some things but nothing worked.
| [
"def pencil():\n return \"pencil\"\n\n\nprint(\"A\", pencil()) # A pencil\n\nOr\ndef pencil():\n print(\"pencil\")\n\n\nprint(\"A\") # A\npencil() # pencil\n\n",
"When you do\nprint(\"A\", pencil())\n\nyou are basically asking python to print \"A\" and the return value of the function named pencil.\nBecause you do not specify a return statement in your function definition, python returns None, hence the unwanted result.\nAs stated by other people, you just need to return \"pencil\" from the function so that the desired outcome can be achieved\n",
"The default return value of a function is None\nYour function doesn't have a return statement, so it will return None\nFurthermore, print returns None to as it prints its input to the sys.stdout stream. So returning print(\"pencil\") would also give you None\nIn order for you to get a value from a function you need to use the return statement and return a value. In your case \"pencil\":\ndef pencil():\n return \"pencil\"\n\nprint(\"A\", pencil())\n\n"
] | [
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074650214_python.txt |
Q:
How to Implement 'if' Statement for a Function to Solve a System of ODEs using solve_ivp in Python
For the time, t, from 0 to 30, z0 is a constant value of 6. For the time,t, from 30 to 100, z0 takes on the form of a time variable where z0 = 6exp(-0.5*(t-15)). I tried to implement the 'if' condition in my function but it does not seem to work. Is there anything I am doing wrong? Any help will be appreciated.
z0 = 6
z0_new = z0*np.exp(-0.5*(t-15))
def f(t,y):
return z0-y[0],3/y[0]-y[1]
fsol = solve_ivp(f,(0,100),[3,350])
plt.plot(fsol.t,fsol.y[0])
z0 = 6
def f(t,y):
if t>=30:
z0 = 6*np.exp(-0.5*(t-15))
return z0-y[0],3/y[0]-y[1]
fsol = solve_ivp(f,(0,100),[3,350])
plt.plot(fsol.t,fsol.y[0])
A:
You should use an if/elif/else structure. You don't need to make 2 functions for the 2 different values of z0.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def f(t, y):
    if 0 <= t <= 30:
        z0 = 6
    elif t > 30:
        z0 = 6 * np.exp(-0.5 * (t - 15))
    else:
        # solve_ivp expects the RHS to return derivatives, so signal a bad t
        # with an exception rather than returning a string
        raise ValueError(f"t value {t} is less than zero")
    return z0 - y[0], 3 / y[0] - y[1]

fsol = solve_ivp(f, (0, 100), [3, 350])
plt.plot(fsol.t, fsol.y[0])
plt.show()
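If a smoother curve is needed, an explicit evaluation grid can be passed as well (t_eval is a standard solve_ivp argument; the grid size here is an arbitrary choice):
fsol = solve_ivp(f, (0, 100), [3, 350], t_eval=np.linspace(0, 100, 500))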
| How to Implement 'if' Statement for a Function to Solve a System of ODEs using solve_ivp in Python | For the time, t, from 0 to 30, z0 is a constant value of 6. For the time,t, from 30 to 100, z0 takes on the form of a time variable where z0 = 6exp(-0.5*(t-15)). I tried to implement the 'if' condition in my function but it does not seem to work. Is there anything I am doing wrong? Any help will be appreciated.
z0 = 6
z0_new = z0*np.exp(-0.5*(t-15))
def f(t,y):
return z0-y[0],3/y[0]-y[1]
fsol = solve_ivp(f,(0,100),[3,350])
plt.plot(fsol.t,fsol.y[0])
z0 = 6
def f(t,y):
if t>=30:
z0 = 6*np.exp(-0.5*(t-15))
return z0-y[0],3/y[0]-y[1]
fsol = solve_ivp(f,(0,100),[3,350])
plt.plot(fsol.t,fsol.y[0])
| [
"You should use an if/elif/else structure. You don't need to make 2 functions for the 2 different values of z0.\ndef f(t,y):\n if 0 <= t <= 30:\n z0 = 6\n elif t > 30: \n z0 = 6 * np.exp(-0.5 * (t - 15))\n else:\n return f\"t value {t} is less than zero\"\n return z0 - y[0], 3 / y[0] - y[1]\n\nfsol = solve_ivp(f, (0,100), [3,350])\nplt.plot(fsol.t, fsol.y[0])\n\n"
] | [
0
] | [] | [] | [
"if_statement",
"math",
"ode",
"python"
] | stackoverflow_0074650245_if_statement_math_ode_python.txt |
Q:
Python - openpyxl - Use openpyxl to get number of rows that contain a specific value
I'm fairly new to Python. I'm using openpyxl for an SEO project for my brother, and I'm trying to get the number of rows that contain a specific value.
I have a spreadsheet that looks something like this:
I want to write a program that will get the keywords and parse them to a string by state, so like:
Missouri = "search item 1, search item 2, search item 5, search item 6"
Illinois = "search item 3, search item 4"
I have thus far created a program like this:
#first, import openpyxl
import openpyxl
#next, give location of file
path = "testExcel.xlsx"
#Open workbook by creating object
wb_object = openpyxl.load_workbook(path)
#Get workbook active sheet object
sheet_object = wb_object.active
#Getting the value of maximum rows
#and column
row = sheet_object.max_row
column = sheet_object.max_column
print("Total Rows:", row)
print("Total Columns:", column)
#printing the value of the fourth column, state
#Loop will print all values
#of first column
print("\nValue of fourth column")
for i in range(4, row + 1):
cell_object = sheet_object.cell(row=i, column=4)
split_item_test = cell_object.value.split(",")
split_item_test_result = split_item_test[0]
state = split_item_test_result
print(state)
if (state == 'Missouri'):
print(state.count('Missouri'))
print("All good")
The problem is that after doing this, it prints 1 repeatedly rather than a total count for Missouri. I would like the total number of mentions of each state, and then eventually to collect each state's keywords into a single string.
Is this possible with openpyxl? Or will I need a different library?
A:
Ok another option
This will create a dictionary 'state_dict' in the format per your question
Missouri = "search item 1, search item 2, search item 5, search item
6"
Illinois = "search item 3, search item 4"
...
print("\nValue of fourth column")
state_dict = {}
for row in sheet_object.iter_rows(min_row=2, max_row=sheet_object.max_row):
k = row[3].value.split(',')[1].strip()
v = row[0].value
if k in state_dict:
state_dict[k] += [v]
else:
state_dict[k] = [v]
### Print values
for key, value in state_dict.items():
print(f'{key}, Total {len(value)}', end='; ')
for v in value:
print(f'{v}', end=', ')
print('')
Will create the dictionary 'state_dict' as so;
'Missouri' = {list: 4} ['search item 1', 'search item 2', 'search item 5', 'search item 6']
'Illinois' = {list: 2} ['search item 3', 'search item 4']
'Alabama' = {list: 1} ['search item 7']
'Colorado' = {list: 1} ['search item 8']
Print output
Value of fourth column
Missouri = Total 4; search item 1, search item 2, search item 5, search item 6,
Illinois = Total 2; search item 3, search item 4,
Alabama = Total 1; search item 7,
Colorado = Total 1; search item 8,
A:
ranemirusG is right, there are several ways to obtain the same result. Here's another option...I attempted to preserve your thought process, good luck.
print("\nValue of fourth column")
missouri_list = [] # empty list
illinois_list = [] # empty list
for i in range(2, row+1): # It didn't look like "4, row+1" captured the full sheet, try (2, row+1)
cell_object = sheet_object.cell(row=i, column=4)
keyword = sheet_object.cell(row=i, column=1)
keyword_fmt = keyword.value # Captures values in Keyword column
split_item_test = cell_object.value.split(",")
split_item_test_result = split_item_test[1] # 1 captures states
state = split_item_test_result
print(state)
# simple if statement to capture results in a list
if 'Missouri' in state:
missouri_list.append(keyword_fmt)
if 'Illinois' in state:
illinois_list.append(keyword_fmt)
print(missouri_list)
print(len(missouri_list)) # Counts the number of occurances
print(illinois_list)
print(len(illinois_list)) # Counts the number of occurances
print("All good")
A:
Yes, it's possible with openpyxl.
To achieve your real goal try something like this:
states_and_keywords = {}
for i in range(4, row + 1):
cell_object = sheet_object.cell(row=i, column=4)
split_item_test = cell_object.value.split(",")
split_item_test_result = split_item_test[1] #note that the element should be 1 for the state
state = split_item_test_result.strip(" ") #trim whitespace (after comma)
keyword = cell_object.offset(0,-3).value #this gets the value of the keyword for that row
if state not in states_and_keywords:
states_and_keywords[state] = [keyword]
else:
states_and_keywords[state].append(keyword)
print(states_and_keywords)
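For comparison, the same grouping is short in pandas (a sketch that assumes, like the answers above, that the state is the part after the comma in the fourth column and the keyword is in the first column):
import pandas as pd

df = pd.read_excel("testExcel.xlsx")
df["State"] = df.iloc[:, 3].str.split(",").str[1].str.strip()   # state is after the comma
by_state = df.groupby("State")[df.columns[0]].agg(list)         # keywords grouped per state
print(by_state)
print(by_state.str.len())                                       # number of mentions per state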
| Python - openpyxl - Use openpyxl to get number of rows that contain a specific value | I'm fairly new to Python. I'm using openpyxl for an SEO project for my brother, and I'm trying to get the number of rows that contain a specific value.
I have a spreadsheet that looks something like this:
I want to write a program that will get the keywords and parse them to a string by state, so like:
Missouri = "search item 1, search item 2, search item 5, search item 6"
Illinois = "search item 3, search item 4"
I have thus far created a program like this:
#first, import openpyxl
import openpyxl
#next, give location of file
path = "testExcel.xlsx"
#Open workbook by creating object
wb_object = openpyxl.load_workbook(path)
#Get workbook active sheet object
sheet_object = wb_object.active
#Getting the value of maximum rows
#and column
row = sheet_object.max_row
column = sheet_object.max_column
print("Total Rows:", row)
print("Total Columns:", column)
#printing the value of the fourth column, state
#Loop will print all values
#of first column
print("\nValue of fourth column")
for i in range(4, row + 1):
cell_object = sheet_object.cell(row=i, column=4)
split_item_test = cell_object.value.split(",")
split_item_test_result = split_item_test[0]
state = split_item_test_result
print(state)
if (state == 'Missouri'):
print(state.count('Missouri'))
print("All good")
The problem is that after doing this, it prints 1 repeatedly rather than a total count for Missouri. I would like the total number of mentions of each state, and then eventually to collect each state's keywords into a single string.
Is this possible with openpyxl? Or will I need a different library?
| [
"Ok another option\nThis will create a dictionary 'state_dict' in the format per your question\n\nMissouri = \"search item 1, search item 2, search item 5, search item\n6\" \nIllinois = \"search item 3, search item 4\"\n\n...\nprint(\"\\nValue of fourth column\")\nstate_dict = {}\nfor row in sheet_object.iter_rows(min_row=2, max_row=sheet_object.max_row):\n k = row[3].value.split(',')[1].strip()\n v = row[0].value\n if k in state_dict:\n state_dict[k] += [v]\n else:\n state_dict[k] = [v]\n\n### Print values\nfor key, value in state_dict.items():\n print(f'{key}, Total {len(value)}', end='; ')\n for v in value:\n print(f'{v}', end=', ')\n print('')\n\nWill create the dictionary 'state_dict' as so;\n'Missouri' = {list: 4} ['search item 1', 'search item 2', 'search item 5', 'search item 6']\n'Illinois' = {list: 2} ['search item 3', 'search item 4']\n'Alabama' = {list: 1} ['search item 7']\n'Colorado' = {list: 1} ['search item 8']\n\nPrint output\nValue of fourth column\nMissouri = Total 4; search item 1, search item 2, search item 5, search item 6, \nIllinois = Total 2; search item 3, search item 4, \nAlabama = Total 1; search item 7, \nColorado = Total 1; search item 8, \n\n",
"ranemirusG is right, there are several ways to obtain the same result. Here's another option...I attempted to preserve your thought process, good luck.\nprint(\"\\nValue of fourth column\")\n\nmissouri_list = [] # empty list\nillinois_list = [] # empty list\n\nfor i in range(2, row+1): # It didn't look like \"4, row+1\" captured the full sheet, try (2, row+1)\n cell_object = sheet_object.cell(row=i, column=4)\n keyword = sheet_object.cell(row=i, column=1)\n keyword_fmt = keyword.value # Captures values in Keyword column\n split_item_test = cell_object.value.split(\",\")\n split_item_test_result = split_item_test[1] # 1 captures states\n state = split_item_test_result\n print(state)\n\n # simple if statement to capture results in a list\n if 'Missouri' in state:\n missouri_list.append(keyword_fmt)\n if 'Illinois' in state:\n illinois_list.append(keyword_fmt)\nprint(missouri_list)\nprint(len(missouri_list)) # Counts the number of occurances\nprint(illinois_list)\nprint(len(illinois_list)) # Counts the number of occurances\nprint(\"All good\")\n\n",
"Yes, it's possible with openpyxl.\nTo achieve your real goal try something like this:\nstates_and_keywords = {}\nfor i in range(4, row + 1):\n cell_object = sheet_object.cell(row=i, column=4)\n split_item_test = cell_object.value.split(\",\")\n split_item_test_result = split_item_test[1] #note that the element should be 1 for the state\n state = split_item_test_result.strip(\" \") #trim whitespace (after comma)\n keyword = cell_object.offset(0,-3).value #this gets the value of the keyword for that row\n if state not in states_and_keywords:\n states_and_keywords[state] = [keyword]\n else:\n states_and_keywords[state].append(keyword) \nprint(states_and_keywords)\n\n"
] | [
1,
0,
0
] | [] | [] | [
"excel",
"loops",
"openpyxl",
"python"
] | stackoverflow_0074649003_excel_loops_openpyxl_python.txt |
Q:
How to stack capsnet (capsule neural network) properly?
Capsule neural networks use convolution, primary capsule, and digit capsule layers, while convolutional neural networks use convolution and max-pooling layers. I want to compare a convolutional neural network with a capsule neural network. The table below is the architecture of my CNN model; I need to build a similar architecture as a capsule neural network. So how do I stack a capsule neural network properly, and what does the stacked capsule neural network look like?
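For orientation, here is a minimal sketch of the standard CapsNet stem from the Sabour et al. paper (conv stem, then primary capsules via a strided convolution, reshape and squash). The input shape and filter counts are the paper's MNIST values, not the questioner's table, and the routing-based digit-capsule layer is omitted:
import tensorflow as tf

def squash(v, axis=-1):
    # CapsNet nonlinearity: shrinks short vectors toward 0 and long vectors toward unit length
    s2 = tf.reduce_sum(tf.square(v), axis=axis, keepdims=True)
    return (s2 / (1.0 + s2)) * v / tf.sqrt(s2 + 1e-8)

inputs = tf.keras.Input(shape=(28, 28, 1))                      # MNIST-sized input (illustrative)
x = tf.keras.layers.Conv2D(256, 9, activation="relu")(inputs)   # plain convolutional stem
x = tf.keras.layers.Conv2D(32 * 8, 9, strides=2)(x)             # primary caps: 32 maps of 8-D capsules
x = tf.keras.layers.Reshape((-1, 8))(x)                         # -> (num_capsules, capsule_dim)
primary_caps = tf.keras.layers.Lambda(squash)(x)
# A digit-capsule layer with dynamic routing would follow here in place of a Dense head.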
| How to stack capsnet (capsule neural network) properly? | Capsule neural networks use convolution, primary capsule, and digit capsule layers, while convolutional neural networks use convolution and max-pooling layers. I want to compare a convolutional neural network with a capsule neural network. The table below is the architecture of my CNN model; I need to build a similar architecture as a capsule neural network. So how do I stack a capsule neural network properly, and what does the stacked capsule neural network look like?
| [] | [] | [
"from your question, I understand that you need to create networks that compared the input and segmentations. There are techniques to create objects segmentation by auto-encoder, see the example images divided by color shades and background you can determine initial image components.\nFor the segmentation part you can apply networks or object segmentation where you can merged it back into a picture with labels and process for image recognition or other tasks. In the example is simply masking with regions and shadows.\nSample: My simple implementation is not too difficult than multi-head input networks categorize tasks, shadow images and regions is one label can be significant.\nimport os\nfrom os.path import exists\n\nimport tensorflow as tf\nimport tensorflow_io as tfio\n\nimport matplotlib.pyplot as plt\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Variables\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nIMG_SHAPE = ( 32, 32, 1 ) \ntsk1_CLASSES_NUM = 2\ninitial_epochs = 5\n\nPATH = os.path.join('F:\\\\datasets\\\\downloads\\\\Actors_jpg\\\\train\\\\Pikaploy', '*.jpg')\nPATH_2 = os.path.join('F:\\\\datasets\\\\downloads\\\\Actors_jpg\\\\train\\\\Candidt Kibt', '*.jpg')\nfiles = tf.data.Dataset.list_files(PATH)\nfiles_2 = tf.data.Dataset.list_files(PATH_2)\n\nlist_file = []\nlist_file_actual = []\nlist_label = []\nlist_label_actual = [ 'Pikaploy', 'Pikaploy', 'Pikaploy', 'Pikaploy', 'Pikaploy', 'Candidt Kibt', 'Candidt Kibt', 'Candidt Kibt', 'Candidt Kibt', 'Candidt Kibt' ]\nfor file in files.take(5):\n image = tf.io.read_file( file )\n image = tf.io.decode_jpeg( image, channels=0, ratio=1, fancy_upscaling=True, try_recover_truncated=False, acceptable_fraction=1, dct_method='INTEGER_FAST', name=\"decode_jpeg\" )\n \n list_file_actual.append(image)\n image = tf.image.resize(image, [32,32], method='nearest')\n \n \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n : Image Masking\n \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n n_horizon = 64\n n_features = 16\n masking_layer = tf.keras.layers.Masking(mask_value=50, input_shape=(n_horizon, n_features))\n image = tf.constant( masking_layer(image)[:,:,0], shape=(32, 32, 1) )\n \n list_file.append(image)\n \n temp = tf.ones([ 20, 20 ]).numpy() * 0\n list_label.append( temp )\n \nfor file in files_2.take(5):\n image = tf.io.read_file( file )\n image = tf.io.decode_jpeg( image, channels=0, ratio=1, fancy_upscaling=True, try_recover_truncated=False, acceptable_fraction=1, dct_method='INTEGER_FAST', name=\"decode_jpeg\" )\n \n list_file_actual.append(image)\n image = tf.image.resize(image, [32,32], method='nearest')\n \n \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n : Image Masking\n \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n n_horizon = 64\n n_features = 16\n masking_layer = tf.keras.layers.Masking(mask_value=50, input_shape=(n_horizon, n_features))\n image = tf.constant( masking_layer(image)[:,:,0], shape=(32, 32, 1) )\n \n list_file.append(image)\n temp = tf.ones([ 20, 20 ]).numpy() * 9\n list_label.append( temp )\n \ncheckpoint_path = \"F:\\\\models\\\\checkpoint\\\\\" + os.path.basename(__file__).split('.')[0] + 
\"\\\\TF_DataSets_01.h5\"\ncheckpoint_dir = os.path.dirname(checkpoint_path)\n\nif not exists(checkpoint_dir) : \n os.mkdir(checkpoint_dir)\n print(\"Create directory: \" + checkpoint_dir)\n \n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Definition / Class\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\ndef build_model():\n\n branch_A_input = tf.keras.Input(shape=IMG_SHAPE)\n branch_A_rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1./127.5, offset= -1)(branch_A_input)\n branch_A = tf.keras.layers.Dropout(0.3)(branch_A_rescale)\n branch_A = tf.keras.layers.Conv2D(filters = 128, kernel_size = 13, activation= 'swish', name = \"base_conv_A\")(branch_A)\n branch_A = tf.keras.layers.BatchNormalization(name = \"base_batch_normalization_A\")(branch_A)\n branch_A = tf.keras.Model(inputs=branch_A_input, outputs = branch_A)\n\n\n branch_B_input = tf.keras.Input(shape=IMG_SHAPE)\n branch_B_rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1./127.5, offset= -1)(branch_B_input)\n branch_B = tf.keras.layers.Dropout(0.3)(branch_B_rescale)\n branch_B = tf.keras.layers.Conv2D(filters = 128, kernel_size = 13, activation= 'swish', name = \"base_conv_B\")(branch_B)\n branch_B = tf.keras.layers.BatchNormalization(name = \"base_batch_normalization_B\")(branch_B)\n branch_B = tf.keras.Model(inputs=branch_B_input, outputs = branch_B)\n\n merge = tf.keras.layers.Concatenate()([branch_A.output, branch_B.output])\n\n output_A = tf.keras.layers.Dense(tsk1_CLASSES_NUM, activation='softmax', name='4cls')(merge)\n output_B = tf.keras.layers.Dense(1, name='2cls')(merge)\n\n model = tf.keras.Model(inputs = [branch_A.input, branch_B.input] , outputs = [output_A, output_B], name=\"multi_task_model\")\n\n optimizer = tf.keras.optimizers.get('adam')\n optimizer.learning_rate = 0.001\n losses = {'4cls': tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),\n '2cls': tf.keras.losses.BinaryCrossentropy(from_logits=True)}\n\n mtrcs = {\n \"4cls\": 'accuracy',\n \"2cls\": 'accuracy',\n }\n model.compile(optimizer=optimizer, loss= losses, metrics=mtrcs)\n\n return model\n \n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: DataSet / Input\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nlist_label = tf.constant( list_label, shape=( 10, 20, 20, 1 ) )\nlist_file = tf.constant( tf.cast( list_file, dtype=tf.int64), shape=( 10, 32, 32, 1 ) )\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Task\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nmodel = build_model()\nhistory = model.fit( [list_file, list_file], list_label, epochs=initial_epochs)\n\ninput( '...' 
)\n\nData preparation: Image masking background\n\nOutput:\n2022-12-02 09:39:30.019022: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8100\n1/1 [==============================] - ETA: 0s - loss: nan - 4cls_loss: nan - 2cls_loss: 0.7376 - 4cls_accuracy: 0.2632 1/1 [==============================] - 2s 2s/step - loss: nan - 4cls_loss: nan - 2cls_loss: 0.7376 - 4cls_accuracy: 0.2632 - 2cls_accuracy: 0.3570\nEpoch 2/5\n1/1 [==============================] - ETA: 0s - loss: nan - 4cls_loss: nan - 2cls_loss: nan - 4cls_accuracy: 0.5000 - 21/1 [==============================] - 0s 11ms/step - loss: nan - 4cls_loss: nan - 2cls_loss: nan - 4cls_accuracy: 0.5000 - 2cls_accuracy: 0.5000\n\n"
] | [
-2
] | [
"conv_neural_network",
"deep_learning",
"neural_network",
"python",
"tensorflow"
] | stackoverflow_0074649733_conv_neural_network_deep_learning_neural_network_python_tensorflow.txt |
Q:
Testing a POST that uses Flask-WTF validate_on_submit
I am stumped on testing a POST that adds a category to the database, where I've used Flask-WTF for validation and CSRF protection. For the CRUD operations on my website I've used Flask, Flask-WTF and Flask-SQLAlchemy. It is my first independent project, and I find myself a little at a loss on how to test the Flask-WTF validate_on_submit function.
Here are the models:
class Users(db.Model):
id = db.Column(db.Integer, primary_key=True, unique=True)
name = db.Column(db.String(80), nullable=False)
email = db.Column(db.String(250), unique=True)
class Category(db.Model):
id = db.Column(db.Integer, primary_key=True, unique=True)
name = db.Column(db.String(250), nullable=False, unique=True)
users_id = db.Column(db.Integer, db.ForeignKey('users.id'))
Here's the form:
class CategoryForm(Form):
name = StringField(
'Name', [validators.Length(min=4, max=250, message="name problem")])
And here's the controller:
@category.route('/category/add', methods=['GET', 'POST'])
@login_required
def addCategory():
""" Add a new category.
Returns: Redirect Home.
"""
# Initiate the form.
form = CategoryForm()
# On POST of a valid form, add the new category.
if form.validate_on_submit():
category = Category(
form.name.data, login_session['users_id'])
db.session.add(category)
db.session.commit()
flash('New Category %s Successfully Created' % category.name)
return redirect(url_for('category.showHome'))
else:
# Render the form to add the category.
return render_template('newCategory.html', form=form)
How do I write a test for the if statement with the validate_on_submit function?
A:
You should have different configurations for your app, depending on whether you are running locally, in production, or executing unit tests. One configuration you can set is
WTF_CSRF_ENABLED = False
See flask-wtforms documentation.
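A minimal test sketch under that configuration (assuming a module-level app and the session keys used elsewhere in this question; the category name is a placeholder):
app.config['TESTING'] = True
app.config['WTF_CSRF_ENABLED'] = False  # skip CSRF token validation in tests

with app.test_client() as c:
    with c.session_transaction() as sess:
        sess['users_id'] = 1  # pre-populate the session so @login_required passes
    response = c.post('/category/add', data={'name': 'Test Category'}, follow_redirects=True)
    assert response.status_code == 200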
A:
Using py.test and a conftest.py recommended by Delightful testing with pytest and SQLAlchemy, here's a test that confirms the added category.
def test_add_category_post(app, session):
"""Does add category post a new category?"""
TESTEMAIL = "[email protected]"
TESTUSER = "Joe Test"
user = Users.query.filter(Users.email==TESTEMAIL).first()
category = Category(name="Added Category", users_id=user.id)
form = CategoryForm(formdata=None, obj=category)
with app.test_client() as c:
with c.session_transaction() as sess:
sess['email'] = TESTEMAIL
sess['username'] = TESTUSER
sess['users_id'] = user.id
response = c.post(
'/category/add', data=form.data, follow_redirects=True)
assert response.status_code == 200
added_category = Category.query.filter(
Category.name=="Added Category").first()
assert added_category
session.delete(added_category)
session.commit()
Note that the new category is assigned to a variable and then used to create a form. The form's data is used in the post.
A:
Building on the comments of @mas, I arrived at this solution, which worked for me:
topic_name = "test_topic"
response = fixt_client_logged_in.post('/create', data={"value":topic_name}, follow_redirects=True)
I am using this form class:
class SimpleSubmitForm(FlaskForm):
value = StringField(validators=[DataRequired()])
submit = SubmitField()
In this html file:
{{form.hidden_tag()}}
{{form.value.label("Topic", class="form-label")}}
{{form.value(value=topic_name, class="form-control")}}
<br/>
{{form.submit(value="submit", class="btn btn-primary")}}
Note that I am using the hidden_tag for the CSRF security, however when testing I have this extra line that de-activates it:
app.config['WTF_CSRF_ENABLED']=False
I have no idea how it actually works under the hood but my hypothesis is this: The wtform FlaskForm object looks at the "data" attribute of the request, which should be a dict. It then looks for keys in that dict that have the same name as its attributes. If it finds a key with the same name then it assigns that value to its attribute.
| Testing a POST that uses Flask-WTF validate_on_submit | I am stumped on testing a POST that adds a category to the database, where I've used Flask-WTF for validation and CSRF protection. For the CRUD operations on my website I've used Flask, Flask-WTF and Flask-SQLAlchemy. It is my first independent project, and I find myself a little at a loss on how to test the Flask-WTF validate_on_submit function.
Here are the models:
class Users(db.Model):
id = db.Column(db.Integer, primary_key=True, unique=True)
name = db.Column(db.String(80), nullable=False)
email = db.Column(db.String(250), unique=True)
class Category(db.Model):
id = db.Column(db.Integer, primary_key=True, unique=True)
name = db.Column(db.String(250), nullable=False, unique=True)
users_id = db.Column(db.Integer, db.ForeignKey('users.id'))
Here's the form:
class CategoryForm(Form):
name = StringField(
'Name', [validators.Length(min=4, max=250, message="name problem")])
And here's the controller:
@category.route('/category/add', methods=['GET', 'POST'])
@login_required
def addCategory():
""" Add a new category.
Returns: Redirect Home.
"""
# Initiate the form.
form = CategoryForm()
# On POST of a valid form, add the new category.
if form.validate_on_submit():
category = Category(
form.name.data, login_session['users_id'])
db.session.add(category)
db.session.commit()
flash('New Category %s Successfully Created' % category.name)
return redirect(url_for('category.showHome'))
else:
# Render the form to add the category.
return render_template('newCategory.html', form=form)
How do I write a test for the if statement with the validate_on_submit function?
| [
"You should have different configurations for your app, depending if you are local / in production / executing unit tests. One configuration you can set is\nWTF_CSRF_ENABLED = False\n\nSee flask-wtforms documentation.\n",
"Using py.test and a conftest.py recommended by Delightful testing with pytest and SQLAlchemy, here's a test that confirms the added category.\ndef test_add_category_post(app, session):\n \"\"\"Does add category post a new category?\"\"\"\n TESTEMAIL = \"[email protected]\"\n TESTUSER = \"Joe Test\"\n user = Users.query.filter(Users.email==TESTEMAIL).first()\n category = Category(name=\"Added Category\", users_id=user.id)\n form = CategoryForm(formdata=None, obj=category)\n with app.test_client() as c:\n with c.session_transaction() as sess:\n sess['email'] = TESTEMAIL\n sess['username'] = TESTUSER \n sess['users_id'] = user.id\n response = c.post(\n '/category/add', data=form.data, follow_redirects=True)\n assert response.status_code == 200\n added_category = Category.query.filter(\n Category.name==\"Added Category\").first()\n assert added_category\n session.delete(added_category)\n session.commit()\n\nNote that the new category is assigned to a variable and then used to create a form. The form's data is used in the post. \n",
"Working on the comments of @mas I got to this solution which worked for me:\ntopic_name = \"test_topic\"\nresponse = fixt_client_logged_in.post('/create', data={\"value\":topic_name}, follow_redirects=True)\n\nI am using this form class:\nclass SimpleSubmitForm(FlaskForm):\n value = StringField(validators=[DataRequired()])\n submit = SubmitField()\n\nIn this html file:\n{{form.hidden_tag()}}\n\n{{form.value.label(\"Topic\", class=\"form-label\")}}\n{{form.value(value=topic_name, class=\"form-control\")}}\n<br/>\n{{form.submit(value=\"submit\", class=\"btn btn-primary\")}}\n\nNote that I am using the hidden_tag for the CSRF security, however when testing I have this extra line that de-activates it:\napp.config['WTF_CSRF_ENABLED']=False\n\nI have no idea how it actually works under the hood but my hypothesis is this: The wtform FlaskForm object looks at the \"data\" attribute of the request, which should be a dict. It then looks for keys in that dict that have the same name as its attributes. If it finds a key with the same name then it assigns that value to its attribute.\n"
] | [
5,
2,
0
] | [] | [] | [
"flask_wtforms",
"python",
"python_2.7",
"wtforms"
] | stackoverflow_0037579411_flask_wtforms_python_python_2.7_wtforms.txt |
Q:
get tree view from a dictionary with anytree or rich, or treelib
I read the manual at https://anytree.readthedocs.io/en/latest/#, but I couldn't figure out how to turn a dictionary into a tree view. Can anyone help?
data = {
'Marc': 'Udo',
'Lian': 'Marc',
'Dan': 'Udo',
'Jet': 'Dan',
'Jan': 'Dan',
'Joe': 'Dan',
}
output is
Udo
├── Marc
│   └── Lian
└── Dan
    ├── Jet
    ├── Jan
    └── Joe
A:
First you need to create the tree from your dict of "relationship" data, there are many ways to do this but here's an example:
from anytree import Node
nodes = {}
for k, v in data.items():
nk = nodes[k] = nodes.get(k) or Node(k)
nv = nodes[v] = nodes.get(v) or Node(v)
nk.parent = nv
Now you need to identify the root node, in your case it is the unique node which has no parent (Udo).
[root] = [n for n in nodes.values() if n.parent is None]
# Or, if you don't need the validation that there is a unique root:
root = <any node>
while root.parent is not None:
root = root.parent
# Or, if you already knew the root node's name then just:
root = nodes[root_name]
Once you have the root node, you can render the tree like this:
>>> from anytree import RenderTree
>>> print(RenderTree(root).by_attr())
Udo
├── Marc
│   └── Lian
└── Dan
    ├── Jet
    ├── Jan
    └── Joe
The anytree API is richer than rich, so it's a little more complicated with rich.tree:
>>> import rich.tree
>>> nodes = {}
... for k, v in data.items():
... nk = nodes[k] = nodes.get(k) or rich.tree.Tree(k)
... nv = nodes[v] = nodes.get(v) or rich.tree.Tree(v)
... nv.children.append(nk)
... nk.parent = nv
...
>>> [root] = [n for n in nodes.values() if getattr(n, "parent", None) is None]
>>> rich.print(root)
Udo
├── Marc
│   └── Lian
└── Dan
    ├── Jet
    ├── Jan
    └── Joe
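Since the question also mentions treelib, the equivalent there is short (a sketch; it relies on each parent appearing in the dict before its children, which holds for this data):
from treelib import Tree

tree = Tree()
tree.create_node("Udo", "Udo")  # the root has to exist before its children
for child, parent in data.items():
    tree.create_node(child, child, parent=parent)
tree.show()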
| get tree view from a dictionary with anytree or rich, or treelib | I read the manual at https://anytree.readthedocs.io/en/latest/#, but I couldn't figure out how to turn a dictionary into a tree view. Can anyone help?
data = {
'Marc': 'Udo',
'Lian': 'Marc',
'Dan': 'Udo',
'Jet': 'Dan',
'Jan': 'Dan',
'Joe': 'Dan',
}
output is
Udo
├── Marc
│   └── Lian
└── Dan
    ├── Jet
    ├── Jan
    └── Joe
| [
"First you need to create the tree from your dict of \"relationship\" data, there are many ways to do this but here's an example:\nfrom anytree import Node\n\nnodes = {}\nfor k, v in data.items():\n nk = nodes[k] = nodes.get(k) or Node(k)\n nv = nodes[v] = nodes.get(v) or Node(v)\n nk.parent = nv\n\nNow you need to identify the root node, in your case it is the unique node which has no parent (Udo).\n[root] = [n for n in nodes.values() if n.parent is None]\n\n# Or, if you don't need the validation that there is a unique root:\nroot = <any node>\nwhile root.parent is not None:\n root = root.parent\n\n# Or, if you already knew the root node's name then just:\nroot = nodes[root_name]\n\nOnce you have the root node, you can render the tree like this:\n>>> from anytree import RenderTree\n>>> print(RenderTree(root).by_attr())\nUdo\nβββ Marc\nβ βββ Lian\nβββ Dan\n βββ Jet\n βββ Jan\n βββ Joe\n\nThe anytree API is richer than rich, so it's a little more complicated with rich.tree:\n>>> import rich.tree\n>>> nodes = {}\n... for k, v in data.items():\n... nk = nodes[k] = nodes.get(k) or rich.tree.Tree(k)\n... nv = nodes[v] = nodes.get(v) or rich.tree.Tree(v)\n... nv.children.append(nk)\n... nk.parent = nv\n... \n>>> [root] = [n for n in nodes.values() if getattr(n, \"parent\", None) is None]\n>>> rich.print(root)\nUdo\nβββ Marc\nβ βββ Lian\nβββ Dan\n βββ Jet\n βββ Jan\n βββ Joe\n\n"
] | [
1
] | [] | [] | [
"anytree",
"python",
"rich",
"treelib"
] | stackoverflow_0074650261_anytree_python_rich_treelib.txt |