content (stringlengths 85-101k) | title (stringlengths 0-150) | question (stringlengths 15-48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35-137)
---|---|---|---|---|---|---|---|---
Q:
Pydantic error when reading data from JSON
I am writing code, which loads the data of a JSON file and parses it using Pydantic.
Here is the Python code:
import json
import pydantic
from typing import Optional, List
class Car(pydantic.BaseModel):
manufacturer: str
model: str
date_of_manufacture: str
date_of_sale: str
number_plate: str
price: float
type_of_fuel: Optional[str]
location_of_sale: Optional[str]
def load_data() -> None:
with open("./data.json") as file:
data = json.load(file)
cars: List[Car] = [Car(**item) for item in data]
print(cars[0])
if __name__ == "__main__":
load_data()
And here is the JSON data:
[
{
"manufacturer": "BMW",
"model": "i8",
"date_of_manufacture": "14/06/2021",
"date_of_sale": "19/11/2022",
"number_plate": "ND21WHP",
"price": "100,000",
"type_of_fuel": "electric",
"location_of_sale": "Leicester, England"
},
{
"manufacturer": "Audi",
"model": "TT RS",
"date_of_manufacture": "22/02/2019",
"date_of_sale": "12/08/2021",
"number_plate": "LR69FOW",
"price": "67,000",
"type_of_fuel": "petrol",
"location_of_sale": "Manchester, England"
}
]
And this is the error I am getting:
File "pydantic\main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Car
price value is not a valid float (type=type_error.float)
I have tried adding .00 to the end of the price strings but I get the same error.
A:
The problem comes from the fact that the default Pydantic validator for float simply tries to coerce the string value to float (as @Paul mentioned). And float("100,000") leads to a ValueError.
I am surprised no one suggested this, but if you don't control the source JSON data, you can easily solve this issue by writing your own little validator to properly format the string (or parse the number properly yourself):
from pydantic import BaseModel, validator
class Car(BaseModel):
manufacturer: str
model: str
date_of_manufacture: str
date_of_sale: str
number_plate: str
price: float
type_of_fuel: Optional[str]
location_of_sale: Optional[str]
@validator("price", pre=True)
def adjust_number_format(cls, v: object) -> object:
if isinstance(v, str):
return v.replace(",", "")
return v
The pre=True is important to make the adjustment before the default field validator receives the value. I purposefully did it like this to show that you don't need to convert the str to a float yourself, but you could of course do that too:
...
@validator("price", pre=True)
def parse_number(cls, v: object) -> object:
if isinstance(v, str):
return float(v.replace(",", ""))
return v
Both of these work and require no changes in the JSON document.
Finally, if you have (or anticipate having in the future) multiple number-like fields and know that all of them may cause such problems with weirdly formatted strings, you could generalize that validator like this (different class for demo purposes):
from pydantic import BaseModel, validator
from pydantic.fields import ModelField
class Car2(BaseModel):
model: str
price: float
year: int
numbers: list[float]
@validator("*", pre=True, each_item=True)
def format_number_string(cls, v: object, field: ModelField) -> object:
if issubclass(field.type_, (float, int)) and isinstance(v, str):
return v.replace(",", "")
return v
if __name__ == "__main__":
car = Car2.parse_obj({
"model": "foo",
"price": "100,000",
"year": "2,010",
"numbers": ["1", "3.14", "10,000"]
})
print(car) # model='foo' price=100000.0 year=2010 numbers=[1.0, 3.14, 10000.0]
A:
You could also change the thousands-separator comma , to an underscore _ and keep the value as a string.
Pydantic then takes care of the str to float conversion.
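As a minimal sketch of that idea (assuming pydantic v1, which the question's error output indicates; Python's float() accepts underscores in numeric strings, so the default coercion succeeds), with a trimmed-down model:
from typing import Optional
from pydantic import BaseModel

class Car(BaseModel):
    manufacturer: str
    model: str
    price: float
    type_of_fuel: Optional[str]

# "100_000" is a valid numeric string for float(), so pydantic's default
# float validator coerces it without any custom validator.
car = Car(manufacturer="BMW", model="i8", price="100_000", type_of_fuel="electric")
print(car.price)  # 100000.0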
A:
You need to remove the quotes around the numbers since they are being interpreted as strings.
"price": "100,000" should be:
"price": 100000
| Pydantic error when reading data from JSON | I am writing code, which loads the data of a JSON file and parses it using Pydantic.
Here is the Python code:
import json
import pydantic
from typing import Optional, List
class Car(pydantic.BaseModel):
manufacturer: str
model: str
date_of_manufacture: str
date_of_sale: str
number_plate: str
price: float
type_of_fuel: Optional[str]
location_of_sale: Optional[str]
def load_data() -> None:
with open("./data.json") as file:
data = json.load(file)
cars: List[Car] = [Car(**item) for item in data]
print(cars[0])
if __name__ == "__main__":
load_data()
And here is the JSON data:
[
{
"manufacturer": "BMW",
"model": "i8",
"date_of_manufacture": "14/06/2021",
"date_of_sale": "19/11/2022",
"number_plate": "ND21WHP",
"price": "100,000",
"type_of_fuel": "electric",
"location_of_sale": "Leicester, England"
},
{
"manufacturer": "Audi",
"model": "TT RS",
"date_of_manufacture": "22/02/2019",
"date_of_sale": "12/08/2021",
"number_plate": "LR69FOW",
"price": "67,000",
"type_of_fuel": "petrol",
"location_of_sale": "Manchester, England"
}
]
And this is the error I am getting:
File "pydantic\main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Car
price value is not a valid float (type=type_error.float)
I have tried adding .00 to the end of the price strings but I get the same error.
| [
"The problem comes from the fact that the default Pydantic validator for float simply tries to coerce the string value to float (as @Paul mentioned). And float(\"100,000\") leads to a ValueError.\nI am surprised no one suggested this, but if you don't control the source JSON data, you can easily solve this issue by writing your own little validator to properly format the string (or parse the number properly yourself):\nfrom pydantic import BaseModel, validator\n\nclass Car(BaseModel):\n manufacturer: str\n model: str\n date_of_manufacture: str\n date_of_sale: str\n number_plate: str\n price: float\n type_of_fuel: Optional[str]\n location_of_sale: Optional[str]\n\n @validator(\"price\", pre=True)\n def adjust_number_format(cls, v: object) -> object:\n if isinstance(v, str):\n return v.replace(\",\", \"\")\n return v\n\nThe pre=True is important to make the adjustment before the default field validator receives the value. I purposefully did it like this to show that you don't need to convert the str to a float yourself, but you could of course do that too:\n...\n @validator(\"price\", pre=True)\n def parse_number(cls, v: object) -> object:\n if isinstance(v, str):\n return float(v.replace(\",\", \"\"))\n return v\n\nBoth of these work and require no changes in the JSON document.\n\nFinally, if you have (or anticipate to have in the future) multiple number-like fields and know that all of them may cause such problems with weirdly formatted strings, you could generalize that validator like this: (different class for demo pruposes)\nfrom pydantic import BaseModel, validator\nfrom pydantic.fields import ModelField\n\n\nclass Car2(BaseModel):\n model: str\n price: float\n year: int\n numbers: list[float]\n\n @validator(\"*\", pre=True, each_item=True)\n def format_number_string(cls, v: object, field: ModelField) -> object:\n if issubclass(field.type_, (float, int)) and isinstance(v, str):\n return v.replace(\",\", \"\")\n return v\n\n\nif __name__ == \"__main__\":\n car = Car2.parse_obj({\n \"model\": \"foo\",\n \"price\": \"100,000\",\n \"year\": \"2,010\",\n \"numbers\": [\"1\", \"3.14\", \"10,000\"]\n })\n print(car) # model='foo' price=100000.0 year=2010 numbers=[1.0, 3.14, 10000.0]\n\n",
"You could also change the decimal comma , to a _ and keep the string.\nPydantic is taking care of the str to float conversion then.\n",
"You need to remove the quotes around the numbers since they are being interpreted as strings.\n\"price\": \"100,000\" should be:\n\"price\": 100000\n"
] | [
2,
1,
0
] | [] | [] | [
"json",
"pydantic",
"python"
] | stackoverflow_0074632526_json_pydantic_python.txt |
Q:
Can't click on this element on heroku
I have this robot that takes in some data and places an order in another website. everything worked fine locally, but on heroku the button place order doesn't get clicked for some reason. here is the code:
place_order = driver.find_element(By.ID, 'placeOrderBtn')
driver.execute_script("arguments[0].click();", place_order)
print('place order: ', place_order)
I have also tried place_order.click() but same result.
It's also worth mentioning that the print statement runs, so the Selenium element does get printed.
here is also the arguments i use to run the robot on heroku
chrome_options.binary_location = os.environ.get("GOOGLE_CHROME_BIN")
chrome_options.add_argument("--headless")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("start-maximized")
chrome_options.add_argument("--no-sandbox")
driver = webdriver.Chrome(executable_path=os.environ.get("CHROMEDRIVER_PATH"), options=chrome_options)
I have also tried chrome_options.add_argument("--start-maximized") with -- before start-maximized.
and here is the HTML
A:
Don't try to click any element using JavaScript, use the Selenium plugin API and tools:
In Selenium there is a class called WebDriverWait, you can use it for smart waiting until an element is clickable, visible, hidden and so on:
element = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.ID, "placeOrderBtn")))
element.click()
You may change the timeout of 10 seconds, this will allow for a longer wait time. If the element is clickable before the timeout passed - the element will be clicked.
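For completeness, a minimal sketch of the same wait with the imports it relies on (assuming driver is the webdriver instance created as in the question):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the button to become clickable, then click it.
place_order = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "placeOrderBtn"))
)
place_order.click()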
| Can't click on this element on heroku | I have this robot that takes in some data and places an order in another website. everything worked fine locally, but on heroku the button place order doesn't get clicked for some reason. here is the code:
place_order = driver.find_element(By.ID, 'placeOrderBtn')
driver.execute_script("arguments[0].click();", place_order)
print('place order: ', place_order)
I have also tried place_order.click() but same result.
its also worth mentioning that on the print statment, so the selenium element gets printed.
here is also the arguments i use to run the robot on heroku
chrome_options.binary_location = os.environ.get("GOOGLE_CHROME_BIN")
chrome_options.add_argument("--headless")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("start-maximized")
chrome_options.add_argument("--no-sandbox")
driver = webdriver.Chrome(executable_path=os.environ.get("CHROMEDRIVER_PATH"), options=chrome_options)
I have also tried chrome_options.add_argument("--start-maximized") with -- before stat_maximized.
and here is the HTML
| [
"Don't try to click any element using JavaScript, use the Selenium plugin API and tools:\nIn Selenium there is a class called WebDriverWait, you can use it for smart waiting until an element is clickable, visible, hidden and so on:\nelement = WebDriverWait(driver, 10).until(\n EC.element_to_be_clickable((By.ID, \"placeOrderBtn\")))\nelement.click()\n\nYou may change the timeout of 10 seconds, this will allow for a longer wait time. If the element is clickable before the timeout passed - the element will be clicked.\n"
] | [
0
] | [] | [] | [
"heroku",
"python",
"selenium"
] | stackoverflow_0074641337_heroku_python_selenium.txt |
Q:
Concatenate strings from several rows using Pandas groupby
I want to merge several strings in a dataframe based on a groupedby in Pandas.
This is my code so far:
import pandas as pd
from io import StringIO
data = StringIO("""
"name1","hej","2014-11-01"
"name1","du","2014-11-02"
"name1","aj","2014-12-01"
"name1","oj","2014-12-02"
"name2","fin","2014-11-01"
"name2","katt","2014-11-02"
"name2","mycket","2014-12-01"
"name2","lite","2014-12-01"
""")
# load string as stream into dataframe
df = pd.read_csv(data,header=0, names=["name","text","date"],parse_dates=[2])
# add column with month
df["month"] = df["date"].apply(lambda x: x.month)
I want the end result to look like this:
I don't get how I can use groupby and apply some sort of concatenation of the strings in the column "text". Any help appreciated!
A:
You can groupby the 'name' and 'month' columns, then call transform which will return data aligned to the original df and apply a lambda where we join the text entries:
In [119]:
df['text'] = df[['name','text','month']].groupby(['name','month'])['text'].transform(lambda x: ','.join(x))
df[['name','text','month']].drop_duplicates()
Out[119]:
name text month
0 name1 hej,du 11
2 name1 aj,oj 12
4 name2 fin,katt 11
6 name2 mycket,lite 12
I sub the original df by passing a list of the columns of interest df[['name','text','month']] here and then call drop_duplicates
EDIT actually I can just call apply and then reset_index:
In [124]:
df.groupby(['name','month'])['text'].apply(lambda x: ','.join(x)).reset_index()
Out[124]:
name month text
0 name1 11 hej,du
1 name1 12 aj,oj
2 name2 11 fin,katt
3 name2 12 mycket,lite
update
the lambda is unnecessary here:
In[38]:
df.groupby(['name','month'])['text'].apply(','.join).reset_index()
Out[38]:
name month text
0 name1 11 du
1 name1 12 aj,oj
2 name2 11 fin,katt
3 name2 12 mycket,lite
A:
We can groupby the 'name' and 'month' columns, then call agg() functions of Panda’s DataFrame objects.
The aggregation functionality provided by the agg() function allows multiple statistics to be calculated per group in one calculation.
df.groupby(['name', 'month'], as_index = False).agg({'text': ' '.join})
A:
The answer by EdChum provides you with a lot of flexibility but if you just want to concateate strings into a column of list objects you can also:
output_series = df.groupby(['name','month'])['text'].apply(list)
A:
If you want to concatenate your "text" in a list:
df.groupby(['name', 'month'], as_index = False).agg({'text': list})
A:
For me the above solutions were close but added some unwanted /n's and dtype:object, so here's a modified version:
df.groupby(['name', 'month'])['text'].apply(lambda text: ''.join(text.to_string(index=False))).str.replace('(\\n)', '').reset_index()
A:
Please try this line of code : -
df.groupby(['name','month'])['text'].apply(','.join).reset_index()
A:
Although, this is an old question. But just in case. I used the below code and it seems to work like a charm.
text = ''.join(df[df['date'].dt.month==8]['text'])
A:
Thanks to all the other answers, the following is probably the most concise and feels more natural. Using df.groupby("X")["A"].agg() aggregates over one or many selected columns.
df = pandas.DataFrame({'A' : ['a', 'a', 'b', 'c', 'c'],
'B' : ['i', 'j', 'k', 'i', 'j'],
'X' : [1, 2, 2, 1, 3]})
A B X
a i 1
a j 2
b k 2
c i 1
c j 3
df.groupby("X", as_index=False)["A"].agg(' '.join)
X A
1 a c
2 a b
3 c
df.groupby("X", as_index=False)[["A", "B"]].agg(' '.join)
X A B
1 a c i i
2 a b j k
3 c j
| Concatenate strings from several rows using Pandas groupby | I want to merge several strings in a dataframe based on a groupedby in Pandas.
This is my code so far:
import pandas as pd
from io import StringIO
data = StringIO("""
"name1","hej","2014-11-01"
"name1","du","2014-11-02"
"name1","aj","2014-12-01"
"name1","oj","2014-12-02"
"name2","fin","2014-11-01"
"name2","katt","2014-11-02"
"name2","mycket","2014-12-01"
"name2","lite","2014-12-01"
""")
# load string as stream into dataframe
df = pd.read_csv(data,header=0, names=["name","text","date"],parse_dates=[2])
# add column with month
df["month"] = df["date"].apply(lambda x: x.month)
I want the end result to look like this:
I don't get how I can use groupby and apply some sort of concatenation of the strings in the column "text". Any help appreciated!
| [
"You can groupby the 'name' and 'month' columns, then call transform which will return data aligned to the original df and apply a lambda where we join the text entries:\nIn [119]:\n\ndf['text'] = df[['name','text','month']].groupby(['name','month'])['text'].transform(lambda x: ','.join(x))\ndf[['name','text','month']].drop_duplicates()\nOut[119]:\n name text month\n0 name1 hej,du 11\n2 name1 aj,oj 12\n4 name2 fin,katt 11\n6 name2 mycket,lite 12\n\nI sub the original df by passing a list of the columns of interest df[['name','text','month']] here and then call drop_duplicates\nEDIT actually I can just call apply and then reset_index:\nIn [124]:\n\ndf.groupby(['name','month'])['text'].apply(lambda x: ','.join(x)).reset_index()\n\nOut[124]:\n name month text\n0 name1 11 hej,du\n1 name1 12 aj,oj\n2 name2 11 fin,katt\n3 name2 12 mycket,lite\n\nupdate\nthe lambda is unnecessary here:\nIn[38]:\ndf.groupby(['name','month'])['text'].apply(','.join).reset_index()\n\nOut[38]: \n name month text\n0 name1 11 du\n1 name1 12 aj,oj\n2 name2 11 fin,katt\n3 name2 12 mycket,lite\n\n",
"We can groupby the 'name' and 'month' columns, then call agg() functions of Panda’s DataFrame objects.\nThe aggregation functionality provided by the agg() function allows multiple statistics to be calculated per group in one calculation.\ndf.groupby(['name', 'month'], as_index = False).agg({'text': ' '.join})\n\n\n",
"The answer by EdChum provides you with a lot of flexibility but if you just want to concateate strings into a column of list objects you can also:\noutput_series = df.groupby(['name','month'])['text'].apply(list)\n\n",
"If you want to concatenate your \"text\" in a list:\ndf.groupby(['name', 'month'], as_index = False).agg({'text': list})\n\n",
"For me the above solutions were close but added some unwanted /n's and dtype:object, so here's a modified version:\ndf.groupby(['name', 'month'])['text'].apply(lambda text: ''.join(text.to_string(index=False))).str.replace('(\\\\n)', '').reset_index()\n\n",
"Please try this line of code : -\ndf.groupby(['name','month'])['text'].apply(','.join).reset_index()\n\n",
"Although, this is an old question. But just in case. I used the below code and it seems to work like a charm.\ntext = ''.join(df[df['date'].dt.month==8]['text'])\n\n",
"Thanks to all the other answers, the following is probably the most concise and feels more natural. Using df.groupby(\"X\")[\"A\"].agg() aggregates over one or many selected columns.\ndf = pandas.DataFrame({'A' : ['a', 'a', 'b', 'c', 'c'],\n 'B' : ['i', 'j', 'k', 'i', 'j'],\n 'X' : [1, 2, 2, 1, 3]})\n\n A B X\n a i 1\n a j 2\n b k 2\n c i 1\n c j 3\n\ndf.groupby(\"X\", as_index=False)[\"A\"].agg(' '.join)\n\n X A\n 1 a c\n 2 a b\n 3 c\n\ndf.groupby(\"X\", as_index=False)[[\"A\", \"B\"]].agg(' '.join)\n\n X A B\n 1 a c i i\n 2 a b j k\n 3 c j\n\n"
] | [
300,
115,
57,
16,
13,
6,
3,
0
] | [] | [] | [
"pandas",
"pandas_groupby",
"python",
"python_3.x"
] | stackoverflow_0027298178_pandas_pandas_groupby_python_python_3.x.txt |
Q:
How do I extract specific rows from a CSV file?
I have three CSV files:
doctors.csv
1,John,Smith,Internal Med
2,Jone,Smith,Pediatrics
3,Jone,Carlos,Cardiology
patients.csv
1,Sara,Smith,20,07012345678,B1234
2,Mike,Jones,37,07555551234,L22AB
3,Daivd,Smith,15,07123456789,C1ABC
... and linked.csv, which I need to populate based on doctors.csv and patients.csv.
I'm taking two inputs from the user that correspond with the doctor ID and patient ID, and checking if they are present, and then writing them to the linked.csv file.
I'd like the linked.csv file to contain for each column:
[patientID,patientfirstname,patientsurname,doctorID,doctorfirstname,doctorlastname]
Unfortunately, I can't figure out how to read a specific row using the csv module and then extract the specific data I need from both files.
Here is the code I have so far:
#asking for input
print('Please select both a Patient ID and Doctor ID to link together')
patient_index = input('Please enter the patient ID: ')
doctorlink = input('Please select a doctor ID: ')
doctorpresent = False
patientpresent = False
# presence check for both values
with open('patiens.csv', 'r') as f:
reader = csv.reader(f, delimiter=',')
for row in reader:
if patient_index == row[0]:
print('Patient is present')
patientpresent = True
with open('doctors.csv', 'r') as f:
reader = csv.reader(f, delimiter=',')
for row in reader:
if patient_index == row[0]:
print('Doctor is present')
doctorpresent = True
if patientpresent == True and doctorpresent == True:
Here, I need to add the code necessary for extracting the rows.
A:
This looks like a task better suited for a database, but here is a possible solution just using the csv module (pathlib here is just my preferred way to handle files):
import csv
from pathlib import Path
# Files
patients_csv = Path('patients.csv')
doctors_csv = Path('doctors.csv')
linked_csv = Path('linked.csv')
# Create in memory lists for patients and doctors
with patients_csv.open() as f:
reader = csv.reader(f)
patients = [_ for _ in reader]
with doctors_csv.open() as f:
reader = csv.reader(f)
doctors = [_ for _ in reader]
print('Please select both a Patient ID and Doctor ID to link together')
patient_id = input('Please enter the patient ID: ')
doctor_id = input('Please select a doctor ID: ')
# Is there a doctor that matches doctor_id?
doctor = [_ for _ in doctors if _[0] == doctor_id]
# Is there a patient that matches patient_id?
patient = [_ for _ in patients if _[0] == patient_id]
# Do we have a patient and a doctor match?
if patient and doctor:
patient = patient[0]
doctor = doctor[0]
with linked_csv.open('a') as f:
print(*patient, *doctor, file=f, sep=',')
And here is the result of a test run of the code above, which is in app.py here:
$ ls
app.py doctors.csv patients.csv
$ cat doctors.csv
1,John,Smith,Internal Med
2,Jone,Smith,Pediatrics
3,Jone,Carlos,Cardiology
$ cat patients.csv
1,Sara,Smith,20,07012345678,B1234
2,Mike,Jones,37,07555551234,L22AB
3,Daivd,Smith,15,07123456789,C1ABC
$ python app.py
Please select both a Patient ID and Doctor ID to link together
Please enter the patient ID: 1
Please select a doctor ID: 1
$ cat linked.csv
1,Sara,Smith,20,07012345678,B1234,1,John,Smith,Internal Med
It can also be implemented independent of the csv module:
from pathlib import Path
# Files
patients_csv = Path('patients.csv')
doctors_csv = Path('doctors.csv')
linked_csv = Path('linked.csv')
# Create in memory lists for patients and doctors
with patients_csv.open() as f:
patients = [_.strip().split(',') for _ in f]
with doctors_csv.open() as f:
doctors = [_.strip().split(',') for _ in f]
print('Please select both a Patient ID and Doctor ID to link together')
patient_id = input('Please enter the patient ID: ')
doctor_id = input('Please select a doctor ID: ')
# Is there a doctor that matches doctor_id?
doctor = [_ for _ in doctors if _[0] == doctor_id]
# Is there a patient that matches patient_id?
patient = [_ for _ in patients if _[0] == patient_id]
# Do we have a patient and a doctor match?
if patient and doctor:
patient = patient[0]
doctor = doctor[0]
with linked_csv.open('a') as f:
print(*patient, *doctor, file=f, sep=',')
Both examples are just starting points, which you can construct and improve above them.
A:
I have used CSV library to read files from both files and combine them into a list.
import csv
# asking for input
print('Please select both a Patient ID and Doctor ID to link together')
patient_index = input('Please enter the patient ID: ')
doctor_index = input('Please select a doctor ID: ')
output = []
# opening the CSV file
with open('patient.csv', 'r') as f:
# reading the CSV file
reader = csv.reader(f, delimiter=',')
# iterating the rows
for row in reader:
if patient_index == row[0]:
output.append(row[0]) # appending patientID
output.append(row[1]) # appending patientfirstname
output.append(row[2]) # appending patientlastname
# opening the CSV file
with open('doctor.csv', 'r') as f:
# reading the CSV file
reader = csv.reader(f, delimiter=',')
# iterating the rows
for row in reader:
if doctor_index == row[0]:
output.append(row[0]) # appending doctorID
output.append(row[1]) # appending doctorfirstname
output.append(row[2]) # appending doctorlastname
print(output)
Output
['1', 'Sara', 'Smith', '1', 'John', 'Smith']
[patientID,patientfirstname,patientsurname,doctorID,doctorfirstname,doctorlastname]
Hope this helps. Happy Coding :)
| How do I extract specific rows from a CSV file? | I have three CSV files:
doctors.csv
1,John,Smith,Internal Med
2,Jone,Smith,Pediatrics
3,Jone,Carlos,Cardiology
patients.csv
1,Sara,Smith,20,07012345678,B1234
2,Mike,Jones,37,07555551234,L22AB
3,Daivd,Smith,15,07123456789,C1ABC
... and linked.csv, which I need to populate based on doctors.csv and patients.csv.
I'm taking two inputs from the user that correspond with the doctor ID and patient ID, and checking if they are present, and then writing them to the linked.csv file.
I'd like the linked.csv file to contain for each column:
[patientID,patientfirstname,patientsurname,doctorID,doctorfirstname,doctorlastname]
Unfortunately, I can't figure out how to read a specific row using the csv module and then extract the specific data I need from both files.
Here is the code I have so far:
#asking for input
print('Please select both a Patient ID and Doctor ID to link together')
patient_index = input('Please enter the patient ID: ')
doctorlink = input('Please select a doctor ID: ')
doctorpresent = False
patientpresent = False
# precence check for both values
with open('patiens.csv', 'r') as f:
reader = csv.reader(f, delimiter=',')
for row in reader:
if patient_index == row[0]:
print('Patient is present')
patientpresent = True
with open('doctors.csv', 'r') as f:
reader = csv.reader(f, delimiter=',')
for row in reader:
if patient_index == row[0]:
print('Doctor is present')
doctorpresent = True
if patientpresent == True and doctorpresent == True:
Here, I need to add the code necessary for extracting the rows.
| [
"This looks like a task better suited for a database, but here is a possible solution just using the csv module (pathlib here is just my preferred way to handle files):\nimport csv\nfrom pathlib import Path\n\n# Files\npatients_csv = Path('patients.csv')\ndoctors_csv = Path('doctors.csv')\nlinked_csv = Path('linked.csv')\n\n# Create in memory lists for patients and doctors\nwith patients_csv.open() as f:\n reader = csv.reader(f)\n patients = [_ for _ in reader]\n\nwith doctors_csv.open() as f:\n reader = csv.reader(f)\n doctors = [_ for _ in reader]\n\nprint('Please select both a Patient ID and Doctor ID to link together')\npatient_id = input('Please enter the patient ID: ')\ndoctor_id = input('Please select a doctor ID: ')\n\n# Is there a doctor that matches doctor_id?\ndoctor = [_ for _ in doctors if _[0] == doctor_id]\n\n# Is there a patient that matches patient_id?\npatient = [_ for _ in patients if _[0] == patient_id]\n\n# Do we have a patient and a doctor match?\nif patientt and doctor:\n patient = patient[0]\n doctor = doctor[0]\n with linked_csv.open('a') as f:\n print(*patient, *doctor, file=f, sep=',')\n\nAnd here is the result of a test run of the code above, which is in app.py here:\n$ ls\napp.py doctors.csv patients.csv\n\n$ cat doctors.csv \n1,John,Smith,Internal Med\n2,Jone,Smith,Pediatrics\n3,Jone,Carlos,Cardiology\n\n$ cat patients.csv \n1,Sara,Smith,20,07012345678,B1234\n2,Mike,Jones,37,07555551234,L22AB\n3,Daivd,Smith,15,07123456789,C1ABC\n\n$ python app.py \nPlease select both a Patient ID and Doctor ID to link together\nPlease enter the patient ID: 1\nPlease select a doctor ID: 1\n\n$ cat linked.csv \n1,Sara,Smith,20,07012345678,B1234,1,John,Smith,Internal Med\n\nIt can also be implemented independent of the csv module:\nfrom pathlib import Path\n\n# Files\npatients_csv = Path('patients.csv')\ndoctors_csv = Path('doctors.csv')\nlinked_csv = Path('linked.csv')\n\n# Create in memory lists for patients and doctors\nwith patients_csv.open() as f:\n patients = [_.strip().split(',') for _ in f]\n\nwith doctors_csv.open() as f:\n doctors = [_.strip().split(',') for _ in f]\n\nprint('Please select both a Patient ID and Doctor ID to link together')\npatient_id = input('Please enter the patient ID: ')\ndoctor_id = input('Please select a doctor ID: ')\n\n# Is there a doctor that matches doctor_id?\ndoctor = [_ for _ in doctors if _[0] == doctor_id]\n\n# Is there a patient that matches patient_id?\npatient = [_ for _ in patients if _[0] == patient_id]\n\n# Do we have a patient and a doctor match?\nif patientt and doctor:\n patient = patient[0]\n doctor = doctor[0]\n with linked_csv.open('a') as f:\n print(*patient, *doctor, file=f, sep=',')\n\nBoth examples are just starting points, which you can construct and improve above them.\n",
"I have used CSV library to read files from both files and combine them into a list.\nimport csv\n\n# asking for input\nprint('Please select both a Patient ID and Doctor ID to link together')\npatient_index = input('Please enter the patient ID: ')\ndoctor_index = input('Please select a doctor ID: ')\n\noutput = []\n\n# opening the CSV file\nwith open('patient.csv', 'r') as f:\n # reading the CSV file\n reader = csv.reader(f, delimiter=',')\n # iterating the rows\n for row in reader:\n if patient_index == row[0]:\n output.append(row[0]) # appending patientID\n output.append(row[1]) # appending patientfirstname\n output.append(row[2]) # appending patientlastname\n\n# opening the CSV file\nwith open('doctor.csv', 'r') as f:\n # reading the CSV file\n reader = csv.reader(f, delimiter=',')\n # iterating the rows\n for row in reader:\n if doctor_index == row[0]:\n output.append(row[0]) # appending doctorID\n output.append(row[1]) # appending doctorfirstname\n output.append(row[2]) # appending doctorlastname\n\nprint(output)\n\nOutput\n['1', 'Sara', 'Smith', '1', 'John', 'Smith'] \n[patientID,patientfirstname,patientsurname,doctorID,doctorfirstname,doctorlastname]\n\nHope this helps. Happy Coding :)\n"
] | [
1,
1
] | [] | [] | [
"object",
"python"
] | stackoverflow_0074641071_object_python.txt |
Q:
ImportError: No module named bs4 - despite bs4 and BeautifulSoup being installed
I downloaded Python 3.7 and am running a script with "from bs4 import BeautifulSoup" and am receiving the following error on execution;
"File "myscript.py", line 3, in
from bs4 import BeautifulSoup ImportError: No module named bs4"
When I type "pip3 install bs4" or "pip3 install BeautifulSoup4" in the terminal I get the following;
"Requirement already satisfied: bs4 in
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages
(0.0.1) Requirement already satisfied: beautifulsoup4 in
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages
(from bs4) (4.6.3)"
Executing "import bs4" and "from bs4 import BeautifulSoup" in IDLE don't error out.
Any idea about what's going on?
A:
Just pip install bs4.
Probably, you are maintaining different versions of Python.
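A quick way to check which interpreter and module search path your script actually uses, so you can point the matching pip at it (a diagnostic sketch, not a fix by itself):
import sys

# The interpreter that is running this script; installing with
# "<this path> -m pip install beautifulsoup4" targets the same environment.
print(sys.executable)

# The directories this interpreter searches when importing modules such as bs4.
print(sys.path)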
A:
Check if you have more than one version of Python. If so, add the path of Python 3.7 in the system settings and try removing the older Python if possible,
and then pip install BeautifulSoup.
A:
I had a similar problem with importing bs4. There is a similar question with many answers. I tried many things and nothing worked but when I installed poetry [https://python-poetry.org/ ]
$ poetry add pendulum
It sorted dependencies for me so I was able to import bs4.
| ImportError: No module named bs4 - despite bs4 and BeautifulSoup being installed | I downloaded Python 3.7 and am running a script with "from bs4 import BeautifulSoup" and am receiving the following error on execution;
"File "myscript.py", line 3, in
from bs4 import BeautifulSoup ImportError: No module named bs4"
When I type "pip3 install bs4" or "pip3 install BeautifulSoup4" in the terminal I get the following;
"Requirement already satisfied: bs4 in
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages
(0.0.1) Requirement already satisfied: beautifulsoup4 in
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages
(from bs4) (4.6.3)"
Executing "import bs4" and "from bs4 import BeautifulSoup" in IDLE don't error out.
Any idea about what's going on?
| [
"Just pip install bs4. \nProbably, you are maintaining different versions of Python.\n",
"check if you have more than one version of python, if so add the path of python 3.7 in the system setting and try removing the older python if possible\nand then pip install BeautifulSoup \n",
"I had a similar problem with importing bs4. There is a similar question with many answers. I tried many things and nothing worked but when I installed poetry [https://python-poetry.org/ ]\n$ poetry add pendulum\nIt sorted dependencies for me so I was able to import bs4.\n"
] | [
2,
1,
0
] | [] | [] | [
"macos",
"python"
] | stackoverflow_0052492315_macos_python.txt |
Q:
Python Telegram Bot: show message history for new group members
With a python based telegram bot that should help to setup group settings I want to hide/unhide the message history for new group subscribers. I am using the python-telegram-bot API wrapper (documented here).
For setting other permissions there is a method Bot.set_chat_permissions(). But for hide/unhide message history I found no method.
However, in the telegram API documentation there is an endpoint channels.togglePreHistoryHidden, that should do exactly what I want. But I could not figure out how to use that via python-telegram-bot.
Another user has asked a similar question, but related to telethon.
Any hints are appreciated.
A:
python-telegram-bot is a wrapper for the Bot API. Only those methods listed in the API docs have a counterpart in python-telegram-bot.
The togglePreHistoryHidden method is an endpoint of the Telegram API != Bot API.
Disclaimer: I'm currently the maintainer of python-telegram-bot.
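If you really need to toggle that setting, it has to go through an MTProto client library rather than the Bot API. A rough sketch using Telethon (a different library from python-telegram-bot); the request class name mirrors the raw channels.togglePreHistoryHidden endpoint, and the exact call signature and required admin rights are assumptions to verify against the Telethon docs:
from telethon.sync import TelegramClient
from telethon.tl.functions.channels import TogglePreHistoryHiddenRequest

# Placeholder credentials from https://my.telegram.org; the account (or bot)
# must be an admin of the supergroup for this request to succeed.
with TelegramClient("session", api_id=12345, api_hash="0123456789abcdef") as client:
    # Hide (enabled=True) or show (enabled=False) pre-join history.
    client(TogglePreHistoryHiddenRequest(channel="my_group", enabled=True))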
| Python Telegram Bot: show message history for new group members | With a python based telegram bot that should help to setup group settings I want to hide/unhide the message history for new group subscribers. I am using the python-telegram-bot API wrapper (documented here).
For setting other permissions there is a method Bot.set_chat_permissions(). But for hide/unhide message history I found no method.
However, in the telegram API documentation there is an endpoint channels.togglePreHistoryHidden, that should do exactly what I want. But I could not figure out how to use that via python-telegram-bot.
Another user has asked a similar question, but related to telethon.
Any hints are appreciated.
| [
"python-telegram-bot is a wrapper for the Bot API. Only those methods listed in the API docs have a counterpart in python-telegram-bot.\nThe togglePreHistoryHidden method is an endpoint of the Telegram API != Bot API.\n\nDisclaimer: I'm currently the maintainer of python-telegram-bot.\n"
] | [
1
] | [] | [] | [
"python",
"python_telegram_bot",
"telegram",
"telegram_api",
"telegram_bot"
] | stackoverflow_0074621034_python_python_telegram_bot_telegram_telegram_api_telegram_bot.txt |
Q:
ERROR: Command errored out with exit status 1 when trying to install ciso8601
I'm a rookie just starting programming. It would be so much appreciated if you could help me with this.
I'm using Windows 8 64-bit and Python 3.7. Thank you guys so much!!
The error popped up when I tried to install ciso8601 in Pycharm.
ERROR: Command errored out with exit status 1:
Collecting ciso8601
Using cached ciso8601-2.1.3.tar.gz (15 kB)
Building wheels for collected packages: ciso8601
Building wheel for ciso8601 (setup.py): started
Building wheel for ciso8601 (setup.py): finished with status 'error'
Running setup.py clean for ciso8601
Failed to build ciso8601
Installing collected packages: ciso8601
Running setup.py install for ciso8601: started
Running setup.py install for ciso8601: finished with status 'error'
ERROR: Command errored out with exit status 1:
command:
Complete output (14 lines):
running bdist_wheel
running build
running build_py
package init file 'ciso8601\__init__.py' not found (or not a regular file)
creating build
creating build\lib.win32-3.7
creating build\lib.win32-3.7\ciso8601
copying ciso8601\__init__.pyi -> build\lib.win32-3.7\ciso8601
copying ciso8601\py.typed -> build\lib.win32-3.7\ciso8601
warning: build_py: byte-compiling is disabled, skipping.
running build_ext
building 'ciso8601' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build
Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
Collecting ciso8601
Using cached ciso8601-2.1.3.tar.gz (15 kB)
Building wheels for collected packages: ciso8601
Building wheel for ciso8601 (setup.py): started
Building wheel for ciso8601 (setup.py): finished with status 'error'
Running setup.py clean for ciso8601
Failed to build ciso8601
Installing collected packages: ciso8601
Running setup.py install for ciso8601: started
Running setup.py install for ciso8601: finished with status 'error'
ERROR: Command errored out with exit status 1:
command:
Complete output (14 lines):
running bdist_wheel
running build
running build_py
package init file 'ciso8601\__init__.py' not found (or not a regular file)
creating build
creating build\lib.win32-3.7
creating build\lib.win32-3.7\ciso8601
copying ciso8601\__init__.pyi -> build\lib.win32-3.7\ciso8601
copying ciso8601\py.typed -> build\lib.win32-3.7\ciso8601
warning: build_py: byte-compiling is disabled, skipping.
running build_ext
building 'ciso8601' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build
Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
A:
You have to install it with Docker, because ciso8601 is a Python extension written in C. Source: GitHub
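If installing the Microsoft C++ Build Tools (which the error message above asks for) or using Docker is not an option, a pure-Python fallback for plain ISO 8601 timestamps is the standard library. This is a sketch of an alternative, not a fix for the build error:
from datetime import datetime

# fromisoformat() handles "YYYY-MM-DDTHH:MM:SS[.ffffff][+HH:MM]" on Python 3.7+,
# which may be enough while the C extension cannot be built.
ts = datetime.fromisoformat("2021-06-14T10:30:00+00:00")
print(ts)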
| ERROR: Command errored out with exit status 1 when trying to install ciso8601 | Im a rookie to start programming. It would so much appreciated if you can help me on this.
Im using Windows 8 64bits and Python 3.7. thank you guys so much!!
The error popped up when I tried to install ciso8601 in Pycharm.
ERROR: Command errored out with exit status 1:
Collecting ciso8601
Using cached ciso8601-2.1.3.tar.gz (15 kB)
Building wheels for collected packages: ciso8601
Building wheel for ciso8601 (setup.py): started
Building wheel for ciso8601 (setup.py): finished with status 'error'
Running setup.py clean for ciso8601
Failed to build ciso8601
Installing collected packages: ciso8601
Running setup.py install for ciso8601: started
Running setup.py install for ciso8601: finished with status 'error'
ERROR: Command errored out with exit status 1:
command:
Complete output (14 lines):
running bdist_wheel
running build
running build_py
package init file 'ciso8601\__init__.py' not found (or not a regular file)
creating build
creating build\lib.win32-3.7
creating build\lib.win32-3.7\ciso8601
copying ciso8601\__init__.pyi -> build\lib.win32-3.7\ciso8601
copying ciso8601\py.typed -> build\lib.win32-3.7\ciso8601
warning: build_py: byte-compiling is disabled, skipping.
running build_ext
building 'ciso8601' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build
Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
Collecting ciso8601
Using cached ciso8601-2.1.3.tar.gz (15 kB)
Building wheels for collected packages: ciso8601
Building wheel for ciso8601 (setup.py): started
Building wheel for ciso8601 (setup.py): finished with status 'error'
Running setup.py clean for ciso8601
Failed to build ciso8601
Installing collected packages: ciso8601
Running setup.py install for ciso8601: started
Running setup.py install for ciso8601: finished with status 'error'
ERROR: Command errored out with exit status 1:
command:
Complete output (14 lines):
running bdist_wheel
running build
running build_py
package init file 'ciso8601\__init__.py' not found (or not a regular file)
creating build
creating build\lib.win32-3.7
creating build\lib.win32-3.7\ciso8601
copying ciso8601\__init__.pyi -> build\lib.win32-3.7\ciso8601
copying ciso8601\py.typed -> build\lib.win32-3.7\ciso8601
warning: build_py: byte-compiling is disabled, skipping.
running build_ext
building 'ciso8601' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build
Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
| [
"You have to Install with docker. Beacuse ciso8601 is a Python extension written in C. Source: GitHub\n"
] | [
0
] | [] | [] | [
"installation",
"python"
] | stackoverflow_0068084614_installation_python.txt |
Q:
sh: flake8: not found, though I installed it with python pip
Here I'm going to use flake8 within a docker container. I installed flake8 using the following command and everything installed successfully.
$ sudo -H pip3 install flake8 // worked fine
and it's location path is-
/usr/local/lib/python3.8/dist-packages
Then I executed the following command but result was not expected.
$ sudo docker-compose run --rm app sh -c "flake8"
It says,
sh: flake8: not found
Maybe the flake8 package is not installed in the correct location. Please help me with this issue.
A:
When you docker-compose run a container, it is in a new container in a new isolated filesystem. If you ran pip install in a debugging shell in another container, it won't be visible there.
As a general rule, never install software into a running container, unless it's for very short-term debugging. This will get lost as soon as the container exits, and it's very routine to destroy and recreate containers.
If you do need a tool like this, you should install it in your Dockerfile instead:
FROM python:3.10
...
RUN pip3 install flake8
However, for purely developer-oriented tools like linters, it may be better to not install them in Docker at all. A Docker image contains a fixed copy of your application code and is intended to be totally separate from anything on your host system. You can do day-to-day development, including unit testing and style checking, in a non-Docker virtual environment, and use docker-compose up --build to get a container-based integration-test setup.
| sh: flake8: not found, though I installed it with python pip | Here I'm going to use flake8 within a docker container. I installed flake8 using the following command and everything installed successfully.
$ sudo -H pip3 install flake8 // worked fine
and it's location path is-
/usr/local/lib/python3.8/dist-packages
Then I executed the following command but result was not expected.
$ sudo docker-compose run --rm app sh -c "flake8"
It says,
sh: flake8: not found
May be, the flake8 package is not installed in the correct location path. Please help me with this issue.
| [
"When you docker-compose run a container, it is in a new container in a new isolated filesystem. If you ran pip install in a debugging shell in another container, it won't be visible there.\nAs a general rule, never install software into a running container, unless it's for very short-term debugging. This will get lost as soon as the container exits, and it's very routine to destroy and recreate containers.\nIf you do need a tool like this, you should install it in your Dockerfile instead:\nFROM python:3.10\n...\nRUN pip3 install flake8\n\nHowever, for purely developer-oriented tools like linters, it may be better to not install them in Docker at all. A Docker image contains a fixed copy of your application code and is intended to be totally separate from anything on your host system. You can do day-to-day development, including unit testing and style checking, in a non-Docker virtual environment, and use docker-compose up --build to get a container-based integration-test setup.\n"
] | [
2
] | [] | [] | [
"docker",
"flake8",
"pip",
"python"
] | stackoverflow_0074637107_docker_flake8_pip_python.txt |
Q:
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2) when reading a json?
{
"teams": {
"sp": [
{
"k": {
"attack": 3,
"defense": 4
},
"s": {
"attack": 3,
"defense": 4
},
"b": {
"attack": 3,
"defense": 4
},
"h": {
"attack": 3,
"defense": 4
},
"r": {
"attack": 3,
"defense": 4
},
"l": {
"attack": 4,
"defense": 5
}
}
],
"mu": [
{
"r": {
"attack": 5,
"defense": 6
},
"a": {
"attack": 4,
"defense": 3
},
"f": {
"attack": 4,
"defense": 3
},
"c": {
"attack": 4,
"defense": 3
},
"v": {
"attack": 4,
"defense": 2
},
"dg": {
"attack": 4,
"defense": 5
}
}
]
}
}
Code
obj = [json.loads(line) for line in open('playerlist.json', 'r')]
print(obj)
The above JSON is the player list, and below is the Python code I'm trying to read it with. When I run it, it raises
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2)
I've read it's a formatting issue with the JSON, but when I run it through a JSON formatter it says it is valid JSON.
A:
The issue is that you are trying to convert individual lines to json, you need to convert the all file at once
with open('playerlist.json', 'r') as f:
obj = json.load(f)
| json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2) when reading a json? | {
"teams": {
"sp": [
{
"k": {
"attack": 3,
"defense": 4
},
"s": {
"attack": 3,
"defense": 4
},
"b": {
"attack": 3,
"defense": 4
},
"h": {
"attack": 3,
"defense": 4
},
"r": {
"attack": 3,
"defense": 4
},
"l": {
"attack": 4,
"defense": 5
}
}
],
"mu": [
{
"r": {
"attack": 5,
"defense": 6
},
"a": {
"attack": 4,
"defense": 3
},
"f": {
"attack": 4,
"defense": 3
},
"c": {
"attack": 4,
"defense": 3
},
"v": {
"attack": 4,
"defense": 2
},
"dg": {
"attack": 4,
"defense": 5
}
}
]
}
}
Code
obj = [json.loads(line) for line in open('playerlist.json', 'r')]
print(obj)
the above json is player list and below is the python code I'm trying to read it with. when I run it it raise
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2)
I've read its a formatting issue with the json but when I run it through a json formatted it says it is valid json.
| [
"The issue is that you are trying to convert individual lines to json, you need to convert the all file at once\nwith open('playerlist.json', 'r') as f:\n obj = json.load(f)\n\n"
] | [
0
] | [] | [] | [
"json",
"python",
"python_jsons"
] | stackoverflow_0074641598_json_python_python_jsons.txt |
Q:
Django ORM for fetching latest record for specific ID
How do I write a django ORM to fetch the most recent records from the table for a specific id.
Example:
I have table(tr_data) data like:
id
trs(foreign key)
status
last_updated
1
301
3
2022-11-28 06:14:28
2
301
4
2022-11-28 06:15:28
3
302
3
2022-11-28 06:14:28
4
302
4
2022-11-28 06:15:28
5
302
2
2022-11-28 06:16:28
I want to have a queryset of values that gives me each trs id with its latest status. I have tried with aggregate and Max but am not getting the desired result.
Expecting output as:
[{"trs":301, "status":4},{"trs":302,"status":2}]
A:
Possible duplicate
Django ORM: Group by and Max
You can achieve this by annotate and Max. In the tr_data model, add a related_name parameter to trs something like 'tr_status'.
Write an orm:
latest_objs = TrData.objects.annotate(temp=Max('trs__tr_status__last_updated')).filter(last_updated).values('trs', 'status')
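The filter(last_updated) call in the snippet above is incomplete as written; a hedged alternative sketch for "newest row per trs" uses Subquery and OuterRef (assuming the model is called TrData with fields trs, status and last_updated, as in the line above):
from django.db.models import OuterRef, Subquery

# For each trs, find the pk of its newest row, then keep only those rows.
newest = (
    TrData.objects
    .filter(trs=OuterRef("trs"))
    .order_by("-last_updated")
    .values("pk")[:1]
)
latest_per_trs = (
    TrData.objects
    .filter(pk=Subquery(newest))
    .values("trs", "status")
)
# e.g. <QuerySet [{'trs': 301, 'status': 4}, {'trs': 302, 'status': 2}]>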
| Django ORM for fetching latest record for specific ID | How do I write a django ORM to fetch the most recent records from the table for a specific id.
Example:
I have table(tr_data) data like:
id
trs(foreign key)
status
last_updated
1
301
3
2022-11-28 06:14:28
2
301
4
2022-11-28 06:15:28
3
302
3
2022-11-28 06:14:28
4
302
4
2022-11-28 06:15:28
5
302
2
2022-11-28 06:16:28
I want to have a queryset values that gives me trs id with its latest status.I have tried with aggragate and MAX but not getting the desired result.
Expecting ouput as :
[{"trs":301, "status":4},"trs":302,"status":2}]
| [
"Possible duplicate\nDjango ORM: Group by and Max\nYou can achieve this by annotate and Max. In the tr_data model, add a related_name parameter to trs something like 'tr_status'.\nWrite an orm:\nlatest_objs = TrData.objects.annotate(temp=Max('trs__tr_status__last_updated')).filter(last_updated).values('trs', 'status')\n\n"
] | [
2
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074641547_django_python.txt |
Q:
How to update/upgrade a package using pip?
What is the way to update a package using pip?
those do not work:
pip update
pip upgrade
I know this is a simple question but it is needed as it is not so easy to find (pip documentation doesn't pop up and other questions from stack overflow are relevant but are not exactly about that)
A:
The way is
pip install <package_name> --upgrade
or in short
pip install <package_name> -U
Using sudo will ask to enter your root password to confirm the action, but although common, is considered unsafe.
If you do not have a root password (if you are not the admin) you should probably work with virtualenv.
You can also use the user flag to install it on this user only.
pip install <package_name> --upgrade --user
A:
For a non-specific package and a more general solution, you can check out pip-review. A tool that checks what packages could/should be updated.
To install:
$ pip install pip-review
Then run:
$ pip-review --interactive
requests==0.14.0 is available (you have 0.13.2)
Upgrade now? [Y]es, [N]o, [A]ll, [Q]uit y
A:
Use this code in terminal:
python -m pip install --upgrade PACKAGE_NAME
For example I want update pip package:
python -m pip install --upgrade pip
More examples:
python -m pip install --upgrade selenium
python -m pip install --upgrade requests
...
A:
tl;dr script to update all installed packages
If you only want to upgrade one package, refer to @borgr's answer. I often find it necessary, or at least pleasing, to upgrade all my packages at once. Currently, pip doesn't natively support that action, but with sh scripting it is simple enough. You use pip list, awk (or cut and tail), and command substitution. My normal one-liner is:
for i in $(pip list -o | awk 'NR > 2 {print $1}'); do sudo pip install -U $i; done
This will ask for the root password. If you do not have access to that, the --user option of pip or virtualenv may be something to look into.
A:
import subprocess as sbp
import pip
pkgs = eval(str(sbp.run("pip3 list -o --format=json", shell=True,
stdout=sbp.PIPE).stdout, encoding='utf-8'))
for pkg in pkgs:
sbp.run("pip3 install --upgrade " + pkg['name'], shell=True)
Save as xx.py
Then run Python3 xx.py
Environment: python3.5+ pip10.0+
A:
I use the following line to update all of my outdated packages:
pip list --outdated --format=freeze | awk -F '==' '{print $1}' | xargs -n1 pip install -U
A:
While off-topic, one may reach this question wishing to update pip itself (See here).
To upgrade pip for Python3.4+, you must use pip3 as follows:
sudo pip3 install pip --upgrade
This will upgrade pip located at: /usr/local/lib/python3.X/dist-packages
Otherwise, to upgrade pip for Python2.7, you would use pip as follows:
sudo pip install pip --upgrade
This will upgrade pip located at: /usr/local/lib/python2.7/dist-packages
A:
Also, in Jupyter notebook, by running the code below in a code cell, you can update your package:
!pip install <package_name> --upgrade
| How to update/upgrade a package using pip? | What is the way to update a package using pip?
those do not work:
pip update
pip upgrade
I know this is a simple question but it is needed as it is not so easy to find (pip documentation doesn't pop up and other questions from stack overflow are relevant but are not exactly about that)
| [
"The way is\npip install <package_name> --upgrade\n\nor in short\npip install <package_name> -U\n\nUsing sudo will ask to enter your root password to confirm the action, but although common, is considered unsafe.\nIf you do not have a root password (if you are not the admin) you should probably work with virtualenv.\nYou can also use the user flag to install it on this user only.\npip install <package_name> --upgrade --user\n\n",
"For a non-specific package and a more general solution, you can check out pip-review. A tool that checks what packages could/should be updated.\nTo install:\n$ pip install pip-review\n\nThen run:\n$ pip-review --interactive\nrequests==0.14.0 is available (you have 0.13.2)\nUpgrade now? [Y]es, [N]o, [A]ll, [Q]uit y\n\n",
"Use this code in terminal:\npython -m pip install --upgrade PACKAGE_NAME\n\nFor example I want update pip package:\npython -m pip install --upgrade pip\n\nMore examples:\npython -m pip install --upgrade selenium\npython -m pip install --upgrade requests\n...\n\n",
"tl;dr script to update all installed packages\nIf you only want to upgrade one package, refer to @borgr's answer. I often find it necessary, or at least pleasing, to upgrade all my packages at once. Currently, pip doesn't natively support that action, but with sh scripting it is simple enough. You use pip list, awk (or cut and tail), and command substitution. My normal one-liner is:\nfor i in $(pip list -o | awk 'NR > 2 {print $1}'); do sudo pip install -U $i; done\n\nThis will ask for the root password. If you do not have access to that, the --user option of pip or virtualenv may be something to look into.\n",
"import subprocess as sbp\nimport pip\npkgs = eval(str(sbp.run(\"pip3 list -o --format=json\", shell=True,\n stdout=sbp.PIPE).stdout, encoding='utf-8'))\nfor pkg in pkgs:\n sbp.run(\"pip3 install --upgrade \" + pkg['name'], shell=True)\n\n\nSave as xx.py\nThen run Python3 xx.py\nEnvironment: python3.5+ pip10.0+\n\n",
"I use the following line to update all of my outdated packages:\npip list --outdated --format=freeze | awk -F '==' '{print $1}' | xargs -n1 pip install -U\n\n",
"While off-topic, one may reach this question wishing to update pip itself (See here).\nTo upgrade pip for Python3.4+, you must use pip3 as follows:\nsudo pip3 install pip --upgrade\n\nThis will upgrade pip located at: /usr/local/lib/python3.X/dist-packages\nOtherwise, to upgrade pip for Python2.7, you would use pip as follows:\nsudo pip install pip --upgrade\n\nThis will upgrade pip located at: /usr/local/lib/python2.7/dist-packages\n",
"Also, in Jupyter notebook, by running the code below in a code cell, you can update your package:\n!pip install <package_name> --upgrade\n\n"
] | [
841,
74,
24,
13,
11,
10,
9,
1
] | [
"Execute the below command in your command prompt,\nC:\\Users\\Owner\\AppData\\Local\\Programs\\Python\\Python310>python -m pip install --upgrade pip\n\nOutput will be like below,\nRequirement already satisfied: pip in c:\\users\\owner\\appdata\\local\\programs\\python\\python310\\lib\\site-packages (21.2.4)\nCollecting pip\n Downloading pip-22.0.3-py3-none-any.whl (2.1 MB)\n |████████████████████████████████| 2.1 MB 3.3 MB/s\nInstalling collected packages: pip\n Attempting uninstall: pip\n Found existing installation: pip 21.2.4\n Uninstalling pip-21.2.4:\n Successfully uninstalled pip-21.2.4\nSuccessfully installed pip-22.0.3\n\n"
] | [
-2
] | [
"pip",
"python"
] | stackoverflow_0047071256_pip_python.txt |
Q:
How to remove entire rows if all columns except one is empty?
I want to remove entire rows if all columns except the one is empty. So, imagine that my DataFrame is
df = pd.DataFrame({"col1": ["s1", "s2", "s3", "s4", "s5", "s6"],
"col2": [41, np.nan, np.nan, np.nan, np.nan, 61],
"col3": [24, 51, np.nan, np.nan, np.nan, 84],
"col4": [53, 64, 81, np.nan, np.nan, np.nan],
"col5": [43, 83, 47, 12, np.nan, 19]})
which looks like this
col1 col2 col3 col4 col5
0 s1 41 24 53 43
1 s2 NaN 51 64 83
2 s3 NaN NaN 81 47
3 s4 NaN NaN NaN 12
4 s5 NaN NaN NaN NaN
5 s6 61 84 NaN 19
In this example, the desired result is
col1 col2 col3 col4 col5
0 s1 41 24 53 43
1 s2 NaN 51 64 83
2 s3 NaN NaN 81 47
3 s4 NaN NaN NaN 12
4 s6 61 84 NaN 19
which means that I want to remove the last row. I initially tried with df.dropna(how="all") but it does not work since the last row is not entirely empty (s5 in the col1).
How can I solve this?
A:
Use the thresh parameter:
N = 1
df.dropna(thresh=N+1)
Or if you want to match exactly N NAs (no more no less):
N = 1
out = df[df.isna().sum(axis=1).ne(df.shape[1]-N)]
Output:
col1 col2 col3 col4 col5
0 s1 41.0 24.0 53.0 43.0
1 s2 NaN 51.0 64.0 83.0
2 s3 NaN NaN 81.0 47.0
3 s4 NaN NaN NaN 12.0
A:
df[df.iloc[:, 1:].notnull().any(axis=1)]
A:
import numpy as np
import pandas as pd
df = pd.DataFrame({"x1": [np.nan, np.nan], "x2": [1, np.nan]})
print(df.head())
for idx, row in df.iterrows():
if np.isnan(row).all():
df = df.drop(idx)
print(df.head())
Added in edit:
Simply drop the irrelevant columns from the row count.
import numpy as np
import pandas as pd
df = pd.DataFrame({"name": ["keep", "remove"], "x1": [np.nan, np.nan], "x2": [1, np.nan]})
print("ORG")
print(df.head())
for idx, row in df.iterrows():
if np.isnan(row[1:].astype(float)).all():
df = df.drop(idx)
print("OUT")
print(df.head())
A:
You should use threshold in dropna.
df = df.dropna(axis=0, thresh=2)
A:
You can also try this one to check if the element is a NaN.
np.isnan()
Here is the official documentation for more info.
https://numpy.org/doc/stable/user/misc.html
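Applied to this question's frame, a minimal sketch of that idea, keeping rows where at least one numeric column is not NaN (assuming df is the DataFrame from the question; col1 holds strings, so only the numeric columns are checked):
import numpy as np

# Keep rows where at least one numeric column (col2..col5) is not NaN.
mask = ~df.iloc[:, 1:].apply(lambda row: np.isnan(row).all(), axis=1)
print(df[mask])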
A:
As an alternative to given answers and if you want to use .dropna() you may set col1 as your index with:
df = df.set_index("col1")
This way df.dropna(how='all') will work like a charm.
If you do not need it as index anymore you may get your column back by df['col1'] = df.index and reset index df.reset_index(drop=True)
col1 will appear after col5, you may rearrange it back with:
cols = df.columns.tolist()
cols = cols[-1:] + cols[:-1]
df[cols]
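Putting those steps together, a compact sketch of the same idea (one possible variant; note that a plain reset_index() re-inserts col1 as the first column, so the manual reordering is only needed with the df['col1'] = df.index approach):
out = (df.set_index("col1")
         .dropna(how="all")
         .reset_index())
print(out)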
| How to remove entire rows if all columns except one is empty? | I want to remove entire rows if all columns except the one is empty. So, imagine that my DataFrame is
df = pd.DataFrame({"col1": ["s1", "s2", "s3", "s4", "s5", "s6"],
"col2": [41, np.nan, np.nan, np.nan, np.nan, 61],
"col3": [24, 51, np.nan, np.nan, np.nan, 84],
"col4": [53, 64, 81, np.nan, np.nan, np.nan],
"col5": [43, 83, 47, 12, np.nan, 19]})
which looks like this
col1 col2 col3 col4 col5
0 s1 41 24 53 43
1 s2 NaN 51 64 83
2 s3 NaN NaN 81 47
3 s4 NaN NaN NaN 12
4 s5 NaN NaN NaN NaN
5 s6 61 84 NaN 19
In this example, the desired result is
col1 col2 col3 col4 col5
0 s1 41 24 53 43
1 s2 NaN 51 64 83
2 s3 NaN NaN 81 47
3 s4 NaN NaN NaN 12
4 s6 61 84 NaN 19
which means that I want to remove the last row. I initially tried with df.dropna(how="all") but it does not work since the last row is not entirely empty (s5 in the col1).
How can I solve this?
| [
"Use the thresh parameter:\nN = 1\ndf.dropna(thresh=N+1)\n\nOr if you want to match exactly N NAs (no more no less):\nN = 1\nout = df[df.isna().sum(axis=1).ne(df.shape[1]-N)]\n\nOutput:\n col1 col2 col3 col4 col5\n0 s1 41.0 24.0 53.0 43.0\n1 s2 NaN 51.0 64.0 83.0\n2 s3 NaN NaN 81.0 47.0\n3 s4 NaN NaN NaN 12.0\n\n",
"df[df.iloc[:, 1:].notnull().any(axis=1)]\n\n",
"import numpy as np\nimport pandas as pd \n\ndf = pd.DataFrame({\"x1\": [np.nan, np.nan], \"x2\": [1, np.nan]})\nprint(df.head())\n\nfor idx, row in df.iterrows():\n if np.isnan(row).all():\n df = df.drop(idx)\n\nprint(df.head())\n\n\nAdded in edit:\nSimply drop the irrelevant columns from the row count.\nimport numpy as np\nimport pandas as pd \n\ndf = pd.DataFrame({\"name\": [\"keep\", \"remove\"], \"x1\": [np.nan, np.nan], \"x2\": [1, np.nan]})\nprint(\"ORG\")\nprint(df.head())\n\nfor idx, row in df.iterrows():\n if np.isnan(row[1:].astype(float)).all():\n df = df.drop(idx)\n\nprint(\"OUT\")\nprint(df.head())\n\n",
"You should use threshold in dropna.\ndf = df.dropna(axis=0, thresh=2)\n\n",
"You can also try this one to check if the element is a NaN.\nnp.isnan()\nHere is the official documentation for more info.\nhttps://numpy.org/doc/stable/user/misc.html\n",
"As an alternative to given answers and if you want to use .dropna() you may set col1 as your index with:\ndf = df.set_index(\"col1\")\n\nThis way df.dropna(how='all') will work like a charm.\nIf you do not need it as index anymore you may get your column back by df['col1'] = df.index and reset index df.reset_index(drop=True)\ncol1 will appear after col5, you may rearrange it back with:\ncols = df.columns.tolist()\ncols = cols[-1:] + cols[:-1]\ndf[cols]\n\n"
] | [
1,
1,
0,
0,
0,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074641498_dataframe_pandas_python.txt |
Q:
Import a pandas dataframe in a Streamlit app
I am trying to import a pandas dataframe in a Streamlit app (the goal being to run a Machine Learning model based on this dataframe when clicking a button). I use the usual way:
import pandas as pd
import streamlit as st
df = pd.read_csv('/data/metabolic_syndrome.csv')
if (st.button('Click on this fancy button !')):
st.warning("This was a bad choice.")
The path is correct on my local machine, yet when I run the app on localhost it sends back this error:
FileNotFoundError: [Errno 2] No such file or directory: '/data/metabolic_syndrome.csv'
I may be missing a key concept, as I'm not a computer science specialist (such as needing to save the file somewhere else), yet the file path is correct on my local machine, and I really don't understand what I should do here to get a proper import and run the app locally. If that's useful, I'm on iOS.
A:
Try with a dot, like this:
pd.read_csv('./data/metabolic_syndrome.csv')
A:
You need to adapt your path to how you launch streamlit.exe.
This should work:
cd PATH_TO_PROJECT_DIR
PATH_TO_STREAMLIT\streamlit.exe run main.py --server.port 8080
Of course, your data folder must be a subfolder of PROJECT_DIR.
In your code, like stated before, you then need the . for relative path:
df = pd.read_csv('./data/metabolic_syndrome.csv')
| Import a pandas dataframe in a Streamlit app | I am trying to import a pandas dataframe in a Streamlit app (the goal being to run a Machine Learning model based on this dataframe when clicking a button). I use the usual way:
import pandas as pd
import streamlit as st
df = pd.read_csv('/data/metabolic_syndrome.csv')
if (st.button('Click on this fancy button !')):
st.warning("This was a bad choice.")
My path is the good one on my local machine, yet when I run the app on localhost it sends back this error:
FileNotFoundError: [Errno 2] No such file or directory: '/data/metabolic_syndrome.csv'
I may miss a key concept as I'm not a computer science specialist (such as saving the file somewhere else) yet the pathfile is the good one on my local machine, and I really don't understand what should I do here to have a proper import and run the app localy ? If that's useful, I'm on iOS.
| [
"Try with a dot, like this:\npd.read_csv('./data/metabolic_syndrome.csv')\n\n",
"You need to adapt your path to how you launch streamlit.exe.\nThis should work:\ncd PATH_TO_PROJECT_DIR\nPATH_TO_STREAMLIT\\streamlit.exe run main.py --server.port 8080\n\nOf course, your data folder must be a subfolder of PROJECT_DIR.\nIn your code, like stated before, you then need the . for relative path:\ndf = pd.read_csv('./data/metabolic_syndrome.csv')\n\n"
] | [
0,
0
] | [] | [] | [
"pandas",
"python",
"streamlit"
] | stackoverflow_0074524715_pandas_python_streamlit.txt |
Q:
using machine learning to find patterns in strings
I'm new to AI and machine learning, using Python 3.
I have a huge database of strings, and each string has a number from 1-3 assigned to it.
There are hidden patterns in the strings that determine their value of 1-3.
I want to try to build an AI that can help me categorize them and find as many of these patterns as possible.
Can anyone suggest a tutorial or what to learn? I have no idea where to begin.
Thanks for helping.
Searching for any help and guidance on where to start and what to learn.
A:
The task you are describing would fall in the category of "Text Classification", although not quite, because you are not working with Natural Text.
My first idea would be to somehow convert to Links into single characters and then into a vector / list of numbers. Computer can only work with numbers, but not with texts or characters.
One method would be to convert each character in the link to its corresponding Unicode, by using:
link = "https://www.placeholder.com/foobar"
link_chars = [c for c in link]
link_char_numbers = [ord(c) for c in link_chars]
These separate numbers could be fed into any type of Neural Network.
For your purposes I would recommend using a Recurrent Neural Network (RNN), because with these, the input length (number of characters) can vary.
In summary: I would take a look into RNNs for classification tasks!
You might want to look into https://datascience.oneoffcoder.com/rnn-classify-signals.html
I hope this can somehow help you.
If you have further question, feel free to ask ^^
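For illustration only, here is a minimal, hypothetical sketch of such a character-level RNN classifier written with PyTorch (just one possible framework; the linked tutorial may use a different one). Labels 1-3 are mapped to class indices 0-2:
import torch
import torch.nn as nn

class CharRNNClassifier(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, char_ids):              # char_ids: (batch, seq_len) of int codes
        x = self.embed(char_ids)              # (batch, seq_len, embed_dim)
        _, h = self.rnn(x)                    # h: (1, batch, hidden_dim)
        return self.out(h[-1])                # (batch, num_classes) logits

model = CharRNNClassifier()
link = "https://www.placeholder.com/foobar"
ids = torch.tensor([[min(ord(c), 127) for c in link]])   # clamp codes to the vocab size
logits = model(ids)
predicted_label = logits.argmax(dim=1).item() + 1         # back to the 1-3 labels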
| using machine learning to find patterns in strings | Im new to AI and machine learning, using python3.
I have a huge data base of strings and each string has a number of 1-3 asign to it.
there are hidden patterns in the strings that make them get a value of 1-3.
I want to try and make a ai that can help me categorize them and try to find as much patterns as you can.
can anyone suggest me any tutorial or what to learn, I have no idea from where to begin.
Thanks for helping
seraching for any help and guidens for where to start and what to learn.
| [
"The task you are describing would fall in the category of \"Text Classification\", although not quite, because you are not working with Natural Text.\nMy first idea would be to somehow convert to Links into single characters and then into a vector / list of numbers. Computer can only work with numbers, but not with texts or characters.\nOne method would be to convert each character in the link to its corresponding Unicode, by using:\nlink = \"https://www.placeholder.com/foobar\"\nlink_chars = [c for c in link]\nlink_char_numbers = [ord(c) for c in link_chars]\n\nThese separate numbers could be fed into any type of Neural Network.\nFor your purposes I would recommend using a Recurrent Neural Network (RNN), because with these, the input length (number of characters) can vary.\nIn summary: I would take a look into RNNs for classification tasks!\nYou might want to look into https://datascience.oneoffcoder.com/rnn-classify-signals.html\nI hope this can somehow help you.\nIf you have further question, feel free to ask ^^\n"
] | [
0
] | [] | [] | [
"machine_learning",
"python",
"string",
"url_pattern"
] | stackoverflow_0074641530_machine_learning_python_string_url_pattern.txt |
Q:
How to fail a gitlab CI pipeline if the python script throws error code 1?
I have a Python file that opens a file and checks for a word. The program returns 0 if it passes and 1 if it fails.
import sys
word = "test"
def check():
with open("tex.txt", "r") as file:
for line_number, line in enumerate(file, start=1):
if word in line:
return 0
return 1
is_word_found = check() # store the return value of check() in variable `is_word_found`
print(is_word_found)
output
1
I have gitlab-ci.yml that runs this python script in a pipeline.
image: davidlor/python-git-app:latest
stages:
- Test
Test_stage:
tags:
- docker
stage: Test
script:
- echo "test stage started"
- python verify.py
When this pipeline runs, the Python code prints 1, which means the program failed to find the test word. But the pipeline still passes successfully.
I want to fail the pipeline if the Python code prints 1. Can somebody help me here?
A:
You can use sys.exit(is_word_found). But remember you use sys module (import sys).
Like this:
import sys
word = "test"
def check():
with open("tex.txt", "r") as file:
for line_number, line in enumerate(file, start=1):
if word in line:
return 0
return 1
is_word_found = check()
sys.exit(is_word_found)
However, you have a lot of other options too. check out this:
https://www.geeksforgeeks.org/python-exit-commands-quit-exit-sys-exit-and-os-_exit/
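For completeness, a hedged variant of the same idea: let the script's exit status carry the result, so the python verify.py step (and therefore the GitLab job) fails whenever the word is missing.
import sys

def check(path="tex.txt", word="test"):
    with open(path) as file:
        return 0 if any(word in line for line in file) else 1

if __name__ == "__main__":
    sys.exit(check())   # a non-zero exit code makes the CI job fail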
| How to fail a gitlab CI pipeline if the python script throws error code 1? | I have a python file that opens and checks for a word. The program returns 0 if pass and 1 if fails.
import sys
word = "test"
def check():
with open("tex.txt", "r") as file:
for line_number, line in enumerate(file, start=1):
if word in line:
return 0
return 1
is_word_found = check() # store the return value of check() in variable `is_word_found`
print(is_word_found)
output
1
I have gitlab-ci.yml that runs this python script in a pipeline.
image: davidlor/python-git-app:latest
stages:
- Test
Test_stage:
tags:
- docker
stage: Test
script:
- echo "test stage started"
- python verify.py
When this pipeline runs the python code prints 1 that means the program failed to find the test word. But the pipeline passes successfully.
I want to fail the pipeline if the python prints 1. Can somebody help me here?
| [
"You can use sys.exit(is_word_found). But remember you use sys module (import sys).\nLike this:\nimport sys\nword = \"test\"\ndef check():\n with open(\"tex.txt\", \"r\") as file:\n for line_number, line in enumerate(file, start=1): \n if word in line:\n return 0\n return 1\nis_word_found = check() \nsys.exit(is_word_found)\n\nHowever, you have a lot of other options too. check out this:\nhttps://www.geeksforgeeks.org/python-exit-commands-quit-exit-sys-exit-and-os-_exit/\n"
] | [
0
] | [] | [] | [
"gitlab",
"gitlab_ci",
"pipeline",
"python"
] | stackoverflow_0074630301_gitlab_gitlab_ci_pipeline_python.txt |
Q:
AttributeError: module '_Box2D' has no attribute 'RAND_LIMIT_swigconstant'
I am trying to run a lunar_lander example with reinforcement
learning, but when I run it, an error occurs.
Also, my computer runs macOS.
Here is the code of lunar lander:
import numpy as np
import gym
import csv
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy, EpsGreedyQPolicy
from rl.memory import SequentialMemory
import io
import sys
import csv
# Path environment changed to make things work properly
# export DYLD_FALLBACK_LIBRARY_PATH=$DYLD_FALLBACK_LIBRARY_PATH:/usr/lib
# Get the environment and extract the number of actions.
ENV_NAME = 'LunarLander-v2'
env = gym.make(ENV_NAME)
np.random.seed(123)
env.seed(123)
nb_actions = env.action_space.n
# Next, we build a very simple model.
model = Sequential()
model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(nb_actions))
model.add(Activation('linear'))
#print(model.summary())
# Finally, we configure and compile our agent. You can use every built-in Keras optimizer and
# even the metrics!
memory = SequentialMemory(limit=300000, window_length=1)
policy = EpsGreedyQPolicy()
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10,
target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
# After training is done, we save the final weights.
dqn.load_weights('dqn_{}_weights.h5f'.format(ENV_NAME))
# Redirect stdout to capture test results
old_stdout = sys.stdout
sys.stdout = mystdout = io.StringIO()
# Evaluate our algorithm for a few episodes.
dqn.test(env, nb_episodes=200, visualize=False)
# Reset stdout
sys.stdout = old_stdout
results_text = mystdout.getvalue()
# Print results text
print("results")
print(results_text)
# Extact a rewards list from the results
total_rewards = list()
for idx, line in enumerate(results_text.split('\n')):
if idx > 0 and len(line) > 1:
reward = float(line.split(':')[2].split(',')[0].strip())
total_rewards.append(reward)
# Print rewards and average
print("total rewards", total_rewards)
print("average total reward", np.mean(total_rewards))
# Write total rewards to file
f = open("lunarlander_rl_rewards.csv",'w')
wr = csv.writer(f)
for r in total_rewards:
wr.writerow([r,])
f.close()
Here is the error:
Traceback (most recent call last):
File "/s/user/Document/Semester2/Advanced Machine Learning/Lab/Lab6/lunar_lander_ml_states_player.py", line 23, in <module>
env = gym.make(ENV_NAME)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/registration.py", line 167, in make
return registry.make(id)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/registration.py", line 119, in make
env = spec.make()
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/registration.py", line 85, in make
cls = load(self._entry_point)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/registration.py", line 14, in load
result = entry_point.load(False)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2405, in load
return self.resolve()
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2411, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/box2d/__init__.py", line 1, in <module>
from gym.envs.box2d.lunar_lander import LunarLander
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/box2d/lunar_lander.py", line 4, in <module>
import Box2D
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/Box2D/__init__.py", line 20, in <module>
from .Box2D import *
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/Box2D/Box2D.py", line 435, in <module>
_Box2D.RAND_LIMIT_swigconstant(_Box2D)
AttributeError: module '_Box2D' has no attribute 'RAND_LIMIT_swigconstant'
I tried to reinstall Box2D by following the guide at https://github.com/pybox2d/pybox2d/blob/master/INSTALL.md,
but it still doesn't work. Could anyone help me?
A:
Try this 'pip3 install box2d box2d-kengz'
A:
"pip install box2d box2d-kengz --user" worked for me :)
Just in case other people might find it informative.
A:
In case you already had box2d-py, uninstall and reinstall it
pip uninstall box2d-py
then,
pip install box2d-py
that worked for me.
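A quick, hypothetical sanity check after reinstalling: the import and environment creation should no longer raise the RAND_LIMIT_swigconstant error.
import Box2D   # should now import without the AttributeError
import gym

env = gym.make('LunarLander-v2')
print(env.action_space.n)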
| AttributeError: module '_Box2D' has no attribute 'RAND_LIMIT_swigconstant' | I am trying to run a lunar_lander on reinforcement
learning, but when I run it, it occurs an error.
Plus my computer is osx system.
Here is the code of lunar lander:
import numpy as np
import gym
import csv
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy, EpsGreedyQPolicy
from rl.memory import SequentialMemory
import io
import sys
import csv
# Path environment changed to make things work properly
# export DYLD_FALLBACK_LIBRARY_PATH=$DYLD_FALLBACK_LIBRARY_PATH:/usr/lib
# Get the environment and extract the number of actions.
ENV_NAME = 'LunarLander-v2'
env = gym.make(ENV_NAME)
np.random.seed(123)
env.seed(123)
nb_actions = env.action_space.n
# Next, we build a very simple model.
model = Sequential()
model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(nb_actions))
model.add(Activation('linear'))
#print(model.summary())
# Finally, we configure and compile our agent. You can use every built-in Keras optimizer and
# even the metrics!
memory = SequentialMemory(limit=300000, window_length=1)
policy = EpsGreedyQPolicy()
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10,
target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
# After training is done, we save the final weights.
dqn.load_weights('dqn_{}_weights.h5f'.format(ENV_NAME))
# Redirect stdout to capture test results
old_stdout = sys.stdout
sys.stdout = mystdout = io.StringIO()
# Evaluate our algorithm for a few episodes.
dqn.test(env, nb_episodes=200, visualize=False)
# Reset stdout
sys.stdout = old_stdout
results_text = mystdout.getvalue()
# Print results text
print("results")
print(results_text)
# Extact a rewards list from the results
total_rewards = list()
for idx, line in enumerate(results_text.split('\n')):
if idx > 0 and len(line) > 1:
reward = float(line.split(':')[2].split(',')[0].strip())
total_rewards.append(reward)
# Print rewards and average
print("total rewards", total_rewards)
print("average total reward", np.mean(total_rewards))
# Write total rewards to file
f = open("lunarlander_rl_rewards.csv",'w')
wr = csv.writer(f)
for r in total_rewards:
wr.writerow([r,])
f.close()
Here is the error:
Traceback (most recent call last):
File "/s/user/Document/Semester2/Advanced Machine Learning/Lab/Lab6/lunar_lander_ml_states_player.py", line 23, in <module>
env = gym.make(ENV_NAME)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/registration.py", line 167, in make
return registry.make(id)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/registration.py", line 119, in make
env = spec.make()
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/registration.py", line 85, in make
cls = load(self._entry_point)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/registration.py", line 14, in load
result = entry_point.load(False)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2405, in load
return self.resolve()
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2411, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/box2d/__init__.py", line 1, in <module>
from gym.envs.box2d.lunar_lander import LunarLander
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/gym/envs/box2d/lunar_lander.py", line 4, in <module>
import Box2D
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/Box2D/__init__.py", line 20, in <module>
from .Box2D import *
File "/s/user/anaconda/envs/untitled/lib/python3.6/site-packages/Box2D/Box2D.py", line 435, in <module>
_Box2D.RAND_LIMIT_swigconstant(_Box2D)
AttributeError: module '_Box2D' has no attribute 'RAND_LIMIT_swigconstant'
I tried to reinstall Box2d by following the guide of https://github.com/pybox2d/pybox2d/blob/master/INSTALL.md
but it still doesn't work, could anyone help me ?
| [
"Try this 'pip3 install box2d box2d-kengz'\n",
"\"pip install box2d box2d-kengz --user\" worked for me :) \nJust in case other people might find it informative.\n",
"In case you already had box2d-py, uninstall and reinstall it\npip uninstall box2d-py\n\nthen,\npip install box2d-py\n\nthat worked for me.\n"
] | [
36,
3,
0
] | [] | [] | [
"box2d",
"machine_learning",
"python",
"reinforcement_learning"
] | stackoverflow_0050037674_box2d_machine_learning_python_reinforcement_learning.txt |
Q:
Replace number with symbol in a list of lists
I need to replace 0 in a list of lists with a dot ".". I also need to replace 1 with "o" and 2 with "*".
It should be something like a chess board. So far I have this and I am stuck with the replacement. Thank you for your help! :)
chess =[
["0 1 0 1 0 1 0 1 "],
["1 0 1 0 1 0 1 0 "],
["0 1 0 1 0 1 0 1 "],
["0 0 0 0 0 0 0 0 "],
["0 0 0 0 0 0 0 0 "],
["2 0 2 0 2 0 2 0 "],
["0 2 0 2 0 2 0 2 "],
["2 0 2 0 2 0 2 0 "]]
def prt(n):
for i in range(len(n)):
for j in range(len(n[i])):
if n[j] == "0":
n[j] = "."
print(n[i][j])
prt(chess)
Output should be something like this
A:
I guess if you just need to print the layout, this would be a way to solve it:
from string import ascii_lowercase
chess = [
["0 1 0 1 0 1 0 1"],
["1 0 1 0 1 0 1 0"],
["0 1 0 1 0 1 0 1"],
["0 0 0 0 0 0 0 0"],
["0 0 0 0 0 0 0 0"],
["2 0 2 0 2 0 2 0"],
["0 2 0 2 0 2 0 2"],
["2 0 2 0 2 0 2 0"]
]
def prt(chess):
joined_chars = "".join(chess[0][0].split()) # joining the first list in chess to single string: 01010101
letters = " ".join([ascii_lowercase[i] for i in range(len(joined_chars))]) # Creating a list of letters that is the length of the joined_chars and joins it with spaces: a b c d e f g h
print(f" {letters}") # prints the letters starting with two spaces
for index, lst in enumerate(chess): # Using enumerate to get the index while running through chess
printable = lst[0].replace("0", ".").replace("1", "o").replace("2", "*")
print(f"{index+1} {printable} {index+1}") # prints the index (starts at 0) + 1 to get the correct values.
print(f" {letters}") # prints the letters starting with two spaces
prt(chess)
Result:
a b c d e f g h
1 . o . o . o . o 1
2 o . o . o . o . 2
3 . o . o . o . o 3
4 . . . . . . . . 4
5 . . . . . . . . 5
6 * . * . * . * . 6
7 . * . * . * . * 7
8 * . * . * . * . 8
a b c d e f g h
A:
You should use the replace function for strings.
chess =[
["0 1 0 1 0 1 0 1 "],
["1 0 1 0 1 0 1 0 "],
["0 1 0 1 0 1 0 1 "],
["0 0 0 0 0 0 0 0 "],
["0 0 0 0 0 0 0 0 "],
["2 0 2 0 2 0 2 0 "],
["0 2 0 2 0 2 0 2 "],
["2 0 2 0 2 0 2 0 "]]
def prt (n):
new_list = []
for line in n:
line = line[0]
line = line.replace('0','.')
line = line.replace('1', 'o')
line = line.replace('2', '*')
temp = []
temp.append(line)
new_list.append(temp)
print(new_list)
prt(chess)
A:
You can try it by using the replace function for strings and iterating through the lines itself:
def print_chess(chess):
for line_list in chess:
line = line_list[0]
line = line.replace("0", ".")
line = line.replace("1", "o")
line = line.replace("2", "*")
print(line)
Edit: thanks to @Muhammad Rizwan for pointing out that replace returns the result.
A:
In each list of your list you store one string, i.e. "0 1 0 1 0 1 0 1 ".
So your loop doesn't work:
for i in range(len(n)):
for j in range(len(n[i])): # len(n[i]) is always 1
if n[j] == "0": # n[j]: "0 1 0 1 0 1 0 1 " equal "0" is False
n[j]="."
A comparison for an individual number won't work.
In order for you to get the individual number you can use the split method on the only element of your list. So you need to access it via the 0 index and call split on that element.
Build a new list of list for storing the modified values.
Using this, you can expand your if logic to check for "1" and "2" and for any other value to not change the value.
chess = [
["0 1 0 1 0 1 0 1 "],
["1 0 1 0 1 0 1 0 "],
["0 1 0 1 0 1 0 1 "],
["0 0 0 0 0 0 0 0 "],
["0 0 0 0 0 0 0 0 "],
["2 0 2 0 2 0 2 0 "],
["0 2 0 2 0 2 0 2 "],
["2 0 2 0 2 0 2 0 "]]
def prt(n):
letters = " abcdefgh"
# Initialize chessboard with letters
board = [letters]
# set enumerates start value to 1
for i, line in enumerate(n, 1):
numbers = line[0].split() # splits at whitespace
line_list = []
for num in numbers:
if num == "0":
line_list.append(".")
elif num == "1":
line_list.append("o")
elif num == "2":
line_list.append("*")
else:
line_list.append(num)
# Concatenate current line index at beginning and end of line_list
board.append([i] + line_list + [i])
# Append letters as last line of board
board.append(letters)
# Print new chess board
for line in board:
for el in line:
print(el, end=" ")
print()
prt(chess)
Changes:
Setting letters as first and last element of board
Make use of built-in function enumerate
Set start value of enumerate to 1 to get a modified index
This will give you the desired outcome:
a b c d e f g h
1 . o . o . o . o 1
2 o . o . o . o . 2
3 . o . o . o . o 3
4 . . . . . . . . 4
5 . . . . . . . . 5
6 * . * . * . * . 6
7 . * . * . * . * 7
8 * . * . * . * . 8
a b c d e f g h
A:
Thank you so much for your help.
So far I have managed this, but I have no idea how to put numbers at the beginning and at the end of the line like in the picture.
My new code:
chess =[
["0 1 0 1 0 1 0 1 "],
["1 0 1 0 1 0 1 0 "],
["0 1 0 1 0 1 0 1 "],
["0 0 0 0 0 0 0 0 "],
["0 0 0 0 0 0 0 0 "],
["2 0 2 0 2 0 2 0 "],
["0 2 0 2 0 2 0 2 "],
["2 0 2 0 2 0 2 0 "]]
letters =["a b c d e f g h"]
numbers =["1 2 3 4 5 6 7 8"]
def prt(x):
num_let(letters)
for list in x:
new = list[0]
new = new.replace("0", ".")
new = new.replace("1", "o")
new = new.replace("2", "*")
print(new)
num_let(letters)
def num_let (y):
print(y[0])
prt(chess)
This gives me this:
a b c d e f g h
. o . o . o . o
o . o . o . o .
. o . o . o . o
. . . . . . . .
. . . . . . . .
* . * . * . * .
. * . * . * . *
* . * . * . * .
a b c d e f g h
| Replace number with symbol in a list of lists | I need to replace 0 in a list of lists with dot ".". I aslo need to replace 1 with "o" and 2 with "*"
It should be something like chess board. So far I have this and I am stuck with the replacement. Thank you for your help! :)
chess =[
["0 1 0 1 0 1 0 1 "],
["1 0 1 0 1 0 1 0 "],
["0 1 0 1 0 1 0 1 "],
["0 0 0 0 0 0 0 0 "],
["0 0 0 0 0 0 0 0 "],
["2 0 2 0 2 0 2 0 "],
["0 2 0 2 0 2 0 2 "],
["2 0 2 0 2 0 2 0 "]]
def prt(n):
for i in range(len(n)):
for j in range(len(n[i])):
if n[j] == "0":
n[j] = "."
print(n[i][j])
prt(chess)
Output should be something like this
| [
"I guess if you just need to print the layout, this would be a way to solve it:\nfrom string import ascii_lowercase\n\nchess = [\n [\"0 1 0 1 0 1 0 1\"],\n [\"1 0 1 0 1 0 1 0\"],\n [\"0 1 0 1 0 1 0 1\"],\n [\"0 0 0 0 0 0 0 0\"],\n [\"0 0 0 0 0 0 0 0\"],\n [\"2 0 2 0 2 0 2 0\"],\n [\"0 2 0 2 0 2 0 2\"],\n [\"2 0 2 0 2 0 2 0\"]\n]\n\ndef prt(chess):\n joined_chars = \"\".join(chess[0][0].split()) # joining the first list in chess to single string: 01010101\n letters = \" \".join([ascii_lowercase[i] for i in range(len(joined_chars))]) # Creating a list of letters that is the length of the joined_chars and joins it with spaces: a b c d e f g h\n print(f\" {letters}\") # prints the letters starting with two spaces\n for index, lst in enumerate(chess): # Using enumerate to get the index while running through chess\n printable = lst[0].replace(\"0\", \".\").replace(\"1\", \"o\").replace(\"2\", \"*\")\n print(f\"{index+1} {printable} {index+1}\") # prints the index (starts at 0) + 1 to get the correct values.\n print(f\" {letters}\") # prints the letters starting with two spaces\n\nprt(chess)\n\nResult:\n a b c d e f g h\n1 . o . o . o . o 1\n2 o . o . o . o . 2\n3 . o . o . o . o 3\n4 . . . . . . . . 4\n5 . . . . . . . . 5\n6 * . * . * . * . 6\n7 . * . * . * . * 7\n8 * . * . * . * . 8\n a b c d e f g h\n\n",
"You should use the replace function for strings.\nchess =[\n [\"0 1 0 1 0 1 0 1 \"],\n [\"1 0 1 0 1 0 1 0 \"],\n [\"0 1 0 1 0 1 0 1 \"],\n [\"0 0 0 0 0 0 0 0 \"],\n [\"0 0 0 0 0 0 0 0 \"],\n [\"2 0 2 0 2 0 2 0 \"],\n [\"0 2 0 2 0 2 0 2 \"],\n [\"2 0 2 0 2 0 2 0 \"]]\n\n\ndef prt (n):\n new_list = []\n for line in n:\n line = line[0]\n line = line.replace('0','.')\n line = line.replace('1', 'o')\n line = line.replace('2', '*')\n temp = []\n temp.append(line)\n new_list.append(temp)\n print(new_list)\n \nprt(chess)\n\n",
"You can try it by using the replace function for strings and iterating through the lines itself:\ndef print_chess(chess):\n for line_list in chess:\n line = line_list[0]\n line = line.replace(\"0\", \".\")\n line = line.replace(\"1\", \"o\")\n line = line.replace(\"2\", \"*\")\n print(line)\n\nEdit: thanks to @Muhammad Rizwan for pointing out that replace returns the result.\n",
"In each list of your list you store one string, i.e. \"0 1 0 1 0 1 0 1 \".\nSo your loop doesn't work:\nfor i in range(len(n)): \n for j in range(len(n[i])): # len(n[i]) is always 1\n if n[j] == \"0\": # n[j]: \"0 1 0 1 0 1 0 1 \" equal \"0\" is False\n n[j]=\".\"\n\nA comparison for an individual number won't work.\nIn order for you to get the individual number you can use the split method on the only element of your list. So you need to access it via the 0 index and call split on that element.\nBuild a new list of list for storing the modified values.\nUsing this, you can expand your if logic to check for \"1\" and \"2\" and for any other value to not change the value.\nchess = [\n [\"0 1 0 1 0 1 0 1 \"],\n [\"1 0 1 0 1 0 1 0 \"],\n [\"0 1 0 1 0 1 0 1 \"],\n [\"0 0 0 0 0 0 0 0 \"],\n [\"0 0 0 0 0 0 0 0 \"],\n [\"2 0 2 0 2 0 2 0 \"],\n [\"0 2 0 2 0 2 0 2 \"],\n [\"2 0 2 0 2 0 2 0 \"]]\n\ndef prt(n):\n letters = \" abcdefgh\"\n # Initialize chessboard with letters\n board = [letters]\n # set enumerates start value to 1\n for i, line in enumerate(n, 1):\n numbers = line[0].split() # splits at whitespace\n line_list = []\n for num in numbers:\n if num == \"0\":\n line_list.append(\".\")\n elif num == \"1\":\n line_list.append(\"o\")\n elif num == \"2\":\n line_list.append(\"*\")\n else:\n line_list.append(num)\n # Concatenate current line index at beginning and end of line_list\n board.append([i] + line_list + [i])\n # Append letters as last line of board\n board.append(letters)\n # Print new chess board\n for line in board:\n for el in line:\n print(el, end=\" \")\n print()\n\nprt(chess)\n\nChanges:\n\nSetting letters as first and last element of board\nMake use of built-in function enumerate\nSet start value of enumerate to 1 to get a modified index\n\nThis will give you the desired outcome:\n a b c d e f g h \n1 . o . o . o . o 1 \n2 o . o . o . o . 2 \n3 . o . o . o . o 3 \n4 . . . . . . . . 4 \n5 . . . . . . . . 5 \n6 * . * . * . * . 6 \n7 . * . * . * . * 7 \n8 * . * . * . * . 8 \n a b c d e f g h \n\n",
"Thank you so much for your help.\nSo far I have managed this, but I have no idea how to put numbers at the beginning and at the end of the line like in the picture.\n\nMy new code:\n chess =[\n [\"0 1 0 1 0 1 0 1 \"],\n [\"1 0 1 0 1 0 1 0 \"],\n [\"0 1 0 1 0 1 0 1 \"],\n [\"0 0 0 0 0 0 0 0 \"],\n [\"0 0 0 0 0 0 0 0 \"],\n [\"2 0 2 0 2 0 2 0 \"],\n [\"0 2 0 2 0 2 0 2 \"],\n [\"2 0 2 0 2 0 2 0 \"]]\n\nletters =[\"a b c d e f g h\"]\nnumbers =[\"1 2 3 4 5 6 7 8\"]\n\ndef prt(x):\n num_let(letters) \n for list in x:\n new = list[0]\n new = new.replace(\"0\", \".\")\n new = new.replace(\"1\", \"o\")\n new = new.replace(\"2\", \"*\")\n print(new)\n num_let(letters)\n \ndef num_let (y):\n print(y[0])\n\nprt(chess)\n\nThis gives me this:\na b c d e f g h\n. o . o . o . o \no . o . o . o .\n. o . o . o . o\n. . . . . . . .\n. . . . . . . .\n* . * . * . * .\n. * . * . * . *\n* . * . * . * .\na b c d e f g h\n\n"
] | [
3,
2,
1,
1,
0
] | [] | [] | [
"list",
"python",
"python_3.x"
] | stackoverflow_0074628707_list_python_python_3.x.txt |
Q:
Filter DataFrame for numeric values
Let's assume my DataFrame df has a column called col of type string. What is wrong with the following code line?
df['col'].filter(str.isnumeric)
A:
You can do it like that:
df.loc[df['col'].str.isnumeric()]
A:
First problem: you're using a built-in Python method without parentheses, namely str.isnumeric. Hence the TypeError: 'method_descriptor' object is not iterable.
Second problem: suppose you've added parentheses to str.isnumeric; this function needs one argument (a string) to check if all characters in the given string are numeric. Hence the TypeError: unbound method str.isnumeric() needs an argument.
Third problem: suppose you've fixed 1) and 2); since this function returns a boolean (True or False), you cannot pass it as the first parameter of the pandas built-in method pandas.Series.filter. Hence the TypeError: 'bool' object is not iterable.
As per the documentation, the first parameter needs to be a list-like :
items: list-like
Keep labels from axis which are in items.
In your case, I believe you want to use boolean indexing with pandas.DataFrame.loc :
import pandas as pd
df = pd.DataFrame({'col': ['foo', 'bar 123', '456']})
m = df['col'].str.isnumeric()
out = df.loc[m]
Output:
print(out)
col
2 456
| Filter DataFrame for numeric values | Let's assume my DataFrame df has a column called col of type string. What is wrong with the following code line?
df['col'].filter(str.isnumeric)
| [
"You can do it like that:\ndf.loc[df['col'].str.isnumeric()]\n\n",
"First problem, you're using a built-in python method without parenthesis which is str.isnumeric. Hence, the TypeError: 'method_descriptor' object is not iterable.\nSecond problem, let's suppose you've added parenthesis to str.isnumeric, this function needs one argument/string to check if all characters in the given string are numeric characters. Hence the TypeError: unbound method str.isnumeric() needs an argument.\nThird problem, let's suppose you've fixed 1) and 2), since this function returns a boolean (True or False), you cannot pass it as a first parameter of pandas built-in method pandas.Series.filter. Hence, the TypeError: 'bool' object is not iterable.\nAs per the documentation, the first parameter needs to be a list-like :\n\nitems: list-like\nKeep labels from axis which are in items.\n\nIn your case, I believe you want to use boolean indexing with pandas.DataFrame.loc :\nimport pandas as pd\n\ndf = pd.DataFrame({'col': ['foo', 'bar 123', '456']})\nm = df['col'].str.isnumeric()\nout = df.loc[m]\n\nOutput:\nprint(out)\n col\n2 456\n\n"
] | [
1,
1
] | [] | [] | [
"filter",
"numeric",
"pandas",
"python"
] | stackoverflow_0074641647_filter_numeric_pandas_python.txt |
Q:
How to update dataframe already stored in cache using python-streamlit library after some dataframe manipulation?
I’m relatively new to this awesome tool and would like to understand if it is possible to update the cache value with the new value.
Background:
I am developing a tool in Python where data is loaded from an SQL database and displayed using Streamlit Ag-Grid. The user can manipulate the data on the grid, and upon changes the new data should overwrite the cached data, so that whenever the page is auto-refreshed, “address_frame” holds the last updated data (from the grid) rather than the raw data from the database. Lastly, the user has the option to store the data back to the database once they are done with the manipulation.
Where I need support!
I would like to understand how to update the cache once I have the updated dataframe.
Right now, even after I assign the updated dataframe to the variable address_frame, it will still contain the cached data that was loaded from the database
Code snippet:
@st.experimental_memo(show_spinner=True,suppress_st_warning=True)
def load_Raw_Address():
with st.spinner('Loading Cleansed Address Data From Database. This may take a while...'):
query_raw='SELECT * FROM [dbo].[DQ_Raw_Address]'
address_frame_raw = pd.read_sql(query_raw,con)
return address_frame_raw
#Assign raw data to address_frame
address_frame = load_Raw_Address()
#Show on AgGrid, user can manipulate the data on the grid
grid_return = AgGrid(address_frame, gridoptions ,editable=True ,allow_unsafe_jscode=True,theme = 'balham' , width = "100%", height = "800px")
#manipulated data is ultimately assigned to the address_frame again.
#NOTE: THIS NEEDS TO REPLACE PREVIOUS CACHED VALUE WITH NEW ONE
address_frame = grid_return['data']
Actual behavior:
Currently, if I manipulate the data in the dataframe and refresh the page, the cache still holds the raw data from the DB, and that is what gets loaded.
A:
You can use st.session_state to store the returned data from AgGrid.
See the official guide: https://docs.streamlit.io/library/api-reference/session-state
By introducing a check for whether the dataframes are equal, you can avoid overwriting the modified df with the originally loaded one; a minimal sketch follows.
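A minimal sketch of that idea, reusing load_Raw_Address(), AgGrid and gridoptions as defined in the question:
import streamlit as st

if "address_frame" not in st.session_state:
    st.session_state["address_frame"] = load_Raw_Address()   # cached DB load from the question

grid_return = AgGrid(st.session_state["address_frame"], gridoptions,
                     editable=True, allow_unsafe_jscode=True)
edited = grid_return["data"]

# Only overwrite the stored frame when the grid data actually changed.
if not edited.equals(st.session_state["address_frame"]):
    st.session_state["address_frame"] = edited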
| How to update dataframe already stored in cache using python-streamlit library after some dataframe manipulation? | I’m relatively new to this awesome tool and would like to understand if it is possible to update the cache value with the new value.
Background:
I am developing a tool in python where data is loaded from the SQL database, displayed using Streamlit Ag-Grid, where the user can manipulate the data on the grid, and upon changes, the data should then overwrite the cached data with the new data so that whenever the page is auto-refreshed, “address_frame” takes the last data updated (from the grid) rather than the raw data from the database. Lastly, user has the option to store the data to the database once he’s done with the manipulation.
Where I need support!
I would like to understand how to update the cache once I have the updated dataframe.
Right now, even after I assign the updated dataframe to the variable address_frame, it will still contain the cached data that was loaded from the database
Code snippet:
@st.experimental_memo(show_spinner=True,suppress_st_warning=True)
def load_Raw_Address():
with st.spinner('Loading Cleansed Address Data From Database. This may take a while...'):
query_raw='SELECT * FROM [dbo].[DQ_Raw_Address]'
address_frame_raw = pd.read_sql(query_raw,con)
return address_frame_raw
#Assign raw data to address_frame
address_frame = load_Raw_Address()
#Show on AgGrid, user can manipulate the data on the grid
grid_return = AgGrid(address_frame, gridoptions ,editable=True ,allow_unsafe_jscode=True,theme = 'balham' , width = "100%", height = "800px")
#manipulated data is ultimately assigned to the address_frame again.
#NOTE: THIS NEEDS TO REPLACE PREVIOUS CACHED VALUE WITH NEW ONE
address_frame = grid_return['data']
Actual behavior:
Currently, If i manipulate the data on the dataframe and refresh the page, the cached data is still stored as raw data from DB and is loaded.
| [
"You can use st.session_state to store the returned data from AgGrid.\nSee the official guide: https://docs.streamlit.io/library/api-reference/session-state\nBy introducing a check if dataframes are equal, you can avoid overwriting the modified df with the originally loaded one.\n"
] | [
0
] | [] | [] | [
"ag_grid",
"caching",
"python",
"streamlit"
] | stackoverflow_0074519675_ag_grid_caching_python_streamlit.txt |
Q:
Convert saved png files into a movie/gif
I am very new to python. I have a folder which contains many .png files (which are scatter plots I previously made in python). I want to write a script that turns these files into a movie.
My files are named 0.png 1.png 2.png .... 49.png
I tried the following
frames = []
for i in range(0,M):
frames.append(f'{i}.png')
gif.save(frames, "mymovie.gif",duration=15, unit="s",between="startend")
This gif function is supposed to save the list of frames as a movie, but I have no idea what I should be appending in frames.append(); at the moment I just get a list of strings.
A:
You can use it "Pillow" library.
from PIL import Image
sample_image = Image.open("sample_image.png")
sample_image.save('.gif')
You can look at the other features of Pillow;
https://pillow.readthedocs.io/en/stable/
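For the multi-frame case in the question (0.png … 49.png), a hedged sketch of building one animated GIF with Pillow:
from PIL import Image

frames = [Image.open(f"{i}.png") for i in range(50)]
frames[0].save("mymovie.gif", save_all=True, append_images=frames[1:],
               duration=300, loop=0)   # duration is milliseconds per frame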
A:
You can use imageio.v3, see example from Documentation (code includes frame creation):
import imageio.v3 as iio
import matplotlib.pyplot as plt
import numpy as np
n = 100
gif_path = "test.gif"
plt.figure(figsize=(4, 4))
for x in range(n):
plt.scatter(x / n, x / n)
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.savefig(f"{x}.jpg")
#Generating the gif with all the frames:
frames = np.stack([iio.imread(f"{x}.jpg") for x in range(n)], axis=0)
iio.imwrite(gif_path, frames)
| Convert saved png files into a movie/gif | I am very new to python. I have a folder which contains many .png files (which are scatter plots I previously made in python). I want to write a script that turns these files into a movie.
My files are named 0.png 1.png 2.png .... 49.png
I tried the following
frames = []
for i in range(0,M):
frames.append(f'{i}.png')
gif.save(frames, "mymovie.gif",duration=15, unit="s",between="startend")
Where this gif function saves the list of frames as a movie. But I have no idea what to be calling in the frames.append(), at the moment I just get a list of strings.
| [
"You can use it \"Pillow\" library.\nfrom PIL import Image\n\nsample_image = Image.open(\"sample_image.png\")\nsample_image.save('.gif')\n\n\nYou can look at the other features of Pillow;\nhttps://pillow.readthedocs.io/en/stable/\n",
"You can use imageio.v3, see example from Documentation (code includes frame creation):\nimport imageio.v3 as iio\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nn = 100\ngif_path = \"test.gif\"\n\nplt.figure(figsize=(4, 4))\nfor x in range(n):\n plt.scatter(x / n, x / n)\n plt.xlim(0, 1)\n plt.ylim(0, 1)\n plt.savefig(f\"{x}.jpg\")\n\n#Generating the gif with all the frames:\nframes = np.stack([iio.imread(f\"{x}.jpg\") for x in range(n)], axis=0)\n\niio.imwrite(gif_path, frames)\n\n\n"
] | [
0,
0
] | [] | [] | [
"directory",
"gif",
"png",
"python",
"video"
] | stackoverflow_0074641705_directory_gif_png_python_video.txt |
Q:
Why does the lookup by index value result in a key error
The code:
df = pd.DataFrame({
'MNumber':['M03400001','M00000021','M10450001','M00003420','M02635915','M51323275','M63061229','M63151022'],
'GPA':[3.01, 4.00, 2.95, 2.90, 3.50, 3.33, 2.99, 3.98],
'major':['IS','BANA','IS','IS','IS','BANA','IS', 'BANA'],
'internship':['P&G', 'IBM', 'P&G', 'IBM', 'P&G', 'EY','EY', 'Great American'],
'job_offers':[2,0,0,3,2,1,4,3],
'graduate_credits':[5,1,0,5,2,2,3,4]
})
x = df.groupby('internship').describe()
#print(x.info())
print(x["IBM"])
The error:
KeyError: 'IBM'
A:
x['IBM'] tries to access the column 'IBM', which does not exist.
x.loc['IBM'] accesses the row 'IBM', which does exist.
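A small illustration with the question's df:
x = df.groupby('internship').describe()
print(x.loc['IBM'])   # row labelled 'IBM' in the grouped index -> works
# print(x['IBM'])     # looks for a top-level *column* named 'IBM' -> KeyError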
A:
Your dataframe x does not contain a key called IBM.
Try
print(x)
To see the keys contained in your dataframe.
Even better, you can try
print(x.columns)
To see the available columns of your dataframe.
A:
When Pandas DataFrame.groupby is used, it turns the specified column into the index of the resulting DataFrame. So you should index the row rather than the column, as stated by @timgeb.
| Why does the lookup by index value result in a key error | The code:
df = pd.DataFrame({
'MNumber':['M03400001','M00000021','M10450001','M00003420','M02635915','M51323275','M63061229','M63151022'],
'GPA':[3.01, 4.00, 2.95, 2.90, 3.50, 3.33, 2.99, 3.98],
'major':['IS','BANA','IS','IS','IS','BANA','IS', 'BANA'],
'internship':['P&G', 'IBM', 'P&G', 'IBM', 'P&G', 'EY','EY', 'Great American'],
'job_offers':[2,0,0,3,2,1,4,3],
'graduate_credits':[5,1,0,5,2,2,3,4]
})
x = df.groupby('internship').describe()
#print(x.info())
print(x["IBM"])
The error:
KeyError: 'IBM'
| [
"x['IBM'] tries to access the column 'IBM', which does not exist.\nx.loc['IBM'] accesses the row 'IBM', which does exist.\n",
"Your dataframe x does not contain a key called IBM.\nTry\nprint(x)\n\nTo see the keys contained in your dataframe.\nEven better, you can try\nprint(x.columns)\n\nTo see the available columns of your dataframe.\n",
"Pandas DataFrame groupby when used, it changes the column specified to index of the DataFrame. So, you should try to index the row rather than the column as stated by @timgeb.\n"
] | [
2,
0,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074641792_dataframe_pandas_python.txt |
Q:
x and y must be the same size error python
This is the task for which I need to find where the error is: (Make a scatter plot of Nsteps versus starting number. You should adjust your marker symbol
and size such that you can discern patterns in the data, rather than just seeing a solid mass of
points.)
*** basically this is my code:***
#in[]
import numpy as np
import matplotlib.pyplot as plt
def collatz(N):
iteration=0
listNumbers=[N]
while N != 1:
iteration += 1
if (N%2) == 0:
N = N/2
else:
N= (N*3) + 1
print(i,'number of interactions',iteration)
print(listNumbers)
return iteration
N= int(input(' enter a number:'))
iteration=collatz(N)
#out[]
enter a number:7
10000 number of interactions 16
[7]
#in[]
s=np.array([])
for i in range(1,10001,1):
K=collatz(i)
s=np.append(s,K)
print(s)
#out
(too big to paste here but i put the first 10 lines)
1 number of interactions 0
[1]
2 number of interactions 1
[2]
3 number of interactions 7
[3]
4 number of interactions 2
[4]
5 number of interactions 5
[5]
6 number of interactions 8
[6]
7 number of interactions 16
[7]
8 number of interactions 3
[8]
9 number of interactions 19
[9]
10 number of interactions 6
[10]
#in[]
plt.figure()
plt.scatter(range(1,100001,1),s) #gives me the x and y must be the same size error here
plt.xlabel('number of interations')
plt.ylabel('numbes')
plt.ylim(0,355)
plt.xlim(0,100000)
plt.show()
#Make a scatter plot of Nsteps versus starting number. You should adjust your marker symbol
and size such that you can discern patterns in the data, rather than just seeing a solid mass of
points.
A:
That error means that range(1,100001,1) is not the same length as s. Try
print(len(range(1,100001,1)))
print(len(s))
to confirm. You need to have the same amount of x-coordinates as you do y-coordinates. Make those the same size and it will work.
It's unclear to me from your question/code what you're actually trying to plot, but what I've just described is the root of the issue.
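Concretely, since s was filled from range(1, 10001) above and therefore holds 10,000 values, one hedged sketch of the fix is to give scatter an x-range of the same length:
xs = range(1, 10001)             # same length as s
plt.figure()
plt.scatter(xs, s, marker='.')   # small marker so patterns stay visible
plt.xlabel('starting number')
plt.ylabel('number of steps')
plt.xlim(0, 10000)
plt.show()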
A:
Try:
plt.scatter(np.arange(s.size), s)
This will set the range of x values depending on the actual size of your array.
| x and y must be the same size error python | that's what i need to find where the error is (. Make a scatter plot of Nsteps versus starting number. You should adjust your marker symbol
and size such that you can discern patterns in the data, rather than just seeing a solid mass of
points.)
*** basically this is my code:***
#in[]
import numpy as np
import matplotlib.pyplot as plt
def collatz(N):
iteration=0
listNumbers=[N]
while N != 1:
iteration += 1
if (N%2) == 0:
N = N/2
else:
N= (N*3) + 1
print(i,'number of interactions',iteration)
print(listNumbers)
return iteration
N= int(input(' enter a number:'))
iteration=collatz(N)
#out[]
enter a number:7
10000 number of interactions 16
[7]
#in[]
s=np.array([])
for i in range(1,10001,1):
K=collatz(i)
s=np.append(s,K)
print(s)
#out
(too big to paste here but i put the first 10 lines)
1 number of interactions 0
[1]
2 number of interactions 1
[2]
3 number of interactions 7
[3]
4 number of interactions 2
[4]
5 number of interactions 5
[5]
6 number of interactions 8
[6]
7 number of interactions 16
[7]
8 number of interactions 3
[8]
9 number of interactions 19
[9]
10 number of interactions 6
[10]
#in[]
plt.figure()
plt.scatter(range(1,100001,1),s) #gives me the x and y must be the same size error here plt.xlabel('number of interations')
plt.ylabel('numbes')
plt.ylim(0,355)
plt.xlim(0,100000)
plt.show()
#Make a scatter plot of Nsteps versus starting number. You should adjust your marker symbol
and size such that you can discern patterns in the data, rather than just seeing a solid mass of
points.
| [
"That error means that range(1,100001,1) is not the same length as s. Try\nprint(len(range(1,100001,1)))\nprint(len(s))\n\nto confirm. You need to have the same amount of x-coordinates as you do y-coordinates. Make those the same size and it will work.\nIt's unclear to me from your question/code what you're actually trying to plot, but what I've just described is the root of the issue.\n",
"Try:\nplt.scatter(np.arange(s.size), s)\n\nThis will set the range of x values depending on the actual size of your array.\n"
] | [
0,
0
] | [] | [] | [
"matplotlib",
"plot",
"python",
"scatter_plot"
] | stackoverflow_0074641768_matplotlib_plot_python_scatter_plot.txt |
Q:
IO backend error in the xarray for netcdf file
I am trying to open a .netcdf file using xarray and it is showing this error. I am unable to resolve this error and have found no solution for it. I have tried different versions of Anaconda and Ubuntu, but the problem persists.
ValueError: did not find a match in any of xarray's currently installed IO backends ['scipy']. Consider explicitly selecting one of the installed backends via the engine parameter to xarray.open_dataset(), or installing additional IO dependencies:
http://xarray.pydata.org/en/stable/getting-started-guide/installing.html
http://xarray.pydata.org/en/stable/user-guide/io.html
A:
I had the same issue. I then installed netCDF4 via:
pip install netCDF4
and xarray worked. Beware of dependencies!!
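Once a backend such as netCDF4 is installed, you can also select it explicitly instead of relying on auto-detection, as the error message suggests (the file name below is just a placeholder):
import xarray as xr

ds = xr.open_dataset("myfile.nc", engine="netcdf4")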
A:
I had the same problem as well. In this matter, you need to install IO dependencies.
Based on their web site here you need to install all IO related packages:
io = netCDF4, h5netcdf, scipy, pydap, zarr, fsspec, cftime, rasterio, cfgrib, pooch
conda install -c anaconda netcdf4 h5netcdf scipy pydap zarr fsspec cftime rasterio cfgrib pooch
A:
Based on your error message, it looks like you are only missing the scipy dependency.
I would recommend installing using conda install in either your terminal/command line or in Jupyter (if you use that IDE):
conda install scipy
If you are using a Python environment other than your base environment, ensure you are installing to the environment are using for the project by navigating there in terminal, or by running the kernel on that environment in Jupyter.
You may need to restart the kernel if you are using Jupyter for the change to take effect.
A:
conda install scipy
solves the problem
A:
This error message can also be triggered when simply having an error in the path to the file to be opened. For example this results in the same/similar error (even with the netcdf backend installed):
import xarray as xr
xr.open_dataset("this_file_does_not_exist.nc")
So before (re)installing any packages, make sure to check whether the input to xr.open_dataset() is correct.
| IO backend error in the xarray for netcdf file | I am trying to open a .netcdf file using xarray and it is showing this error. I am unable to resolve this error and I have found no such solution to resolve this error. I have tried with different versions of Anaconda and Ubuntu but the problem persists.
ValueError: did not find a match in any of xarray's currently installed IO backends ['scipy']. Consider explicitly selecting one of the installed backends via the engine parameter to xarray.open_dataset(), or installing additional IO dependencies:
http://xarray.pydata.org/en/stable/getting-started-guide/installing.html
http://xarray.pydata.org/en/stable/user-guide/io.html
| [
"I had the same issue. I then installed netCDF4 via:\npip install netCDF4 \n\nand xarray worked. Beware of dependencies!!\n",
"I had the same problem as well. In this matter, you need to install IO dependencies.\nBased on their web site here you need to install all IO related packages:\nio = netCDF4, h5netcdf, scipy, pydap, zarr, fsspec, cftime, rasterio, cfgrib, pooch \n\nconda install -c anaconda netcdf4 h5netcdf scipy pydap zarr fsspec cftime rasterio cfgrib pooch\n\n",
"Based on your error message, it looks like you are only missing the scipy dependency.\nI would recommend installing using conda install in either your terminal/command line or in Jupyter (if you use that IDE):\nconda install scipy\nIf you are using a Python environment other than your base environment, ensure you are installing to the environment are using for the project by navigating there in terminal, or by running the kernel on that environment in Jupyter.\nYou may need to restart the kernel if you are using Jupyter for the change to take effect.\n",
"\nconda install scipy\n\nsolves the problem\n",
"This error message can also be triggered when simply having an error in the path to the file to be opened. For example this results in the same/similar error (even with the netcdf backend installed):\nimport xarray as xr\nxr.open_dataset(\"this_file_does_not_exist.nc\")\n\nSo before (re)installing any packages, make sure to check whether the input to xr.open_dataset() is correct.\n"
] | [
5,
2,
0,
0,
0
] | [] | [] | [
"io",
"netcdf",
"python",
"python_xarray"
] | stackoverflow_0067725531_io_netcdf_python_python_xarray.txt |
Q:
How to iterate over Pandas Data frame and replace multiple rows
Here is first 5 rows of my Pd dataframe:
0 Stage II
1 Stage III
2 Stage II
3 Stage IV
4 Stage II
It has 428 rows and one column.
I want to replace Stage I, II, III, and IV with Grade I, II, III, and IV respectively so that I will have a pd df like this:
0 Grade II
1 Grade III
2 Grade II
3 Grade IV
4 Grade II
I used replace like this :
target_data.replace("Stage II", "Grade II", inplace =True ). it worked but it seems I have to repeat it 4times for others.i.e, I, III,IV.
I also tried
for row in target_data.iterrows():
if row == "Stage I":
row.replace("Stage I", "Grade I" )
if row == "Stage II":
row.replace("Stage II", "Grade II" )
if row == "Stage III":
row.replace("Stage III", "Grade III" )
if row == "Stage IV":
row.replace("Stage IV", "Grade IV" )
it did not yield what I wanted.
Any suggestions to implement it in a more pythonic way?
A:
You can try this:
for i in ["Stage I", "Stage II", "Stage III", "Stage IV"]:
your_df[col_name] = your_df[col_name].str.replace(i, "Grade " + i.split()[-1])
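Alternatively, since every value in the question starts with "Stage", a single vectorised replacement is enough (same col_name placeholder as above):
your_df[col_name] = your_df[col_name].str.replace("Stage", "Grade", regex=False)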
| How to iterate over Pandas Data frame and replace multiple rows | Here is first 5 rows of my Pd dataframe:
0 Stage II
1 Stage III
2 Stage II
3 Stage IV
4 Stage II
It has 428 rows and one column.
I want to replace Stage I, II, III, and IV with Grade I, II, III, and IV respectively so that I will have a pd df like this:
0 Grade II
1 Grade III
2 Grade II
3 Grade IV
4 Grade II
I used replace like this :
target_data.replace("Stage II", "Grade II", inplace =True ). it worked but it seems I have to repeat it 4times for others.i.e, I, III,IV.
I also tried
for row in target_data.iterrows():
if row == "Stage I":
row.replace("Stage I", "Grade I" )
if row == "Stage II":
row.replace("Stage II", "Grade II" )
if row == "Stage III":
row.replace("Stage III", "Grade III" )
if row == "Stage IV":
row.replace("Stage IV", "Grade IV" )
it did not yield what I wanted.
Any suggestions to implement it in a more pythonic way?
| [
"You can try this:\nfor i in [\"Stage I\", \"Stage II\", \"Stage III\", \"Stage IV\"]:\n your_df[col_name] = your_df[col_name].str.replace(i, \"Grade \" + i.split()[-1])\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074641945_dataframe_pandas_python.txt |
Q:
Can I create a conda environment from multiple yaml files?
Is it possible to create a conda environment from two yaml files?
something like
conda env create -f env1.yml -f env2.yml
or create an environment from one yaml file, then update the environment with the second yaml file?
conda env create -f env1.yml
conda env update -f env2.yml
A:
Does not seem to work. conda env create only uses the final --file argument and ignores the others. conda create and conda update do not support yaml files.
A:
@sam is right, see relevant github issue:
https://github.com/conda/conda/issues/9294
A potential work around is to use conda-merge, with for example:
pip install conda-merge
conda-merge env1.yml env2.yml > env.yml
conda env create -f env.yml
A:
According to the documentation and my testing it is possible:
conda env create --file=file1.yml --file=file2dev.yml
https://docs.conda.io/projects/conda/en/latest/commands/create.html
| Can I create a conda environment from multiple yaml files? | Is it possible to create a conda environment from two yaml files?
something like
conda env create -f env1.yml -f env2.yml
or create a enviornment from one yaml file, then update the environment with the second yaml file?
conda env create -f env1.yml
conda env update -f env2.yml
| [
"Does not seem to work. conda env create only uses the final --file argument and ignores the others. conda create and conda update do not support yaml files.\n",
"@sam is right, see relevant github issue:\nhttps://github.com/conda/conda/issues/9294\nA potential work around is to use conda-merge, with for example:\npip install conda-merge\nconda-merge env1.yml env2.yml > env.yml\nconda env create -f env.yml\n\n",
"According to the documentation and my testing it is possible:\nconda env create --file=file1.yml --file=file2dev.yml\nhttps://docs.conda.io/projects/conda/en/latest/commands/create.html\n"
] | [
1,
1,
0
] | [] | [] | [
"anaconda",
"conda",
"python"
] | stackoverflow_0065668913_anaconda_conda_python.txt |
Q:
AttributeError: 'int' object has no attribute 'keys'. Im not able to understand where I'm going wrong?
So I'm basically trying to summarize an output using NLP. Below is the code I'm using.
stopwords = nltk.corpus.stopwords.words('English')
word_frequencies = {}
for word in nltk.word_tokenize(subject1):
if word not in stopwords:
if word not in word_frequencies.keys():
word_frequencies = 1
else:
word_frequencies +=1
and below is the error I got.
AttributeError Traceback (most recent call last)
Input In [66], in <cell line: 4>()
4 for word in nltk.word_tokenize(subject1):
5 if word not in stopwords:
----> 6 if word not in word_frequencies.keys():
7 word_frequencies = 1
8 else:
AttributeError: 'int' object has no attribute 'keys'
If this does not provide enough clarity, please let me know and I'll share the complete code.
A:
Here you are assigning an int object to the variable with word_frequencies = 1. I think you wanted to do something like word_frequencies[word] = 1 and word_frequencies[word] += 1.
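A minimal corrected sketch of the loop, assuming nltk is installed with the stopwords and punkt data downloaded; subject1 here is a stand-in string for the question's variable.
import nltk

subject1 = "Some example text to summarize with NLP."   # stand-in for the question's variable
stopwords = nltk.corpus.stopwords.words('english')

word_frequencies = {}
for word in nltk.word_tokenize(subject1):
    if word not in stopwords:
        if word not in word_frequencies:        # dict membership check; .keys() is not needed
            word_frequencies[word] = 1          # store the count under the word...
        else:
            word_frequencies[word] += 1         # ...and increment it, instead of overwriting the dict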
| AttributeError: 'int' object has no attribute 'keys'. Im not able to understand where I'm going wrong? | So Im basically trying to summarize an output using NLP. Below is the code im using.
stopwords = nltk.corpus.stopwords.words('English')
word_frequencies = {}
for word in nltk.word_tokenize(subject1):
if word not in stopwords:
if word not in word_frequencies.keys():
word_frequencies = 1
else:
word_frequencies +=1
and below is the error I got.
AttributeError Traceback (most recent call last)
Input In [66], in <cell line: 4>()
4 for word in nltk.word_tokenize(subject1):
5 if word not in stopwords:
----> 6 if word not in word_frequencies.keys():
7 word_frequencies = 1
8 else:
AttributeError: 'int' object has no attribute 'keys'
If this does not provide enough clarity please let me know and ill share the complete code.
| [
"Here you are assigning int object to variable word_frequencies = 1 I think you wanted to do something like word_frequencies[word] = 1 and word_frequencies[word] += 1\n"
] | [
0
] | [] | [] | [
"data_science",
"machine_learning",
"nlp",
"python"
] | stackoverflow_0074642074_data_science_machine_learning_nlp_python.txt |
Q:
How to Make a Looping Discord Bot Task that is Invoked When a Message is Posted in Python
I am trying to write a Discord bot that posts yesterday's Wordle solutions, but I cannot seem to figure out how to get a task to be invoked by a message and then have that task loop. I tried to use a while loop, but then the bot would only work for one server. Trying to use a looping task does not work either. Here is the code
import imp
import json
import requests
import discord
import os
import time
from datetime import date, timedelta
from discord.ext import tasks
client = discord.Client()
#channel_id_exists = False
#channel_id = 0
@client.event
async def on_ready():
print('We have logged in as {0.user}'.format(client))
@client.event
async def on_message(message):
if message.author == client.user:
return
if message.content.startswith('!wordle_setup'):
await message.delete()
await message.channel.send("Setting up...")
channel_id = message.channel.id
await wordle_guess(channel_id).start()
@tasks.loop(seconds=10)
async def wordle_guess(channel_id):
message_channel = client.get_channel(channel_id)
yesterday = date.today() - timedelta(days=1)
year = yesterday.year
month = yesterday.month
day = yesterday.day
word_guess = json.loads(requests.get(
"https://najemi.cz/wordle_answers/api/?day={0}&month={1}&year={2}".format(day, month, year)).text)["word"]
await message_channel.send("Word guess for " + yesterday.isoformat() + " : " + word_guess + '\nhttps://www.merriam-webster.com/dictionary/' + word_guess)
client.run("TOKEN")
Either the bot gets stuck at setup, or the task doesn't loop in 10 seconds, or the bot only works for one server. I use the setup command to avoid needing to hardcode the channel id into the code.
A:
I have solved the issue by wrapping the code in a client class and allowing the storage of multiple channel ids. The new code is shown here.
import asyncio
import imp
import json
import requests
import discord
import os
import time
from datetime import date, timedelta
from discord.ext import tasks
import threading
print("test!")
class MyClient(discord.Client):
channel_id = []
channel_id_exists = False
async def on_ready(self):
print('We have logged in as {0.user}'.format(self))
async def on_message(self, message):
if message.author == self.user:
return
if message.content.startswith('!wordle_setup'):
await message.delete()
await message.channel.send("Setting up...")
self.channel_id.append(message.channel.id)
self.channel_id_exists = True
@tasks.loop(seconds=3600*24)
async def wordle_guess(self):
if self.channel_id_exists:
yesterday = date.today() - timedelta(days=1)
year = yesterday.year
month = yesterday.month
day = yesterday.day
for channel_id_iter in self.channel_id:
message_channel = self.get_channel(channel_id_iter)
word_guess = json.loads(requests.get(
"https://najemi.cz/wordle_answers/api/?day={0}&month={1}&year={2}".format(day, month, year)).text)["word"]
await message_channel.send("Wordle guess for " + yesterday.isoformat() + " : " + word_guess + '\nhttps://www.merriam-webster.com/dictionary/' + word_guess)
client = MyClient()
client.wordle_guess.start()
client.run("TOKEN")
A:
You can also launch a task from a command.
In my case, using the module interactions, I have a discord function that has to run every hour or when the command is triggered. There may be better solutions but this one worked for me.
from discord.ext import tasks
import interactions
...
bot = interactions.Client(bot_token,...)
...
# manually triggered command
@bot.command( name='sync_roles', ...)
async def command_sync_roles(ctx: interactions.CommandContext):
global busy
if busy: return
bot._loop.create_task(sync_roles())
# scheduled job
@tasks.loop(hours=1)
async def cron_sync_roles():
global busy
if busy: return
bot._loop.create_task(sync_roles())
# I use busy as a global variable to avoid double calls
busy = False
async def sync_roles():
global busy
busy = True
## Your code here
busy = False
...
cron_sync_roles.start()
bot.start()
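For plain discord.py, as in the original question, a common shape is to keep a single looping task that reads the stored channel ids, and to guard against starting it twice. This is only a hedged sketch: it assumes discord.py 2.x with the message content intent enabled, and it reuses the API URL from the question.
import json
from datetime import date, timedelta

import discord
import requests
from discord.ext import tasks

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)

channel_ids = []  # filled by the setup command below


def fetch_yesterdays_word():
    y = date.today() - timedelta(days=1)
    url = "https://najemi.cz/wordle_answers/api/?day={0}&month={1}&year={2}".format(y.day, y.month, y.year)
    return json.loads(requests.get(url).text)["word"]  # blocking call, kept simple as in the question


@tasks.loop(hours=24)
async def daily_wordle():
    for cid in channel_ids:
        channel = client.get_channel(cid)
        if channel is not None:
            await channel.send(fetch_yesterdays_word())


@client.event
async def on_ready():
    if not daily_wordle.is_running():  # guard so reconnects do not start a second loop
        daily_wordle.start()


@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if message.content.startswith("!wordle_setup"):
        channel_ids.append(message.channel.id)
        await message.channel.send("Setting up...")


client.run("TOKEN")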
| How to Make a Looping Discord Bot Task that is Invoked When a Message is Posted in Python | I am trying to write a discord bot that posts yeterday's Wordle solutions but I cannot seem to figure out how to get a task to be invoked by a message and then have that task loop. I tried to use a while loop but then the bot would only work for one server. Trying to use a looping task does not work either. Here is the code
import imp
import json
import requests
import discord
import os
import time
from datetime import date, timedelta
from discord.ext import tasks
client = discord.Client()
#channel_id_exists = False
#channel_id = 0
@client.event
async def on_ready():
print('We have logged in as {0.user}'.format(client))
@client.event
async def on_message(message):
if message.author == client.user:
return
if message.content.startswith('!wordle_setup'):
await message.delete()
await message.channel.send("Setting up...")
channel_id = message.channel.id
await wordle_guess(channel_id).start()
@tasks.loop(seconds=10)
async def wordle_guess(channel_id):
message_channel = client.get_channel(channel_id)
yesterday = date.today() - timedelta(days=1)
year = yesterday.year
month = yesterday.month
day = yesterday.day
word_guess = json.loads(requests.get(
"https://najemi.cz/wordle_answers/api/?day={0}&month={1}&year={2}".format(day, month, year)).text)["word"]
await message_channel.send("Word guess for " + yesterday.isoformat() + " : " + word_guess + '\nhttps://www.merriam-webster.com/dictionary/' + word_guess)
client.run("TOKEN")
Either the bot gets stuck at setup, or the the task doesn't loop in 10 seconds, or the bot only works for one server. I use the setup command to prevent needing to hardcode the channel id into the code.
| [
"I have solved the issue by wrapping the code in a client class and allowing the storage of multiple channel ids. The new code is shown here.\nimport asyncio\nimport imp\nimport json\nimport requests\nimport discord\nimport os\nimport time\nfrom datetime import date, timedelta\nfrom discord.ext import tasks\nimport threading\n\nprint(\"test!\")\n\n\nclass MyClient(discord.Client):\n channel_id = []\n channel_id_exists = False\n\n async def on_ready(self):\n print('We have logged in as {0.user}'.format(self))\n\n async def on_message(self, message):\n if message.author == self.user:\n return\n\n if message.content.startswith('!wordle_setup'):\n await message.delete()\n await message.channel.send(\"Setting up...\")\n self.channel_id.append(message.channel.id)\n self.channel_id_exists = True\n\n @tasks.loop(seconds=3600*24)\n async def wordle_guess(self):\n if self.channel_id_exists:\n\n yesterday = date.today() - timedelta(days=1)\n\n year = yesterday.year\n month = yesterday.month\n day = yesterday.day\n\n for channel_id_iter in self.channel_id:\n message_channel = self.get_channel(channel_id_iter)\n word_guess = json.loads(requests.get(\n \"https://najemi.cz/wordle_answers/api/?day={0}&month={1}&year={2}\".format(day, month, year)).text)[\"word\"]\n await message_channel.send(\"Wordle guess for \" + yesterday.isoformat() + \" : \" + word_guess + '\\nhttps://www.merriam-webster.com/dictionary/' + word_guess)\n\n\nclient = MyClient()\nclient.wordle_guess.start()\nclient.run(\"TOKEN\")\n\n\n",
"You can also launch a task from a command.\nIn my case, using the module interactions, I have a discord function that has to run every hour or when the command is triggered. There may be better solutions but this one worked for me.\nfrom discord.ext import tasks\nimport interactions\n...\nbot = interactions.Client(bot_token,...)\n...\n\n# manually triggered command\[email protected]( name='sync_roles', ...)\nasync def command_sync_roles(ctx: interactions.CommandContext):\n global busy\n if busy: return\n bot._loop.create_task(sync_roles())\n\n# scheduled job\[email protected](hours=1)\nasync def cron_sync_roles():\n global busy\n if busy: return\n bot._loop.create_task(sync_roles())\n\n# I use busy as a global variable to avoid double calls\nbusy = False\nasync def sync_roles():\n global busy\n busy = True\n ## Your code here\n busy = False\n\n...\ncron_sync_roles.start()\nbot.start()\n\n"
] | [
0,
0
] | [] | [] | [
"bots",
"discord",
"discord.py",
"python",
"python_3.x"
] | stackoverflow_0070992844_bots_discord_discord.py_python_python_3.x.txt |
Q:
Automate update of models based on conditions
I have a model
class Session(models.Model):
name = models.CharField(max_length=12)
active = models.BooleanField(default=False)
date = models.DateField()
startTime = models.TimeField()
The active field is set based on the date and start time.
For example:
Suppose that when an object is created, its date is set to tomorrow (at any time). I want to know the process, and what to study, rather than the code, for making this object become active at that particular date and time.
By default the active field is False. Or should I change the way I'm thinking about implementing it?
Thanks
A:
I would advise to use a DateTimeField for the start timestamp, and make active a property, so:
from django.utils import timezone
class Session(models.Model):
name = models.CharField(max_length=12)
start = models.DateTimeField()
@property
def active(self):
return timezone.now() >= self.start
This will thus not store the active field in the database, but simply determine the value when needed.
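A small hedged usage sketch of this property approach (model as defined above; no database query is needed to evaluate the property). Note that because active is computed in Python, it cannot be used directly in a queryset .filter(); filtering would instead compare the start field.
from datetime import timedelta
from django.utils import timezone

s = Session(name="demo", start=timezone.now() + timedelta(days=1))
print(s.active)   # False until the start timestamp has passed

# Filtering "currently active" sessions at the database level:
active_sessions = Session.objects.filter(start__lte=timezone.now())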
| Automate update of models based on conditions | I have a model
class Session(models.Model):
name = models.CharField(max_length=12)
active = models.BooleanField(default=False)
date = models.DateField()
startTime = models.TimeField()
The active field is set based on the date and start time.
For eg -
Suppose during the creation of an object, the date is for tomorrow, and let there be any time, I want to know the process and not the code on how and what to study to make this object active on that particular date and time.
BY default the active field is False, or should I change the way I'm thinking to implement it?
Thanks
| [
"I would advise to use a DateTimeField for the start timestamp, and make active a property, so:\nfrom django.utils import timezone\n\n\nclass Session(models.Model):\n name = models.CharField(max_length=12)\n start = models.DateTimeField()\n \n @property\n def active(self):\n return timezone.now() >= self.start\nThis will thus not store the active field in the database, but simply determine the value when needed.\n"
] | [
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074642066_django_python.txt |
Q:
Pandas KeyError in get_loc when calling entries from dataframe in for loop
I am using a pandas dataframe and, for some reason, trying to access one entry after another in a for loop gives me an error.
Here is my (simplified) code snippet:
df_original = pd.read_csv(csv_dataframe_filename, sep='\t', header=[0, 1], encoding_errors="replace")
df_original.columns = ['A', 'B',
'Count_Number', 'D',
'E', 'F',
'use_first', 'H', 'I']
df_use = df_original
df_use = df_use.drop(df_use[((df_use['use_first']=='no'))].index)
df_use.columns = ['A', 'B',
'Count_Number', 'D',
'E', 'F',
'use_first', 'H', 'I']
c_mag = np.zeros((len(df_use), 1))
x = 0
for i in range(len(df_use)):
print(df_use['Count_Number'][x]) #THIS IS THE LINE THAT IS THE ISSUE
x += 1
print(c_mag)
print(df_use['Count_Number'][x])
The line that is the issue is marked by a comment. If I enter a specific number instead of the variable x, it works (both outside and inside the loop, but inside the loop it then of course always prints the same value, which is not what I want). It also works with df_original instead of df_use (but for my purpose I really need df_use). The printing in the very last line also works (even with the variable x, which at that point has a certain value).
I also entered the column naming for df_use in the middle later on, so I got the issue with and without it in the same way. I tried whether all other parts of the code work and they do, so both dataframes can be printed correctly etc.
Using x instead of i as a variable is also a result of playing around and trying to find a solution, so using i was giving the same result.
The column contains floats, if that matters.
But for the code as it is I get the following error message ("folder of file" is of course just a replacement for the actual file path):
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3361, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 2131, in pandas._libs.hashtable.Int64HashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 2140, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "[folder of file]", line 74, in <module>
print(df_use['Count_Number'][x])
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py", line 942, in __getitem__
return self._get_value(key)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py", line 1051, in _get_value
loc = self.index.get_loc(label)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3363, in get_loc
raise KeyError(key) from err
KeyError: 0
Process finished with exit code 1
I searched for answers and tried out different things, such as checking the spelling etc. But I can not find a solution and do not understand what I am doing wrong.
Does anyone have an idea on how to solve this issue?
Thank you in advance for any helpful comment!
UPDATE: Found a solution after all: using .iloc[x] instead of just [x] solves the issue. I am still curious, though, why that happens - for other variables it worked even without .iloc, so why not in this case? I feel like an answer would help me better understand how things work in Python, so thanks for any hints even though I already got the code working.
What I already tried:
The line that is the issue is marked by a comment. If I enter a specific number instead of the variable x, it works. It also works with df_original instead of df_use (but for my purpose I really need df_use). The printing in the very last line also works (even with variable x that at that point has a certain value).
I also entered the column naming for df_use in the middle later on, so I got the issue with and without it in the same way. I tried whether all other parts of the code work and they do, so both data-frames can be printed correctly etc.
Using x instead of i as a variable is also a result of playing around and trying to find a solution, so using i was giving the same result. I also played around with different ways of how to run the loop, but that did not help either.
I searched for answers and tried out different things, such as checking the spelling etc.
What I am expecting:
The entries of the data-frame columns can be called and used successfully (in this simplified case: can be printed) in the for loop one entry after another. If the printing itself can be done differently, that does not help me (of course I can just print the whole column, that is working), because my actual purpose is to do further calculations with each value. print() is just for now to simplify the issue and try to find a solution.
A:
The issue is that you are manually incrementing i in the for loop, but this is something the for loop already does for you. This causes i to increment by 2 every loop.
Try:
...
c_mag = np.zeros((len(df_use), 1))
for i in range(len(df_use)):
print(df_use['Count_Number'][x]) #THIS IS THE LINE THAT IS THE ISSUE
print(c_mag)
...
A:
This is the answer focusing on the UPDATE section you have provided.
The first thing to understand is the difference between normal indexing of a DataFrame and using iloc. iloc uses positional indexing (just like lists have element positions 0, 1, ..., len(list)-1), whereas normal indexing, [x] in your case, matches the label (here, the row label) against what you entered rather than checking the position.
The traceback tells us that there is no row labelled 0, which is why it produces a KeyError. iloc, on the other hand, uses positional indexing, so it will return the very first value of the column Count_Number (for x=0).
In your case, if you want to use the for loop to print the values of the column in sequence, using iloc is recommended.
As for the last line of your code, it will print the very last value of your column Count_Number, as the very last value of x in for loop is the length of the DataFrame - 1.
For example: A sample DataFrame stored in a variable named df:
a b c
1 1 2 3
2 4 5 6
Now, if I do: df['a'][0]: I get KeyError: 0 exception, the similar one you are getting.
But, if I replace it with iloc, df['a'].iloc[0], the output:
1
Assuming you know how for loops work in python:
for i in range(len(df)):
#print(df['a'][i] will produce KeyError: 0
#This is because, range(2) [length of df] will give
#0, 1
#But we don't have 0 as a row in df
print(df['a'].iloc[i])
The code above will produce:
1
4
I was unable to completely understand the rest of your issue, so if you still have questions, please ask them in a short and specific manner.
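To tie this back to the original snippet, here is a short hedged sketch of two alternatives, using the column names from the question: either re-number the row labels after the drop so plain [x] indexing lines up again, or skip the manual counter and iterate over the column's values directly.
# 1) Re-number the row labels after dropping rows, then label-based [0], [1], ... works again
df_use = df_use.reset_index(drop=True)
print(df_use['Count_Number'][0])

# 2) Or iterate over the values without any counter at all
for value in df_use['Count_Number']:
    print(value)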
| Pandas KeyError in get_loc when calling entries from dataframe in for loop | I am using a pandas data-frame and for some reason when trying to access one entry after another in a for loop it does gives me an error.
Here is my (simplified) code snippet:
df_original = pd.read_csv(csv_dataframe_filename, sep='\t', header=[0, 1], encoding_errors="replace")
df_original.columns = ['A', 'B',
'Count_Number', 'D',
'E', 'F',
'use_first', 'H', 'I']
df_use = df_original
df_use = df_use.drop(df_use[((df_use['use_first']=='no'))].index)
df_use.columns = ['A', 'B',
'Count_Number', 'D',
'E', 'F',
'use_first', 'H', 'I']
c_mag = np.zeros((len(df_use), 1))
x = 0
for i in range(len(df_use)):
print(df_use['Count_Number'][x]) #THIS IS THE LINE THAT IS THE ISSUE
x += 1
print(c_mag)
print(df_use['Count_Number'][x])
The line that is the issue is marked by a comment. If I enter a specific number instead of the variable x, it works (both outside and inside the loop, but inside the loop it of course then prints always the same value each time which is not what I want). It also works with df_original instead of df_use (but for my purpose I really need df_use). The printing in the very last line also works (even with variable x that at that point has a certain value).
I also entered the column naming for df_use in the middle later on, so I got the issue with and without it in the same way. I tried whether all other parts of the code work and they do, so both dataframes can be printed correctly etc.
Using x instead of i as a variable is also a result of playing around and trying to find a solution, so using i was giving the same result.
The column contains floats, if that matters.
But for the code as it is I get the following error message ("folder of file" is of course just a replacement for the actual file path):
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3361, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 2131, in pandas._libs.hashtable.Int64HashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 2140, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "[folder of file]", line 74, in <module>
print(df_use['Count_Number'][x])
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py", line 942, in __getitem__
return self._get_value(key)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py", line 1051, in _get_value
loc = self.index.get_loc(label)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3363, in get_loc
raise KeyError(key) from err
KeyError: 0
Process finished with exit code 1
I searched for answers and tried out different things, such as checking the spelling etc. But I can not find a solution and do not understand what I am doing wrong.
Does anyone have an idea on how to solve this issue?
Thank you in advance for any helpful comment!
UPDATE: Found a solution after all. using .iloc[x] instead of just [x] solves the issue. Now I am still curious though why that happens - for other variables it worked even without the .iloc, so why not in this case? I feel like an answer would help me to better understand how things are working in python, so thanks for any hints even if I got the code working already.
What I already tried:
The line that is the issue is marked by a comment. If I enter a specific number instead of the variable x, it works. It also works with df_original instead of df_use (but for my purpose I really need df_use). The printing in the very last line also works (even with variable x that at that point has a certain value).
I also entered the column naming for df_use in the middle later on, so I got the issue with and without it in the same way. I tried whether all other parts of the code work and they do, so both data-frames can be printed correctly etc.
Using x instead of i as a variable is also a result of playing around and trying to find a solution, so using i was giving the same result. I also played around with different ways of how to run the loop, but that did not help either.
I searched for answers and tried out different things, such as checking the spelling etc.
What I am expecting:
The entries of the data-frame columns can be called and used successfully (in this simplified case: can be printed) in the for loop one entry after another. If the printing itself can be done differently, that does not help me (of course I can just print the whole column, that is working), because my actual purpose is to do further calculations with each value. print() is just for now to simplify the issue and try to find a solution.
| [
"The issue is that you are manually incrementing i in the for loop, but this is something the for loop already does for you. This causes i to increment by 2 every loop.\nTry:\n...\nc_mag = np.zeros((len(df_use), 1))\n\nfor i in range(len(df_use)):\n print(df_use['Count_Number'][x]) #THIS IS THE LINE THAT IS THE ISSUE\n\nprint(c_mag)\n...\n\n",
"This is the answer focusing on the UPDATE section you have provided.\nThe first thing you need to understand between normal indexing of DataFrame and using iloc. iloc basically use position indexing (just like in lists we have positions of elements 0, 1, ... len(list)-1), but the normal indexing, in your case [x] matches the column name (in your case, it is row) with what you have entered rather than checking the position.\nThe traceback tells us that there is no row name 0, that's why it is producing KeyError. In the case of iloc, it uses position indexing, so it will return the very first value of the column Count_Number (for x=0).\nIn your case, if you want to use the for loop to print the values of the column in sequence, using iloc is recommended.\nAs for the last line of your code, it will print the very last value of your column Count_Number, as the very last value of x in for loop is the length of the DataFrame - 1.\nFor example: A sample DataFrame stored in a variable named df:\n a b c\n1 1 2 3\n2 4 5 6\n\nNow, if I do: df['a'][0]: I get KeyError: 0 exception, the similar one you are getting.\nBut, if I replace it with iloc, df['a'].iloc[0], the output:\n1\n\nAssuming you know how for loops work in python:\nfor i in range(len(df)):\n #print(df['a'][i] will produce KeyError: 0\n #This is because, range(2) [length of df] will give\n #0, 1\n #But we don't have 0 as a row in df\n print(df['a'].iloc[i])\n\nThe code above will produce:\n1\n4\n\nI was unable to understand completely the rest of your issue, so if you still have them, please do ask but in short and specific manner.\n"
] | [
0,
0
] | [] | [] | [
"dataframe",
"keyerror",
"pandas",
"python"
] | stackoverflow_0074641988_dataframe_keyerror_pandas_python.txt |
Q:
Python: Value Error: Watchdog Numpy.Load()
Situation:
Live Camera captures numpy arrays and saves with utc timestamp and exposure time (ms) in a folder.
A parallel running script watches the folder, where the images are saved as .npy files:
import numpy as np
from watchdog.observers import Observer #https://pypi.org/project/watchdog/
from watchdog.events import FileSystemEventHandler
os.chdir("/home/pi/testenv/processing") #dir where camera saves images as .npy
class EventHandler(FileSystemEventHandler):
# after one images arrive: functions activates
def on_created(self, event):
img_id = os.path.basename(event.src_path)
print(f"img_id: {img_id}")
temp = np.load(f"{img_id}", allow_pickle=True)
print(f"temp.shape {temp.shape}")
while True:
path = "/home/pi/testenv/processing"
event_handler = EventHandler()
observer = Observer()
observer.schedule(event_handler, ".", recursive=True)
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
my_observer.stop()
my_observer.join()
I get the following error message, it varies in timing (sometimes after 3 frames, sometimes after 10 frames):
img_id: 1669897126_60.npy
temp.shape (1520, 2032)
img_id: 1669897126_106.npy
temp.shape (1520, 2032)
img_id: 1669897126_166.npy
temp.shape (1520, 2032)
img_id: 1669897126_273.npy
temp.shape (1520, 2032)
img_id: 1669897126_470.npy
temp.shape (1520, 2032)
img_id: 1669897126_773.npy
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.9/dist-packages/watchdog/observers/api.py", line 205, in run
self.dispatch_events(self.event_queue)
File "/usr/local/lib/python3.9/dist-packages/watchdog/observers/api.py", line 381, in dispatch_events
handler.dispatch(event)
File "/usr/local/lib/python3.9/dist-packages/watchdog/events.py", line 272, in dispatch
{
File "/home/pi/testenv/watch_npy.py", line 52, in on_created
temp = np.load(f"{img_id}", allow_pickle=True) # type(temp) = narray
File "/home/pi/.local/lib/python3.9/site-packages/numpy/lib/npyio.py", line 413, in load
return format.read_array(fid, allow_pickle=allow_pickle,
File "/home/pi/.local/lib/python3.9/site-packages/numpy/lib/format.py", line 785, in read_array
array.shape = shape
ValueError: cannot reshape array of size 0 into shape (1520,2032)
I suspect that, since the images are captured and saved live, the file isn't yet fully saved while it is already being loaded.
If I check the image where the error message occurs, the shape is always correct (1520,2032).
How can I make sure the numpy array is completely saved before I load it... Or what else is triggering that problem?
Thank you.
EDIT:
class EventHandler(FileSystemEventHandler):
# after one images arrive: functions runs once
def on_created(self, event):
img_id = os.path.basename(event.src_path)
img_utc = os.path.basename(event.src_path).split('_')[0]
img_utc_exp = os.path.basename(event.src_path).split('.')[0]
img_exp = img_utc_exp.split('_')[1]
print(f"img_id: {img_id}")
try:
temp = np.load(f"{img_id}", allow_pickle=True) # type(temp) = narray
print(f"temp.shape {temp.shape}")
except:
print("file did not load completely - trying again#1")
try:
time.sleep(0.1)
temp = np.load(f"{img_id}", allow_pickle=True) # type(temp) = narray
print(f"temp.shape {temp.shape}")
except:
print("file did not load completely - trying again#2")
try:
time.sleep(0.5)
temp = np.load(f"{img_id}", allow_pickle=True) # type(temp) = narray
print(f"temp.shape {temp.shape}")
except:
print("fail")
else:
print("success")
img_id: 1669900308_60.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_106.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_166.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_273.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_470.npy
file did not load completely - trying again#1
temp.shape (1520, 2032)
img_id: 1669900308_773.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_1258.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_2032.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_3291.npy
file did not load completely - trying again#1
temp.shape (1520, 2032)
A:
Looking at the code provided, I think, as you said, the file is read before it has finished being saved. Try the steps below:
Check that the ndarray shape is (1520, 2032); otherwise catch the exception, wait, and call the method again, or else skip the file completely.
Alternatively, maintain a .db/.txt/.pickle of the files being saved, with their timestamps and a read/unread status. This way, even if you hit an error, you can re-read the file later. (Try aiofile and asyncio.sleep(x) for better performance.)
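Another option, if the camera-saving script can be changed (it is not shown in the question, so this is an assumption), is to write each array under a temporary name and atomically rename it, so the watcher never sees a half-written .npy file. With a rename, watchdog reports the final name via on_moved rather than on_created.
import os
import numpy as np

def save_atomically(arr, final_path):
    tmp_path = final_path + ".part"
    with open(tmp_path, "wb") as f:       # passing a file object stops np.save from appending ".npy"
        np.save(f, arr)
    os.replace(tmp_path, final_path)      # atomic rename on the same filesystem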
| Python: Value Error: Watchdog Numpy.Load() | Situation:
Live Camera captures numpy arrays and saves with utc timestamp and exposure time (ms) in a folder.
A parallel running script watches the folder, where the images are saved as .npy files:
import numpy as np
from watchdog.observers import Observer #https://pypi.org/project/watchdog/
from watchdog.events import FileSystemEventHandler
os.chdir("/home/pi/testenv/processing") #dir where camera saves images as .npy
class EventHandler(FileSystemEventHandler):
# after one images arrive: functions activates
def on_created(self, event):
img_id = os.path.basename(event.src_path)
print(f"img_id: {img_id}")
temp = np.load(f"{img_id}", allow_pickle=True)
print(f"temp.shape {temp.shape}")
while True:
path = "/home/pi/testenv/processing"
event_handler = EventHandler()
observer = Observer()
observer.schedule(event_handler, ".", recursive=True)
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
my_observer.stop()
my_observer.join()
I get the following error message, it varies in timing (sometimes after 3 frames, sometimes after 10 frames):
img_id: 1669897126_60.npy
temp.shape (1520, 2032)
img_id: 1669897126_106.npy
temp.shape (1520, 2032)
img_id: 1669897126_166.npy
temp.shape (1520, 2032)
img_id: 1669897126_273.npy
temp.shape (1520, 2032)
img_id: 1669897126_470.npy
temp.shape (1520, 2032)
img_id: 1669897126_773.npy
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.9/dist-packages/watchdog/observers/api.py", line 205, in run
self.dispatch_events(self.event_queue)
File "/usr/local/lib/python3.9/dist-packages/watchdog/observers/api.py", line 381, in dispatch_events
handler.dispatch(event)
File "/usr/local/lib/python3.9/dist-packages/watchdog/events.py", line 272, in dispatch
{
File "/home/pi/testenv/watch_npy.py", line 52, in on_created
temp = np.load(f"{img_id}", allow_pickle=True) # type(temp) = narray
File "/home/pi/.local/lib/python3.9/site-packages/numpy/lib/npyio.py", line 413, in load
return format.read_array(fid, allow_pickle=allow_pickle,
File "/home/pi/.local/lib/python3.9/site-packages/numpy/lib/format.py", line 785, in read_array
array.shape = shape
ValueError: cannot reshape array of size 0 into shape (1520,2032)
I suspect since the images are captured and saved live that the file isnt yet fully saved while it is already loading..
If I check the image where the error message occurs, the shape is always correct (1520,2032).
How can I make sure the numpy array is completely saved before I load it... Or what else is triggering that problem?
Thank you.
EDIT:
class EventHandler(FileSystemEventHandler):
# after one images arrive: functions runs once
def on_created(self, event):
img_id = os.path.basename(event.src_path)
img_utc = os.path.basename(event.src_path).split('_')[0]
img_utc_exp = os.path.basename(event.src_path).split('.')[0]
img_exp = img_utc_exp.split('_')[1]
print(f"img_id: {img_id}")
try:
temp = np.load(f"{img_id}", allow_pickle=True) # type(temp) = narray
print(f"temp.shape {temp.shape}")
except:
print("file did not load completely - trying again#1")
try:
time.sleep(0.1)
temp = np.load(f"{img_id}", allow_pickle=True) # type(temp) = narray
print(f"temp.shape {temp.shape}")
except:
print("file did not load completely - trying again#2")
try:
time.sleep(0.5)
temp = np.load(f"{img_id}", allow_pickle=True) # type(temp) = narray
print(f"temp.shape {temp.shape}")
except:
print("fail")
else:
print("success")
img_id: 1669900308_60.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_106.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_166.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_273.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_470.npy
file did not load completely - trying again#1
temp.shape (1520, 2032)
img_id: 1669900308_773.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_1258.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_2032.npy
temp.shape (1520, 2032)
success
img_id: 1669900308_3291.npy
file did not load completely - trying again#1
temp.shape (1520, 2032)
| [
"Looking at the code provided, I think as you said, file is read before being saved. try below steps:\n\nCheck the ndarray shape to be (1520,2032), else catch the exception, where you wait and call the method again, else avoid the file completely.\n\nElse maintain a .db/.txt/.pickle of the files being saved, with their timestamps and status of whether read/unread. This way even if you catch an error, you can re-read it again. (Try aiofile, asyncio.sleep(x) for better performance.)\n\n\n"
] | [
0
] | [] | [] | [
"load",
"numpy",
"python",
"watchdog"
] | stackoverflow_0074641943_load_numpy_python_watchdog.txt |
Q:
How do I filter a specific number from a list in python?
current code:
list = [1,2,3,4,5]
for i in list:
Dev.step(2)
if i == 2 or 1 or 0:
Dev.turnLeft()
Dev.step(Dev.x-Item[i].x)
Dev.step(Dev.x-15)
Dev.turnRight()
else:
Dev.turnRight()
Dev.step(Item[i].x-Dev.x)
Dev.step(15-Dev.x)
Dev.turnLeft()
How do I create an if statement so that the Dev / Character does something for a specific list element, or how do I filter the list elements? For example, if the number 'i' is equal to
2, 1, or 0, the Dev should turnLeft, so the output of the list is separated from the other numbers.
Example:
[2,1,0] and [4,5]
Create an if statement for a specific list elements / numbers.
A:
The condition below should represent this:
if i in {0, 1, 2}:
#do logic
A:
Your if statement won't work because of i == 2 or 1 or 0. See, when you use or, it checks if each statement is true. So you need to use i == for each number. (If this is a bit confusing, feel free to read more about this here)
You should replace it to:
if i == 2 or i == 1 or i == 0:
Full code:
list = [1,2,3,4,5]
for i in list:
Dev.step(2)
if i == 2 or i == 1 or i == 0:
Dev.turnLeft()
Dev.step(Dev.x-Item[i].x)
Dev.step(Dev.x-15)
Dev.turnRight()
else:
Dev.turnRight()
Dev.step(Item[i].x-Dev.x)
Dev.step(15-Dev.x)
Dev.turnLeft()
You could also shorten this with the in operator, where you place all the values you want to check it against in a list or a tuple, and just check if i is in that list/tuple. So for example:
if i in (0,1,2): # This checks if i is in the tuple, so basically if i is one of those elements in the tuple
Full code:
list = [1,2,3,4,5]
for i in list:
Dev.step(2)
if i in (0,1,2):
Dev.turnLeft()
Dev.step(Dev.x-Item[i].x)
Dev.step(Dev.x-15)
Dev.turnRight()
else:
Dev.turnRight()
Dev.step(Item[i].x-Dev.x)
Dev.step(15-Dev.x)
Dev.turnLeft()
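A tiny demonstration of why the original condition always takes the first branch: i == 2 or 1 or 0 evaluates (i == 2) first, and when that is False the expression falls through to the constant 1, which is truthy.
i = 5
print(i == 2 or 1 or 0)   # 1  -> truthy, so the if-branch runs even though i is 5
print(i in (0, 1, 2))     # False -> the intended membership check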
A:
You have to rewrite i for every condition you are evaluating:
list = [1,2,3,4,5]
for i in list:
Dev.step(2)
if i == 2 or i == 1 or i == 0:
Dev.turnLeft()
Dev.step(Dev.x-Item[i].x)
Dev.step(Dev.x-15)
Dev.turnRight()
else:
Dev.turnRight()
Dev.step(Item[i].x-Dev.x)
Dev.step(15-Dev.x)
Dev.turnLeft()
| How do I filter a specific number from a list in python? | current code:
list = [1,2,3,4,5]
for i in list:
Dev.step(2)
if i == 2 or 1 or 0:
Dev.turnLeft()
Dev.step(Dev.x-Item[i].x)
Dev.step(Dev.x-15)
Dev.turnRight()
else:
Dev.turnRight()
Dev.step(Item[i].x-Dev.x)
Dev.step(15-Dev.x)
Dev.turnLeft()
How do I create an if statement for the Dev / Character do something for a specific list element or filter the list elements. Example I want, if the number of 'i' is equal to
2 or 1 or 0 the Dev will turnLeft. So the output of the list is seperated with the other numbers.
Example:
[2,1,0] and [4,5]
Create an if statement for a specific list elements / numbers.
| [
"The condition below should represent this:\nif i in {0, 1, 2}:\n #do logic\n\n",
"Your if statement won't work because of i == 2 or 1 or 0. See, when you use or, it checks if each statement is true. So you need to use i == for each number. (If this is a bit confusing, feel free to read more about this here)\nYou should replace it to:\nif i == 2 or i == 1 or i == 0:\n\nFull code:\nlist = [1,2,3,4,5]\n\nfor i in list:\n Dev.step(2)\n if i == 2 or i == 1 or i == 0:\n Dev.turnLeft()\n Dev.step(Dev.x-Item[i].x)\n Dev.step(Dev.x-15)\n Dev.turnRight()\n else:\n Dev.turnRight()\n Dev.step(Item[i].x-Dev.x)\n Dev.step(15-Dev.x)\n Dev.turnLeft()\n\nYou could also shorten this with the in operator, where you place all the values you want to check it against in a list or a tuple, and just check if i is in that list/tuple. So for example:\nif i in (0,1,2): # This checks if i is in the tuple, so basically if i is one of those elements in the tuple\n\nFull code:\nlist = [1,2,3,4,5]\n\nfor i in list:\n Dev.step(2)\n if i in (0,1,2):\n Dev.turnLeft()\n Dev.step(Dev.x-Item[i].x)\n Dev.step(Dev.x-15)\n Dev.turnRight()\n else:\n Dev.turnRight()\n Dev.step(Item[i].x-Dev.x)\n Dev.step(15-Dev.x)\n Dev.turnLeft()\n\n",
"You have to rewrite i for every condition you are evaluating:\nlist = [1,2,3,4,5]\n\nfor i in list:\n Dev.step(2)\n if i == 2 or i == 1 or i == 0:\n Dev.turnLeft()\n Dev.step(Dev.x-Item[i].x)\n Dev.step(Dev.x-15)\n Dev.turnRight()\n else:\n Dev.turnRight()\n Dev.step(Item[i].x-Dev.x)\n Dev.step(15-Dev.x)\n Dev.turnLeft()\n\n"
] | [
4,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074642172_python.txt |
Q:
How can I solve a non continuous equation in python?
I have a function:
p = np.arange(0,1,0.01)
EU = (((-(1.3+1-0.5)**(1.2-1.5)/(1-1.5)))**-1)*p + (((-(0.1+1-0.5)**(1.2-1.5)/(1-1.5)))**-1)*(1-p)
And I would like to solve this to get a value of p where EU == 0.5.
My issue is that I checked manually and the p value is 0.4244, but in the function p jumps with steps of 0.01.
Now I'm fine with getting the closest p value where EU is closest to 0.5, but I'm worried the code will not find an answer, since none of the values from the p array will return exactly 0.5 for EU. How could I solve this?
A:
your function is a simple linear function, so it could be simply found by just a simple binary search, however if the funciton was more complex and you needed to really calculate the gradient and runt he optimization you could use torch or jax or any other autograd tools to do that
import torch
def func(p):
return (((-(1.3+1-0.5)**(1.2-1.5)/(1-1.5)))**-1)*p + (((-(0.1+1-0.5)**(1.2-1.5)/(1-1.5)))**-1)*(1-p)
x = torch.tensor([0.1], requires_grad=True)
loss = 1
# 1e-7 example threshold for answer to stop
while loss > 1e-7:
x.retain_grad()
y = func(x)
loss = (y - torch.tensor([0.5], requires_grad=True)) ** 2
# or use MSELoss from nn
loss.backward()
x = x - 0.1 * x.grad # naive gradient decent with lr = 0.1
print(x.item())
0.42235034704208374
| How can I solve a non continuous equation in python? | I have a function:
p = np.arange(0,1,0.01)
EU = (((-(1.3+1-0.5)**(1.2-1.5)/(1-1.5)))**-1)*p + (((-(0.1+1-0.5)**(1.2-1.5)/(1-1.5)))**-1)*(1-p)
And I would like to solve this to get a value of p where EU == 0.5.
My issue is that I checked manually and the p value is 0.4244, but in the function p jumps with steps of 0.01.
Now I'm fine with getting the closest p value where EU is closest to 0.5, but I'm worried the code will not find an answer, since none of the values from the p array will return exactly 0.5 for EU. How could I solve this?
| [
"your function is a simple linear function, so it could be simply found by just a simple binary search, however if the funciton was more complex and you needed to really calculate the gradient and runt he optimization you could use torch or jax or any other autograd tools to do that\nimport torch\ndef func(p):\n return (((-(1.3+1-0.5)**(1.2-1.5)/(1-1.5)))**-1)*p + (((-(0.1+1-0.5)**(1.2-1.5)/(1-1.5)))**-1)*(1-p)\n\n\nx = torch.tensor([0.1], requires_grad=True)\nloss = 1\n# 1e-7 example threshold for answer to stop\nwhile loss > 1e-7:\n x.retain_grad()\n y = func(x)\n loss = (y - torch.tensor([0.5], requires_grad=True)) ** 2 \n # or use MSELoss from nn \n loss.backward()\n x = x - 0.1 * x.grad # naive gradient decent with lr = 0.1\n\nprint(x.item())\n\n0.42235034704208374\n\n"
] | [
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074641607_numpy_python.txt |
Q:
Rest API request not recognizing date parameter in data Python
I'm trying to pull data from a rest api using python requests library.
I can connect fine with the key and can pull other locations on the API however for some reason it isn't picking up the date field
section of code:
Headers = {
'Accept': 'application/json',
'Content-Type': 'application/json',
'cookie' : 'hazelcast.sessionId='+ token
}
url = 'https://uk.calabriocloud.com/api/rest/scheduling/adherence/agent/'
Data = {
'date':'2022-11-25'
}
response = session.get(url, headers=Headers, data=json.dumps(Data))
#df = json.loads(response.text)
print(response.text)
print(response)
Response is as follows
{"errorMessage":"Missing required query parameter: date"}
<Response [400]>
Documentation for the API:
URI: /api/rest/scheduling/adherence/agent
Method: GET
Content Type: Multipart/form-data
Date for which you are requesting
detailed agent adherence data in YYYY-MM-DD format.
Any help appreciated
Have attempted adding the string to the url as ?date=2022-11-25 but got a server error
A:
Required data is "date" not "Date"
Headers = {
'Accept': 'application/json',
'Content-Type': 'application/json',
'cookie' : 'hazelcast.sessionId='+ token
}
url = 'https://uk.calabriocloud.com/api/rest/scheduling/adherence/agent/'
Data = {
'date':'2022-11-25'
}
response = session.get(url, headers=Headers, data=json.dumps(Data))
#df = json.loads(response.text)
print(response.text)
print(response)
try this code
| Rest API request not recognizing date parameter in data Python | I'm trying to pull data from a rest api using python requests library.
I can connect fine with the key and can pull other locations on the API however for some reason it isn't picking up the date field
section of code:
Headers = {
'Accept': 'application/json',
'Content-Type': 'application/json',
'cookie' : 'hazelcast.sessionId='+ token
}
url = 'https://uk.calabriocloud.com/api/rest/scheduling/adherence/agent/'
Data = {
'date':'2022-11-25'
}
response = session.get(url, headers=Headers, data=json.dumps(Data))
#df = json.loads(response.text)
print(response.text)
print(response)
Response is as follows
{"errorMessage":"Missing required query parameter: date"}
<Response [400]>
Documentation for the API:
URI: /api/rest/scheduling/adherence/agent
Method: GET
Content Type: Multipart/form-data
Date for which you are requesting
detailed agent adherence data in YYYY-MM-DD format.
Any help appreciated
Have attempted adding the string to the url as ?date=2022-11-25 but got a server error
| [
"Required data is \"date\" not \"Date\"\nHeaders = {\n'Accept': 'application/json',\n'Content-Type': 'application/json',\n'cookie' : 'hazelcast.sessionId='+ token\n}\nurl = 'https://uk.calabriocloud.com/api/rest/scheduling/adherence/agent/'\nData = {\n 'date':'2022-11-25'\n }\n\n\nresponse = session.get(url, headers=Headers, data=json.dumps(Data))\n#df = json.loads(response.text)\nprint(response.text)\nprint(response)\n\ntry this code\n"
] | [
0
] | [] | [] | [
"api",
"python",
"python_requests",
"rest"
] | stackoverflow_0074642158_api_python_python_requests_rest.txt |
Q:
Showing flash messages in DJango with close button
I want to display flash messages in Django with a close button.
The existing message framework in Django allows you to display messages but does not provide a way to close them.
As an example, web2py provides such flash messages. I am looking for similar functionality in Django.
If it can be done with a few lines of code, that would be great.
I do not want to add any other libraries or framework on top of Django.
Thanks in advance.
A:
I was unaware that such a thing could be solved using Bootstrap!
I did something like this :
{% if messages %}
{% for msg in messages %}
<div class="alert alert-info alert-dismissable">
<button type="button" class="close" data-dismiss="alert" aria-hidden="true">×</button>
{{msg.message}}
</div>
{% endfor %}
{% endif %}
It shows a message like this:
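For completeness, the template above is fed from the view via django.contrib.messages. A hedged view-side sketch (the view and URL names are placeholders) could look like this:
from django.contrib import messages
from django.shortcuts import redirect

def save_settings(request):
    # ... handle the posted form here ...
    messages.info(request, "Settings saved.")   # rendered by the {% for msg in messages %} loop above
    return redirect("settings")                 # "settings" is a placeholder URL name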
A:
in html template add this jquery timeout function
<script>
$(document).ready(function(){
window.setTimeout(function() {
$(".alert").fadeTo(500, 0).slideUp(500, function(){
$(this).remove();
});
}, 5000);
});
</script>
A:
Q. The dismiss button is not working in an alert message in Django/Python.
Ans:
data-bs-dismiss="alert" is the change in Bootstrap 5: in other Bootstrap versions the attribute is data-dismiss="alert", but in Bootstrap 5 bs is added, so write it like this:
data-bs-dismiss="alert"
A:
If you are using bootstrap 5 use this.
<div class="alert alert-warning alert-dismissible fade show" role="alert">
<strong>Holy guacamole!</strong> You should check in on some of those fields below.
<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
</div>
In older versions of bootstrap if you have
data-dismiss="alert"
change this to
data-bs-dismiss="alert"
for more docs visit bootstrap 5 Dismissing
A:
If you are using Materialize CSS you can take the help of the chip component:
<div class="chip" style="display: contents;">
<div class="card-panel red darken-1 ">
<i class="material-icons white-text">info</i>
<span class="white-text text-darken-2" style="vertical-align: super; font-size: large;">
{{message}}
</span>
<i class="close material-icons white-text right">close</i>
</div>
</div>
A:
As @jeevu94 pointed out correctly, I would suggest using it in a more DRY way that fits each message's setup.
{% if messages %}
{% for message in messages %}
<div class="container-fluid p-0">
<div class="alert {{ message.tags }} alert-dismissible" role="alert" >
<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="x"></button>
{{ message }}
</div>
</div>
{% endfor %}
{% endif %}
| Showing flash messages in DJango with close button | I want to display flash messages in Django with the close button.
Existing message framework in Django allows to display messages and does not allow to close it.
As an example, web2py provides such flash messages. I am looking for similar functionality in Django.
If it can be done with few lines of code , it would be great.
I do not want to add any other libraries or framework on top of Django.
Thanks in advance.
| [
"I was unaware that such thing can be solved using boot-strap !\nI did something like this :\n{% if messages %}\n {% for msg in messages %}\n <div class=\"alert alert-info alert-dismissable\">\n <button type=\"button\" class=\"close\" data-dismiss=\"alert\" aria-hidden=\"true\">×</button>\n {{msg.message}}\n </div>\n {% endfor %}\n{% endif %}\n\nIt shows message like :\n\n",
"in html template add this jquery timeout function\n<script>\n $(document).ready(function(){\nwindow.setTimeout(function() {\n $(\".alert\").fadeTo(500, 0).slideUp(500, function(){\n $(this).remove();\n });\n}, 5000);\n});\n</script>\n\n",
"Q. Dismiss Button is not working in alert message in django python\nAns:\ndata-bs-dismiss=\"alert\" ,this is the change in Bootstrap 5 i.e. in another bootstrap version there is data-dismiss=\"alert\" , but in bootstrap 5 there is bs added so add bs like this\ndata-bs-dismiss=\"alert\"\n",
"If you are using bootstrap 5 use this.\n<div class=\"alert alert-warning alert-dismissible fade show\" role=\"alert\">\n <strong>Holy guacamole!</strong> You should check in on some of those fields below.\n <button type=\"button\" class=\"btn-close\" data-bs-dismiss=\"alert\" aria-label=\"Close\"></button>\n</div>\n\nIn older versions of bootstrap if you have\ndata-dismiss=\"alert\"\nchange this to\ndata-bs-dismiss=\"alert\"\n\nfor more docs visit bootstrap 5 Dismissing\n",
"If you are using materializecss you can take the help of chip\n <div class=\"chip\" style=\"display: contents;\">\n <div class=\"card-panel red darken-1 \">\n <i class=\"material-icons white-text\">info</i>\n <span class=\"white-text text-darken-2\" style=\"vertical-align: super; font-size: large;\">\n {{message}}\n </span>\n <i class=\"close material-icons white-text right\">close</i>\n </div>\n</div>\n\n\n",
"As @jeevu94 pointed out correctly, I would suggest using it in a more DRY way that fits each message's setup.\n{% if messages %}\n\n{% for message in messages %}\n\n<div class=\"container-fluid p-0\">\n <div class=\"alert {{ message.tags }} alert-dismissible\" role=\"alert\" >\n <button type=\"button\" class=\"btn-close\" data-bs-dismiss=\"alert\" aria-label=\"x\"></button>\n {{ message }}\n </div>\n</div>\n\n{% endfor %}\n\n{% endif %}\n\n"
] | [
11,
4,
2,
1,
0,
0
] | [] | [] | [
"django",
"django_views",
"flash_message",
"python",
"web2py"
] | stackoverflow_0043560532_django_django_views_flash_message_python_web2py.txt |
Q:
Flask Bootstrap Alerts and Close Button not displaying correctly
I am trying to get bootstraps alerts working but I think I missing something. Here is a cut down version of my code that displays the issues...
Python File
from flask import Flask, render_template
from flask_bootstrap import Bootstrap
app = Flask(__name__)
Bootstrap(app)
@app.route("/", methods=['GET', 'POST'])
def settings() -> None:
return render_template('settings.html')
if __name__ == "__main__":
app.run(debug=True)
HTML Template
{% extends "bootstrap/base.html" %}
{% block content %}
<div class="alert alert-warning alert-dismissible fade show" role="alert">
<strong>Holy guacamole!</strong> You should check in on some of those fields below.
<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close">
</button>
</div>
{% endblock %}
When running this the alert doesn't display. When the fade class is removed it does display. I strongly suspect that the alert immediately fades upon page load but I can't figure out why.
The second issue is that the btn-close class doesn't display a lovely X but a square grey button. Why doesn't this load the bootstrap image? This is also the case when I remove the <button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"> from the alert and have it display in the content in its own right.
A:
The cause of the issues was the use of an older Flask-Bootstrap, which only supports Bootstrap 2 or 3. The components that I wanted to use were not available in those versions.
I found a couple of options:
Use a native Bootstrap setup:
Follow the Get Started guide for setting up Bootstrap on the site. For convenience, copy the Bootstrap CSS and JS section into a file called base.html and extend it from all the required html pages. For example...
base.html
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title></title>
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-rbsA2VBKQhggwzxH7pPCaAqO46MgnOM80zW1RWuH61DGLwZJEdK2Kadq2F9CUG65" crossorigin="anonymous">
</head>
<body>
<br/>
<div class="container">
{% block content %}
{% endblock %}
</div>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js" integrity="sha384-kenU1KFdBIe4zVF0s0G1M5b4hcpxyD9F7jL+jjXkk+Q2h455rYXK/7HAuoJl+0I4" crossorigin="anonymous"></script>
</body>
</html>
index.html
{% extends "base.html" %}
{% block content %}
{% endblock %}
Use Bootstrap-Flask
Bootstrap-Flask - Bootstrap 4 & 5 helper for your Flask projects.
Migrate from Flask-Bootstrap
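A hedged sketch of the Bootstrap-Flask option, assuming Bootstrap-Flask 2.x (which ships a Bootstrap5 class). It shares the flask_bootstrap import path with the older Flask-Bootstrap package, so only one of the two should be installed at a time.
from flask import Flask, render_template
from flask_bootstrap import Bootstrap5   # provided by the Bootstrap-Flask package

app = Flask(__name__)
Bootstrap5(app)

@app.route("/", methods=['GET', 'POST'])
def settings():
    # the template can pull in assets with the Bootstrap-Flask helpers
    # {{ bootstrap.load_css() }} and {{ bootstrap.load_js() }}
    return render_template('settings.html')

if __name__ == "__main__":
    app.run(debug=True)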
| Flask Bootstrap Alerts and Close Button not displaying correctly | I am trying to get bootstraps alerts working but I think I missing something. Here is a cut down version of my code that displays the issues...
Python File
from flask import Flask, render_template
from flask_bootstrap import Bootstrap
app = Flask(__name__)
Bootstrap(app)
@app.route("/", methods=['GET', 'POST'])
def settings() -> None:
return render_template('settings.html')
if __name__ == "__main__":
app.run(debug=True)
HTML Template
{% extends "bootstrap/base.html" %}
{% block content %}
<div class="alert alert-warning alert-dismissible fade show" role="alert">
<strong>Holy guacamole!</strong> You should check in on some of those fields below.
<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close">
</button>
</div>
{% endblock %}
When running this the alert doesn't display. When the fade class is removed it does display. I strongly suspect that the alert immediately fades upon page load but I can't figure out why.
The second issue is that the btn-close class doesn't display a lovely X but a square grey button. Why doesn't this load the bootstrap image? This is also the case when I remove the <button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"> from the alert and have it display in the content in its own right.
| [
"The cause of the issues was the use of an older Flask-Bootstrap which only support Bootstrap 2 or 3. The components that I wanted to use were not available in these versions.\nI found a couple of options:\nUse a native Bootstrap setup:\n\nFollow the Get Started in setting up Bootstrap on the site. For convivence copy the Bootstrap’s CSS and JS section into a file called base.html and add it to all the required html pages. For example...\n\nbase.html\n<!doctype html>\n<html lang=\"en\">\n\n <head>\n <meta charset=\"utf-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <title></title>\n <link href=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css\" rel=\"stylesheet\" integrity=\"sha384-rbsA2VBKQhggwzxH7pPCaAqO46MgnOM80zW1RWuH61DGLwZJEdK2Kadq2F9CUG65\" crossorigin=\"anonymous\">\n </head>\n\n <body>\n\n <br/>\n <div class=\"container\">\n {% block content %}\n {% endblock %}\n </div>\n\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js\" integrity=\"sha384-kenU1KFdBIe4zVF0s0G1M5b4hcpxyD9F7jL+jjXkk+Q2h455rYXK/7HAuoJl+0I4\" crossorigin=\"anonymous\"></script>\n </body>\n</html>\n\nindex.html\n{% extends \"base.html\" %}\n\n{% block content %}\n{% endblock %}\n\nUse Bootstrap-Flask\n\nBootstrap-Flask - Bootstrap 4 & 5 helper for your Flask projects.\nMigrate from Flask-Bootstrap\n\n"
] | [
0
] | [] | [] | [
"bootstrap_5",
"flask",
"python"
] | stackoverflow_0074641054_bootstrap_5_flask_python.txt |
Q:
How to delete a k8s deployment using k8s python client?
Is there a way to delete a k8s deployment using python?
the official k8s python client lacks this feature, you can only delete pods & services
I tried doing it using [subprocess] but I'd like to explore other options
def delete_deployment(deployment_name, name_space):
subprocess.run(f'kubectl delete deployment {deployment_name} -n {name_space}',shell=True)
A:
You can delete a deployment by the following:
k8s_apps_v1 = client.AppsV1Api()
k8s_apps_v1.delete_namespaced_deployment('dep_name','name_space')
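A slightly fuller sketch, assuming the official kubernetes package and a valid kubeconfig (the deployment and namespace names are placeholders):

from kubernetes import client, config

def delete_deployment(deployment_name: str, name_space: str) -> None:
    # Load credentials from ~/.kube/config; use config.load_incluster_config() inside a pod
    config.load_kube_config()
    apps_v1 = client.AppsV1Api()
    apps_v1.delete_namespaced_deployment(
        name=deployment_name,
        namespace=name_space,
        body=client.V1DeleteOptions(propagation_policy="Foreground"),
    )

delete_deployment("my-deployment", "default")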
| How to delete a k8s deployment using k8s python client? | Is there a way to delete a k8s deployment using python?
the official k8s python client lacks this feature, you can only delete pods & services
I tried doing it using [subprocess] but I'd like to explore other options
def delete_deployment(deployment_name, name_space):
subprocess.run(f'kubectl delete deployment {deployment_name} -n {name_space}',shell=True)
| [
"You can delete a deployment by the following:\n k8s_apps_v1 = client.AppsV1Api()\n k8s_apps_v1.delete_namespaced_deployment('dep_name','name_space')\n\n"
] | [
0
] | [] | [] | [
"kubernetes",
"kubernetes_python_client",
"python"
] | stackoverflow_0074641038_kubernetes_kubernetes_python_client_python.txt |
Q:
How does closures see context variables into the stack?
I would like to understand how the stack frame pushed by calling b() can access the value of x that lives in the stack frame pushed by a().
Is there a pointer from b()'s frame to a()'s frame? Or does the runtime copy the value of x as a local variable in the b() frame? Or is there another mechanism under the hood?
This example is in Python, but is there a universal mechanism for this, or do different languages use different mechanisms?
>>> def a():
... x = 5
... def b():
... return x + 2
... return b()
...
>>> a()
7
A:
In CPython (the implementation most people use) b itself contains a reference to the value. Consider this modification to your function:
def a():
x = 5
def b():
return x + 2
# b.__closure__[0] corresponds to x
print(b.__closure__[0].cell_contents)
x = 9
print(b.__closure__[0].cell_contents)
When you call a, note that the value of the cell content changes with the local variable x.
The __closure__ attribute is a tuple of cell objects, one per variable that b closes over. The cell object basically has one interesting attribute, cell_contents, that acts like a reference to the variable it represents. (You can even assign to the cell_contents attribute to change the value of the variable, but I can't imagine when that would be a good idea.)
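A small sketch that makes the cell mechanism visible: if b is returned instead of called, it keeps working even after a()'s frame is gone, because the cell (not a()'s stack frame) owns the value:

def a():
    x = 5
    def b():
        return x + 2
    return b  # return the function itself instead of calling it

f = a()       # a()'s frame no longer exists at this point
print(f())    # 7 -- x is still reachable through the closure cell
print(f.__closure__[0].cell_contents)  # 5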
| How does closures see context variables into the stack? | I would like to understand how the stack frame pushed by calling b() can access the value of x that lives in the stack frame pushed by a().
Is there a pointer from b() frame to a() frame? Or does the runtime copy the value of x as a local variable in the b() frame? Or is there another machanism under the hood?
This example is in python, but is there a universal mechanism to solve that or different languages use different mechanisms?
>>> def a():
... x = 5
... def b():
... return x + 2
... return b()
...
>>> a()
7
| [
"In CPython (the implementation most people use) b itself contains a reference to the value. Consider this modification to your function:\ndef a():\n x = 5\n def b():\n return x + 2\n\n # b.__closure__[0] corresponds to x\n print(b.__closure__[0].cell_contents)\n x = 9\n print(b.__closure__[0].cell_contents)\n\nWhen you call a, note that the value of the cell content changes with the local variable x.\nThe __closure__ attribute is a tuple of cell objects, one per variable that b closes over. The cell object basically has one interesting attribute, cell_contents, that acts like a reference to the variable it represents. (You can even assign to the cell_contents attribute to change the value of the variable, but I can't imagine when that would be a good idea.)\n"
] | [
1
] | [] | [] | [
"closures",
"programming_languages",
"python",
"stack"
] | stackoverflow_0074642004_closures_programming_languages_python_stack.txt |
Q:
tkinter wont let me change the bg color
I have this problem that tkinter won't let me change the bg color. I tried using the normal "white" or "blue", etc. I tried hex codes and RGB codes. Nothing works and I am going crazy. The color also does not change when using widgets, so it is always a dark screen, but I know the widgets are created since I can see the cursor change or a button show up when I am using a picture.
Even old tkinter programs won't work anymore since the screen stays black.
import tkinter as tk
window = tk.Tk()
window.config(bg="white")
window.mainloop()
So does anyone know what I am doing wrong or is it a problem with the IDE? I am going crazy here.
A:
macOS uses the keyword highlightbackground.
Change this:
window.config(bg="white")
to:
window.config(highlightbackground="white")
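Put together with the original code, a minimal sketch of the workaround looks like this (keeping bg as well does no harm on other platforms):

import tkinter as tk

window = tk.Tk()
# Per the answer above: on macOS, use highlightbackground to change the visible color
window.config(highlightbackground="white", bg="white")
window.mainloop()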
| tkinter wont let me change the bg color | I have this problem that tkinter wont let me change the bg color. I tried using the normal "white" or "blue", etc. I tried Hex-Code and RGB-Code. Nothing works and I am going crazy. The color also does not change when using widgets, so it is alwasy a dark screen, but I creates the widgets since I can see the cursor change or button show up when I am using a picture.
Even old tkinter programs wont work anymore since the screen stays black.
import tkinter as tk
window = tk.Tk()
window.config(bg="white")
window.mainloop()
So does anyone know what I am doing wrong or is it a problem with the IDE? I am going crazy here.
| [
"Mac OS used keyword highlightbackground.\nChange this:\nwindow.config(bg=\"white\")\n\nto:\nwindow.config(highlightbackground=\"white\")\n\n"
] | [
0
] | [] | [] | [
"colors",
"python",
"tkinter"
] | stackoverflow_0074632246_colors_python_tkinter.txt |
Q:
Python using gattlib for BLE Scanning on Windows 10
I want to create a BLE Connection between my Laptop (Windows 10) and a BLE Device which will be the Master.
I installed Bluez and I can detect Bluetooth devices like my Smartphone but no device that only supports BLE. I want to download gattlib with pip install gattlib but I got an OSError: Not supported OS which brings me to the conclusion that I can't do it this way on Windows 10. Is there any other possibility than installing Linux on my Laptop?
A:
gattlib controls BlueZ via D-Bus, and BlueZ is Linux-only, so gattlib can't be used on Windows.
gattlib is basically a wrapper for the D-Bus API of BlueZ in Python.
Use a VM instead and attach your Bluetooth adapter to the VM in order to control it with BlueZ.
WSL doesn't support BlueZ right now.
Windows 11 and Android - BluetoothAdapter return null
A:
The operating system you are using is not compatible. Gattlib only works with Linux because it relies on BlueZ, which works on Linux only. What you can do is use WSL on Windows.
A:
As far as I know, gattlib is designed for Linux and Debian systems, so you have to use another library. On the other hand, if you are using a Python version of 3.9 or greater, you can directly use the Bluetooth RFCOMM support for Windows 10.
A:
I think you can find a solution by using "vmware" or "virtualbox" programs.
Output that I tested for you:
| Python using gattlib for BLE Scanning on Windows 10 | I want to create a BLE Connection between my Laptop (Windows 10) and a BLE Device which will be the Master.
I installed Bluez and I can detect Bluetooth devices like my Smartphone but no device that only supports BLE. I want to download gattlib with pip install gattlib but I got an OSError: Not supported OS which brings me to the conclusion that I can't do it this way on Windows 10. Is there any other possibility than installing Linux on my Laptop?
| [
"gattlib is controlling bluez via dbus, bluez is linux only, so gattlib can't be used on windows.\ngattlib is basically wrapper for the dbus api of bluez in python.\nuse vm instead and mount your bt adapter to the vm in order to control it with bluez.\nwsl isn't supporting bluez right now\nWindows 11 and Android - BluetoothAdapter return null\n",
"The operating you are using is not compatible. Gattlib only work with linux due to it relying on bluez whcih work on linux only. What you can do is using WSL on windows.\n",
"As far as I know, gattlib is designed for linux and debian system so you can use another one. Another side, if you are using a Python version greater than 3.9, you can directly Bluetooth RFCOMM Support for Windows 10.\n",
"I think you can find a solution by using \"vmware\" or \"virtualbox\" programs.\nOutput that I tested for you:\n\n"
] | [
2,
0,
0,
0
] | [] | [] | [
"bluetooth_lowenergy",
"python",
"windows_10"
] | stackoverflow_0049238744_bluetooth_lowenergy_python_windows_10.txt |
Q:
plotting very long scientific numbers in python with pandas?
I need to manipulate very long scientific numbers in python and the pandas dataframe format seems convenient... except I can't plot. For instance, the following baby code:
import pandas as pd
import matplotlib.pyplot as plt
import decimal as dec
obs={}
dec.getcontext().prec=10
obs['x']=[dec.Decimal(1),dec.Decimal(2),dec.Decimal(3)]
obs['y']=[dec.Decimal(1),dec.Decimal(2),dec.Decimal(3)]
df=pd.DataFrame(data=obs)
df.plot(x='x',y='y',marker='*')
plt.show()
gives: TypeError: no numeric data to plot. How should I get my plot ?
A:
decimal.Decimal is an object, not a numeric value; you have to do the conversion before plotting!
import pandas as pd
import matplotlib.pyplot as plt
import decimal as dec
obs={}
dec.getcontext().prec=10
obs['x']=[dec.Decimal(1),dec.Decimal(2),dec.Decimal(3)]
obs['y']=[dec.Decimal(1),dec.Decimal(2),dec.Decimal(3)]
df=pd.DataFrame(obs)
print(df.dtypes)
df['x'] = df['x'].astype(int)
df['y'] = df['y'].astype(int)
print(df.dtypes)
df.plot(x ='x',y='y',marker='*')
plt.show()
Output
x object
y object
dtype: object
x int32
y int32
dtype: object
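Note that casting to int throws away everything after the decimal point; if the "very long scientific numbers" have fractional parts, a rough variant is to cast to float instead (at the cost of float precision):

import pandas as pd
import matplotlib.pyplot as plt
import decimal as dec

obs = {}
dec.getcontext().prec = 10
obs['x'] = [dec.Decimal("1.5"), dec.Decimal("2.5"), dec.Decimal("3.5")]
obs['y'] = [dec.Decimal("1.25"), dec.Decimal("2.25"), dec.Decimal("3.25")]
df = pd.DataFrame(obs).astype(float)  # float keeps the fractional part
df.plot(x='x', y='y', marker='*')
plt.show()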
| plotting very long scientific numbers in python with pandas? | I need to manipulate very long scientific numbers in python and the pandas dataframe format seems convenient... except I can't plot. For instance, the following baby code:
import pandas as pd
import matplotlib.pyplot as plt
import decimal as dec
obs={}
dec.getcontext().prec=10
obs['x']=[dec.Decimal(1),dec.Decimal(2),dec.Decimal(3)]
obs['y']=[dec.Decimal(1),dec.Decimal(2),dec.Decimal(3)]
df=pd.DataFrame(data=obs)
df.plot(x='x',y='y',marker='*')
plt.show()
gives: TypeError: no numeric data to plot. How should I get my plot ?
| [
"decimal.decimal is an object not a numeric value, you have to do the conversion before plotting!\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport decimal as dec\nobs={}\ndec.getcontext().prec=10\nobs['x']=[dec.Decimal(1),dec.Decimal(2),dec.Decimal(3)]\nobs['y']=[dec.Decimal(1),dec.Decimal(2),dec.Decimal(3)]\ndf=pd.DataFrame(obs)\nprint(df.dtypes)\ndf['x'] = df['x'].astype(int)\ndf['y'] = df['y'].astype(int)\nprint(df.dtypes)\ndf.plot(x ='x',y='y',marker='*')\nplt.show()\n\nOutpout\nx object\ny object\ndtype: object\n\n\nx int32\ny int32\ndtype: object\n\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074642136_pandas_python.txt |
Q:
Python memory management and garbage collection
I am currently working on an example to understand Python memory management and garbage collection. In my example, I create a variable (x = 10), collect its id, delete it, initiate garbage collector and check if I can still access object in heap by its id (using ctypes).
I think it would return 0 or an error but surprisingly I can still access the object and I don't understand why. Could you help me to understand this case?
A big thank you in advance,
import ctypes
import gc
x = 10
id_1st_obj = id(x)
del(x)
gc.collect()
print(ctypes.cast(id_1st_obj, ctypes.py_object).value)
A:
Few things come into play, but here's a simple counterexample:
import ctypes
import gc
x = 1
for _ in range(10):
x += x
id_1st_obj = id(x)
del x
gc.collect()
print(ctypes.cast(id_1st_obj, ctypes.py_object).value)
If you comment del x, you get 1024. If you don't, we get some "random" number.
The reason you're seeing the old value is simple - as @jonrsharpe said - small integers ([-5, 256]) are interned, meaning Python keeps only a single copy of them, and whenever you reference an integer of that value, it gets pointed at this single copy of the object. So even if you don't have any explicit references to 10 in your code, Python still stores that single copy for interning.
Same might happen with "constant" variables, meaning ones that can be optimised away in the bytecode. That's why even if you change x = 10 to x = 1000 in your code you might still see the same behaviour (might, as this will not happen for e.g. in interactive session, as the step with optimising constants does not happen there).
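A rough way to see the interning at work (this is a CPython implementation detail, so don't rely on it):

a = 10
b = 10
print(a is b)  # True: both names point at the single cached 10 object

c = int("10000000000")  # built at runtime so the compiler can't fold/merge them
d = int("10000000000")
print(c is d)  # False: large integers are separate objects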
| Python memory management and garbage collection | I am currently working on an example to understand Python memory management and garbage collection. In my example, I create a variable (x = 10), collect its id, delete it, initiate garbage collector and check if I can still access object in heap by its id (using ctypes).
I think it would return 0 or an error but surprisingly I can still access the object and I don't understand why. Could you help me to understand this case?
A big thank you in advance,
import ctypes
x = 10
id_1st_obj = id(x)
del(x)
gc.collect()
print(ctypes.cast(id_1st_obj, ctypes.py_object).value)
| [
"Few things come into play, but here's a simple counterexample:\nimport ctypes\nimport gc\nx = 1\nfor _ in range(10):\n x += x\nid_1st_obj = id(x)\n\ndel x\ngc.collect()\n\nprint(ctypes.cast(id_1st_obj, ctypes.py_object).value)\n\nIf you comment del x, you get 1024. If you don't, we get some \"random\" number.\nThe reason you're seeing the old value is simple - as @jonrsharpe said - small integers ([-4, 255]) are interned, meaning Python keeps only a single copy of them, and whenever you would reference an integer of that value, it gets pointed at this single copy of the object. So even if you don't have any explicit references to 10 in your code, Python still stores that single copy for interning.\nSame might happen with \"constant\" variables, meaning ones that can be optimised away in the bytecode. That's why even if you change x = 10 to x = 1000 in your code you might still see the same behaviour (might, as this will not happen for e.g. in interactive session, as the step with optimising constants does not happen there).\n"
] | [
0
] | [] | [] | [
"garbage_collection",
"heap_memory",
"memory_management",
"python"
] | stackoverflow_0074642256_garbage_collection_heap_memory_memory_management_python.txt |
Q:
ModuleNotFoundError while executing a bazel script
I have a python file main.py in which I am importing github package [import github]
I have created a build file as follows:
py_binary(
name = "main",
srcs = ["main.py"],
visibility = ["//visibility:public"]
)
When I run this through the Bazel command bazel run :main, I am getting ModuleNotFoundError: No module named 'github'.
Can someone please tell me how to include Python libraries inside the BUILD file and run it through Bazel?
Note: I have already installed github (version 1.2.7) through pip and it shows up in pip list at the command prompt.
A:
You can build a basic Bazel Python example using this link for a Bazel Python code build. This is a good reference to start from.
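In case it helps, here is a minimal sketch of what usually has to be added so the sandboxed py_binary can see a pip package. This assumes a rules_python version that provides pip_parse; the repository name pip_deps and the requirements.txt path are placeholders, and the requirement name mirrors the github package the question installed with pip:

# --- WORKSPACE (sketch) ---
load("@rules_python//python:pip.bzl", "pip_parse")

pip_parse(
    name = "pip_deps",
    requirements_lock = "//:requirements.txt",  # contains the same package you pip-installed, e.g. github==1.2.7
)

load("@pip_deps//:requirements.bzl", "install_deps")
install_deps()

# --- BUILD (sketch) ---
load("@pip_deps//:requirements.bzl", "requirement")

py_binary(
    name = "main",
    srcs = ["main.py"],
    deps = [requirement("github")],  # declares the pip dependency to Bazel
    visibility = ["//visibility:public"],
)

Installing the package with plain pip is generally not enough on its own, because Bazel resolves imports from the declared deps rather than from your global site-packages.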
| ModuleNotFoundError while executing a bazel script | I have a python file main.py in which I am importing github package [import github]
I have created a build file as follows:
py_binary(
name = "main",
srcs = ["main.py"],
visibility = ["//visibility:public"]
)
When I run this through Bazel command, bazel run: main, I am getting ModuleNotFoundError: No module named 'github'.
Can someone please tell how to include python libraries inside build file and run through Bazel?
Note: I have already installed github (version: 1.2.7) through Python and it is getting updated using pip list from command prompt
| [
"You can build the python basic bazel example using this link for Bazel Python code build. This. is a good reference to start a one\n"
] | [
0
] | [] | [] | [
"bazel",
"build",
"modulenotfounderror",
"python"
] | stackoverflow_0074342069_bazel_build_modulenotfounderror_python.txt |
Q:
How can I make a Venn diagram in Python of two lists of values 0 and 1 and get the intersection where both values are 1?
I have two lists of the same length of 0s and 1s: e.g. a = [0,0,0,1,1,0,1], b = [0,1,0,1,0,1,1] and want to get a Venn diagram which has the intersection as the sum of values which are both 1, so in this case 2 values would be 1 in the same position.
How can I achieve this?
Thanks in advance!
I tried something like venn2(subsets = df['a']==1, df['b']==1, set_labels = ('a', 'b'), alpha = 0.5)
but it didn't work.
A:
You need to implement the logic to count the elements in each subset and pass the result in a tuple as the subsets parameter.
import pandas as pd
from matplotlib_venn import venn2
from matplotlib import pyplot as plt
data = { "A": [0,0,0,1,1,0,1], "B": [0,1,0,1,0,1,1] }
df = pd.DataFrame(data)
# Create tuple to store number of elements in each subset
subsets_data = (len(df[df['A']==1]),
len(df[df['B']==1]),
len(df[(df['A']==1) & (df['B']==1)]))
venn2(subsets=subsets_data, set_labels = ('A', 'B'), alpha = 0.5)
| How can I make a Venn diagram in Python of two lists of values 0 and 1 and get the intersection where both values are 1? | I have two lists of the same length of 0s and 1s: e.g. a = [0,0,0,1,1,0,1], b = [0,1,0,1,0,1,1] and want to get a Venn diagram which has the intersection as the sum of values which are both 1, so in this case 2 values would be 1 in the same position.
How can I achieve this?
Thanks in advance!
I tried something like venn2(subsets = df['a']==1, df['b']==1, set_labels = ('a', 'b'), alpha = 0.5)
but it didn't work.
| [
"You need to implement the logic to count the elements in each subset and pass the result in a tuple as the subsets parameter.\nimport pandas as pd\nfrom matplotlib_venn import venn2\nfrom matplotlib import pyplot as plt\n\ndata = { \"A\": [0,0,0,1,1,0,1], \"B\": [0,1,0,1,0,1,1] }\n\ndf = pd.DataFrame(data)\n\n# Create tuple to store number of elements in each subset\nsubsets_data = (len(df[df['A']==1]), \n len(df[df['B']==1]), \n len(df[(df['A']==1) & (df['B']==1)]))\n\nvenn2(subsets=subsets_data, set_labels = ('A', 'B'), alpha = 0.5)\n\n"
] | [
0
] | [] | [] | [
"python",
"venn"
] | stackoverflow_0074639775_python_venn.txt |
Q:
How to overwrite specific page of PDF with another page of another PDF with Python's PyPDF2
I want to overwrite the first page of a PDF with another page of another PDF using the PyPDF2 library in Python.
For more detail, I have two separate PDFs (let's call them overwritten.pdf and other.pdf) and I want to replace the first (it doesn't have to be the first) page of overwritten.pdf with a specific page of other.pdf so the first page of overwritten.pdf is that specific page of other.pdf.
A:
I don't know if you can literally "replace a page" with PyPDF2. I would use the merge function. Example from the PyPDF2 web site:
from PyPDF2 import PdfMerger
merger = PdfMerger()
input1 = open("document1.pdf", "rb")
input2 = open("document2.pdf", "rb")
input3 = open("document3.pdf", "rb")
# add the first 3 pages of input1 document to output
merger.append(fileobj=input1, pages=(0, 3))
# insert the first page of input2 into the output beginning after the second page
merger.merge(position=2, fileobj=input2, pages=(0, 1))
# append entire input3 document to the end of the output document
merger.append(input3)
# Write to an output PDF document
output = open("document-output.pdf", "wb")
merger.write(output)
# Close File Descriptors
merger.close()
output.close()
A:
You can try merging the PDFs with the PdfMerger tool from PyPDF2
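If you do want to literally rebuild the file with one page swapped out, a rough sketch with the newer PyPDF2 3.x reader/writer classes (older releases call them PdfFileReader/PdfFileWriter) could look like this, using the file names from the question:

from PyPDF2 import PdfReader, PdfWriter

overwritten = PdfReader("overwritten.pdf")
other = PdfReader("other.pdf")

writer = PdfWriter()
writer.add_page(other.pages[0])        # the replacement page taken from other.pdf
for page in overwritten.pages[1:]:     # keep every page after the first one
    writer.add_page(page)

with open("overwritten_new.pdf", "wb") as output_file:
    writer.write(output_file)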
| How to overwrite specific page of PDF with another page of another PDF with Python's PyPDF2 | I want to overwrite the first page of a PDF with another page of another PDF using the PyPDF2 library in Python.
For more detail, I have two separate PDFs (let's call them overwritten.pdf and other.pdf) and I want to replace the first (it doesn't have to be the first) page of overwritten.pdf with a specific page of other.pdf so the first page of overwritten.pdf is that specific page of other.pdf.
| [
"I don't know if you can literally \"replace a page\" with PyPDF2. I would use the merge function. Example from the PyPDF2 web site:\n\nfrom PyPDF2 import PdfMerger\n\nmerger = PdfMerger()\n\ninput1 = open(\"document1.pdf\", \"rb\")\ninput2 = open(\"document2.pdf\", \"rb\")\ninput3 = open(\"document3.pdf\", \"rb\")\n\n# add the first 3 pages of input1 document to output\nmerger.append(fileobj=input1, pages=(0, 3))\n\n# insert the first page of input2 into the output beginning after the second page\nmerger.merge(position=2, fileobj=input2, pages=(0, 1))\n\n# append entire input3 document to the end of the output document\nmerger.append(input3)\n\n# Write to an output PDF document\noutput = open(\"document-output.pdf\", \"wb\")\nmerger.write(output)\n\n# Close File Descriptors\nmerger.close()\noutput.close()\n\n",
"You can try merging the PDFs with the PdfMerger tool from PyPDF2\n"
] | [
3,
0
] | [] | [] | [
"pdf",
"pypdf2",
"python",
"python_3.x"
] | stackoverflow_0074587276_pdf_pypdf2_python_python_3.x.txt |
Q:
Parse setup.py without setuptools
I'm using Python on my iPad and need a way to grab the name, version, packages, etc. from a package's setup.py. I do not have access to setuptools or distutils. At first I thought that I'd parse setup.py, but that does not seem to be the answer as there are many ways to pass args to setup(). I'd like to create a mock setup() that returns the args passed to it, but I am unsure how to get past the import errors. Any help would be greatly appreciated.
A:
No kidding. This worked on python 3.4.3 and 2.7.6 ;)
export VERSION=$(python my_package/setup.py --version)
contents of setup.py:
from distutils.core import setup
setup(
name='bonsai',
version='0.0.1',
packages=['my_package'],
url='',
license='MIT',
author='',
author_email='',
description='',
test_suite='nose.collector',
tests_require=['nose'],
)
A:
You can dynamically create a setuptools module and capture the values passed to setup indeed:
>>> import imp
>>> module = """
... def setup(*args, **kwargs):
... print(args, kwargs)
... """
>>>
>>> setuptools = imp.new_module("setuptools")
>>> exec module in setuptools.__dict__
>>> setuptools
<module 'setuptools' (built-in)>
>>> setuptools.setup(3)
((3,), {})
After the above you have a setuptools module with a setup function in it. You may need to create a few more functions to make all the imports work. After that you can import setup.py and gather the contents. That being said, in general this is a tricky approach as setup.py can contain any Python code with conditional imports and dynamic computations to pass values to setup().
A:
You could replace the setup method of the setuptools package like this
>>> import setuptools
>>> def setup(**kwargs):
print(kwargs)
>>> setuptools.setup = setup
>>> content = open('setup.py').read()
>>> exec(content)
A:
Parsing setup.py can be dangerous, in case of malicious files like this:
from setuptools import setup
import shutil
setup(
install_requires=[
shutil.rmtree('/'), # very dangerous!
'django',
],
)
I have prepared a simple script (based on idea of @simeon-visser) and docker image, which parse setup.py file in isolated and secure container:
Usage
$ git clone https://github.com/noisy/parse_setup.py
$ cd parse_setup.py/
$ docker build -t parse .
$ ./parse.sh ./example_files/setup.py
#[OK]
lxml==3.4.4
termcolor==1.1.0
$ ./parse.sh ./example_files/dangerous_setup.py
[Errno 39] Directory not empty: '/usr/local/lib'
#nothing bad happend :)
A:
According to the setup.py CLI help there's a nice command-line argument for that, but it's not working in my case:
python setup.py --requires
So I chose a more violent approach, using the ast module to parse the file directly. It works pretty well if your setup.py contains a list of strings as requirements; otherwise it can be very complex:
from pathlib import Path
import ast
import pkg_resources
class SetupPyAnalyzer(ast.NodeVisitor):
def __init__(self):
self.requirements = list()
def visit_Call(self, node):
is_setup_func = False
if node.func and type(node.func) == ast.Name:
func: ast.Name = node.func
is_setup_func = func.id == "setup"
if is_setup_func:
for kwarg in node.keywords:
if kwarg.arg == 'install_requires':
install_requires: ast.List = kwarg.value
for require in install_requires.elts:
require: ast.Constant = require
self.requirements.append(require.value)
self.generic_visit(node)
def parse_requirements(content):
return pkg_resources.parse_requirements(content)
def parse_setup_py():
with Path('setup.py').open() as file:
tree = ast.parse(file.read())
analyzer = SetupPyAnalyzer()
analyzer.visit(tree)
return [
lib.project_name
for lib in parse_requirements("\n".join(analyzer.requirements))
]
| Parse setup.py without setuptools | I'm using python on my ipad and need a way to grab the name, version, packages etc from a packages setup.py. I do not have access to setuptools or distutils. At first I thought that I'd parse setup.py but that does not seem to be the answer as there are many ways to pass args to setup(). I'd like to create a mock setup() that returns the args passed to it, but I am unsure how to get past the import errors. Any help would be greatly appreciated.
| [
"No kidding. This worked on python 3.4.3 and 2.7.6 ;)\nexport VERSION=$(python my_package/setup.py --version)\n\ncontents of setup.py:\nfrom distutils.core import setup\n\nsetup(\n name='bonsai',\n version='0.0.1',\n packages=['my_package'],\n url='',\n license='MIT',\n author='',\n author_email='',\n description='',\n test_suite='nose.collector',\n tests_require=['nose'],\n)\n\n",
"You can dynamically create a setuptools module and capture the values passed to setup indeed:\n>>> import imp\n>>> module = \"\"\"\n... def setup(*args, **kwargs):\n... print(args, kwargs)\n... \"\"\"\n>>>\n>>> setuptools = imp.new_module(\"setuptools\")\n>>> exec module in setuptools.__dict__\n>>> setuptools\n<module 'setuptools' (built-in)>\n>>> setuptools.setup(3)\n((3,), {})\n\nAfter the above you have a setuptools module with a setup function in it. You may need to create a few more functions to make all the imports work. After that you can import setup.py and gather the contents. That being said, in general this is a tricky approach as setup.py can contain any Python code with conditional imports and dynamic computations to pass values to setup().\n",
"You could replace the setup method of the setuptools package like this\n>>> import setuptools\n>>> def setup(**kwargs):\n print(kwargs)\n>>> setuptools.setup = setup\n>>> content = open('setup.py').read()\n>>> exec(content)\n\n",
"Parsing setup.py can be dangerous, in case of malicious files like this:\nfrom setuptools import setup\nimport shutil\n\nsetup(\n install_requires=[\n shutil.rmtree('/'), # very dangerous!\n 'django',\n ],\n)\n\nI have prepared a simple script (based on idea of @simeon-visser) and docker image, which parse setup.py file in isolated and secure container:\nUsage\n$ git clone https://github.com/noisy/parse_setup.py\n$ cd parse_setup.py/\n$ docker build -t parse .\n$ ./parse.sh ./example_files/setup.py\n#[OK]\nlxml==3.4.4\ntermcolor==1.1.0\n\n$ ./parse.sh ./example_files/dangerous_setup.py\n[Errno 39] Directory not empty: '/usr/local/lib'\n#nothing bad happend :)\n\n",
"Accordingly to setup.py CLI help there's a nice command line argument for that, but it's not working in my case:\npython setup.py --requires\n\nSo I chose a more violent approach, using the ast module to parse the file directly, it works pretty well if your setup.py contains a list of strings as requirements otherwise it can be very complex:\nfrom pathlib import Path\nimport ast\nimport pkg_resources\n\n\nclass SetupPyAnalyzer(ast.NodeVisitor):\n def __init__(self):\n self.requirements = list()\n\n def visit_Call(self, node):\n is_setup_func = False\n\n if node.func and type(node.func) == ast.Name:\n func: ast.Name = node.func\n is_setup_func = func.id == \"setup\"\n\n if is_setup_func:\n for kwarg in node.keywords:\n if kwarg.arg == 'install_requires':\n install_requires: ast.List = kwarg.value\n for require in install_requires.elts:\n require: ast.Constant = require\n\n self.requirements.append(require.value)\n\n self.generic_visit(node)\n\n\ndef parse_requirements(content):\n return pkg_resources.parse_requirements(content)\n\ndef parse_setup_py():\n with Path('setup.py').open() as file:\n tree = ast.parse(file.read())\n\n analyzer = SetupPyAnalyzer()\n analyzer.visit(tree)\n\n return [\n lib.project_name\n for lib in parse_requirements(\"\\n\".join(analyzer.requirements))\n ]\n\n"
] | [
10,
5,
1,
0,
0
] | [] | [] | [
"python",
"python_2.7",
"setuptools"
] | stackoverflow_0027790297_python_python_2.7_setuptools.txt |
Q:
why I can't run terminal pygame?
Recently I watched a 90-minute YouTube video on learning Pygame.
I wrote exactly the code that they show in the video:
import pygame
WIDTH, HEIGHT = 900, 500
WIN = pygame.display.set_mode((WIDTH,HEIGHT))
def main() :
run = True
while run :
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
pygame.quit()
if __name__ == "__name__":
main()
but the result ends up with a screen that briefly flashes and the program just closes itself, along with this output:
PS C:\Users\Windows 10 Pro\Desktop\textpython> & "C:/python/New folder/python.exe" "c:/Users/Windows 10 Pro/Desktop/textpython/import pygame.py"
pygame 2.1.3.dev8 (SDL 2.0.22, Python 3.11.0)
Hello from the pygame community. https://www.pygame.org/contribute.html
PS C:\Users\Windows 10 Pro\Desktop\textpython>
A:
Your code in main() never actually runs because of this block:
if __name__ == "__name__":
main()
So all you see is effect of pygame.display.set_mode() and then program terminates. Proper condition should look like this:
if __name__ == "__main__":
main()
Also you should move pygame.display.set_mode() to your main().
In the future, you should run your code with a debugger to see the flow and then peek at the values of variables.
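For reference, a corrected minimal version of the script from the question (the window is created inside main() and the guard compares against "__main__"):

import pygame

WIDTH, HEIGHT = 900, 500

def main():
    win = pygame.display.set_mode((WIDTH, HEIGHT))
    run = True
    while run:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                run = False
    pygame.quit()

if __name__ == "__main__":  # "__main__", not "__name__"
    main()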
| why I can't run terminal pygame? | recently I watch a youtube video learing pygame for 90 min
I write the exactly code that they guide on the video
import pygame
WIDTH, HEIGHT = 900, 500
WIN = pygame.display.set_mode((WIDTH,HEIGHT))
def main() :
run = True
while run :
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
pygame.quit()
if __name__ == "__name__":
main()
but the result end up with a flash screen and the program just close it self. Come with a line
PS C:\Users\Windows 10 Pro\Desktop\textpython> & "C:/python/New folder/python.exe" "c:/Users/Windows 10 Pro/Desktop/textpython/import pygame.py"
pygame 2.1.3.dev8 (SDL 2.0.22, Python 3.11.0)
Hello from the pygame community. https://www.pygame.org/contribute.html
PS C:\Users\Windows 10 Pro\Desktop\textpython>
| [
"You code in main() is never started really because of this block:\nif __name__ == \"__name__\":\n main()\n\nSo all you see is effect of pygame.display.set_mode() and then program terminates. Proper condition should look like this:\nif __name__ == \"__main__\":\n main()\n\nAlso you should move pygame.display.set_mode() to your main().\nIn future, you should run your code with debugger and see the flow and then peek the value of variables.\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074642384_python.txt |
Q:
Filtering Data from Firestore in Django, passing to HTML template
I have data stored in Firestore and I would like to get data from a collection, filter it, and publish it in an HTML template.
I am using Django as the framework.
VIEWS.py
from django.shortcuts import render
import pyrebase
from firebase_admin import firestore
import datetime
db = firestore.Client()
config = {
"apiKey": "xxxxxx",
"authDomain": "xxxxxx.firebaseapp.com",
"databaseURL": "https://xxxxxx.firebaseio.com",
"projectId": "xxxxxx",
"storageBucket": "xxxxxx.appspot.com",
"messagingSenderId": "xxxxxx",
"appId": "xxxxxx",
"measurementId": "xxxxxx",
"serviceAccount": "xxxxxx.json",
}
# DATABASE
firebase = pyrebase.initialize_app(config)
authe = firebase.auth()
database = firebase.database()
print(database)
# TIME & DATE
today_date = datetime.datetime.now()
tomorrow_date = today_date + datetime.timedelta(days=1)
games_today = today_date.strftime("%Y-%m-%d")
games_tomorrow = tomorrow_date.strftime("%Y-%m-%d")
print(games_today)
print(games_tomorrow)
# NBA EVENT DATA
def xxxxx_basketball_nba_events(request):
nba_events = db.collection('xxxx_au').document('basketball_nba').collection('event_info').stream()
event_info = [doc.to_dict() for doc in nba_events]
nba_games = sorted(event_info, key=lambda k: k['event_start'], reverse=True)
# print(nba_games)
for nba_game in nba_games:
if nba_game['event_start'][:10] == games_tomorrow:
event_id = nba_game['event_id']
event_name = nba_game['event_name']
event_status = nba_game['event_status']
competition = nba_game['competition']
event_start = nba_game['event_start'][:10]
timestamp = nba_game['timestamp']
print(event_id, event_name, event_status, competition, event_start, timestamp)
data = ({
u'event_id': event_id,
u'event_name': event_name,
u'event_status': event_status,
u'competition': competition,
u'event_start': event_start,
u'timestamp': timestamp,
})
return render(request, 'html/nba.html', {'nba_games': data})
nba_games variable content
[{'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 195000, tzinfo=datetime.timezone.utc), 'event_name': 'Los Angeles Lakers - Portland Trail Blazers', 'event_id': 1018936256, 'event_status': 'NOT_STARTED', 'event_start': '2022-12-01T03:30:00Z', 'competition': 'NBA'}, {'event_start': '2022-12-01T03:00:00Z', 'event_name': 'Sacramento Kings - Indiana Pacers', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 175000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936251}, {'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 130000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936209, 'event_start': '2022-12-01T02:00:00Z', 'event_name': 'Phoenix Suns - Chicago Bulls', 'competition': 'NBA'}, {'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 148000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T02:00:00Z', 'event_name': 'Utah Jazz - Los Angeles Clippers', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_id': 1018936229}, {'event_id': 1018936241, 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 110000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T02:00:00Z', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_name': 'Denver Nuggets - Houston Rockets'}, {'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 92000, tzinfo=datetime.timezone.utc), 'event_name': 'Oklahoma City Thunder - San Antonio Spurs', 'event_start': '2022-12-01T01:00:00Z', 'event_status': 'NOT_STARTED', 'event_id': 1018936233, 'competition': 'NBA'}, {'event_id': 1018936246, 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 53000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T01:00:00Z', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_name': 'Minnesota Timberwolves - Memphis Grizzlies'}, {'event_name': 'New Orleans Pelicans - Toronto Raptors', 'event_id': 1018936258, 'event_start': '2022-12-01T01:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 76000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'competition': 'NBA'}, {'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 33000, tzinfo=datetime.timezone.utc), 'event_id': 1018936245, 'event_name': 'New York Knicks - Milwaukee Bucks', 'event_start': '2022-12-01T00:41:56Z', 'event_status': 'STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 15000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T00:40:58Z', 'event_name': 'Boston Celtics - Miami Heat', 'event_status': 'STARTED', 'competition': 'NBA', 'event_id': 1018936268}, {'competition': 'NBA', 'event_status': 'STARTED', 'event_id': 1018936243, 'event_start': '2022-12-01T00:40:43Z', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 1, 996000, tzinfo=datetime.timezone.utc), 'event_name': 'Brooklyn Nets - Washington Wizards'}, {'competition': 'NBA', 'event_id': 1018936226, 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 1, 978000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T00:10:25Z', 'event_name': 'Cleveland Cavaliers - Philadelphia 76ers', 'event_status': 'STARTED'}, {'event_name': 'Orlando Magic - Atlanta Hawks', 'event_status': 'STARTED', 'event_id': 1018936242, 'event_start': '2022-12-01T00:10:19Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 1, 960000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-30T03:00:00Z', 'event_name': 'Portland Trail Blazers - Los 
Angeles Clippers', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 29, 19, 30, 5, 436000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936272}, {'event_id': 1018936236, 'event_start': '2022-11-30T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 29, 19, 30, 5, 419000, tzinfo=datetime.timezone.utc), 'event_name': 'Dallas Mavericks - Golden State Warriors', 'competition': 'NBA', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 29, 19, 30, 5, 403000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936230, 'event_start': '2022-11-30T00:00:00Z', 'event_name': 'Detroit Pistons - New York Knicks', 'competition': 'NBA'}, {'event_id': 1018936255, 'event_start': '2022-11-29T03:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 681000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_name': 'Los Angeles Lakers - Indiana Pacers', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 664000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936259, 'event_start': '2022-11-29T03:00:00Z', 'event_name': 'Sacramento Kings - Phoenix Suns', 'competition': 'NBA'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 619000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936225, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-29T02:00:00Z', 'event_name': 'Denver Nuggets - Houston Rockets'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 635000, tzinfo=datetime.timezone.utc), 'event_name': 'Utah Jazz - Chicago Bulls', 'event_status': 'NOT_STARTED', 'event_start': '2022-11-29T02:00:00Z', 'event_id': 1018936240, 'competition': 'NBA'}, {'event_name': 'New Orleans Pelicans - Oklahoma City Thunder', 'event_id': 1018936249, 'event_start': '2022-11-29T01:00:00Z', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 583000, tzinfo=datetime.timezone.utc)}, {'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 569000, tzinfo=datetime.timezone.utc), 'event_id': 1018936264, 'event_name': 'Toronto Raptors - Cleveland Cavaliers', 'event_start': '2022-11-29T00:30:00Z', 'event_status': 'NOT_STARTED'}, {'event_start': '2022-11-29T00:30:00Z', 'event_name': 'Boston Celtics - Charlotte Hornets', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 492000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936266, 'event_status': 'NOT_STARTED'}, {'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_name': 'Brooklyn Nets - Orlando Magic', 'event_id': 1018936275, 'event_start': '2022-11-29T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 533000, tzinfo=datetime.timezone.utc)}, {'event_name': 'Philadelphia 76ers - Atlanta Hawks', 'event_id': 1018936215, 'event_start': '2022-11-29T00:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 414000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'competition': 'NBA'}, {'event_id': 1018936237, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-29T00:00:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 463000, tzinfo=datetime.timezone.utc), 'event_name': 'Washington Wizards - Minnesota Timberwolves'}, {'competition': 'NBA', 'event_name': 'Milwaukee Bucks - Dallas Mavericks', 'timestamp': 
DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 585000, tzinfo=datetime.timezone.utc), 'event_id': 1018936297, 'event_start': '2022-11-28T01:00:00Z', 'event_status': 'NOT_STARTED'}, {'event_id': 1018936274, 'event_start': '2022-11-27T23:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 524000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_name': 'Boston Celtics - Washington Wizards', 'event_status': 'NOT_STARTED'}, {'event_id': 1018936279, 'event_start': '2022-11-27T23:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 570000, tzinfo=datetime.timezone.utc), 'event_name': 'Orlando Magic - Philadelphia 76ers', 'competition': 'NBA', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 540000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-27T23:00:00Z', 'event_name': 'Detroit Pistons - Cleveland Cavaliers', 'competition': 'NBA', 'event_status': 'NOT_STARTED', 'event_id': 1018936306}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 555000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936310, 'event_start': '2022-11-27T23:00:00Z', 'event_name': 'New York Knicks - Memphis Grizzlies', 'competition': 'NBA'}, {'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 508000, tzinfo=datetime.timezone.utc), 'event_id': 1018936287, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-27T22:00:00Z', 'event_name': 'Atlanta Hawks - Miami Heat'}, {'event_name': 'Los Angeles Clippers - Indiana Pacers', 'event_status': 'NOT_STARTED', 'event_id': 1018936294, 'event_start': '2022-11-27T21:00:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 487000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-27T20:30:00Z', 'event_name': 'Minnesota Timberwolves - Golden State Warriors', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 465000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936267}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 448000, tzinfo=datetime.timezone.utc), 'event_name': 'Brooklyn Nets - Portland Trail Blazers', 'event_status': 'NOT_STARTED', 'event_id': 1018936300, 'event_start': '2022-11-27T20:00:00Z', 'competition': 'NBA'}, {'event_name': 'Phoenix Suns - Utah Jazz', 'event_id': 1018936305, 'event_start': '2022-11-27T02:00:00Z', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 19, 30, 6, 126000, tzinfo=datetime.timezone.utc)}, {'event_id': 1018936280, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-27T01:00:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 19, 30, 6, 80000, tzinfo=datetime.timezone.utc), 'event_name': 'Houston Rockets - Oklahoma City Thunder'}, {'competition': 'NBA', 'event_name': 'San Antonio Spurs - Los Angeles Lakers', 'event_id': 1018936288, 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 19, 30, 6, 106000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-27T01:00:00Z', 'event_status': 'NOT_STARTED'}, {'event_start': '2022-11-26T22:00:00Z', 'event_name': 'Toronto Raptors - Dallas Mavericks', 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 19, 30, 6, 57000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936289, 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 4, 49, 39, 510000, tzinfo=datetime.timezone.utc), 
'event_name': 'Los Angeles Clippers - Denver Nuggets', 'competition': 'NBA', 'event_status': 'STARTED', 'event_id': 1018936316, 'event_start': '2022-11-26T03:41:14Z'}, {'competition': 'NBA', 'event_status': 'STARTED', 'event_name': 'Golden State Warriors - Utah Jazz', 'event_id': 1018936308, 'event_start': '2022-11-26T03:10:44Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 4, 49, 39, 417000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-26T02:00:00Z', 'event_name': 'Phoenix Suns - Detroit Pistons', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 448000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936262, 'event_status': 'NOT_STARTED'}, {'event_id': 1018936276, 'event_start': '2022-11-26T01:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 248000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_name': 'Houston Rockets - Atlanta Hawks', 'event_status': 'NOT_STARTED'}, {'competition': 'NBA', 'event_name': 'Indiana Pacers - Brooklyn Nets', 'event_id': 1018936285, 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 292000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-26T01:00:00Z', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 433000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-26T01:00:00Z', 'event_name': 'San Antonio Spurs - Los Angeles Lakers', 'competition': 'NBA', 'event_status': 'NOT_STARTED', 'event_id': 1018936286}, {'competition': 'NBA', 'event_name': 'Miami Heat - Washington Wizards', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 338000, tzinfo=datetime.timezone.utc), 'event_id': 1018936295, 'event_start': '2022-11-26T01:00:00Z', 'event_status': 'NOT_STARTED'}, {'event_start': '2022-11-26T01:00:00Z', 'event_name': 'Boston Celtics - Sacramento Kings', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 230000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936302}, {'event_start': '2022-11-26T01:00:00Z', 'event_name': 'Oklahoma City Thunder - Chicago Bulls', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 407000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936319, 'event_status': 'NOT_STARTED'}, {'event_name': 'Milwaukee Bucks - Cleveland Cavaliers', 'event_id': 1018936322, 'event_start': '2022-11-26T01:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 375000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'competition': 'NBA'}, {'event_id': 1018936329, 'event_start': '2022-11-26T01:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 325000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_name': 'Memphis Grizzlies - New Orleans Pelicans', 'event_status': 'NOT_STARTED'}, {'event_start': '2022-11-26T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 199000, tzinfo=datetime.timezone.utc), 'event_name': 'New York Knicks - Portland Trail Blazers', 'competition': 'NBA', 'event_status': 'NOT_STARTED', 'event_id': 1018936292}, {'event_start': '2022-11-26T00:00:00Z', 'event_name': 'Orlando Magic - Philadelphia 76ers', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 177000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936265}, {'event_start': '2022-11-25T22:10:31Z', 'event_name': 'Charlotte Hornets - Minnesota Timberwolves', 'competition': 'NBA', 
'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 121000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'event_id': 1018936312}, {'competition': 'NBA', 'event_status': 'STARTED', 'event_name': 'Golden State Warriors - Los Angeles Clippers', 'event_id': 1018936278, 'event_start': '2022-11-24T03:10:36Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 4, 9, 50, 895000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-24T02:10:29Z', 'event_name': 'Utah Jazz - Detroit Pistons', 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 4, 9, 50, 831000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936303, 'event_status': 'STARTED'}, {'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 2, 55, 28, 44000, tzinfo=datetime.timezone.utc), 'event_id': 1018936301, 'event_name': 'Oklahoma City Thunder - Denver Nuggets', 'event_start': '2022-11-24T01:10:18Z', 'event_status': 'STARTED'}, {'competition': 'NBA', 'event_id': 1018936299, 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 2, 55, 28, 8000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-24T01:10:16Z', 'event_name': 'Milwaukee Bucks - Chicago Bulls', 'event_status': 'STARTED'}, {'event_name': 'San Antonio Spurs - New Orleans Pelicans', 'event_id': 1018936281, 'event_start': '2022-11-24T01:10:15Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 2, 55, 27, 968000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'competition': 'NBA'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 2, 55, 27, 923000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936284, 'event_status': 'STARTED', 'event_start': '2022-11-24T00:43:40Z', 'event_name': 'Miami Heat - Washington Wizards'}, {'competition': 'NBA', 'event_status': 'NOT_STARTED', 'event_name': 'Toronto Raptors - Brooklyn Nets', 'event_id': 1018936293, 'event_start': '2022-11-24T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 36, 183000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-24T00:30:00Z', 'event_name': 'Atlanta Hawks - Sacramento Kings', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 36, 75000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936311, 'event_status': 'NOT_STARTED'}, {'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_name': 'Boston Celtics - Dallas Mavericks', 'event_id': 1018936314, 'event_start': '2022-11-24T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 36, 130000, tzinfo=datetime.timezone.utc)}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 36, 47000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936282, 'event_start': '2022-11-24T00:00:00Z', 'event_name': 'Indiana Pacers - Minnesota Timberwolves', 'competition': 'NBA'}, {'event_name': 'Cleveland Cavaliers - Portland Trail Blazers', 'event_status': 'NOT_STARTED', 'event_id': 1018936296, 'event_start': '2022-11-24T00:00:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 35, 988000, tzinfo=datetime.timezone.utc)}, {'competition': 'NBA', 'event_id': 1018936309, 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 35, 921000, tzinfo=datetime.timezone.utc), 'event_name': 'Charlotte Hornets - Philadelphia 76ers', 'event_start': '2022-11-24T00:00:00Z', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 4, 42, 57, 664000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'event_id': 1018936307, 
'event_start': '2022-11-23T03:04:46Z', 'event_name': 'Phoenix Suns - Los Angeles Lakers', 'competition': 'NBA'}, {'event_start': '2022-11-23T02:11:59Z', 'event_name': 'Denver Nuggets - Detroit Pistons', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 3, 17, 19, 117000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'event_id': 1018936291}, {'event_name': 'Memphis Grizzlies - Sacramento Kings', 'event_id': 1018936283, 'event_start': '2022-11-23T01:11:03Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 3, 17, 19, 55000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'competition': 'NBA'}, {'event_id': 1018936277, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-23T00:30:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 22, 22, 3, 18, 277000, tzinfo=datetime.timezone.utc), 'event_name': 'Philadelphia 76ers - Brooklyn Nets'}]
HTML template
{% block content %}
<table class="table table-striped" style="padding: 15px; width: 1000px">
<thead>
<tr>
<th scope="col">Event ID</th>
<th scope="col">Competition</th>
<th scope="col">Event Name</th>
<th scope="col">Event Start</th>
<th scope="col">Event Status</th>
</tr>
</thead>
{% for xxx_nba in data %}
<tbody>
<tr>
<th>{{xxx_nba.event_id}}</th>
<td>{{xxx_nba.competition}}</td>
<td>{{xxx_nba.event_name}}</td>
<td>{{xxx_nba.event_start}}</td>
<td>{{xxx_nba.event_status}}</td>
</tr>
</tbody>
{% endfor %}
</table>
{% endblock %}
HTML Output
1018936256 NBA Los Angeles Lakers - Portland Trail Blazers 2022-12-01T03:30:00Z NOT_STARTED
1018936251 NBA Sacramento Kings - Indiana Pacers 2022-12-01T03:00:00Z NOT_STARTED
1018936209 NBA Phoenix Suns - Chicago Bulls 2022-12-01T02:00:00Z NOT_STARTED
1018936229 NBA Utah Jazz - Los Angeles Clippers 2022-12-01T02:00:00Z NOT_STARTED
1018936241 NBA Denver Nuggets - Houston Rockets 2022-12-01T02:00:00Z NOT_STARTED
1018936233 NBA Oklahoma City Thunder - San Antonio Spurs 2022-12-01T01:00:00Z NOT_STARTED
1018936246 NBA Minnesota Timberwolves - Memphis Grizzlies 2022-12-01T01:00:00Z NOT_STARTED
1018936258 NBA New Orleans Pelicans - Toronto Raptors 2022-12-01T01:00:00Z NOT_STARTED
1018936245 NBA New York Knicks - Milwaukee Bucks 2022-12-01T00:41:56Z STARTED
1018936268 NBA Boston Celtics - Miami Heat 2022-12-01T00:40:58Z STARTED
1018936243 NBA Brooklyn Nets - Washington Wizards 2022-12-01T00:40:43Z STARTED
1018936226 NBA Cleveland Cavaliers - Philadelphia 76ers 2022-12-01T00:10:25Z STARTED
1018936242 NBA Orlando Magic - Atlanta Hawks 2022-12-01T00:10:19Z STARTED
1018936272 NBA Portland Trail Blazers - Los Angeles Clippers 2022-11-30T03:00:00Z NOT_STARTED
1018936236 NBA Dallas Mavericks - Golden State Warriors 2022-11-30T00:30:00Z NOT_STARTED
1018936230 NBA Detroit Pistons - New York Knicks 2022-11-30T00:00:00Z NOT_STARTED
1018936255 NBA Los Angeles Lakers - Indiana Pacers 2022-11-29T03:30:00Z NOT_STARTED
1018936259 NBA Sacramento Kings - Phoenix Suns 2022-11-29T03:00:00Z NOT_STARTED
1018936225 NBA Denver Nuggets - Houston Rockets 2022-11-29T02:00:00Z NOT_STARTED
1018936240 NBA Utah Jazz - Chicago Bulls 2022-11-29T02:00:00Z NOT_STARTED
1018936249 NBA New Orleans Pelicans - Oklahoma City Thunder 2022-11-29T01:00:00Z NOT_STARTED
1018936264 NBA Toronto Raptors - Cleveland Cavaliers 2022-11-29T00:30:00Z NOT_STARTED
1018936266 NBA Boston Celtics - Charlotte Hornets 2022-11-29T00:30:00Z NOT_STARTED
1018936275 NBA Brooklyn Nets - Orlando Magic 2022-11-29T00:30:00Z NOT_STARTED
1018936215 NBA Philadelphia 76ers - Atlanta Hawks 2022-11-29T00:00:00Z NOT_STARTED
1018936237 NBA Washington Wizards - Minnesota Timberwolves 2022-11-29T00:00:00Z NOT_STARTED
1018936297 NBA Milwaukee Bucks - Dallas Mavericks 2022-11-28T01:00:00Z NOT_STARTED
I need to filter by event_start which is formatted like
2022-12-01T02:00:00Z
I am trying to reformat that date so that I can use the 2022-12-01 section of the date to filter the games for the day.
I would like to send data to the HTML for only the games in the database whose start time (event_start) matches the day's date (games_today).
Any help would be appreciated in filtering this data by date and publishing it in the HTML document.
A:
Instead of creating a stream/snapshot of the whole event_info sub-collection, why don't you create a query snapshot/stream of the event_info sub-collection, which will only return the data that satisfies your query filter:
I am inferring from this section of your code:
nba_events = db.collection('xxxx_au').document('basketball_nba').collection('event_info').stream()
event_info = [doc.to_dict() for doc in nba_events]
This simply returns all data of the event_info sub-collection.
You can create a query stream/snapshot like this
import threading

# Create an Event for notifying main thread.
callback_done = threading.Event()
# Create a callback on_snapshot function to capture changes
def on_snapshot(col_snapshot, changes, read_time):
print(u'Callback received query snapshot.')
print(u'Current cities in California:')
for doc in col_snapshot:
print(f'{doc.id}')
callback_done.set()
col_query = db.collection(u'cities').where(u'state', u'==', u'CA')
# Watch the collection query
query_watch = col_query.on_snapshot(on_snapshot)
Read more about it here: https://firebase.google.com/docs/firestore/query-data/listen#python_3
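Applied to the collection from the question, a sketch of such a query could look like this (the collection/document names are copied from your code; it assumes event_start is stored as the ISO-8601 string shown in the question, so a lexicographic range filter on that string selects a single day):
import datetime

today = datetime.datetime.utcnow().strftime("%Y-%m-%d")
tomorrow = (datetime.datetime.utcnow() + datetime.timedelta(days=1)).strftime("%Y-%m-%d")

# Range filter on the event_start string: from today's date up to, but excluding, tomorrow's date.
nba_events = (db.collection('xxxx_au')
                .document('basketball_nba')
                .collection('event_info')
                .where('event_start', '>=', today)
                .where('event_start', '<', tomorrow)
                .stream())
event_info = [doc.to_dict() for doc in nba_events]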
A:
I think you have to use the following date format to filter the queries; try:
today_date = datetime.datetime.now()
tomorrow_date = today_date + datetime.timedelta(days=1)
games_today = today_date.strftime("%Y-%m-%dT%H:%M:%SZ")
games_tomorrow = tomorrow_date.strftime("%Y-%m-%dT%H:%M:%SZ")
print(games_today) #will give you same date format that is stored in firestore
print(games_tomorrow)
Then filter based on the date as you already do, assign the result to your collection, and present it in your HTML.
| Filtering Data from Firestore in Django, passing to HTML template | I have data stored in Firestore and i would like to get data from a collection, filter it and publish it in HTML template.
I am using Django as the framework.
VIEWS.py
from django.shortcuts import render
import pyrebase
from firebase_admin import firestore
import datetime
db = firestore.Client()
config = {
"apiKey": "xxxxxx",
"authDomain": "xxxxxx.firebaseapp.com",
"databaseURL": "https://xxxxxx.firebaseio.com",
"projectId": "xxxxxx",
"storageBucket": "xxxxxx.appspot.com",
"messagingSenderId": "xxxxxx",
"appId": "xxxxxx",
"measurementId": "xxxxxx",
"serviceAccount": "xxxxxx.json",
}
# DATABASE
firebase = pyrebase.initialize_app(config)
authe = firebase.auth()
database = firebase.database()
print(database)
# TIME & DATE
today_date = datetime.datetime.now()
tomorrow_date = today_date + datetime.timedelta(days=1)
games_today = today_date.strftime("%Y-%m-%d")
games_tomorrow = tomorrow_date.strftime("%Y-%m-%d")
print(games_today)
print(games_tomorrow)
# NBA EVENT DATA
def xxxxx_basketball_nba_events(request):
nba_events = db.collection('xxxx_au').document('basketball_nba').collection('event_info').stream()
event_info = [doc.to_dict() for doc in nba_events]
nba_games = sorted(event_info, key=lambda k: k['event_start'], reverse=True)
# print(nba_games)
for nba_game in nba_games:
if nba_game['event_start'][:10] == games_tomorrow:
event_id = nba_game['event_id']
event_name = nba_game['event_name']
event_status = nba_game['event_status']
competition = nba_game['competition']
event_start = nba_game['event_start'][:10]
timestamp = nba_game['timestamp']
print(event_id, event_name, event_status, competition, event_start, timestamp)
data = ({
u'event_id': event_id,
u'event_name': event_name,
u'event_status': event_status,
u'competition': competition,
u'event_start': event_start,
u'timestamp': timestamp,
})
return render(request, 'html/nba.html', {'nba_games': data})
nba_games variable content
[{'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 195000, tzinfo=datetime.timezone.utc), 'event_name': 'Los Angeles Lakers - Portland Trail Blazers', 'event_id': 1018936256, 'event_status': 'NOT_STARTED', 'event_start': '2022-12-01T03:30:00Z', 'competition': 'NBA'}, {'event_start': '2022-12-01T03:00:00Z', 'event_name': 'Sacramento Kings - Indiana Pacers', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 175000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936251}, {'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 130000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936209, 'event_start': '2022-12-01T02:00:00Z', 'event_name': 'Phoenix Suns - Chicago Bulls', 'competition': 'NBA'}, {'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 148000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T02:00:00Z', 'event_name': 'Utah Jazz - Los Angeles Clippers', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_id': 1018936229}, {'event_id': 1018936241, 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 110000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T02:00:00Z', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_name': 'Denver Nuggets - Houston Rockets'}, {'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 92000, tzinfo=datetime.timezone.utc), 'event_name': 'Oklahoma City Thunder - San Antonio Spurs', 'event_start': '2022-12-01T01:00:00Z', 'event_status': 'NOT_STARTED', 'event_id': 1018936233, 'competition': 'NBA'}, {'event_id': 1018936246, 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 53000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T01:00:00Z', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_name': 'Minnesota Timberwolves - Memphis Grizzlies'}, {'event_name': 'New Orleans Pelicans - Toronto Raptors', 'event_id': 1018936258, 'event_start': '2022-12-01T01:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 76000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'competition': 'NBA'}, {'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 33000, tzinfo=datetime.timezone.utc), 'event_id': 1018936245, 'event_name': 'New York Knicks - Milwaukee Bucks', 'event_start': '2022-12-01T00:41:56Z', 'event_status': 'STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 2, 15000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T00:40:58Z', 'event_name': 'Boston Celtics - Miami Heat', 'event_status': 'STARTED', 'competition': 'NBA', 'event_id': 1018936268}, {'competition': 'NBA', 'event_status': 'STARTED', 'event_id': 1018936243, 'event_start': '2022-12-01T00:40:43Z', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 1, 996000, tzinfo=datetime.timezone.utc), 'event_name': 'Brooklyn Nets - Washington Wizards'}, {'competition': 'NBA', 'event_id': 1018936226, 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 1, 978000, tzinfo=datetime.timezone.utc), 'event_start': '2022-12-01T00:10:25Z', 'event_name': 'Cleveland Cavaliers - Philadelphia 76ers', 'event_status': 'STARTED'}, {'event_name': 'Orlando Magic - Atlanta Hawks', 'event_status': 'STARTED', 'event_id': 1018936242, 'event_start': '2022-12-01T00:10:19Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 12, 1, 1, 0, 1, 960000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-30T03:00:00Z', 'event_name': 'Portland Trail Blazers - Los 
Angeles Clippers', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 29, 19, 30, 5, 436000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936272}, {'event_id': 1018936236, 'event_start': '2022-11-30T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 29, 19, 30, 5, 419000, tzinfo=datetime.timezone.utc), 'event_name': 'Dallas Mavericks - Golden State Warriors', 'competition': 'NBA', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 29, 19, 30, 5, 403000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936230, 'event_start': '2022-11-30T00:00:00Z', 'event_name': 'Detroit Pistons - New York Knicks', 'competition': 'NBA'}, {'event_id': 1018936255, 'event_start': '2022-11-29T03:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 681000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_name': 'Los Angeles Lakers - Indiana Pacers', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 664000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936259, 'event_start': '2022-11-29T03:00:00Z', 'event_name': 'Sacramento Kings - Phoenix Suns', 'competition': 'NBA'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 619000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936225, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-29T02:00:00Z', 'event_name': 'Denver Nuggets - Houston Rockets'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 635000, tzinfo=datetime.timezone.utc), 'event_name': 'Utah Jazz - Chicago Bulls', 'event_status': 'NOT_STARTED', 'event_start': '2022-11-29T02:00:00Z', 'event_id': 1018936240, 'competition': 'NBA'}, {'event_name': 'New Orleans Pelicans - Oklahoma City Thunder', 'event_id': 1018936249, 'event_start': '2022-11-29T01:00:00Z', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 583000, tzinfo=datetime.timezone.utc)}, {'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 569000, tzinfo=datetime.timezone.utc), 'event_id': 1018936264, 'event_name': 'Toronto Raptors - Cleveland Cavaliers', 'event_start': '2022-11-29T00:30:00Z', 'event_status': 'NOT_STARTED'}, {'event_start': '2022-11-29T00:30:00Z', 'event_name': 'Boston Celtics - Charlotte Hornets', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 492000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936266, 'event_status': 'NOT_STARTED'}, {'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_name': 'Brooklyn Nets - Orlando Magic', 'event_id': 1018936275, 'event_start': '2022-11-29T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 533000, tzinfo=datetime.timezone.utc)}, {'event_name': 'Philadelphia 76ers - Atlanta Hawks', 'event_id': 1018936215, 'event_start': '2022-11-29T00:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 414000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'competition': 'NBA'}, {'event_id': 1018936237, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-29T00:00:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 28, 19, 30, 5, 463000, tzinfo=datetime.timezone.utc), 'event_name': 'Washington Wizards - Minnesota Timberwolves'}, {'competition': 'NBA', 'event_name': 'Milwaukee Bucks - Dallas Mavericks', 'timestamp': 
DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 585000, tzinfo=datetime.timezone.utc), 'event_id': 1018936297, 'event_start': '2022-11-28T01:00:00Z', 'event_status': 'NOT_STARTED'}, {'event_id': 1018936274, 'event_start': '2022-11-27T23:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 524000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_name': 'Boston Celtics - Washington Wizards', 'event_status': 'NOT_STARTED'}, {'event_id': 1018936279, 'event_start': '2022-11-27T23:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 570000, tzinfo=datetime.timezone.utc), 'event_name': 'Orlando Magic - Philadelphia 76ers', 'competition': 'NBA', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 540000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-27T23:00:00Z', 'event_name': 'Detroit Pistons - Cleveland Cavaliers', 'competition': 'NBA', 'event_status': 'NOT_STARTED', 'event_id': 1018936306}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 555000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936310, 'event_start': '2022-11-27T23:00:00Z', 'event_name': 'New York Knicks - Memphis Grizzlies', 'competition': 'NBA'}, {'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 508000, tzinfo=datetime.timezone.utc), 'event_id': 1018936287, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-27T22:00:00Z', 'event_name': 'Atlanta Hawks - Miami Heat'}, {'event_name': 'Los Angeles Clippers - Indiana Pacers', 'event_status': 'NOT_STARTED', 'event_id': 1018936294, 'event_start': '2022-11-27T21:00:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 487000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-27T20:30:00Z', 'event_name': 'Minnesota Timberwolves - Golden State Warriors', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 465000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936267}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 27, 19, 30, 6, 448000, tzinfo=datetime.timezone.utc), 'event_name': 'Brooklyn Nets - Portland Trail Blazers', 'event_status': 'NOT_STARTED', 'event_id': 1018936300, 'event_start': '2022-11-27T20:00:00Z', 'competition': 'NBA'}, {'event_name': 'Phoenix Suns - Utah Jazz', 'event_id': 1018936305, 'event_start': '2022-11-27T02:00:00Z', 'event_status': 'NOT_STARTED', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 19, 30, 6, 126000, tzinfo=datetime.timezone.utc)}, {'event_id': 1018936280, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-27T01:00:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 19, 30, 6, 80000, tzinfo=datetime.timezone.utc), 'event_name': 'Houston Rockets - Oklahoma City Thunder'}, {'competition': 'NBA', 'event_name': 'San Antonio Spurs - Los Angeles Lakers', 'event_id': 1018936288, 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 19, 30, 6, 106000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-27T01:00:00Z', 'event_status': 'NOT_STARTED'}, {'event_start': '2022-11-26T22:00:00Z', 'event_name': 'Toronto Raptors - Dallas Mavericks', 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 19, 30, 6, 57000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936289, 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 4, 49, 39, 510000, tzinfo=datetime.timezone.utc), 
'event_name': 'Los Angeles Clippers - Denver Nuggets', 'competition': 'NBA', 'event_status': 'STARTED', 'event_id': 1018936316, 'event_start': '2022-11-26T03:41:14Z'}, {'competition': 'NBA', 'event_status': 'STARTED', 'event_name': 'Golden State Warriors - Utah Jazz', 'event_id': 1018936308, 'event_start': '2022-11-26T03:10:44Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 26, 4, 49, 39, 417000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-26T02:00:00Z', 'event_name': 'Phoenix Suns - Detroit Pistons', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 448000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936262, 'event_status': 'NOT_STARTED'}, {'event_id': 1018936276, 'event_start': '2022-11-26T01:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 248000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_name': 'Houston Rockets - Atlanta Hawks', 'event_status': 'NOT_STARTED'}, {'competition': 'NBA', 'event_name': 'Indiana Pacers - Brooklyn Nets', 'event_id': 1018936285, 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 292000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-26T01:00:00Z', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 433000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-26T01:00:00Z', 'event_name': 'San Antonio Spurs - Los Angeles Lakers', 'competition': 'NBA', 'event_status': 'NOT_STARTED', 'event_id': 1018936286}, {'competition': 'NBA', 'event_name': 'Miami Heat - Washington Wizards', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 338000, tzinfo=datetime.timezone.utc), 'event_id': 1018936295, 'event_start': '2022-11-26T01:00:00Z', 'event_status': 'NOT_STARTED'}, {'event_start': '2022-11-26T01:00:00Z', 'event_name': 'Boston Celtics - Sacramento Kings', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 230000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936302}, {'event_start': '2022-11-26T01:00:00Z', 'event_name': 'Oklahoma City Thunder - Chicago Bulls', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 407000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936319, 'event_status': 'NOT_STARTED'}, {'event_name': 'Milwaukee Bucks - Cleveland Cavaliers', 'event_id': 1018936322, 'event_start': '2022-11-26T01:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 375000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'competition': 'NBA'}, {'event_id': 1018936329, 'event_start': '2022-11-26T01:00:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 325000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_name': 'Memphis Grizzlies - New Orleans Pelicans', 'event_status': 'NOT_STARTED'}, {'event_start': '2022-11-26T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 199000, tzinfo=datetime.timezone.utc), 'event_name': 'New York Knicks - Portland Trail Blazers', 'competition': 'NBA', 'event_status': 'NOT_STARTED', 'event_id': 1018936292}, {'event_start': '2022-11-26T00:00:00Z', 'event_name': 'Orlando Magic - Philadelphia 76ers', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 177000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936265}, {'event_start': '2022-11-25T22:10:31Z', 'event_name': 'Charlotte Hornets - Minnesota Timberwolves', 'competition': 'NBA', 
'timestamp': DatetimeWithNanoseconds(2022, 11, 25, 22, 41, 18, 121000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'event_id': 1018936312}, {'competition': 'NBA', 'event_status': 'STARTED', 'event_name': 'Golden State Warriors - Los Angeles Clippers', 'event_id': 1018936278, 'event_start': '2022-11-24T03:10:36Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 4, 9, 50, 895000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-24T02:10:29Z', 'event_name': 'Utah Jazz - Detroit Pistons', 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 4, 9, 50, 831000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936303, 'event_status': 'STARTED'}, {'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 2, 55, 28, 44000, tzinfo=datetime.timezone.utc), 'event_id': 1018936301, 'event_name': 'Oklahoma City Thunder - Denver Nuggets', 'event_start': '2022-11-24T01:10:18Z', 'event_status': 'STARTED'}, {'competition': 'NBA', 'event_id': 1018936299, 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 2, 55, 28, 8000, tzinfo=datetime.timezone.utc), 'event_start': '2022-11-24T01:10:16Z', 'event_name': 'Milwaukee Bucks - Chicago Bulls', 'event_status': 'STARTED'}, {'event_name': 'San Antonio Spurs - New Orleans Pelicans', 'event_id': 1018936281, 'event_start': '2022-11-24T01:10:15Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 2, 55, 27, 968000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'competition': 'NBA'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 24, 2, 55, 27, 923000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936284, 'event_status': 'STARTED', 'event_start': '2022-11-24T00:43:40Z', 'event_name': 'Miami Heat - Washington Wizards'}, {'competition': 'NBA', 'event_status': 'NOT_STARTED', 'event_name': 'Toronto Raptors - Brooklyn Nets', 'event_id': 1018936293, 'event_start': '2022-11-24T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 36, 183000, tzinfo=datetime.timezone.utc)}, {'event_start': '2022-11-24T00:30:00Z', 'event_name': 'Atlanta Hawks - Sacramento Kings', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 36, 75000, tzinfo=datetime.timezone.utc), 'competition': 'NBA', 'event_id': 1018936311, 'event_status': 'NOT_STARTED'}, {'event_status': 'NOT_STARTED', 'competition': 'NBA', 'event_name': 'Boston Celtics - Dallas Mavericks', 'event_id': 1018936314, 'event_start': '2022-11-24T00:30:00Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 36, 130000, tzinfo=datetime.timezone.utc)}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 36, 47000, tzinfo=datetime.timezone.utc), 'event_status': 'NOT_STARTED', 'event_id': 1018936282, 'event_start': '2022-11-24T00:00:00Z', 'event_name': 'Indiana Pacers - Minnesota Timberwolves', 'competition': 'NBA'}, {'event_name': 'Cleveland Cavaliers - Portland Trail Blazers', 'event_status': 'NOT_STARTED', 'event_id': 1018936296, 'event_start': '2022-11-24T00:00:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 35, 988000, tzinfo=datetime.timezone.utc)}, {'competition': 'NBA', 'event_id': 1018936309, 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 23, 16, 35, 921000, tzinfo=datetime.timezone.utc), 'event_name': 'Charlotte Hornets - Philadelphia 76ers', 'event_start': '2022-11-24T00:00:00Z', 'event_status': 'NOT_STARTED'}, {'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 4, 42, 57, 664000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'event_id': 1018936307, 
'event_start': '2022-11-23T03:04:46Z', 'event_name': 'Phoenix Suns - Los Angeles Lakers', 'competition': 'NBA'}, {'event_start': '2022-11-23T02:11:59Z', 'event_name': 'Denver Nuggets - Detroit Pistons', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 3, 17, 19, 117000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'event_id': 1018936291}, {'event_name': 'Memphis Grizzlies - Sacramento Kings', 'event_id': 1018936283, 'event_start': '2022-11-23T01:11:03Z', 'timestamp': DatetimeWithNanoseconds(2022, 11, 23, 3, 17, 19, 55000, tzinfo=datetime.timezone.utc), 'event_status': 'STARTED', 'competition': 'NBA'}, {'event_id': 1018936277, 'event_status': 'NOT_STARTED', 'event_start': '2022-11-23T00:30:00Z', 'competition': 'NBA', 'timestamp': DatetimeWithNanoseconds(2022, 11, 22, 22, 3, 18, 277000, tzinfo=datetime.timezone.utc), 'event_name': 'Philadelphia 76ers - Brooklyn Nets'}]
HTML template
{% block content %}
<table class="table table-striped" style="padding: 15px; width: 1000px">
<thead>
<tr>
<th scope="col">Event ID</th>
<th scope="col">Competition</th>
<th scope="col">Event Name</th>
<th scope="col">Event Start</th>
<th scope="col">Event Status</th>
</tr>
</thead>
{% for xxx_nba in data %}
<tbody>
<tr>
<th>{{xxx_nba.event_id}}</th>
<td>{{xxx_nba.competition}}</td>
<td>{{xxx_nba.event_name}}</td>
<td>{{xxx_nba.event_start}}</td>
<td>{{xxx_nba.event_status}}</td>
</tr>
</tbody>
{% endfor %}
</table>
{% endblock %}
HTML Output
1018936256 NBA Los Angeles Lakers - Portland Trail Blazers 2022-12-01T03:30:00Z NOT_STARTED
1018936251 NBA Sacramento Kings - Indiana Pacers 2022-12-01T03:00:00Z NOT_STARTED
1018936209 NBA Phoenix Suns - Chicago Bulls 2022-12-01T02:00:00Z NOT_STARTED
1018936229 NBA Utah Jazz - Los Angeles Clippers 2022-12-01T02:00:00Z NOT_STARTED
1018936241 NBA Denver Nuggets - Houston Rockets 2022-12-01T02:00:00Z NOT_STARTED
1018936233 NBA Oklahoma City Thunder - San Antonio Spurs 2022-12-01T01:00:00Z NOT_STARTED
1018936246 NBA Minnesota Timberwolves - Memphis Grizzlies 2022-12-01T01:00:00Z NOT_STARTED
1018936258 NBA New Orleans Pelicans - Toronto Raptors 2022-12-01T01:00:00Z NOT_STARTED
1018936245 NBA New York Knicks - Milwaukee Bucks 2022-12-01T00:41:56Z STARTED
1018936268 NBA Boston Celtics - Miami Heat 2022-12-01T00:40:58Z STARTED
1018936243 NBA Brooklyn Nets - Washington Wizards 2022-12-01T00:40:43Z STARTED
1018936226 NBA Cleveland Cavaliers - Philadelphia 76ers 2022-12-01T00:10:25Z STARTED
1018936242 NBA Orlando Magic - Atlanta Hawks 2022-12-01T00:10:19Z STARTED
1018936272 NBA Portland Trail Blazers - Los Angeles Clippers 2022-11-30T03:00:00Z NOT_STARTED
1018936236 NBA Dallas Mavericks - Golden State Warriors 2022-11-30T00:30:00Z NOT_STARTED
1018936230 NBA Detroit Pistons - New York Knicks 2022-11-30T00:00:00Z NOT_STARTED
1018936255 NBA Los Angeles Lakers - Indiana Pacers 2022-11-29T03:30:00Z NOT_STARTED
1018936259 NBA Sacramento Kings - Phoenix Suns 2022-11-29T03:00:00Z NOT_STARTED
1018936225 NBA Denver Nuggets - Houston Rockets 2022-11-29T02:00:00Z NOT_STARTED
1018936240 NBA Utah Jazz - Chicago Bulls 2022-11-29T02:00:00Z NOT_STARTED
1018936249 NBA New Orleans Pelicans - Oklahoma City Thunder 2022-11-29T01:00:00Z NOT_STARTED
1018936264 NBA Toronto Raptors - Cleveland Cavaliers 2022-11-29T00:30:00Z NOT_STARTED
1018936266 NBA Boston Celtics - Charlotte Hornets 2022-11-29T00:30:00Z NOT_STARTED
1018936275 NBA Brooklyn Nets - Orlando Magic 2022-11-29T00:30:00Z NOT_STARTED
1018936215 NBA Philadelphia 76ers - Atlanta Hawks 2022-11-29T00:00:00Z NOT_STARTED
1018936237 NBA Washington Wizards - Minnesota Timberwolves 2022-11-29T00:00:00Z NOT_STARTED
1018936297 NBA Milwaukee Bucks - Dallas Mavericks 2022-11-28T01:00:00Z NOT_STARTED
I need to filter by event_start which is formatted like
2022-12-01T02:00:00Z
I am trying to reformat that date so that I can use the 2022-12-01 section of the date to filter the games for the day.
I would like to send data to the HTML for only the games in the database whose start time (event_start) matches the day's date (games_today).
Any help would be appreciated in filtering this data by date and publishing it in the HTML document.
| [
"instead of creating a stream/snapshot of the event_info subCollection, why don't you create a query snapshot/stream of the event_info subCollection which will only return data that satisfies your query filter only::\nI am inferring from this section of your code:\nnba_events = db.collection('xxxx_au').document('basketball_nba').collection('event_info').stream()\nevent_info = [doc.to_dict() for doc in nba_events]\n\nThis only returns all data for event_info sub-collection\nYou can create a query stream/snapshot like this\n# Create an Event for notifying main thread.\ncallback_done = threading.Event()\n\n# Create a callback on_snapshot function to capture changes\ndef on_snapshot(col_snapshot, changes, read_time):\n print(u'Callback received query snapshot.')\n print(u'Current cities in California:')\n for doc in col_snapshot:\n print(f'{doc.id}')\n callback_done.set()\n\ncol_query = db.collection(u'cities').where(u'state', u'==', u'CA')\n\n# Watch the collection query\nquery_watch = col_query.on_snapshot(on_snapshot)\n\nRead more about it here: https://firebase.google.com/docs/firestore/query-data/listen#python_3\n",
"I think you have to use date format of the following to filter the queries try :\ntoday_date = datetime.datetime.now()\ntomorrow_date = today_date + datetime.timedelta(days=1)\ngames_today = today_date.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\ngames_tomorrow = tomorrow_date.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\nprint(games_today) #will give you same date format that is stored in firestore\nprint(games_tomorrow)\n\n\nthen to filter based on the date as you have filtered.\nThen assign it to your collection then present it in your html.\n"
] | [
0,
0
] | [] | [] | [
"datetime",
"django",
"firebase",
"google_cloud_firestore",
"python"
] | stackoverflow_0074636064_datetime_django_firebase_google_cloud_firestore_python.txt |
Q:
Stop tkinter from flashing when switching Canvas PhotoImages?
My tkinter user-interface involves two large Canvas widgets that are used to display photographs. The photographs update periodically, since they are being fed from live cameras. Problem: with some probability, the Canvas flashes white as it switches photographs. This makes for a very irritating display. I cannot include my whole program, as it is complex and includes external hardware, but here is the core update code. Please, can someone suggest an improvement to it to get rid of the flashing?
from PIL import Image,ImageTk
def previewUpdate(bytes, cameraID):
# Update whichever view just got a new image
view = canvasPicker[cameraID]
# gets a View object, subclass of Canvas, below
image = bytesToImage(bytes)
view.display(image)
class View(tk.Canvas):
def __init__(self, parent, width=1000, height=750, index=-1):
super().__init__(parent, width=width, height=height)
self.width = width
self.height = height
. . .
def display(self, image):
self.scale = min(self.width / image.width, self.height / image.height)
image1 = image.resize((int(image.width * self.scale), int(image.height * self.scale)))
self.photoImage = ImageTk.PhotoImage(image=image1)
try:
self.itemconfig(self.imageObject, image=self.photoImage)
except Exception as e:
print("display failure: ", e)
A:
Thank you, Lone Lunatic, for highlighting the problem. This line:
self.photoImage = ImageTk.PhotoImage(image=image1)
with some probability allows the prior PhotoImage to be garbage collected before the following line displays the new one, and that leaves a white interval. Quite simple… if you have your head around the weird inner workings of tkinter and PhotoImages. Here is the corrected, working code:
def display(self, image):
self.scale = min(self.width / image.width, self.height / image.height)
image1 = image.resize((int(image.width * self.scale), int(image.height * self.scale)))
image2 = ImageTk.PhotoImage(image=image1)
try:
self.itemconfig(self.imageObject, image=image2)
self.photoImage = image2
except Exception as e:
print("display failure: ", e)
| Stop tkinter from flashing when switching Canvas PhotoImages? | My tkinter user-interface involves two large Canvas widgets that are used to display photographs. The photographs update periodically, since they are being fed from live cameras. Problem: with some probability, the Canvas flashes white as it switches photographs. This makes for a very irritating display. I cannot include my whole program, as it is complex and includes external hardware, but here is the core update code. Please, can someone suggest an improvement to it to get rid of the flashing?
from PIL import Image,ImageTk
def previewUpdate(bytes, cameraID):
# Update whichever view just got a new image
view = canvasPicker[cameraID]
# gets a View object, subclass of Canvas, below
image = bytesToImage(bytes)
view.display(image)
class View(tk.Canvas):
def __init__(self, parent, width=1000, height=750, index=-1):
super().__init__(parent, width=width, height=height)
self.width = width
self.height = height
. . .
def display(self, image):
self.scale = min(self.width / image.width, self.height / image.height)
image1 = image.resize((int(image.width * self.scale), int(image.height * self.scale)))
self.photoImage = ImageTk.PhotoImage(image=image1)
try:
self.itemconfig(self.imageObject, image=self.photoImage)
except Exception as e:
print("display failure: ", e)
| [
"Thank you, Lone Lunatic, for highlighting the problem. This line:\nself.photoImage = ImageTk.PhotoImage(image=image1)\n\nwith some probability allows the prior PhotoImage to be garbage collected before the following line displays the new one, and that leaves a white interval. Quite simple… if you have your head around the weird inner workings of tkinter and PhotoImages. Here is the corrected, working code:\ndef display(self, image):\n self.scale = min(self.width / image.width, self.height / image.height)\n image1 = image.resize((int(image.width * self.scale), int(image.height * self.scale)))\n image2 = ImageTk.PhotoImage(image=image1)\n try:\n self.itemconfig(self.imageObject, image=image2)\n self.photoImage = image2\n except Exception as e:\n print(\"display failure: \", e)\n\n"
] | [
0
] | [] | [] | [
"canvas",
"python",
"tkinter",
"tkinter_photoimage"
] | stackoverflow_0074615737_canvas_python_tkinter_tkinter_photoimage.txt |
Q:
Can't get code to execute at certain time
I am trying to get some code to execute at a certain time but I can't figure out what the problem is here. Please help?
import datetime
dt=datetime
set_time=dt.time(12,53)
timenow=dt.datetime.now()
time=False
while not time:
if timenow==set_time:
print("yeeehaaa")
time=True
break
else:
print("naaaaa")
A:
First of all, you have to update the time inside the loop, or it will always be comparing the same timenow to set_time; then convert both to just an hour/minute string and compare:
import datetime
dt=datetime
set_time=str(dt.time(14,19))[0:5]
timenow=dt.datetime.now().time()
time=False
while not time:
timenow=str(dt.datetime.now().time())[0:5]
# print(timenow)
if timenow==set_time:
print("yeeehaaa")
time=True
break
else:
print("naaaaa")
A:
Changing your code to something like this should solve your issue:
import time
from datetime import datetime as dt

# build the target time as a full datetime for today, so it can be compared to dt.now()
set_time = dt.now().replace(hour=12, minute=53, second=0, microsecond=0)

# the loop waits for the time condition to be met.
# we use the lower-than comparison in order not to miss the time
# by a fraction of a second.
while dt.now() < set_time:
    time.sleep(0.1)  # 100 ms delay

# reaching here implies the time condition is met!
print("it is time!")
However, there is a much simpler alternative, which consists in computing the time delta between the current time and the target time in order to make one single wait with time.sleep(time_delta_s).
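A minimal sketch of that alternative (assuming the target time is later today and therefore still in the future):
import time
from datetime import datetime as dt

set_time = dt.now().replace(hour=12, minute=53, second=0, microsecond=0)
time_delta_s = (set_time - dt.now()).total_seconds()
if time_delta_s > 0:
    time.sleep(time_delta_s)  # one single wait instead of polling
print("it is time!")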
| Can't get code to execute at certain time | I am trying to get some code to execute at a certain time but I can't figure out what the problem is here. Please help?
import datetime
dt=datetime
set_time=dt.time(12,53)
timenow=dt.datetime.now()
time=False
while not time:
if timenow==set_time:
print("yeeehaaa")
time=True
break
else:
print("naaaaa")
| [
"First of all you have to update the time inside the loop or it will always be comparing the same timenow to set_time, then convert all to just an hour/minute string and compare\nimport datetime\ndt=datetime\nset_time=str(dt.time(14,19))[0:5]\ntimenow=dt.datetime.now().time()\ntime=False\nwhile not time:\n timenow=str(dt.datetime.now().time())[0:5]\n# print(timenow)\n if timenow==set_time:\n print(\"yeeehaaa\")\n time=True\n break\nelse:\n print(\"naaaaa\")\n\n",
"Changing your code to something like this should solve your issue:\nimport datetime.datetime as dt\nset_time=dt.time(12,53)\n\n# the loop waits for the time condition to be met.\n# we use the lower than condition in order not to miss the time\n# by a few fraction of second.\nwhile (dt.now() < set_time):\n time.sleep(0.1) # 100ms delay\n\n# reaching there implies the time condition is met!\nprint(\"it is time!\")\n\nHowever there is a much simpler alternative which would consists in get the time delta between the current time and the target time in order to make one single wait with time.sleep(time_delta_s).\n"
] | [
3,
0
] | [] | [] | [
"jupyter_notebook",
"python"
] | stackoverflow_0074642337_jupyter_notebook_python.txt |
Q:
How do I access nested elements inside a json array in python
I want to iterate over the JSON array below to extract all the referenceValues and the corresponding paymentIDs into one list.
{
"payments": [{
"paymentID": "xxx",
"externalReferences": [{
"referenceKind": "TRADE_ID",
"referenceValue": "xxx"
}, {
"referenceKind": "ID",
"referenceValue": "xxx"
}]
}, {
"paymentID": "xxx",
"externalReferences": [{
"referenceKind": "ID",
"referenceValue": "xxx"
}]
}]
}
The piece below only extracts the values in the case of a single payment and a single externalReferences entry. I want to be able to do it for multiple payments and multiple externalReferences as well.
payment_ids = []
for notification in notifications:
payments= [(payment[0], payment["externalReferences"][0]["referenceValue"])
for payment in notification[0][0]]
if payments[0][1] in invoice_ids:
payment_ids.extend([payment[0] for payment in payments])
A:
Looking at your structure, first you have to iterate through every dictionary in payments, then iterate through their external references. So the below code should extract all reference values and their payment IDs to a dictionary (and append to a list)
refVals = [] # List of all reference values
for payment in data["payments"]:
for reference in payment["externalReferences"]:
refVals.append({ # Dictionary of the current values
"referenceValue": reference["referenceValue"], # The current reference value
"paymentID": payment["paymentID"] # The current payment ID
})
print(refVals)
This code should output a list of dictionaries with all reference values and their payment IDs from the data dictionary (assuming you have read your data into the data variable).
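For instance, a minimal way to read the JSON from the question into data would be (the file name is just an illustration):
import json

with open("payments.json") as file:  # hypothetical file holding the JSON from the question
    data = json.load(file)

With that in place, running the loop above on the question's data should print:
[{'referenceValue': 'xxx', 'paymentID': 'xxx'}, {'referenceValue': 'xxx', 'paymentID': 'xxx'}, {'referenceValue': 'xxx', 'paymentID': 'xxx'}]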
| How do I access nested elements inside a json array in python | I want to iterate over the below json array to extract all the referenceValues and the corresponding paymentIDs into one
{
"payments": [{
"paymentID": "xxx",
"externalReferences": [{
"referenceKind": "TRADE_ID",
"referenceValue": "xxx"
}, {
"referenceKind": "ID",
"referenceValue": "xxx"
}]
}, {
"paymentID": "xxx",
"externalReferences": [{
"referenceKind": "ID",
"referenceValue": "xxx"
}]
}]
}
The piece below only extracts the values in the case of a single payment and a single externalReferences entry. I want to be able to do it for multiple payments and multiple externalReferences as well.
payment_ids = []
for notification in notifications:
payments= [(payment[0], payment["externalReferences"][0]["referenceValue"])
for payment in notification[0][0]]
if payments[0][1] in invoice_ids:
payment_ids.extend([payment[0] for payment in payments])
| [
"Looking at your structure, first you have to iterate through every dictionary in payments, then iterate through their external references. So the below code should extract all reference values and their payment IDs to a dictionary (and append to a list)\nrefVals = [] # List of all reference values\n\nfor payment in data[\"payments\"]:\n for reference in payment[\"externalReferences\"]:\n refVals.append({ # Dictionary of the current values\n \"referenceValue\": reference[\"referenceValue\"], # The current reference value\n \"paymentID\": payment[\"paymentID\"] # The current payment ID\n })\n\nprint(refVals)\n\nThis code should output a list of a dictionary with all reference values and their payment IDs, in the data dictionary (assuming you read your data into the data variable)\n"
] | [
3
] | [] | [] | [
"arrays",
"json_arrayagg",
"python"
] | stackoverflow_0074642364_arrays_json_arrayagg_python.txt |
Q:
Merging and deleting duplicate items within a list of dictionaries
I have a list of dictionaries
[{'elementid': 'BsWfsElement.1.1', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'paramname': 'multiplicity', 'paramvalue': '4'}, {'elementid': 'BsWfsElement.1.2', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'paramname': 'peak_current', 'paramvalue': '-11'}, {'elementid': 'BsWfsElement.1.3', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'paramname': 'cloud_indicator', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.1.4', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'paramname': 'ellipse_major', 'paramvalue': '5.8'}, {'elementid': 'BsWfsElement.2.1', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86350', 'lat': '32.02770', 'paramname': 'multiplicity', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.2.2', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86350', 'lat': '32.02770', 'paramname': 'peak_current', 'paramvalue': '-16'}, {'elementid': 'BsWfsElement.2.3', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86350', 'lat': '32.02770', 'paramname': 'cloud_indicator', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.2.4', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86350', 'lat': '32.02770', 'paramname': 'ellipse_major', 'paramvalue': '1.6'}, {'elementid': 'BsWfsElement.3.1', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86730', 'lat': '32.07100', 'paramname': 'multiplicity', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.3.2', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86730', 'lat': '32.07100', 'paramname': 'peak_current', 'paramvalue': '-35'}, {'elementid': 'BsWfsElement.3.3', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86730', 'lat': '32.07100', 'paramname': 'cloud_indicator', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.3.4'
which I want to group by the id subsection of the elementid key, in a way that appends the paramname and paramvalue values from the dictionaries ending in .2, .3 and .4 to the "first" dictionary that ends in .1, since every other item in the .2, .3 and .4 dictionaries is a duplicate. Once this is done, I'd remove the elementid item and combine the paramname and paramvalue items.
So an example of my desired output in the end would then be
[{'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'multiplicity': '4', 'peak_current': '-11', 'cloud_indicator': '0', 'ellipse_major': '58'} ... ]
My code that creates the list of dictionaries from an XML file
from urllib.request import urlopen
import xml.etree.ElementTree as ET
from xml.etree.ElementTree import fromstring, ElementTree
from itertools import groupby
from operator import itemgetter
file = urlopen('https://opendata.fmi.fi/wfs?service=WFS&version=2.0.0&request=getFeature&storedquery_id=fmi::observations::lightning::simple&timestep=1&starttime=2022-07-11T20:00:00Z&endtime=2022-07-11T20:05:00Z')
data = file.read()
tree = ElementTree(fromstring(data))
root = tree.getroot()
paramnames = []
paramvalues = []
lon = []
lat = []
obstime = []
ids = []
ET.register_namespace('wfs', "http://www.opengis.net/wfs/2.0")
ET.register_namespace('gml', "http://www.opengis.net/gml/3.2")
ET.register_namespace('BsWfs', "http://xml.fmi.fi/schema/wfs/2.0")
ET.register_namespace('xsi', "http://www.w3.org/2001/XMLSchema-instance")
for pn in root.findall('.//{http://xml.fmi.fi/schema/wfs/2.0}ParameterName'):
pnstr = (pn.text.replace('', ''))
paramnames.append(pnstr)
for pv in root.findall('.//{http://xml.fmi.fi/schema/wfs/2.0}ParameterValue'):
pvstr = (pv.text.replace('', ''))
paramvalues.append(pvstr)
for ps in root.findall('.//{http://www.opengis.net/gml/3.2}pos'):
psstr = (ps.text.replace('', ''))
lons = psstr.split(None, 1)
del lons[-1]
lats = psstr.split(None, 2)
del lats [-0]
lon.append(lons[0])
lat.append(lats[0])
for tm in root.findall('.//{http://xml.fmi.fi/schema/wfs/2.0}Time'):
tmstr = (tm.text.replace('Z', ''))
obstime.append(tmstr)
for i in root.findall('.//{http://xml.fmi.fi/schema/wfs/2.0}BsWfsElement'):
idstr = i.get("{http://www.opengis.net/gml/3.2}id")
ids.append(idstr)
zippedlist = list(zip(ids, obstime, lon, lat, paramnames, paramvalues))
dictnames = ('elementid', 'obstime', 'lon', 'lat', 'paramname', 'paramvalue')
list_of_dicts = [dict(zip(dictnames,l)) for l in zippedlist]
print(list_of_dicts)
I tried sorting them by the lon item, but found out that it actually doesn't produce the result I wanted
list_of_dicts = sorted(list_of_dicts,
key = itemgetter('lon'))
for key, value in groupby(list_of_dicts,
key = itemgetter('lon')):
for k in value:
print(k)
print(list_of_dicts)
Output:
{'elementid': 'BsWfsElement.250.1', 'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'paramname': 'multiplicity', 'paramvalue': '1'}
{'elementid': 'BsWfsElement.250.2', 'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'paramname': 'peak_current', 'paramvalue': '21'}
{'elementid': 'BsWfsElement.250.3', 'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'paramname': 'cloud_indicator', 'paramvalue': '0'}
{'elementid': 'BsWfsElement.250.4', 'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'paramname': 'ellipse_major', 'paramvalue': '2.8'}
{'elementid': 'BsWfsElement.240.1', 'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'paramname': 'multiplicity', 'paramvalue': '1'}
{'elementid': 'BsWfsElement.240.2', 'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'paramname': 'peak_current', 'paramvalue': '109'}
{'elementid': 'BsWfsElement.240.3', 'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'paramname': 'cloud_indicator', 'paramvalue': '0'}
{'elementid': 'BsWfsElement.240.4', 'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'paramname': 'ellipse_major', 'paramvalue': '1.6'}
...
A:
from collections import defaultdict
combined_elements = defaultdict(dict)
for element in elements:
# get values
elementid = element['elementid'].rsplit('.',1)[0]
paramname = element['paramname']
paramvalue = element['paramvalue']
# remove keys
for key in ['elementid','paramname','paramvalue']:
element.pop(key)
# add to combined dict
element.update({paramname:paramvalue})
combined_elements[elementid].update(element)
# print elements
for elem in combined_elements.values():
print(elem)
I used your first list as elements. The combined_elements still has the elementids as keys (without the last .x part) so you can refer to them if you want.
Outputs:
{'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'multiplicity': '1', 'peak_current': '21', 'cloud_indicator': '0', 'ellipse_major': '2.8'}
{'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'multiplicity': '1', 'peak_current': '109', 'cloud_indicator': '0', 'ellipse_major': '1.6'}
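If you only need a plain list shaped like the desired output in the question (i.e. without the element ids), this should be enough:
combined_list = list(combined_elements.values())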
A:
import re
import json
tmp={}
for x in data:
x['elementid']=re.sub(r'\.[0-9]+$', '', x['elementid'])
idx = json.dumps({k: v for k,v in sorted(x.items()) if k not in ['paramname', 'paramvalue']})
try:
tmp[idx].append({x['paramname']: x['paramvalue']})
except KeyError:
tmp[idx]=[{x['paramname']: x['paramvalue']}]
output=[{**json.loads(k), **{k:v for x in list(tmp.values())[0] for k,v in x.items()}} for k,v in tmp.items()]
returns:
[{'elementid': 'BsWfsElement',
'lat': '32.05570',
'lon': '59.86400',
'obstime': '2022-07-11T20:00:05',
'multiplicity': '4',
'peak_current': '-11',
'cloud_indicator': '0',
'ellipse_major': '5.8'},
{'elementid': 'BsWfsElement',
'lat': '32.02770',
'lon': '59.86350',
'obstime': '2022-07-11T20:00:05',
'multiplicity': '4',
'peak_current': '-11',
'cloud_indicator': '0',
'ellipse_major': '5.8'},
{'elementid': 'BsWfsElement',
'lat': '32.07100',
'lon': '59.86730',
'obstime': '2022-07-11T20:00:05',
'multiplicity': '4',
'peak_current': '-11',
'cloud_indicator': '0',
'ellipse_major': '5.8'}]
| Merging and deleting duplicate items within a list of dictionaries | I have a list of dictionaries
[{'elementid': 'BsWfsElement.1.1', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'paramname': 'multiplicity', 'paramvalue': '4'}, {'elementid': 'BsWfsElement.1.2', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'paramname': 'peak_current', 'paramvalue': '-11'}, {'elementid': 'BsWfsElement.1.3', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'paramname': 'cloud_indicator', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.1.4', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'paramname': 'ellipse_major', 'paramvalue': '5.8'}, {'elementid': 'BsWfsElement.2.1', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86350', 'lat': '32.02770', 'paramname': 'multiplicity', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.2.2', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86350', 'lat': '32.02770', 'paramname': 'peak_current', 'paramvalue': '-16'}, {'elementid': 'BsWfsElement.2.3', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86350', 'lat': '32.02770', 'paramname': 'cloud_indicator', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.2.4', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86350', 'lat': '32.02770', 'paramname': 'ellipse_major', 'paramvalue': '1.6'}, {'elementid': 'BsWfsElement.3.1', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86730', 'lat': '32.07100', 'paramname': 'multiplicity', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.3.2', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86730', 'lat': '32.07100', 'paramname': 'peak_current', 'paramvalue': '-35'}, {'elementid': 'BsWfsElement.3.3', 'obstime': '2022-07-11T20:00:05', 'lon': '59.86730', 'lat': '32.07100', 'paramname': 'cloud_indicator', 'paramvalue': '0'}, {'elementid': 'BsWfsElement.3.4'
which I want to group by the id subsection in the key elementid, in a way that appends the paramname and paramvalue values from dictionaries that have the .2, .3 and .4 to the "first" dictionary that has the .1 ending, since every other item in the .2, .3 and .4 dictionaries are duplicates. When this would be done, I'd remove the elementid item and combine the paramname and paramvalue items.
So an example of my desired output in the end would then be
[{'obstime': '2022-07-11T20:00:05', 'lon': '59.86400', 'lat': '32.05570', 'multiplicity': '4', 'peak_current': '-11', 'cloud_indicator': '0', 'ellipse_major': '58'} ... ]
My code that creates the list of dictionaries from an XML file
from urllib.request import urlopen
import xml.etree.ElementTree as ET
from xml.etree.ElementTree import fromstring, ElementTree
from itertools import groupby
from operator import itemgetter
file = urlopen('https://opendata.fmi.fi/wfs?service=WFS&version=2.0.0&request=getFeature&storedquery_id=fmi::observations::lightning::simple&timestep=1&starttime=2022-07-11T20:00:00Z&endtime=2022-07-11T20:05:00Z')
data = file.read()
tree = ElementTree(fromstring(data))
root = tree.getroot()
paramnames = []
paramvalues = []
lon = []
lat = []
obstime = []
ids = []
ET.register_namespace('wfs', "http://www.opengis.net/wfs/2.0")
ET.register_namespace('gml', "http://www.opengis.net/gml/3.2")
ET.register_namespace('BsWfs', "http://xml.fmi.fi/schema/wfs/2.0")
ET.register_namespace('xsi', "http://www.w3.org/2001/XMLSchema-instance")
for pn in root.findall('.//{http://xml.fmi.fi/schema/wfs/2.0}ParameterName'):
pnstr = (pn.text.replace('', ''))
paramnames.append(pnstr)
for pv in root.findall('.//{http://xml.fmi.fi/schema/wfs/2.0}ParameterValue'):
pvstr = (pv.text.replace('', ''))
paramvalues.append(pvstr)
for ps in root.findall('.//{http://www.opengis.net/gml/3.2}pos'):
psstr = (ps.text.replace('', ''))
lons = psstr.split(None, 1)
del lons[-1]
lats = psstr.split(None, 2)
del lats [-0]
lon.append(lons[0])
lat.append(lats[0])
for tm in root.findall('.//{http://xml.fmi.fi/schema/wfs/2.0}Time'):
tmstr = (tm.text.replace('Z', ''))
obstime.append(tmstr)
for i in root.findall('.//{http://xml.fmi.fi/schema/wfs/2.0}BsWfsElement'):
idstr = i.get("{http://www.opengis.net/gml/3.2}id")
ids.append(idstr)
zippedlist = list(zip(ids, obstime, lon, lat, paramnames, paramvalues))
dictnames = ('elementid', 'obstime', 'lon', 'lat', 'paramname', 'paramvalue')
list_of_dicts = [dict(zip(dictnames,l)) for l in zippedlist]
print(list_of_dicts)
I tried sorting them by the lon item, but found out that it actually doesn't produce the result I wanted
list_of_dicts = sorted(list_of_dicts,
key = itemgetter('lon'))
for key, value in groupby(list_of_dicts,
key = itemgetter('lon')):
for k in value:
print(k)
print(list_of_dicts)
Output:
{'elementid': 'BsWfsElement.250.1', 'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'paramname': 'multiplicity', 'paramvalue': '1'}
{'elementid': 'BsWfsElement.250.2', 'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'paramname': 'peak_current', 'paramvalue': '21'}
{'elementid': 'BsWfsElement.250.3', 'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'paramname': 'cloud_indicator', 'paramvalue': '0'}
{'elementid': 'BsWfsElement.250.4', 'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'paramname': 'ellipse_major', 'paramvalue': '2.8'}
{'elementid': 'BsWfsElement.240.1', 'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'paramname': 'multiplicity', 'paramvalue': '1'}
{'elementid': 'BsWfsElement.240.2', 'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'paramname': 'peak_current', 'paramvalue': '109'}
{'elementid': 'BsWfsElement.240.3', 'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'paramname': 'cloud_indicator', 'paramvalue': '0'}
{'elementid': 'BsWfsElement.240.4', 'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'paramname': 'ellipse_major', 'paramvalue': '1.6'}
...
| [
"from collections import defaultdict\n\ncombined_elements = defaultdict(dict)\nfor element in elements:\n # get values\n elementid = element['elementid'].rsplit('.',1)[0]\n paramname = element['paramname']\n paramvalue = element['paramvalue']\n # remove keys\n for key in ['elementid','paramname','paramvalue']:\n element.pop(key)\n # add to combined dict\n element.update({paramname:paramvalue})\n combined_elements[elementid].update(element)\n \n# print elements\nfor elem in combined_elements.values():\n print(elem)\n\nI used your first list as elements. The combined_elements still has the elementids as keys (without the last .x part) so you can refer to them if you want.\nOutputs:\n{'obstime': '2022-07-11T20:02:42', 'lon': '55.16820', 'lat': '30.88440', 'multiplicity': '1', 'peak_current': '21', 'cloud_indicator': '0', 'ellipse_major': '2.8'}\n{'obstime': '2022-07-11T20:02:40', 'lon': '55.67710', 'lat': '31.12120', 'multiplicity': '1', 'peak_current': '109', 'cloud_indicator': '0', 'ellipse_major': '1.6'}\n\n",
"import re\nimport json\n\ntmp={}\nfor x in data:\n x['elementid']=re.sub(r'\\.[0-9]+$', '', x['elementid'])\n idx = json.dumps({k: v for k,v in sorted(x.items()) if k not in ['paramname', 'paramvalue']})\n try:\n tmp[idx].append({x['paramname']: x['paramvalue']})\n except KeyError:\n tmp[idx]=[{x['paramname']: x['paramvalue']}]\n\nouput=[{**json.loads(k), **{k:v for x in list(tmp.values())[0] for k,v in x.items()}} for k,v in tmp.items()]\n\nreturns:\n[{'elementid': 'BsWfsElement',\n 'lat': '32.05570',\n 'lon': '59.86400',\n 'obstime': '2022-07-11T20:00:05',\n 'multiplicity': '4',\n 'peak_current': '-11',\n 'cloud_indicator': '0',\n 'ellipse_major': '5.8'},\n {'elementid': 'BsWfsElement',\n 'lat': '32.02770',\n 'lon': '59.86350',\n 'obstime': '2022-07-11T20:00:05',\n 'multiplicity': '4',\n 'peak_current': '-11',\n 'cloud_indicator': '0',\n 'ellipse_major': '5.8'},\n {'elementid': 'BsWfsElement',\n 'lat': '32.07100',\n 'lon': '59.86730',\n 'obstime': '2022-07-11T20:00:05',\n 'multiplicity': '4',\n 'peak_current': '-11',\n 'cloud_indicator': '0',\n 'ellipse_major': '5.8'}]\n\n"
] | [
0,
0
] | [] | [] | [
"dictionary",
"python",
"xml"
] | stackoverflow_0074642259_dictionary_python_xml.txt |
Q:
How to grab substring which meets certain requirements in python?
I have a python string that consists of "0", "1" and "x". I would like to know the bit locations of x in the string, e.g. string1= '0100xx11xxx0x1' and the output is [(4,5),(8,10),(12)]. x appears in bit locations 4 to 5, locations 8 to 10, and location 12. The following script is written in Python; however, the results are not really as expected.
string1= '0100xx11xxx0x1'
temparray=[]
trackarray=[]
i=0
for x in string1:
if x=="x":
#print('i: ',i)
temparray.append(i)
i=i+1
else:
if len(temparray)>1:
bitsRange= (temparray[0],temparray[-1])
trackarray.append(bitsRange)
temparray=[]
i=i+1
print(trackarray)
A:
Here's the solution using regex:
import re
string1= '0100xx11xxx0x1'
matches = []
for match in re.finditer(r'x+',string1): # iterates over re.Match object
if match.start() == match.end()-1: matches.append(tuple([match.start()])) # if you want tuple else remove tuple() and []
else: matches.append((match.start(), match.end()-1))
print(matches)
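For the sample string in the question, this should print:
[(4, 5), (8, 10), (12,)]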
Here's a fix to your code:
string1= '0100xx11xxx0x1'
temparray=[]
trackarray=[]
i = 0
for x in string1:
if x == "x":
temparray.append(i)
elif len(temparray) > 1:
trackarray.append((temparray[0],temparray[-1]))
temparray = []
elif len(temparray) == 1:
trackarray.append(tuple(temparray)) # if you want in tuple else remove tuple()
temparray = []
i += 1
print(trackarray)
What did I fix?
You were just checking if the len is greater than 1. However, when there is one value i.e 12 the length is one. So, I simply added support for that.
Docs:
re
A:
Edited:
string1= '0100xx11xxx0x1'
temparray=[]
trackarray=[]
# ------------------ modified ------------------
for i, x in enumerate(string1):
if x == 'x':
temparray.append(i)
continue
if len(temparray) > 1:
trackarray.append((temparray[0], temparray[-1]))
temparray = []
elif len(temparray) == 1:
trackarray.append((temparray[0],))
temparray = []
if len(temparray) == 1:
trackarray.append((temparray[0],))
elif len(temparray) > 1:
trackarray.append((temparray[0], temparray[-1]))
# ----------------------------------------------
print(trackarray)
# [(4, 5), (8, 10), (12,)]
A:
The required output appears to be a list of tuples (except that the last value isn't a valid representation of a tuple)
You just need to enumerate the string and construct the tuples as you check the characters in the string.
Something like this:
string1= '0100xx11xxx0x1'
result = []
t = None
for i, c in enumerate(string1):
if c == 'x':
t = (t[0], i) if t else (i,)
else:
if t:
t = result.append(t)
if t: # final check needed in case last character in string is 'x'
result.append(t)
print(result)
Output:
[(4, 5), (8, 10), (12,)]
| How to grab substring which meets certain requirements in python? | I have a python string that consists of "0", "1" and "x". I would to know what is the bit location of x in the string. eg. string1= '0100xx11xxx0x1' and the output is [(4,5),(8,10),(12)]. x appears in bit locations 4 to 5, locations 8 to 10, and location 12. The following scripts are written in python; however, the results are not really as expected.
string1= '0100xx11xxx0x1'
temparray=[]
trackarray=[]
i=0
for x in string1:
if x=="x":
#print('i: ',i)
temparray.append(i)
i=i+1
else:
if len(temparray)>1:
bitsRange= (temparray[0],temparray[-1])
trackarray.append(bitsRange)
temparray=[]
i=i+1
print(trackarray)
| [
"Here's the solution using regex:\nimport re\nstring1= '0100xx11xxx0x1'\nmatches = []\nfor match in re.finditer(r'x+',string1): # iterates over re.Match object\n if match.start() == match.end()-1: matches.append(tuple([match.start()])) # if you want tuple else remove tuple() and []\n else: matches.append((match.start(), match.end()-1))\nprint(matches)\n\nHere's a fix to your code:\nstring1= '0100xx11xxx0x1'\ntemparray=[]\ntrackarray=[]\ni = 0\nfor x in string1:\n if x == \"x\":\n temparray.append(i) \n elif len(temparray) > 1:\n trackarray.append((temparray[0],temparray[-1]))\n temparray = []\n elif len(temparray) == 1:\n trackarray.append(tuple(temparray)) # if you want in tuple else remove tuple()\n temparray = []\n i += 1 \n\nprint(trackarray)\n\nWhat did I fix?\nYou were just checking if the len is greater than 1. However, when there is one value i.e 12 the length is one. So, I simply added support for that.\nDocs:\nre\n",
"Edited:\nstring1= '0100xx11xxx0x1'\n\ntemparray=[]\ntrackarray=[]\n\n# ------------------ modified ------------------\nfor i, x in enumerate(string1):\n if x == 'x':\n temparray.append(i)\n continue\n \n if len(temparray) > 1:\n trackarray.append((temparray[0], temparray[-1]))\n temparray = []\n elif len(temparray) == 1:\n trackarray.append((temparray[0],))\n temparray = []\n\nif len(temparray) == 1:\n trackarray.append((temparray[0],))\nelif len(temparray) > 1:\n trackarray.append((temparray[0], temparray[-1]))\n# ----------------------------------------------\n\nprint(trackarray)\n# [(4, 5), (8, 10), (12,)]\n\n",
"The required output appears to be a list of tuples (except that the last value isn't a valid representation of a tuple)\nYou just need to enumerate the string and construct the tuples as you check the characters in the string.\nSomething like this:\nstring1= '0100xx11xxx0x1'\n\nresult = []\nt = None\n\nfor i, c in enumerate(string1):\n if c == 'x':\n t = (t[0], i) if t else (i,)\n else:\n if t:\n t = result.append(t)\n\nif t: # final check needed in case last character in string is 'x'\n result.append(t)\n \nprint(result)\n\nOutput:\n[(4, 5), (8, 10), (12,)]\n\n"
] | [
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074638079_python.txt |
Q:
How can I trigger an Airflow DAG through a REST API when the DAG is hosted in Google Cloud Composer?
I want to trigger the DAG externally.
I was unable to find a solution; I'm new to programming.
A:
You can trigger a DAG externally in several ways:
Solution 1:
Trigger a DAG with the gcloud CLI and the gcloud composer command:
gcloud composer environments run ENVIRONMENT_NAME \
--location LOCATION \
dags trigger -- DAG_ID
Replace :
ENVIRONMENT_NAME with the name of the environment.
LOCATION with the region where the environment is located.
DAG_ID with the name of the DAG.
Solution 2:
Trigger a DAG with a Cloud Function:
from google.auth.transport.requests import Request
from google.oauth2 import id_token
import requests
IAM_SCOPE = 'https://www.googleapis.com/auth/iam'
OAUTH_TOKEN_URI = 'https://www.googleapis.com/oauth2/v4/token'
# If you are using the stable API, set this value to False
# For more info about Airflow APIs see https://cloud.google.com/composer/docs/access-airflow-api
USE_EXPERIMENTAL_API = True
def trigger_dag(data, context=None):
"""Makes a POST request to the Composer DAG Trigger API
When called via Google Cloud Functions (GCF),
data and context are Background function parameters.
For more info, refer to
https://cloud.google.com/functions/docs/writing/background#functions_background_parameters-python
To call this function from a Python script, omit the ``context`` argument
and pass in a non-null value for the ``data`` argument.
This function is currently only compatible with Composer v1 environments.
"""
# Fill in with your Composer info here
# Navigate to your webserver's login page and get this from the URL
# Or use the script found at
# https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/composer/rest/get_client_id.py
client_id = 'YOUR-CLIENT-ID'
# This should be part of your webserver's URL:
# {tenant-project-id}.appspot.com
webserver_id = 'YOUR-TENANT-PROJECT'
# The name of the DAG you wish to trigger
dag_name = 'composer_sample_trigger_response_dag'
if USE_EXPERIMENTAL_API:
endpoint = f'api/experimental/dags/{dag_name}/dag_runs'
json_data = {'conf': data, 'replace_microseconds': 'false'}
else:
endpoint = f'api/v1/dags/{dag_name}/dagRuns'
json_data = {'conf': data}
webserver_url = (
'https://'
+ webserver_id
+ '.appspot.com/'
+ endpoint
)
# Make a POST request to IAP which then Triggers the DAG
make_iap_request(
webserver_url, client_id, method='POST', json=json_data)
# This code is copied from
# https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/iap/make_iap_request.py
# START COPIED IAP CODE
def make_iap_request(url, client_id, method='GET', **kwargs):
"""Makes a request to an application protected by Identity-Aware Proxy.
Args:
url: The Identity-Aware Proxy-protected URL to fetch.
client_id: The client ID used by Identity-Aware Proxy.
method: The request method to use
('GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE')
**kwargs: Any of the parameters defined for the request function:
https://github.com/requests/requests/blob/master/requests/api.py
If no timeout is provided, it is set to 90 by default.
Returns:
The page body, or raises an exception if the page couldn't be retrieved.
"""
# Set the default timeout, if missing
if 'timeout' not in kwargs:
kwargs['timeout'] = 90
# Obtain an OpenID Connect (OIDC) token from metadata server or using service
# account.
google_open_id_connect_token = id_token.fetch_id_token(Request(), client_id)
# Fetch the Identity-Aware Proxy-protected URL, including an
# Authorization header containing "Bearer " followed by a
# Google-issued OpenID Connect token for the service account.
resp = requests.request(
method, url,
headers={'Authorization': 'Bearer {}'.format(
google_open_id_connect_token)}, **kwargs)
if resp.status_code == 403:
raise Exception('Service account does not have permission to '
'access the IAP-protected application.')
elif resp.status_code != 200:
raise Exception(
'Bad response from application: {!r} / {!r} / {!r}'.format(
resp.status_code, resp.headers, resp.text))
else:
return resp.text
# END COPIED IAP CODE
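For Composer 2 environments (Airflow 2), you can also call the stable Airflow REST API directly with an authorized session instead of going through IAP. This is only a minimal sketch under that assumption; the web server URL and DAG id below are placeholders you would replace with your environment's values:
import google.auth
from google.auth.transport.requests import AuthorizedSession

AUTH_SCOPE = 'https://www.googleapis.com/auth/cloud-platform'

# Application-default credentials (service account on GCP, or gcloud user creds locally)
credentials, _ = google.auth.default(scopes=[AUTH_SCOPE])
session = AuthorizedSession(credentials)

# Placeholders: copy the Airflow web server URL from the Composer environment details page
web_server_url = 'https://example-dot-us-central1.composer.googleusercontent.com'
dag_id = 'composer_sample_trigger_response_dag'

response = session.post(
    f'{web_server_url}/api/v1/dags/{dag_id}/dagRuns',
    json={'conf': {}},  # optional run configuration passed to the DAG
)
response.raise_for_status()
print(response.json())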
| How can I trigger a Airflow DAG through a REST API , DAG is hosted in google cloud composer? | i want to trigger the dag externally
I was unable to find the solution , i'm new to programming
| [
"You can trigger a DAG externally in a several ways :\nSolution 1 :\ntrigger a DAG with gcloud cli and gcloud composer command :\n gcloud composer environments run ENVIRONMENT_NAME \\\n --location LOCATION \\\n dags trigger -- DAG_ID\n\nReplace :\n\nENVIRONMENT_NAME with the name of the environment.\nLOCATION with the region where the environment is located.\nDAG_ID with the name of the DAG.\n\nSolution 2 :\ntrigger a DAG with a Cloud function\n\nfrom google.auth.transport.requests import Request\nfrom google.oauth2 import id_token\nimport requests\n\n\nIAM_SCOPE = 'https://www.googleapis.com/auth/iam'\nOAUTH_TOKEN_URI = 'https://www.googleapis.com/oauth2/v4/token'\n# If you are using the stable API, set this value to False\n# For more info about Airflow APIs see https://cloud.google.com/composer/docs/access-airflow-api\nUSE_EXPERIMENTAL_API = True\n\n\ndef trigger_dag(data, context=None):\n \"\"\"Makes a POST request to the Composer DAG Trigger API\n\n When called via Google Cloud Functions (GCF),\n data and context are Background function parameters.\n\n For more info, refer to\n https://cloud.google.com/functions/docs/writing/background#functions_background_parameters-python\n\n To call this function from a Python script, omit the ``context`` argument\n and pass in a non-null value for the ``data`` argument.\n\n This function is currently only compatible with Composer v1 environments.\n \"\"\"\n\n # Fill in with your Composer info here\n # Navigate to your webserver's login page and get this from the URL\n # Or use the script found at\n # https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/composer/rest/get_client_id.py\n client_id = 'YOUR-CLIENT-ID'\n # This should be part of your webserver's URL:\n # {tenant-project-id}.appspot.com\n webserver_id = 'YOUR-TENANT-PROJECT'\n # The name of the DAG you wish to trigger\n dag_name = 'composer_sample_trigger_response_dag'\n\n if USE_EXPERIMENTAL_API:\n endpoint = f'api/experimental/dags/{dag_name}/dag_runs'\n json_data = {'conf': data, 'replace_microseconds': 'false'}\n else:\n endpoint = f'api/v1/dags/{dag_name}/dagRuns'\n json_data = {'conf': data}\n webserver_url = (\n 'https://'\n + webserver_id\n + '.appspot.com/'\n + endpoint\n )\n # Make a POST request to IAP which then Triggers the DAG\n make_iap_request(\n webserver_url, client_id, method='POST', json=json_data)\n\n\n# This code is copied from\n# https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/iap/make_iap_request.py\n# START COPIED IAP CODE\ndef make_iap_request(url, client_id, method='GET', **kwargs):\n \"\"\"Makes a request to an application protected by Identity-Aware Proxy.\n Args:\n url: The Identity-Aware Proxy-protected URL to fetch.\n client_id: The client ID used by Identity-Aware Proxy.\n method: The request method to use\n ('GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE')\n **kwargs: Any of the parameters defined for the request function:\n https://github.com/requests/requests/blob/master/requests/api.py\n If no timeout is provided, it is set to 90 by default.\n Returns:\n The page body, or raises an exception if the page couldn't be retrieved.\n \"\"\"\n # Set the default timeout, if missing\n if 'timeout' not in kwargs:\n kwargs['timeout'] = 90\n\n # Obtain an OpenID Connect (OIDC) token from metadata server or using service\n # account.\n google_open_id_connect_token = id_token.fetch_id_token(Request(), client_id)\n\n # Fetch the Identity-Aware Proxy-protected URL, including an\n # Authorization header containing 
\"Bearer \" followed by a\n # Google-issued OpenID Connect token for the service account.\n resp = requests.request(\n method, url,\n headers={'Authorization': 'Bearer {}'.format(\n google_open_id_connect_token)}, **kwargs)\n if resp.status_code == 403:\n raise Exception('Service account does not have permission to '\n 'access the IAP-protected application.')\n elif resp.status_code != 200:\n raise Exception(\n 'Bad response from application: {!r} / {!r} / {!r}'.format(\n resp.status_code, resp.headers, resp.text))\n else:\n return resp.text\n# END COPIED IAP CODE\n\n"
] | [
0
] | [] | [] | [
"airflow",
"google_cloud_composer",
"python"
] | stackoverflow_0074642344_airflow_google_cloud_composer_python.txt |
Q:
Can you override a function from a class outside of a class in Python
Can you override a function from a class, like:
class A:
def func():
print("Out of A")
classA = A
# Is something like this possible
def classA.func():
print("Overrided!")
Wanted Output:
Overrided
I googled "python override function", "python override function from class" and so on, but couldn't find anything that fits. I only found how to override a parent class's function.
A:
You most likely shouldn't do this. If you want to change a single part of some class, make a new class that inherits from it and reimplement the parts you want changed:
class A:
@staticmethod
def func():
print("Out of A")
classA = A
class B(A):
@staticmethod
def func():
print("Overridden!")
A.func()
B.func()
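That said, Python does technically allow rebinding a class attribute from outside the class (monkey-patching), which is essentially what the question sketches. A minimal sketch for completeness; prefer the subclass approach above in real code:
class A:
    def func():              # no self on purpose: it is called on the class itself
        print("Out of A")

def new_func():
    print("Overridden!")

A.func = new_func            # rebind the attribute on the class, from outside the class
A.func()                     # prints: Overridden!

This works because methods are just attributes of the class object, but it changes the behaviour for every user of A, which is why subclassing is usually the safer choice.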
| Can you override a function from a class outside of a class in Python | Can you override a function from a class, like:
class A:
def func():
print("Out of A")
classA = A
# Is something like this possible
def classA.func():
print("Overrided!")
Wanted Output:
Overrided
I googled "python override function", "python override function from class" and so on but couldnt find anything that fits. I found just how to override the parent function.
| [
"You most likely shouldn't do this. If you want to change a single part of some class, make a new class that inherits from it and reimplement the parts you want changed:\nclass A:\n @staticmethod\n def func():\n print(\"Out of A\")\n\nclassA = A\n\nclass B(A):\n @staticmethod\n def func():\n print(\"Overridden!\")\n\nA.func()\nB.func()\n\n"
] | [
0
] | [] | [] | [
"class",
"function",
"overriding",
"python"
] | stackoverflow_0074642654_class_function_overriding_python.txt |
Q:
Find the first two elements that are out of order and swap them
Hoping someone can help me with my code; I'm new to Python and software development. I'm trying to find the first two elements that are out of order and swap them.
arr = [5, 22, 29, 39, 19, 51, 78, 96, 84]
i = 0
while (i < arr.len() - 1) and (arr[i] < arr[i+1]):
i += i
print(i)
arr[i] = arr[i+1]
arr[i+1] = arr[i]
A:
Try this:
arr = [1, 2, 8, 4, 5, 6]
for i in range(len(arr)-1):
if arr[i] > arr[i+1]:
arr[i], arr[i+1] = arr[i+1], arr[i]
break
print(arr)
A:
arr = [5,22,29,39,19,51,78,96,84]
for i in range(len(arr)-1):  # -1 keeps arr[i+1] in range when the list is already sorted
if arr[i]>arr[i+1]:
temp = arr[i]
arr[i] = arr[i+1]
arr[i+1] = temp
break
print(arr)
# [5, 22, 29, 19, 39, 51, 78, 96, 84]
| Find the first two elements that are out of order and swap them | hoping someone can help me with my code. I'm new to python and software development. I'm trying to find the first two elements that are out of order and swap them.
arr = [5, 22, 29, 39, 19, 51, 78, 96, 84]
i = 0
while (i < arr.len() - 1) and (arr[i] < arr[i+1]):
i += i
print(i)
arr[i] = arr[i+1]
arr[i+1] = arr[i]
| [
"Try this:\narr = [1, 2, 8, 4, 5, 6] \nfor i in range(len(arr)-1):\n if arr[i] > arr[i+1]:\n arr[i], arr[i+1] = arr[i+1], arr[i]\n break\nprint(arr)\n\n",
"arr = [5,22,29,39,19,51,78,96,84]\n\nfor i in range(len(arr)):\n if arr[i]>arr[i+1]:\n temp = arr[i]\n arr[i] = arr[i+1]\n arr[i+1] = temp\n break\n\nprint(arr)\n# [5, 22, 29, 19, 39, 51, 78, 96, 84]\n\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074642575_python.txt |
Q:
How to set up Kafka as a dependency when using Delta Lake in PySpark?
This is the code to set up Delta Lake as part of a regular Python script, according to their documentation:
import pyspark
from delta import *
builder = pyspark.sql.SparkSession.builder.appName("MyApp") \
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
spark = configure_spark_with_delta_pip(builder).getOrCreate()
The official docs for Kafka integration in Spark show how to set up Kafka when using a spark-submit command (through the --packages parameter), but not in Python.
Digging around, turns out that you can also include this parameter when building the Spark session:
import pyspark
from delta import *
packages = [
"org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.1",
]
builder = pyspark.sql.SparkSession.builder.appName("MyApp") \
.config("spark.jars.packages", ",".join(packages))
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
spark = configure_spark_with_delta_pip(builder).getOrCreate()
However, when I try to stream to Kafka using the spark session created above I still get the following error:
Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".
I'm using Delta 2.1.0 and PySpark 3.3.0.
A:
Turns out that Delta overwrites any packages provided in spark.jars.packages if you're using configure_spark_with_delta_pip (source). The proper way is to make use of the extra_packages parameter when setting up your Spark Session:
import pyspark
from delta import *
packages = [
"org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.1",
]
builder = pyspark.sql.SparkSession.builder.appName("MyApp") \
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
spark = configure_spark_with_delta_pip(builder, extra_packages=packages).getOrCreate()
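With the session built this way, a quick sanity check that the Kafka source now resolves is a small structured-streaming read; the bootstrap server and topic below are placeholders:
df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "my-topic")                       # placeholder topic
    .load()
)

# Kafka rows expose key/value as binary columns, so cast them for readability
query = (
    df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .writeStream
    .format("console")
    .start()
)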
| How to set up Kafka as a dependency when using Delta Lake in PySpark? | This is the code to set up Delta Lake as part of a regular Python script, according to their documentation:
import pyspark
from delta import *
builder = pyspark.sql.SparkSession.builder.appName("MyApp") \
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
spark = configure_spark_with_delta_pip(builder).getOrCreate()
The official docs for Kafka integration in Spark show how to set up Kafka when using a spark-submit command (through the --packages parameter), but not in Python.
Digging around, turns out that you can also include this parameter when building the Spark session:
import pyspark
from delta import *
packages = [
"org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.1",
]
builder = pyspark.sql.SparkSession.builder.appName("MyApp") \
.config("spark.jars.packages", ",".join(packages))
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
spark = configure_spark_with_delta_pip(builder).getOrCreate()
However, when I try to stream to Kafka using the spark session created above I still get the following error:
Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".
I'm using Delta 2.1.0 and PySpark 3.3.0.
| [
"Turns out that Delta overwrites any packages provided in spark.jars.packages if you're using configure_spark_with_delta_pip (source). The proper way is to make use of the extra_packages parameter when setting up your Spark Session:\nimport pyspark\nfrom delta import *\n\npackages = [\n \"org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.1\",\n]\n\nbuilder = pyspark.sql.SparkSession.builder.appName(\"MyApp\") \\\n .config(\"spark.sql.extensions\", \"io.delta.sql.DeltaSparkSessionExtension\") \\\n .config(\"spark.sql.catalog.spark_catalog\", \"org.apache.spark.sql.delta.catalog.DeltaCatalog\")\n\nspark = configure_spark_with_delta_pip(builder, extra_packages=packages).getOrCreate()\n\n"
] | [
0
] | [] | [] | [
"apache_kafka",
"databricks",
"delta_lake",
"pyspark",
"python"
] | stackoverflow_0074642812_apache_kafka_databricks_delta_lake_pyspark_python.txt |
Q:
Python: Gaussian filtering an N-channel image along only spatial dimensions?
I have an HxWxN image arr that I want to Gaussian blur.
scipy.ndimage.gaussian_filter seems to treat the image as a generic array and also blur along the final channel dimension. The desired behavior is Gaussian blurring arr[:, :, i] independently for all is and then concatenating the resultant slices back into an HxWxN result array.
Is there a better library or function that I can use to directly achieve that, or do I really need to just put scipy.ndimage.gaussian_filter in a for-loop over i?
A bonus question is what if I have M images organized as an MxHxWxN array? How do I blur over just the H and W dimensions?
A:
Using scipy.ndimage.gaussian_filter
Solution
To Gaussian blur only the spatial dimensions H and W of an HxWxN image arr with a standard deviation of 1.6, use:
std=1.6
gaussian_filter(arr, sigma=(std, std, 0))
Explanation
According to the SciPy Docs scipy.ndimage.gaussian_filter allows to specify the standard derivation for each axis individually by passing a sequence of scalars to the parameter sigma:
sigma: scalar or sequence of scalars
Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.
Setting sigma to 0 for an axis means no filtering in this direction.
If only one scalar is given for sigma, not only the spatial dimensions but also the RGB channels of an image would be blurred together leading to a desaturated image.
Bonus Answer
For an array of M images with the shape MxHxWxN you would set sigma=(0, std, std, 0)
Performance
After a quick look into the source code of scipy.ndimage.gaussian_filter it seems that the method gaussian_filter1d is applied only to those dimensions that have a sigma > 1e-15. With sigma=0 not being > 1e-15 the filtering is skipped for this dimension. So this solution should not cause any unnecessary calculations.
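A concrete sketch of the bonus case (the array below is random data, purely for illustration):
import numpy as np
from scipy.ndimage import gaussian_filter

std = 1.6
stack = np.random.rand(8, 64, 64, 3)                      # MxHxWxN stack of images
blurred = gaussian_filter(stack, sigma=(0, std, std, 0))  # blur only H and W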
A:
I figured it out.
cv2.GaussianBlur() does exactly that: it blurs each channel independently, and the image can have an arbitrary number of channels.
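A minimal sketch of that approach (the shape and sigma are made up; ksize=(0, 0) lets OpenCV derive the kernel size from sigma):
import cv2
import numpy as np

arr = np.random.rand(64, 64, 5).astype(np.float32)  # HxWxN image with N=5 channels
std = 1.6
blurred = cv2.GaussianBlur(arr, (0, 0), sigmaX=std, sigmaY=std)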
| Python: Gaussian filtering an N-channel image along only spatial dimensions? | I have an HxWxN image arr that I want to Gaussian blur.
scipy.ndimage.gaussian_filter seems to treat the image as a generic array and also blur along the final channel dimension. The desired behavior is Gaussian blurring arr[:, :, i] independently for all is and then concatenating the resultant slices back into an HxWxN result array.
Is there a better library or function that I can use to directly achieve that, or do I really need to just put scipy.ndimage.gaussian_filter in a for-loop over i?
A bonus question is what if I have M images organized as an MxHxWxN array? How do I blur over just the H and W dimensions?
| [
"Using scipy.ndimage.gaussian_filter\nSolution\nTo Gaussian blur only the spatial dimensions H and W of an HxWxN image arr with a standard deviation of 1.6, use:\nstd=1.6\ngaussian_filter(arr, sigma=(std, std, 0))\n\nExplanation\nAccording to the SciPy Docs scipy.ndimage.gaussian_filter allows to specify the standard derivation for each axis individually by passing a sequence of scalars to the parameter sigma:\n\nsigma: scalar or sequence of scalars\nStandard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.\n\nSetting sigma to 0 for an axis means no filtering in this direction.\nIf only one scalar is given for sigma, not only the spatial dimensions but also the RGB channels of an image would be blurred together leading to a desaturated image.\nBonus Answer\nFor an array of M images with the shape MxHxWxN you would set sigma=(0,std, std, 0)\nPerformance\nAfter a quick look into the source code of scipy.ndimage.gaussian_filter it seems that the method gaussian_filter1d is applied only to those dimensions that have a sigma > 1e-15. With sigma=0 not being > 1e-15 the filtering is skipped for this dimension. So this solution should not cause any unnecessary calculations.\n",
"I figured it out.\ncv2.GaussianBlur() does exactly that: it blurs each channel independently, and the image can have an arbitrary number of channels.\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0067302611_python.txt |
Q:
How to solve ERROR: Failed building wheel for psycopg2?
I'm having issues building the wheel for psycopg2 through pip install -r requirements.txt. I'm on Ubuntu 20.04 + Python 3.8.5 + venv.
This is my requirements.txt:
amqp==2.6.1
anyjson==0.3.3
asgiref==3.2.10
billiard==3.6.3.0
brotlipy==0.7.0
celery==4.4.7
celery-progress==0.0.12
certifi==2020.6.20
cffi==1.14.2
chardet==3.0.4
cryptography==3.1
Django==3.0.3
dj-database-url==0.5.0
django-celery-results==1.2.1
django-cors-headers==3.5.0
django-crispy-forms==1.9.2
django-heroku==0.3.1
django-rest-framework==0.1.0
django-templated-mail==1.1.1
djangorestframework==3.11.1
djoser==2.0.5
fake-useragent==0.1.11
future==0.18.2
gunicorn==20.0.4
httpie==2.2.0
idna==2.10
kombu==4.6.11
lxml==4.5.2
pika==1.1.0
psycopg2==2.8.5
pycparser==2.20
Pygments==2.7.0
pyOpenSSL==19.1.0
PySocks==1.7.1
python-dateutil==2.8.1
python-decouple==3.3
pytz==2020.1
requests==2.24.0
six==1.15.0
SQLAlchemy==1.3.19
sqlparse==0.3.1
urllib3==1.25.10
vine==1.3.0
whitenoise==5.2.0
This is the output when I pip install -r requirements.txt:
[...]
Collecting urllib3==1.25.10
Using cached urllib3-1.25.10-py2.py3-none-any.whl (127 kB)
Collecting vine==1.3.0
Using cached vine-1.3.0-py2.py3-none-any.whl (14 kB)
Collecting whitenoise==5.2.0
Using cached whitenoise-5.2.0-py2.py3-none-any.whl (19 kB)
Requirement already satisfied: setuptools>=3.0 in ./venv/lib/python3.8/site-packages (from gunicorn==20.0.4->-r requirements.txt (line 24)) (44.0.0)
Building wheels for collected packages: psycopg2
Building wheel for psycopg2 (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-b8g9assp
cwd: /tmp/pip-install-1xr9yjk0/psycopg2/
Complete output (6 lines):
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
ERROR: Failed building wheel for psycopg2
Running setup.py clean for psycopg2
Failed to build psycopg2
Installing collected packages: vine, amqp, anyjson, asgiref, billiard, pycparser, cffi, brotlipy, kombu, pytz, celery, celery-progress, certifi, chardet, six, cryptography, sqlparse, Django, dj-database-url, django-celery-results, django-cors-headers, django-crispy-forms, whitenoise, psycopg2, django-heroku, djangorestframework, django-rest-framework, django-templated-mail, djoser, fake-useragent, future, gunicorn, idna, urllib3, requests, Pygments, httpie, lxml, pika, pyOpenSSL, PySocks, python-dateutil, python-decouple, SQLAlchemy
Running setup.py install for psycopg2 ... error
ERROR: Command errored out with exit status 1:
command: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-8dtfz_uf/install-record.txt --single-version-externally-managed --compile --install-headers /home/pierre/Workspace/campground_scavanger/venv/include/site/python3.8/psycopg2
cwd: /tmp/pip-install-1xr9yjk0/psycopg2/
Complete output (40 lines):
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/psycopg2
copying lib/_ipaddress.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_range.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extensions.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_json.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/__init__.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_lru_cache.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/compat.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errors.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/sql.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/tz.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errorcodes.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/pool.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extras.py -> build/lib.linux-x86_64-3.8/psycopg2
running build_ext
building 'psycopg2._psycopg' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/psycopg
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSYCOPG_VERSION=2.8.5 (dt dec pq3 ext lo64) -DPG_VERSION_NUM=120005 -DHAVE_LO64=1 -I/home/pierre/Workspace/campground_scavanger/venv/include -I/usr/include/python3.8 -I. -I/usr/include/postgresql -I/usr/include/postgresql/12/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-3.8/psycopg/psycopgmodule.o -Wdeclaration-after-statement
In file included from psycopg/psycopgmodule.c:28:
./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory
36 | #include <libpq-fe.h>
| ^~~~~~~~~~~~
compilation terminated.
It appears you are missing some prerequisite to build the package from source.
You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages
required for the build and try again.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-8dtfz_uf/install-record.txt --single-version-externally-managed --compile --install-headers /home/pierre/Workspace/campground_scavanger/venv/include/site/python3.8/psycopg2 Check the logs for full command output.
Googling the first error error: invalid command 'bdist_wheel' led me to run pip install wheel in my venv. Successfully installed wheel-0.36.2. Not sure if this is related whatsoever with my psycopg2 issue hereafter but I'm showing all the steps I'm doing.
I then reran pip install -r requirements.txt and now only the psycopg2 issue shows up:
[...]
Requirement already satisfied: vine==1.3.0 in ./venv/lib/python3.8/site-packages (from -r requirements.txt (line 43)) (1.3.0)
Requirement already satisfied: whitenoise==5.2.0 in ./venv/lib/python3.8/site-packages (from -r requirements.txt (line 44)) (5.2.0)
Requirement already satisfied: setuptools>=3.0 in ./venv/lib/python3.8/site-packages (from gunicorn==20.0.4->-r requirements.txt (line 24)) (44.0.0)
Building wheels for collected packages: psycopg2
Building wheel for psycopg2 (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-fknnvsn3
cwd: /tmp/pip-install-zxwqo979/psycopg2/
Complete output (40 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/psycopg2
copying lib/_ipaddress.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_range.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extensions.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_json.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/__init__.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_lru_cache.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/compat.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errors.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/sql.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/tz.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errorcodes.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/pool.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extras.py -> build/lib.linux-x86_64-3.8/psycopg2
running build_ext
building 'psycopg2._psycopg' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/psycopg
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSYCOPG_VERSION=2.8.5 (dt dec pq3 ext lo64) -DPG_VERSION_NUM=120005 -DHAVE_LO64=1 -I/home/pierre/Workspace/campground_scavanger/venv/include -I/usr/include/python3.8 -I. -I/usr/include/postgresql -I/usr/include/postgresql/12/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-3.8/psycopg/psycopgmodule.o -Wdeclaration-after-statement
In file included from psycopg/psycopgmodule.c:28:
./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory
36 | #include <libpq-fe.h>
| ^~~~~~~~~~~~
compilation terminated.
It appears you are missing some prerequisite to build the package from source.
You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages
required for the build and try again.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for psycopg2
Running setup.py clean for psycopg2
Failed to build psycopg2
Installing collected packages: psycopg2, django-heroku, djangorestframework, django-rest-framework, django-templated-mail, djoser, fake-useragent, future, gunicorn, urllib3, idna, requests, Pygments, httpie, lxml, pika, pyOpenSSL, PySocks, python-dateutil, python-decouple, SQLAlchemy
Running setup.py install for psycopg2 ... error
ERROR: Command errored out with exit status 1:
command: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-r4aij71q/install-record.txt --single-version-externally-managed --compile --install-headers /home/pierre/Workspace/campground_scavanger/venv/include/site/python3.8/psycopg2
cwd: /tmp/pip-install-zxwqo979/psycopg2/
Complete output (40 lines):
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/psycopg2
copying lib/_ipaddress.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_range.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extensions.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_json.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/__init__.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_lru_cache.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/compat.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errors.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/sql.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/tz.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errorcodes.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/pool.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extras.py -> build/lib.linux-x86_64-3.8/psycopg2
running build_ext
building 'psycopg2._psycopg' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/psycopg
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSYCOPG_VERSION=2.8.5 (dt dec pq3 ext lo64) -DPG_VERSION_NUM=120005 -DHAVE_LO64=1 -I/home/pierre/Workspace/campground_scavanger/venv/include -I/usr/include/python3.8 -I. -I/usr/include/postgresql -I/usr/include/postgresql/12/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-3.8/psycopg/psycopgmodule.o -Wdeclaration-after-statement
In file included from psycopg/psycopgmodule.c:28:
./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory
36 | #include <libpq-fe.h>
| ^~~~~~~~~~~~
compilation terminated.
It appears you are missing some prerequisite to build the package from source.
You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages
required for the build and try again.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-r4aij71q/install-record.txt --single-version-externally-managed --compile --install-headers /home/pierre/Workspace/campground_scavanger/venv/include/site/python3.8/psycopg2 Check the logs for full command output.
I replaced psycopg2==2.8.5 by psycopg2-binary==2.8.5 as per some other stackoverflow threads but the same issue remains during the pip install -r requirements.txt .
If I install manually psycopg2-binary in my venv it works fine:
(venv) pierre@thinkpad:~/Workspace/campground_scavanger$ pip install psycopg2-binary
Collecting psycopg2-binary
Using cached psycopg2_binary-2.8.6-cp38-cp38-manylinux1_x86_64.whl (3.0 MB)
Installing collected packages: psycopg2-binary
Successfully installed psycopg2-binary-2.8.6
But then again if I comment out psycopg2==2.8.5 (or psycopg2-binary==2.8.5) and rerun pip install -r requirements.txt, I keep getting the same error message.
I'm not quite sure why there is a Building wheels for collected packages: psycopg2 even if psycopg2 is commented out in my requirements.txt. Dependency with another package?
A:
Before running the pip command, install the PostgreSQL client headers:
sudo apt-get install libpq-dev
then use:
pip install psycopg2
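If it still fails, an assumed broader sequence on Ubuntu 20.04 is to make sure the common build prerequisites are present and then re-run the install inside the virtualenv:
sudo apt-get update
sudo apt-get install libpq-dev python3-dev build-essential
source venv/bin/activate
pip install -r requirements.txt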
A:
On Ubuntu, you can instead install psycopg2-binary:
pip install psycopg2-binary
A:
Looks like you need to install libpq-dev, as described in Problems compiling and installing psycopg2.
sudo apt-get install libpq-dev should work (libpq-dev is a system package, not something pip can install), though a specific version may be required. As for psycopg2 being commented out yet still being built by pip: it is most likely a requirement of one of your other dependencies (django-heroku, for example, pulls in psycopg2).
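To confirm which package is pulling psycopg2 in, one option (it assumes you are happy to install a small helper tool) is pipdeptree's reverse-dependency view:
pip install pipdeptree
pipdeptree --reverse --packages psycopg2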
A:
After installing the right psycopg2==2.8.5 version, also check whether your database is spatial and whether the right setting is in your settings.py file. For example, if you are using Postgres, set your database engine connection to:
'ENGINE': 'django.db.backends.postgresql_psycopg2',
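For context, a minimal sketch of the corresponding DATABASES block in settings.py (all connection values are placeholders):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',          # placeholder database name
        'USER': 'myuser',        # placeholder user
        'PASSWORD': 'secret',    # placeholder password
        'HOST': 'localhost',
        'PORT': '5432',
    }
}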
| How to solve ERROR: Failed building wheel for psycopg2? | I'm having issues building wheel for psycopg2 thru pip install -r requirements.txt. I'm on ubuntu 20.04 + python 3.8.5 + venv.
This is my requirements.txt:
amqp==2.6.1
anyjson==0.3.3
asgiref==3.2.10
billiard==3.6.3.0
brotlipy==0.7.0
celery==4.4.7
celery-progress==0.0.12
certifi==2020.6.20
cffi==1.14.2
chardet==3.0.4
cryptography==3.1
Django==3.0.3
dj-database-url==0.5.0
django-celery-results==1.2.1
django-cors-headers==3.5.0
django-crispy-forms==1.9.2
django-heroku==0.3.1
django-rest-framework==0.1.0
django-templated-mail==1.1.1
djangorestframework==3.11.1
djoser==2.0.5
fake-useragent==0.1.11
future==0.18.2
gunicorn==20.0.4
httpie==2.2.0
idna==2.10
kombu==4.6.11
lxml==4.5.2
pika==1.1.0
psycopg2==2.8.5
pycparser==2.20
Pygments==2.7.0
pyOpenSSL==19.1.0
PySocks==1.7.1
python-dateutil==2.8.1
python-decouple==3.3
pytz==2020.1
requests==2.24.0
six==1.15.0
SQLAlchemy==1.3.19
sqlparse==0.3.1
urllib3==1.25.10
vine==1.3.0
whitenoise==5.2.0
This is the output when I pip install -r requirements.txt:
[...]
Collecting urllib3==1.25.10
Using cached urllib3-1.25.10-py2.py3-none-any.whl (127 kB)
Collecting vine==1.3.0
Using cached vine-1.3.0-py2.py3-none-any.whl (14 kB)
Collecting whitenoise==5.2.0
Using cached whitenoise-5.2.0-py2.py3-none-any.whl (19 kB)
Requirement already satisfied: setuptools>=3.0 in ./venv/lib/python3.8/site-packages (from gunicorn==20.0.4->-r requirements.txt (line 24)) (44.0.0)
Building wheels for collected packages: psycopg2
Building wheel for psycopg2 (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-b8g9assp
cwd: /tmp/pip-install-1xr9yjk0/psycopg2/
Complete output (6 lines):
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
ERROR: Failed building wheel for psycopg2
Running setup.py clean for psycopg2
Failed to build psycopg2
Installing collected packages: vine, amqp, anyjson, asgiref, billiard, pycparser, cffi, brotlipy, kombu, pytz, celery, celery-progress, certifi, chardet, six, cryptography, sqlparse, Django, dj-database-url, django-celery-results, django-cors-headers, django-crispy-forms, whitenoise, psycopg2, django-heroku, djangorestframework, django-rest-framework, django-templated-mail, djoser, fake-useragent, future, gunicorn, idna, urllib3, requests, Pygments, httpie, lxml, pika, pyOpenSSL, PySocks, python-dateutil, python-decouple, SQLAlchemy
Running setup.py install for psycopg2 ... error
ERROR: Command errored out with exit status 1:
command: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-8dtfz_uf/install-record.txt --single-version-externally-managed --compile --install-headers /home/pierre/Workspace/campground_scavanger/venv/include/site/python3.8/psycopg2
cwd: /tmp/pip-install-1xr9yjk0/psycopg2/
Complete output (40 lines):
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/psycopg2
copying lib/_ipaddress.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_range.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extensions.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_json.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/__init__.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_lru_cache.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/compat.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errors.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/sql.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/tz.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errorcodes.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/pool.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extras.py -> build/lib.linux-x86_64-3.8/psycopg2
running build_ext
building 'psycopg2._psycopg' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/psycopg
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSYCOPG_VERSION=2.8.5 (dt dec pq3 ext lo64) -DPG_VERSION_NUM=120005 -DHAVE_LO64=1 -I/home/pierre/Workspace/campground_scavanger/venv/include -I/usr/include/python3.8 -I. -I/usr/include/postgresql -I/usr/include/postgresql/12/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-3.8/psycopg/psycopgmodule.o -Wdeclaration-after-statement
In file included from psycopg/psycopgmodule.c:28:
./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory
36 | #include <libpq-fe.h>
| ^~~~~~~~~~~~
compilation terminated.
It appears you are missing some prerequisite to build the package from source.
You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages
required for the build and try again.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1xr9yjk0/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-8dtfz_uf/install-record.txt --single-version-externally-managed --compile --install-headers /home/pierre/Workspace/campground_scavanger/venv/include/site/python3.8/psycopg2 Check the logs for full command output.
Googling the first error error: invalid command 'bdist_wheel' led me to run pip install wheel in my venv. Successfully installed wheel-0.36.2. Not sure if this is related whatsoever with my psycopg2 issue hereafter but I'm showing all the steps I'm doing.
I then reran pip install -r requirements.txt and now only the psycopg2 issue shows up:
[...]
Requirement already satisfied: vine==1.3.0 in ./venv/lib/python3.8/site-packages (from -r requirements.txt (line 43)) (1.3.0)
Requirement already satisfied: whitenoise==5.2.0 in ./venv/lib/python3.8/site-packages (from -r requirements.txt (line 44)) (5.2.0)
Requirement already satisfied: setuptools>=3.0 in ./venv/lib/python3.8/site-packages (from gunicorn==20.0.4->-r requirements.txt (line 24)) (44.0.0)
Building wheels for collected packages: psycopg2
Building wheel for psycopg2 (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-fknnvsn3
cwd: /tmp/pip-install-zxwqo979/psycopg2/
Complete output (40 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/psycopg2
copying lib/_ipaddress.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_range.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extensions.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_json.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/__init__.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_lru_cache.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/compat.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errors.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/sql.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/tz.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errorcodes.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/pool.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extras.py -> build/lib.linux-x86_64-3.8/psycopg2
running build_ext
building 'psycopg2._psycopg' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/psycopg
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSYCOPG_VERSION=2.8.5 (dt dec pq3 ext lo64) -DPG_VERSION_NUM=120005 -DHAVE_LO64=1 -I/home/pierre/Workspace/campground_scavanger/venv/include -I/usr/include/python3.8 -I. -I/usr/include/postgresql -I/usr/include/postgresql/12/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-3.8/psycopg/psycopgmodule.o -Wdeclaration-after-statement
In file included from psycopg/psycopgmodule.c:28:
./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory
36 | #include <libpq-fe.h>
| ^~~~~~~~~~~~
compilation terminated.
It appears you are missing some prerequisite to build the package from source.
You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages
required for the build and try again.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for psycopg2
Running setup.py clean for psycopg2
Failed to build psycopg2
Installing collected packages: psycopg2, django-heroku, djangorestframework, django-rest-framework, django-templated-mail, djoser, fake-useragent, future, gunicorn, urllib3, idna, requests, Pygments, httpie, lxml, pika, pyOpenSSL, PySocks, python-dateutil, python-decouple, SQLAlchemy
Running setup.py install for psycopg2 ... error
ERROR: Command errored out with exit status 1:
command: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-r4aij71q/install-record.txt --single-version-externally-managed --compile --install-headers /home/pierre/Workspace/campground_scavanger/venv/include/site/python3.8/psycopg2
cwd: /tmp/pip-install-zxwqo979/psycopg2/
Complete output (40 lines):
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/psycopg2
copying lib/_ipaddress.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_range.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extensions.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_json.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/__init__.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/_lru_cache.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/compat.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errors.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/sql.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/tz.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/errorcodes.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/pool.py -> build/lib.linux-x86_64-3.8/psycopg2
copying lib/extras.py -> build/lib.linux-x86_64-3.8/psycopg2
running build_ext
building 'psycopg2._psycopg' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/psycopg
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSYCOPG_VERSION=2.8.5 (dt dec pq3 ext lo64) -DPG_VERSION_NUM=120005 -DHAVE_LO64=1 -I/home/pierre/Workspace/campground_scavanger/venv/include -I/usr/include/python3.8 -I. -I/usr/include/postgresql -I/usr/include/postgresql/12/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-3.8/psycopg/psycopgmodule.o -Wdeclaration-after-statement
In file included from psycopg/psycopgmodule.c:28:
./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory
36 | #include <libpq-fe.h>
| ^~~~~~~~~~~~
compilation terminated.
It appears you are missing some prerequisite to build the package from source.
You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages
required for the build and try again.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/pierre/Workspace/campground_scavanger/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-zxwqo979/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-r4aij71q/install-record.txt --single-version-externally-managed --compile --install-headers /home/pierre/Workspace/campground_scavanger/venv/include/site/python3.8/psycopg2 Check the logs for full command output.
I replaced psycopg2==2.8.5 by psycopg2-binary==2.8.5 as per some other stackoverflow threads but the same issue remains during the pip install -r requirements.txt .
If I install manually psycopg2-binary in my venv it works fine:
(venv) pierre@thinkpad:~/Workspace/campground_scavanger$ pip install psycopg2-binary
Collecting psycopg2-binary
Using cached psycopg2_binary-2.8.6-cp38-cp38-manylinux1_x86_64.whl (3.0 MB)
Installing collected packages: psycopg2-binary
Successfully installed psycopg2-binary-2.8.6
But then again if I comment out psycopg2==2.8.5 (or psycopg2-binary==2.8.5 and rerun pip install -r requirements.txt, I keep getting the same error message.
I'm not quite sure why there is a Building wheels for collected packages: psycopg2 even if psycopg2 is commented out in my requirements.txt. Dependency with another package?
| [
"Instead pip command use:\nsudo apt-get install libpq-dev\n\nthen use:\npip install psycopg2\n\n",
"On Ubuntu, do you can install psycopg2-binary\npip install psycopg2-binary\n\n",
"Looks like you need to install libpq-dev according to this Problems compiling and installing psycopg2.\npip install libpq-dev should work or perhaps a specific version may be required. About psycopg2 being commented yet it being attempted to install by pip it may be a requirement of one of your other dependencies.\n",
"After installing the right psycopg2==2.8.5 version, also check if your database is spatial and the right setting is there in your settings.py file for example if you are using postgres, set your database engine connection to;\n'ENGINE': 'django.db.backends.postgresql_psycopg2',\n"
] | [
30,
4,
2,
0
] | [] | [] | [
"django",
"linux",
"psycopg2",
"python",
"python_3.x"
] | stackoverflow_0065821330_django_linux_psycopg2_python_python_3.x.txt |
Q:
How to make an automatic email list with get requests in python
This is my website, food.jackunderwood.org. It basically uses a GET request to retrieve a JSON file from my school's lunch provider, then goes through the JSON file and writes it on the website. I want to have an email list where, Monday-Friday, it sends an email at 7:00 AM to every person who has signed up through the website by simply putting their email in a text box. This email would contain the daily lunch. If possible, I want to just code this in a Python script, so how can I do this? Does anyone have any ideas about the best possible way?
I have so far tried using different services like Snovio but I couldn't get the information daily and automate it.
A:
The requests library isn't meant to send emails, because it uses the HTTP protocol, whereas email uses SMTP. I advise you to use smtplib.
import smtplib
from email.mime.text import MIMEText
sender = '[email protected]'
receivers = ['[email protected]']
port = 1025
msg = MIMEText('This is test mail')
msg['Subject'] = 'Test mail'
msg['From'] = '[email protected]'
msg['To'] = '[email protected]'
with smtplib.SMTP('localhost', port) as server:
    # server.login('username', 'password')
    server.sendmail(sender, receivers, msg.as_string())
    print("Successfully sent email")
See https://zetcode.com/python/smtplib/ for further explanation.
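The original question also asks about sending the mail automatically at 7:00 AM on weekdays. A minimal sketch of that part (not from the answer above; the URL, recipient list, and SMTP settings are placeholders, and the requests library is assumed to be installed) could fetch the lunch JSON, build the message, and be triggered by a scheduler such as cron with an entry like 0 7 * * 1-5:
import smtplib
from email.mime.text import MIMEText

import requests

LUNCH_URL = "https://example.com/lunch.json"   # placeholder endpoint
RECIPIENTS = ["[email protected]"]        # placeholder list

def build_menu_text():
    # How the menu is extracted depends on the provider's JSON layout.
    data = requests.get(LUNCH_URL, timeout=10).json()
    return "\n".join(str(item) for item in data)

def send_daily_email():
    msg = MIMEText(build_menu_text())
    msg["Subject"] = "Today's lunch menu"
    msg["From"] = "[email protected]"
    msg["To"] = ", ".join(RECIPIENTS)
    with smtplib.SMTP("localhost", 1025) as server:
        server.sendmail(msg["From"], RECIPIENTS, msg.as_string())

if __name__ == "__main__":
    send_daily_email()
Running this script from cron (or Windows Task Scheduler) keeps the Python side simple, since the scheduling is handled by the operating system.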
| How to make an automatic email list with get requests in python | This is my website food.jackunderwood.org it basically uses a get request to retrieve a JSON file from my school's lunch provider. It then goes through the JSON file and writes it on the website. I want to have an Email List where Monday-Friday it sends an email at 7:00 AM to every person that has signed up through the website by simply putting their email In a text box. This email would contain the daily lunch if possible I want to just code this in a python script so how can I do this does anyone have any ideas of the best possible way?
I have so far tried using different services like Snovio but I couldn't get the information daily and automate it.
| [
"Requests library isn't meant to send emails, because its using HTTP protocol, however emails use SMTP. I advice you to use smtplib.\nimport smtplib\nfrom email.mime.text import MIMEText\n\nsender = '[email protected]'\nreceivers = ['[email protected]']\n\n\nport = 1025\nmsg = MIMEText('This is test mail')\n\nmsg['Subject'] = 'Test mail'\nmsg['From'] = '[email protected]'\nmsg['To'] = '[email protected]'\n\nwith smtplib.SMTP('localhost', port) as server:\n \n # server.login('username', 'password')\n server.sendmail(sender, receivers, msg.as_string())\n print(\"Successfully sent email\")\n\nhttps://zetcode.com/python/smtplib/ For further explanation\n"
] | [
0
] | [] | [] | [
"email",
"html",
"html_email",
"javascript",
"python"
] | stackoverflow_0074642756_email_html_html_email_javascript_python.txt |
Q:
How can I redirect python print statement to browser console
So basically I am making a Flask application. I have a few Python print statements to mark checkpoints (debugging). Instead of printing those statements in the Python console, I want them to appear in the browser console (i.e. console.log).
When I do:
print("ok")
It should also print ok in the browser log (like JavaScript's console.log). Is there any library for achieving this, or any other way of doing it?
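One workaround that fits what is described above (this is only a sketch, not an official Flask feature; the route and messages are made up for illustration) is to collect the debug messages in Python and let the template emit them as console.log calls:
from flask import Flask, render_template_string

app = Flask(__name__)

TEMPLATE = """
<!doctype html>
<p>Page body here</p>
<script>
  {% for m in debug_messages %}
  console.log({{ m|tojson }});
  {% endfor %}
</script>
"""

@app.route("/")
def index():
    debug_messages = []
    debug_messages.append("ok")            # instead of print("ok")
    debug_messages.append("checkpoint 2")  # another example checkpoint
    return render_template_string(TEMPLATE, debug_messages=debug_messages)
The messages then show up in the browser's developer console when the page is rendered.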
| How can I redirect python print statement to browser console | So basically I am making a flask application. I have few python print statement to mark checkpoints (debugging). Instead of print those statements in python console . I want it to be in browser console (i.e. console.log)
When I do:
print("ok")
It should also print ok in browser log( like javascript console.log). Is there any library for achieving this or any other way of doing this
| [] | [] | [
"you can't do it but you can do only in javascript\n"
] | [
-1
] | [
"python"
] | stackoverflow_0050794816_python.txt |
Q:
Closing firebase connection
I have 3 python files chained into one file like this:
#chained.py
import file1
import file2
import file3
Every file in chained.py initializes a Firebase admin object with the firebase_admin.initialize_app(cred) method. When I run the three files separately, everything works as expected. When I run chained.py, I get the below error when the second file starts running.
File "/home/usern/.local/lib/python3.6/site-packages/firebase_admin/__init__.py", line 72, in initialize_app
'The default Firebase app already exists. This means you called '
ValueError: The default Firebase app already exists. This means you called initialize_app() more than once without providing an app name as the second argument. In most cases you only need to call initialize_app() once. But if you do want to initialize multiple apps, pass a second argument to initialize_app() to give each app a unique name.
I assume the fix is to close the connection somehow at the end of the script; however, I haven't found a solution yet. Is there any common practice to deal with this issue, or a method for this purpose?
A:
You can check if the firebase app has already been initialized using the below code.
import firebase_admin
from firebase_admin import credentials, initialize_app, storage

FIREBASE_STORAGE_PATH = "firebase_storage_path_here"

if not firebase_admin._apps:
    # JSON_FILE is the path to your service-account key file
    cred = credentials.Certificate(JSON_FILE)
    initialize_app(cred, {'storageBucket': FIREBASE_STORAGE_PATH})
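For completeness, the error message also hints at two other options that are not shown in the answer above; a rough sketch (the file path and app name are placeholders) would be:
import firebase_admin
from firebase_admin import credentials

cred = credentials.Certificate("path/to/serviceAccountKey.json")  # placeholder path

# Option 1: give each script its own named app instead of the default one.
app = firebase_admin.initialize_app(cred, name="file1_app")

# Option 2: explicitly tear the app down at the end of a script, so the next
# imported file can call initialize_app() again without the "already exists" error.
firebase_admin.delete_app(app)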
| Closing firebase connection | I have 3 python files chained into one file like this:
#chained.py
import file1
import file2
import file3
Every file in chained.py initializes a firebase admin object with the firebase_admin.initialize_app(cred) method. When I run the three files separately everything works as expected. When I run chained.py I get the below error when the second file start running.
File "/home/usern/.local/lib/python3.6/site-packages/firebase_admin/__init__.py", line 72, in initialize_app
'The default Firebase app already exists. This means you called '
ValueError: The default Firebase app already exists. This means you called initialize_app() more than once without providing an app name as the second argument. In most cases you only need to call initialize_app() once. But if you do want to initialize multiple apps, pass a second argument to initialize_app() to give each app a unique name.
I assume the fix is to close the connection somehow at the end of the script, however I couldn't find a solution yet. Is there any common practice to deal with this issue? Or a method for this purpose.
| [
"You can check if the firebase app has already been initialized using the below code.\nimport firebase_admin\nfrom firebase_admin import credentials, initialize_app, storage\n\nFIREBASE_STORAGE_PATH = \"firebase_storage_path_here\"\n\nif not firebase_admin._apps:\n cred = credentials.Certificate(JSON_FILE)\n initialize_app(cred, {'storageBucket': FIREBASE_STORAGE_PATH})\n\n"
] | [
0
] | [] | [] | [
"firebase",
"google_cloud_firestore",
"python"
] | stackoverflow_0071002596_firebase_google_cloud_firestore_python.txt |
Q:
How to keep selected values in html in flask app
This is my HTML file, and I just want to keep the selected values in the dropdown menus after the form is submitted. That's my only problem; I hope for your response. I tried a lot of methods but I can't solve my problem. Thank you for the help.
<form action="{{url_for('values')}}" method='POST'>
<div class="s1">
<div class="s1_lbl">
<label style="font-weight: bold; color:white;">Select a Fish:</label>
<select name="fish" class="dropdown">
<optgroup label="BRACKISHWATER FISHPOND">
<option value="BF - Milkfish">BF - Milkfish</option>
<option value="BF - Tilapia">BF - Tilapia</option>
<option value="BF - Tiger prawn">BF - Tiger Prawn</option>
<option value="BF - Mudcrab">BF - Mudcrab</option>
<option value="BF - Endeavor prawn">BF - Endeavor prawn</option>
<option value="BF - White shrimp">BF - White shrimp</option>
<option value="BF - Grouper">BF - Grouper</option>
<option value="BF - Siganid">BF - Siganid</option>
</optgroup>
</select>
<label style="font-weight: bold; color:white;">Number of Quarter/s:</label>
<select name="quarter" class="dropdown2">
<option value="0">1 (JANUARY-MARCH)</option>
<option value="1">2 (APRIL-JUNE)</option>
<option value="2">3 (JULY-SEPTEMBER)</option>
<option value="3">4 (OCTOBER-DECEMBER)</option>
</select>
</div>
<div class="btn_pos">
<button type="submit" class="btn">Submit</button>
</div>
A:
Frontend
You can save the selected data in localStorage when you click the button, and then select the option whose value is equal to the saved data.
OR
Backend
Use session storage: store the submitted values server-side (or pass them back to the template) and mark the matching options as selected (a minimal Flask sketch of this approach is shown after the snippet below).
This code snippet is the frontend solution; link this JavaScript code to your HTML and it should work fine.
let btn = document.querySelector('.btn_pos .btn');
let fishDropdown = document.querySelector('[name=fish]');
let quarterDropdown = document.querySelector('[name=quarter]');
// select last clicked fish option
document.querySelectorAll('[name=fish] option').forEach(el => {
el.value == localStorage.getItem('savedFish') && el.setAttribute('selected', true);
});
// select last clicked quarter option
document.querySelectorAll('[name=quarter] option').forEach(el => {
el.value == localStorage.getItem('savedQuarter') && el.setAttribute('selected', true);
});
btn.addEventListener('click', ()=> {
let selectedFish = fishDropdown.value;
let selectedQuarter = quarterDropdown.value;
localStorage.setItem('savedFish', selectedFish);
localStorage.setItem('savedQuarter', selectedQuarter);
});
<div class="s1_lbl">
<label style="font-weight: bold; color:white;">Select a Fish:</label>
<select name="fish" class="dropdown">
<optgroup label="BRACKISHWATER FISHPOND">
<option value="BF - Milkfish">BF - Milkfish</option>
<option value="BF - Tilapia">BF - Tilapia</option>
<option value="BF - Tiger prawn">BF - Tiger Prawn</option>
<option value="BF - Mudcrab">BF - Mudcrab</option>
<option value="BF - Endeavor prawn">BF - Endeavor prawn</option>
<option value="BF - White shrimp">BF - White shrimp</option>
<option value="BF - Grouper">BF - Grouper</option>
<option value="BF - Siganid">BF - Siganid</option>
</optgroup>
</select>
<label style="font-weight: bold; color:white;">Number of Quarter/s:</label>
<select name="quarter" class="dropdown2">
<option value="0">1 (JANUARY-MARCH)</option>
<option value="1">2 (APRIL-JUNE)</option>
<option value="2">3 (JULY-SEPTEMBER)</option>
<option value="3">4 (OCTOBER-DECEMBER)</option>
</select>
</div>
<div class="btn_pos">
<button type="submit" class="btn">Submit</button>
</div>
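For the backend option mentioned above, a minimal Flask sketch could look like the following (the route name, template file, and variable names are assumptions, not part of the original answer). The view passes the submitted values back to the template, and Jinja marks the matching option as selected, e.g. <option value="BF - Milkfish" {% if selected_fish == 'BF - Milkfish' %}selected{% endif %}>BF - Milkfish</option>:
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/values", methods=["GET", "POST"])
def values():
    # Remember what the user picked so the template can re-select it.
    selected_fish = request.form.get("fish", "")
    selected_quarter = request.form.get("quarter", "")
    return render_template(
        "index.html",
        selected_fish=selected_fish,
        selected_quarter=selected_quarter,
    )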
| How to keep selected values in html in flask app |
This if my html file and i just want to keep the selected values in the dropdown menu after clicked. That's only my problem I hope for your response. I try a lot of methods but I can't solve my problem. Thankyou for the help
<form action="{{url_for('values')}}" method='POST'>
<div class="s1">
<div class="s1_lbl">
<label style="font-weight: bold; color:white;">Select a Fish:</label>
<select name="fish" class="dropdown">
<optgroup label="BRACKISHWATER FISHPOND">
<option value="BF - Milkfish">BF - Milkfish</option>
<option value="BF - Tilapia">BF - Tilapia</option>
<option value="BF - Tiger prawn">BF - Tiger Prawn</option>
<option value="BF - Mudcrab">BF - Mudcrab</option>
<option value="BF - Endeavor prawn">BF - Endeavor prawn</option>
<option value="BF - White shrimp">BF - White shrimp</option>
<option value="BF - Grouper">BF - Grouper</option>
<option value="BF - Siganid">BF - Siganid</option>
</optgroup>
</select>
<label style="font-weight: bold; color:white;">Number of Quarter/s:</label>
<select name="quarter" class="dropdown2">
<option value="0">1 (JANUARY-MARCH)</option>
<option value="1">2 (APRIL-JUNE)</option>
<option value="2">3 (JULY-SEPTEMBER)</option>
<option value="3">4 (OCTOBER-DECEMBER)</option>
</select>
</div>
<div class="btn_pos">
<button type="submit" class="btn">Submit</button>
</div>
| [
"Frontend\nYou can save selected data in localstorage when you click the button and then select the option which its value is equal to the save data.\nOR\nBackend\nuse session storage\nthis code snippet is frontend solution, link this javascript code to your html and it should work fine\n\n\nlet btn = document.querySelector('.btn_pos .btn');\nlet fishDropdown = document.querySelector('[name=fish]');\nlet quarterDropdown = document.querySelector('[name=quarter]');\n \n// select last clicked fish option\ndocument.querySelectorAll('[name=fish] option').forEach(el => {\n el.value == localStorage.getItem('savedFish') && el.setAttribute('selected', true);\n});\n\n// select last clicked quarter option\ndocument.querySelectorAll('[name=quarter] option').forEach(el => {\n el.value == localStorage.getItem('savedQuarter') && el.setAttribute('selected', true);\n});\n\nbtn.addEventListener('click', ()=> {\n let selectedFish = fishDropdown.value;\n let selectedQuarter = quarterDropdown.value;\n\n localStorage.setItem('savedFish', selectedFish);\n localStorage.setItem('savedQuarter', selectedQuarter);\n});\n<div class=\"s1_lbl\">\n <label style=\"font-weight: bold; color:white;\">Select a Fish:</label>\n <select name=\"fish\" class=\"dropdown\">\n <optgroup label=\"BRACKISHWATER FISHPOND\">\n <option value=\"BF - Milkfish\">BF - Milkfish</option>\n <option value=\"BF - Tilapia\">BF - Tilapia</option>\n <option value=\"BF - Tiger prawn\">BF - Tiger Prawn</option>\n <option value=\"BF - Mudcrab\">BF - Mudcrab</option>\n <option value=\"BF - Endeavor prawn\">BF - Endeavor prawn</option>\n <option value=\"BF - White shrimp\">BF - White shrimp</option>\n <option value=\"BF - Grouper\">BF - Grouper</option>\n <option value=\"BF - Siganid\">BF - Siganid</option>\n </optgroup>\n </select>\n\n <label style=\"font-weight: bold; color:white;\">Number of Quarter/s:</label>\n <select name=\"quarter\" class=\"dropdown2\">\n <option value=\"0\">1 (JANUARY-MARCH)</option>\n <option value=\"1\">2 (APRIL-JUNE)</option>\n <option value=\"2\">3 (JULY-SEPTEMBER)</option>\n <option value=\"3\">4 (OCTOBER-DECEMBER)</option>\n </select>\n</div>\n<div class=\"btn_pos\">\n <button type=\"submit\" class=\"btn\">Submit</button>\n</div>\n\n\n\n"
] | [
0
] | [] | [] | [
"flask",
"html",
"python"
] | stackoverflow_0074642578_flask_html_python.txt |
Q:
Checking if a .cmd file was executed successfully with python
I wrote a script that executes certain .cmd files. I'm trying to find a way to check whether the execution finished with errors or not. This is what the final line of the .cmd file's output looks like; it shows you the number of warnings and errors:
https://i.stack.imgur.com/y6K1m.png (sorry i do not have enough rep to make the image embedded between the text :()
I tried saving the console output to a variable and then checking if the substring "Error(s)" was inside the text, but that didn't seem to work... I'm fairly new to Python and I'm running out of ideas and things to try; any suggestions would be appreciated. Let me know if you need more details. Thanks in advance guys!
A:
Batch scripts usually have a return code that you can check to see whether they completed successfully.
Check this link for the exit codes
And check this question for the python code to get them
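A small sketch of how that could look in Python (the script name here is a placeholder): run the .cmd file with subprocess, then inspect its exit code, optionally falling back to searching the captured output for "Error(s)":
import subprocess

result = subprocess.run(
    ["cmd", "/c", "build.cmd"],   # "build.cmd" is a placeholder file name
    capture_output=True,
    text=True,
)

# A non-zero return code normally means the script failed.
if result.returncode != 0 or "Error(s)" in result.stdout:
    print("The script reported errors:")
    print(result.stdout)
else:
    print("Finished without errors.")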
| Checking if a .cmd file was executed successfully with python | I wrote a script that executes certain .cmd files. I'm trying to find a way to check if the execution finished with errors or not. This is how the final line of the .cmd file looks like, it shows you the number of warnings and errors:
https://i.stack.imgur.com/y6K1m.png (sorry i do not have enough rep to make the image embedded between the text :()
I tried saving the console output to a variable, and then check if the substring "Error(s)" was inside the text, but that didn't seem to work... I'm fairly new to python and I'm running out of ideas and to stuff to try, any suggestions would be appreciated. Let me know if you need more details. Thanks in advance guys!
| [
"Batch scripts usually have a return code that you can check if it completed successfully.\nCheck this link for the exit codes\nAnd check this question for the python code to get them\n"
] | [
0
] | [] | [] | [
"cmd",
"python"
] | stackoverflow_0074642763_cmd_python.txt |
Q:
How do I convert a Django QuerySet into list of dicts?
How can I convert a Django QuerySet into a list of dicts? I haven't found an answer to this so I'm wondering if I'm missing some sort of common helper function that everyone uses.
A:
Use the .values() method:
>>> Blog.objects.values()
[{'id': 1, 'name': 'Beatles Blog', 'tagline': 'All the latest Beatles news.'}],
>>> Blog.objects.values('id', 'name')
[{'id': 1, 'name': 'Beatles Blog'}]
Note: the result is a QuerySet which mostly behaves like a list, but isn't actually an instance of list. Use list(Blog.objects.values(…)) if you really need an instance of list.
A:
The .values() method will return you a result of type ValuesQuerySet which is typically what you need in most cases.
But if you wish, you could turn ValuesQuerySet into a native Python list using Python list comprehension as illustrated in the example below.
result = Blog.objects.values() # return ValuesQuerySet object
list_result = [entry for entry in result] # converts ValuesQuerySet into Python list
return list_result
I find the above helps if you are writing unit tests and need to assert that the expected return value of a function matches the actual return value, in which case both expected_result and actual_result must be of the same type (e.g. dictionary).
actual_result = some_function()
expected_result = {
# dictionary content here ...
}
assert expected_result == actual_result
A:
If you need native data types for some reason (e.g. JSON serialization) this is my quick 'n' dirty way to do it:
data = [{'id': blog.pk, 'name': blog.name} for blog in blogs]
As you can see building the dict inside the list is not really DRY so if somebody knows a better way ...
A:
Type Cast to List
job_reports = JobReport.objects.filter(job_id=job_id, status=1).values('id', 'name')
json.dumps(list(job_reports))
A:
You need DjangoJSONEncoder and list to turn your QuerySet into JSON; ref: Python JSON serialize a Decimal object
import json
from django.core.serializers.json import DjangoJSONEncoder
blog = Blog.objects.all().values()
json.dumps(list(blog), cls=DjangoJSONEncoder)
A:
You do not exactly define what the dictionaries should look like, but most likely you are referring to QuerySet.values(). From the official django documentation:
Returns a ValuesQuerySet — a QuerySet subclass that returns
dictionaries when used as an iterable, rather than model-instance
objects.
Each of those dictionaries represents an object, with the keys
corresponding to the attribute names of model objects.
A:
You can use the values() method on the object you got from the Django model you are querying, and then you can easily access each field by an index value.
Call it like this -
myList = dictOfSomeData.values()
itemNumberThree = myList[2]  # If there's a value in that index, of course...
A:
You could define a function using model_to_dict as follows:
from django.forms.models import model_to_dict
def queryset_to_list(qs, fields=None, exclude=None):
    return [model_to_dict(x, fields, exclude) for x in qs]
Suppose your Model has the following fields
id
name
email
Run the following commands in Django shell
>>>qs=<yourmodel>.objects.all()
>>>list=queryset_to_list(qs)
>>>list
[{'id':1, 'name':'abc', 'email':'[email protected]'},{'id':2, 'name':'xyz', 'email':'[email protected]'}]
Say you want only the id and the name in the list of queryset dictionary
>>>qs=<yourmodel>.objects.all()
>>>list=queryset_to_list(qs,fields=['id','name'])
>>>list
[{'id':1, 'name':'abc'},{'id':2, 'name':'xyz'}]
Similarly, you can exclude fields in your output.
A:
If you already have a query set you just use the list function to turn it into a list of dicts, eg:
list(MyModel.objects.values())
A:
I found an even better solution:
This was my queryset:
queryset = TestDB.objects.values_list("country", "code")
Code above returned
<QuerySet [('Afghanistan', 'AF'), ('Albania', 'AL'), ('Algeria', 'DZ'), ('American Samoa', 'AS'), ('Andorra', 'AD'), ('Angola', 'AO'), ('Anguilla', 'AI'), ('Antarctica', 'AQ'), ('Antigua and Barbuda', 'AG'), ('Argentina', 'AR'), ('Armenia', 'AM'), ('Aruba', 'AW'), ('Australia', 'AU'), ('Austria', 'AT'), ('Azerbaijan', 'AZ'), ('Bahamas ', 'BS'), ('Bahrain', 'BH'), ('Bangladesh', 'BD'), ('Barbados', 'BB'), ('Belarus', 'BY'), '...(remaining elements truncated)...']>
and print(dict(queryset)) converted the above into this:
{'Afghanistan': 'AF', 'Albania': 'AL', 'Algeria': 'DZ', 'American Samoa': 'AS', 'Andorra': 'AD', 'Angola': 'AO', 'Anguilla': 'AI', 'Antarctica': 'AQ', 'Antigua and Barbuda': 'AG', 'Argentina': 'AR', 'Armenia': 'AM', 'Aruba': 'AW', 'Australia': 'AU', 'Austria': 'AT', 'Azerbaijan': 'AZ', 'Bahamas ': 'BS', 'Bahrain': 'BH', 'Bangladesh': 'BD', 'Barbados': 'BB', 'Belarus': 'BY', 'Belgium': 'BE', 'Belize': 'BZ', 'Benin': 'BJ', 'Bermuda': 'BM', 'Bhutan': 'BT', 'Bolivia (Plurinational State of)': 'BO', 'Bonaire, Sint Eustatius and Saba': 'BQ', 'Bosnia and Herzegovina': 'BA', 'Botswana': 'BW', 'Bouvet Island': 'BV', 'Brazil': 'BR', 'British Indian Ocean Territory ': 'IO', 'Brunei Darussalam': 'BN', 'Bulgaria': 'BG', 'Burkina Faso': 'BF', 'Burundi': 'BI', 'Cabo Verde': 'CV', 'Cambodia': 'KH', 'Cameroon': 'CM', 'Canada': 'CA', 'Cayman Islands ': 'KY', 'Central African Republic ': 'CF', 'Chad': 'TD', 'Chile': 'CL', 'China': 'CN', 'Christmas Island': 'CX', 'Cocos (Keeling) Islands ': 'CC', 'Colombia': 'CO', 'Comoros ': 'KM', 'Congo (the Democratic Republic of the)': 'CD', 'Congo ': 'CG', 'Cook Islands ': 'CK', 'Costa Rica': 'CR', 'Croatia': 'HR', 'Cuba': 'CU', 'Curaçao': 'CW', 'Cyprus': 'CY', 'Czechia': 'CZ', 'Côte dIvoire': 'CI', 'Denmark': 'DK', 'Djibouti': 'DJ', 'Dominica': 'DM', 'Dominican Republic ': 'DO', 'Ecuador': 'EC', 'Egypt': 'EG', 'El Salvador': 'SV', 'Equatorial Guinea': 'GQ', 'Eritrea': 'ER', 'Estonia': 'EE', 'Eswatini': 'SZ', 'Ethiopia': 'ET', 'Falkland Islands [Malvinas]': 'FK', 'Faroe Islands ': 'FO', 'Fiji': 'FJ', 'Finland': 'FI', 'France': 'FR', 'French Guiana': 'GF', 'French Polynesia': 'PF', 'French Southern Territories ': 'TF', 'Gabon': 'GA', 'Gambia ': 'GM', 'Georgia': 'GE', 'Germany': 'DE', 'Ghana': 'GH', 'Gibraltar': 'GI', 'Greece': 'GR', 'Greenland': 'GL', 'Grenada': 'GD', 'Guadeloupe': 'GP', 'Guam': 'GU', 'Guatemala': 'GT', 'Guernsey': 'GG', 'Guinea': 'GN', 'Guinea-Bissau': 'GW', 'Guyana': 'GY', 'Haiti': 'HT', 'Heard Island and McDonald Islands': 'HM', 'Holy See ': 'VA', 'Honduras': 'HN', 'Hong Kong': 'HK', 'Hungary': 'HU', 'Iceland': 'IS', 'India': 'IN', 'Indonesia': 'ID', 'Iran (Islamic Republic of)': 'IR', 'Iraq': 'IQ', 'Ireland': 'IE', 'Isle of Man': 'IM', 'Israel': 'IL', 'Italy': 'IT', 'Jamaica': 'JM', 'Japan': 'JP', 'Jersey': 'JE', 'Jordan': 'JO', 'Kazakhstan': 'KZ', 'Kenya': 'KE', 'Kiribati': 'KI', 'Korea (the Democratic People Republic of)': 'KP', 'Korea (the Republic of)': 'KR', 'Kuwait': 'KW', 'Kyrgyzstan': 'KG', 'Lao People Democratic Republic ': 'LA', 'Latvia': 'LV', 'Lebanon': 'LB', 'Lesotho': 'LS', 'Liberia': 'LR', 'Libya': 'LY', 'Liechtenstein': 'LI', 'Lithuania': 'LT', 'Luxembourg': 'LU', 'Macao': 'MO', 'Madagascar': 'MG', 'Malawi': 'MW', 'Malaysia': 'MY', 'Maldives': 'MV', 'Mali': 'ML', 'Malta': 'MT', 'Marshall Islands ': 'MH', 'Martinique': 'MQ', 'Mauritania': 'MR', 'Mauritius': 'MU', 'Mayotte': 'YT', 'Mexico': 'MX', 'Micronesia (Federated States of)': 'FM', 'Moldova (the Republic of)': 'MD', 'Monaco': 'MC', 'Mongolia': 'MN', 'Montenegro': 'ME', 'Montserrat': 'MS', 'Morocco': 'MA', 'Mozambique': 'MZ', 'Myanmar': 'MM', 'Namibia': 'NA', 'Nauru': 'NR', 'Nepal': 'NP', 'Netherlands ': 'NL', 'New Caledonia': 'NC', 'New Zealand': 'NZ', 'Nicaragua': 'NI', 'Niger ': 'NE', 'Nigeria': 'NG', 'Niue': 'NU', 'Norfolk Island': 'NF', 'Northern Mariana Islands ': 'MP', 'Norway': 'NO', 'Oman': 'OM', 'Pakistan': 'PK', 'Palau': 'PW', 'Palestine, State of': 'PS', 'Panama': 'PA', 'Papua New Guinea': 'PG', 'Paraguay': 'PY', 'Peru': 'PE', 'Philippines ': 'PH', 'Pitcairn': 'PN', 
'Poland': 'PL', 'Portugal': 'PT', 'Puerto Rico': 'PR', 'Qatar': 'QA', 'Republic of North Macedonia': 'MK', 'Romania': 'RO', 'Russian Federation ': 'RU', 'Rwanda': 'RW', 'Réunion': 'RE', 'Saint Barthélemy': 'BL', 'Saint Helena, Ascension and Tristan da Cunha': 'SH', 'Saint Kitts and Nevis': 'KN', 'Saint Lucia': 'LC', 'Saint Martin (French part)': 'MF', 'Saint Pierre and Miquelon': 'PM', 'Saint Vincent and the Grenadines': 'VC', 'Samoa': 'WS', 'San Marino': 'SM', 'Sao Tome and Principe': 'ST', 'Saudi Arabia': 'SA', 'Senegal': 'SN', 'Serbia': 'RS', 'Seychelles': 'SC', 'Sierra Leone': 'SL', 'Singapore': 'SG', 'Sint Maarten (Dutch part)': 'SX', 'Slovakia': 'SK', 'Slovenia': 'SI', 'Solomon Islands': 'SB', 'Somalia': 'SO', 'South Africa': 'ZA', 'South Georgia and the South Sandwich Islands': 'GS', 'South Sudan': 'SS', 'Spain': 'ES', 'Sri Lanka': 'LK', 'Sudan ': 'SD', 'Suriname': 'SR', 'Svalbard and Jan Mayen': 'SJ', 'Sweden': 'SE', 'Switzerland': 'CH', 'Syrian Arab Republic': 'SY', 'Taiwan (Province of China)': 'TW', 'Tajikistan': 'TJ', 'Tanzania, United Republic of': 'TZ', 'Thailand': 'TH', 'Timor-Leste': 'TL', 'Togo': 'TG', 'Tokelau': 'TK', 'Tonga': 'TO', 'Trinidad and Tobago': 'TT', 'Tunisia': 'TN', 'Turkey': 'TR', 'Turkmenistan': 'TM', 'Turks and Caicos Islands ': 'TC', 'Tuvalu': 'TV', 'Uganda': 'UG', 'Ukraine': 'UA', 'United Arab Emirates ': 'AE', 'United States Minor Outlying Islands ': 'UM', 'United States of America ': 'US', 'Uruguay': 'UY', 'Uzbekistan': 'UZ', 'Vanuatu': 'VU', 'Bolivarian Republic of Venezuela': 'VE', 'Viet Nam': 'VN', 'Virgin Islands (British)': 'VG', 'Virgin Islands (U.S.)': 'VI', 'Wallis and Futuna': 'WF', 'Western Sahara': 'EH', 'Yemen': 'YE', 'Zambia': 'ZM', 'Zimbabwe': 'ZW', 'Åland Islands': 'AX'}
Which I think you need in your case too (I guess).
| How do I convert a Django QuerySet into list of dicts? | How can I convert a Django QuerySet into a list of dicts? I haven't found an answer to this so I'm wondering if I'm missing some sort of common helper function that everyone uses.
| [
"Use the .values() method: \n>>> Blog.objects.values()\n[{'id': 1, 'name': 'Beatles Blog', 'tagline': 'All the latest Beatles news.'}],\n>>> Blog.objects.values('id', 'name')\n[{'id': 1, 'name': 'Beatles Blog'}]\n\nNote: the result is a QuerySet which mostly behaves like a list, but isn't actually an instance of list. Use list(Blog.objects.values(…)) if you really need an instance of list.\n",
"The .values() method will return you a result of type ValuesQuerySet which is typically what you need in most cases.\nBut if you wish, you could turn ValuesQuerySet into a native Python list using Python list comprehension as illustrated in the example below.\nresult = Blog.objects.values() # return ValuesQuerySet object\nlist_result = [entry for entry in result] # converts ValuesQuerySet into Python list\nreturn list_result\n\nI find the above helps if you are writing unit tests and need to assert that the expected return value of a function matches the actual return value, in which case both expected_result and actual_result must be of the same type (e.g. dictionary).\nactual_result = some_function()\nexpected_result = {\n # dictionary content here ...\n}\nassert expected_result == actual_result\n\n",
"If you need native data types for some reason (e.g. JSON serialization) this is my quick 'n' dirty way to do it:\ndata = [{'id': blog.pk, 'name': blog.name} for blog in blogs]\n\nAs you can see building the dict inside the list is not really DRY so if somebody knows a better way ...\n",
"Type Cast to List\n job_reports = JobReport.objects.filter(job_id=job_id, status=1).values('id', 'name')\n\n json.dumps(list(job_reports))\n\n",
"You need DjangoJSONEncoder and list to make your Queryset to json, ref: Python JSON serialize a Decimal object\n\nimport json\nfrom django.core.serializers.json import DjangoJSONEncoder\n\n\nblog = Blog.objects.all().values()\njson.dumps(list(blog), cls=DjangoJSONEncoder)\n\n",
"You do not exactly define what the dictionaries should look like, but most likely you are referring to QuerySet.values(). From the official django documentation:\n\nReturns a ValuesQuerySet — a QuerySet subclass that returns\n dictionaries when used as an iterable, rather than model-instance\n objects.\nEach of those dictionaries represents an object, with the keys\n corresponding to the attribute names of model objects.\n\n",
"You can use the values() method on the dict you got from the Django model field you make the queries on and then you can easily access each field by a index value.\nCall it like this - \nmyList = dictOfSomeData.values()\nitemNumberThree = myList[2] #If there's a value in that index off course...\n\n",
"You could define a function using model_to_dict as follows:\nfrom django.forms.models import model_to_dict\ndef queryset_to_list(qs,fields=None, exclude=None):\n return [model_to_dict(x,fields,exclude) for x in qs]\n\nSuppose your Model has the following fields\nid\nname\nemail\n\nRun the following commands in Django shell\n>>>qs=<yourmodel>.objects.all()\n>>>list=queryset_to_list(qs)\n>>>list\n[{'id':1, 'name':'abc', 'email':'[email protected]'},{'id':2, 'name':'xyz', 'email':'[email protected]'}]\n\nSay you want only the id and the name in the list of queryset dictionary\n>>>qs=<yourmodel>.objects.all()\n>>>list=queryset_to_list(qs,fields=['id','name'])\n>>>list\n[{'id':1, 'name':'abc'},{'id':2, 'name':'xyz'}]\n\nSimilarly, you can exclude fields in your output.\n",
"If you already have a query set you just use the list function to turn it into a list of dicts, eg:\nlist(MyModel.objects.values())\n",
"I found even a better solution:\nThis was my queryset:\n queryset = TestDB.objects.values_list(\"country\", \"code\")\n\nCode above returned\n<QuerySet [('Afghanistan', 'AF'), ('Albania', 'AL'), ('Algeria', 'DZ'), ('American Samoa', 'AS'), ('Andorra', 'AD'), ('Angola', 'AO'), ('Anguilla', 'AI'), ('Antarctica', 'AQ'), ('Antigua and Barbuda', 'AG'), ('Argentina', 'AR'), ('Armenia', 'AM'), ('Aruba', 'AW'), ('Australia', 'AU'), ('Austria', 'AT'), ('Azerbaijan', 'AZ'), ('Bahamas ', 'BS'), ('Bahrain', 'BH'), ('Bangladesh', 'BD'), ('Barbados', 'BB'), ('Belarus', 'BY'), '...(remaining elements truncated)...']>\n\nand print(dict(queryset)) converted above into this:\n{'Afghanistan': 'AF', 'Albania': 'AL', 'Algeria': 'DZ', 'American Samoa': 'AS', 'Andorra': 'AD', 'Angola': 'AO', 'Anguilla': 'AI', 'Antarctica': 'AQ', 'Antigua and Barbuda': 'AG', 'Argentina': 'AR', 'Armenia': 'AM', 'Aruba': 'AW', 'Australia': 'AU', 'Austria': 'AT', 'Azerbaijan': 'AZ', 'Bahamas ': 'BS', 'Bahrain': 'BH', 'Bangladesh': 'BD', 'Barbados': 'BB', 'Belarus': 'BY', 'Belgium': 'BE', 'Belize': 'BZ', 'Benin': 'BJ', 'Bermuda': 'BM', 'Bhutan': 'BT', 'Bolivia (Plurinational State of)': 'BO', 'Bonaire, Sint Eustatius and Saba': 'BQ', 'Bosnia and Herzegovina': 'BA', 'Botswana': 'BW', 'Bouvet Island': 'BV', 'Brazil': 'BR', 'British Indian Ocean Territory ': 'IO', 'Brunei Darussalam': 'BN', 'Bulgaria': 'BG', 'Burkina Faso': 'BF', 'Burundi': 'BI', 'Cabo Verde': 'CV', 'Cambodia': 'KH', 'Cameroon': 'CM', 'Canada': 'CA', 'Cayman Islands ': 'KY', 'Central African Republic ': 'CF', 'Chad': 'TD', 'Chile': 'CL', 'China': 'CN', 'Christmas Island': 'CX', 'Cocos (Keeling) Islands ': 'CC', 'Colombia': 'CO', 'Comoros ': 'KM', 'Congo (the Democratic Republic of the)': 'CD', 'Congo ': 'CG', 'Cook Islands ': 'CK', 'Costa Rica': 'CR', 'Croatia': 'HR', 'Cuba': 'CU', 'Curaçao': 'CW', 'Cyprus': 'CY', 'Czechia': 'CZ', 'Côte dIvoire': 'CI', 'Denmark': 'DK', 'Djibouti': 'DJ', 'Dominica': 'DM', 'Dominican Republic ': 'DO', 'Ecuador': 'EC', 'Egypt': 'EG', 'El Salvador': 'SV', 'Equatorial Guinea': 'GQ', 'Eritrea': 'ER', 'Estonia': 'EE', 'Eswatini': 'SZ', 'Ethiopia': 'ET', 'Falkland Islands [Malvinas]': 'FK', 'Faroe Islands ': 'FO', 'Fiji': 'FJ', 'Finland': 'FI', 'France': 'FR', 'French Guiana': 'GF', 'French Polynesia': 'PF', 'French Southern Territories ': 'TF', 'Gabon': 'GA', 'Gambia ': 'GM', 'Georgia': 'GE', 'Germany': 'DE', 'Ghana': 'GH', 'Gibraltar': 'GI', 'Greece': 'GR', 'Greenland': 'GL', 'Grenada': 'GD', 'Guadeloupe': 'GP', 'Guam': 'GU', 'Guatemala': 'GT', 'Guernsey': 'GG', 'Guinea': 'GN', 'Guinea-Bissau': 'GW', 'Guyana': 'GY', 'Haiti': 'HT', 'Heard Island and McDonald Islands': 'HM', 'Holy See ': 'VA', 'Honduras': 'HN', 'Hong Kong': 'HK', 'Hungary': 'HU', 'Iceland': 'IS', 'India': 'IN', 'Indonesia': 'ID', 'Iran (Islamic Republic of)': 'IR', 'Iraq': 'IQ', 'Ireland': 'IE', 'Isle of Man': 'IM', 'Israel': 'IL', 'Italy': 'IT', 'Jamaica': 'JM', 'Japan': 'JP', 'Jersey': 'JE', 'Jordan': 'JO', 'Kazakhstan': 'KZ', 'Kenya': 'KE', 'Kiribati': 'KI', 'Korea (the Democratic People Republic of)': 'KP', 'Korea (the Republic of)': 'KR', 'Kuwait': 'KW', 'Kyrgyzstan': 'KG', 'Lao People Democratic Republic ': 'LA', 'Latvia': 'LV', 'Lebanon': 'LB', 'Lesotho': 'LS', 'Liberia': 'LR', 'Libya': 'LY', 'Liechtenstein': 'LI', 'Lithuania': 'LT', 'Luxembourg': 'LU', 'Macao': 'MO', 'Madagascar': 'MG', 'Malawi': 'MW', 'Malaysia': 'MY', 'Maldives': 'MV', 'Mali': 'ML', 'Malta': 'MT', 'Marshall Islands ': 'MH', 'Martinique': 'MQ', 'Mauritania': 'MR', 'Mauritius': 'MU', 'Mayotte': 'YT', 
'Mexico': 'MX', 'Micronesia (Federated States of)': 'FM', 'Moldova (the Republic of)': 'MD', 'Monaco': 'MC', 'Mongolia': 'MN', 'Montenegro': 'ME', 'Montserrat': 'MS', 'Morocco': 'MA', 'Mozambique': 'MZ', 'Myanmar': 'MM', 'Namibia': 'NA', 'Nauru': 'NR', 'Nepal': 'NP', 'Netherlands ': 'NL', 'New Caledonia': 'NC', 'New Zealand': 'NZ', 'Nicaragua': 'NI', 'Niger ': 'NE', 'Nigeria': 'NG', 'Niue': 'NU', 'Norfolk Island': 'NF', 'Northern Mariana Islands ': 'MP', 'Norway': 'NO', 'Oman': 'OM', 'Pakistan': 'PK', 'Palau': 'PW', 'Palestine, State of': 'PS', 'Panama': 'PA', 'Papua New Guinea': 'PG', 'Paraguay': 'PY', 'Peru': 'PE', 'Philippines ': 'PH', 'Pitcairn': 'PN', 'Poland': 'PL', 'Portugal': 'PT', 'Puerto Rico': 'PR', 'Qatar': 'QA', 'Republic of North Macedonia': 'MK', 'Romania': 'RO', 'Russian Federation ': 'RU', 'Rwanda': 'RW', 'Réunion': 'RE', 'Saint Barthélemy': 'BL', 'Saint Helena, Ascension and Tristan da Cunha': 'SH', 'Saint Kitts and Nevis': 'KN', 'Saint Lucia': 'LC', 'Saint Martin (French part)': 'MF', 'Saint Pierre and Miquelon': 'PM', 'Saint Vincent and the Grenadines': 'VC', 'Samoa': 'WS', 'San Marino': 'SM', 'Sao Tome and Principe': 'ST', 'Saudi Arabia': 'SA', 'Senegal': 'SN', 'Serbia': 'RS', 'Seychelles': 'SC', 'Sierra Leone': 'SL', 'Singapore': 'SG', 'Sint Maarten (Dutch part)': 'SX', 'Slovakia': 'SK', 'Slovenia': 'SI', 'Solomon Islands': 'SB', 'Somalia': 'SO', 'South Africa': 'ZA', 'South Georgia and the South Sandwich Islands': 'GS', 'South Sudan': 'SS', 'Spain': 'ES', 'Sri Lanka': 'LK', 'Sudan ': 'SD', 'Suriname': 'SR', 'Svalbard and Jan Mayen': 'SJ', 'Sweden': 'SE', 'Switzerland': 'CH', 'Syrian Arab Republic': 'SY', 'Taiwan (Province of China)': 'TW', 'Tajikistan': 'TJ', 'Tanzania, United Republic of': 'TZ', 'Thailand': 'TH', 'Timor-Leste': 'TL', 'Togo': 'TG', 'Tokelau': 'TK', 'Tonga': 'TO', 'Trinidad and Tobago': 'TT', 'Tunisia': 'TN', 'Turkey': 'TR', 'Turkmenistan': 'TM', 'Turks and Caicos Islands ': 'TC', 'Tuvalu': 'TV', 'Uganda': 'UG', 'Ukraine': 'UA', 'United Arab Emirates ': 'AE', 'United States Minor Outlying Islands ': 'UM', 'United States of America ': 'US', 'Uruguay': 'UY', 'Uzbekistan': 'UZ', 'Vanuatu': 'VU', 'Bolivarian Republic of Venezuela': 'VE', 'Viet Nam': 'VN', 'Virgin Islands (British)': 'VG', 'Virgin Islands (U.S.)': 'VI', 'Wallis and Futuna': 'WF', 'Western Sahara': 'EH', 'Yemen': 'YE', 'Zambia': 'ZM', 'Zimbabwe': 'ZW', 'Åland Islands': 'AX'}\n\nWhich I think you need in your case too (I guess).\n"
] | [
269,
36,
19,
7,
4,
3,
2,
1,
1,
1
] | [
"im a newbie in python and i love @David Wolever answer\nuser = Blog.objects.all()\nuser = list(user.values(\"username\", \"id\"))\n\nin my case i use this to print username\nuser = Blog.objects.all()\nuser = list(user.values(\"username\"))\nname = []\nfor i in user:\n name.append(i[\"username\"])\nprint(name)\n# [\"joe\", \"karen\", \"stuf\"]\n\n",
"Simply put list(yourQuerySet).\n"
] | [
-1,
-2
] | [
"django",
"python"
] | stackoverflow_0007811556_django_python.txt |
Q:
How to auto-wrap widget in tkinter?
I saw this function (layout?) in Android a few years ago, but I can't remember what this function is called...
I need an auto-wrapping layout for widgets:
if a new widget's width would go past the end of the window, I want that widget moved to a new line.
The below is my expected output.
I think getting the width and calculating the new widget's position would solve it.
But I think this is such a common need. Does tkinter support it?
A:
Since tkinter has a canvas which gives you absolute control over positioning, you can accomplish this with just a little bit of math when adding items. You'll have to add code to reposition the widgets when the window is resized.
A simpler approach is to use the text widget, which supports embedded images or widgets, and has support for wrapping.
Here's a demonstration:
import tkinter as tk
root = tk.Tk()
toolbar = tk.Frame(root)
text = tk.Text(root, wrap="word", yscrollcommand=lambda *args: vsb.set(*args))
vsb = tk.Scrollbar(root, command=text.yview)
toolbar.pack(side="top", fill="x")
vsb.pack(side="right", fill="y")
text.pack(side="left",fill="both", expand=True)
COUNT = 0
def add_widget():
    global COUNT
    COUNT += 1
    widget = tk.Label(root, width=12, text=f"Widget #{COUNT}", bd=1, relief="raised",
                      bg="#5C9BD5", foreground="white", padx=4, pady=4)
    text.configure(state="normal")
    text.window_create("insert", window=widget, padx=10, pady=10)
    text.configure(state="disabled")
add_button = tk.Button(toolbar, command=add_widget, text="Add")
add_button.pack(side="left")
for i in range(9):
    add_widget()
root.mainloop()
A:
Refactored code
The code in Bryan's answer works very well, but it is a little bit difficult to understand.
Thus I refactored the code:
import tkinter as tk
def add_widget():
    widget = tk.Button(root, width=12, text=f"Widget", padx=4, pady=4)
    text.window_create("end", window=widget, padx=10, pady=10)
root = tk.Tk()
add_button = tk.Button(root, command=add_widget, text="Add")
add_button.pack()
text = tk.Text(root, wrap="word", yscrollcommand=lambda *args: vscroll.set(*args))
text.configure(state="disabled")
text.pack(side="left",fill="both", expand=True)
vscroll = tk.Scrollbar(root, command=text.yview)
vscroll.pack(side="right", fill="both")
root.mainloop()
| How to auto-wrap widget in tkinter? | I saw this function (layout?) in android a few years ago, but I can't remind what is this function name...
I need an auto-replace widget.
if the new widget' width meets the end of the window, I want to move that widget new line.
The below is my expected output.
I Think, get the width and calculate new widget position will solve it.
But, I think this is so common need. Does tkinter support it?
| [
"Since tkinter has a canvas which gives you absolute control over positioning, you can accomplish this with just a little bit of math when adding items. You'll have to add code to reposition the widgets when the window is resized.\nA simpler approach is to use the text widget, which supports embedded images or widgets, and has support for wrapping.\nHere's a demonstration:\nimport tkinter as tk\n\nroot = tk.Tk()\ntoolbar = tk.Frame(root)\ntext = tk.Text(root, wrap=\"word\", yscrollcommand=lambda *args: vsb.set(*args))\nvsb = tk.Scrollbar(root, command=text.yview)\n\ntoolbar.pack(side=\"top\", fill=\"x\")\nvsb.pack(side=\"right\", fill=\"y\")\ntext.pack(side=\"left\",fill=\"both\", expand=True)\n\nCOUNT = 0\ndef add_widget():\n global COUNT\n COUNT += 1\n widget = tk.Label(root, width=12, text=f\"Widget #{COUNT}\", bd=1, relief=\"raised\",\n bg=\"#5C9BD5\", foreground=\"white\", padx=4, pady=4)\n text.configure(state=\"normal\")\n text.window_create(\"insert\", window=widget, padx=10, pady=10)\n text.configure(state=\"disabled\")\n\nadd_button = tk.Button(toolbar, command=add_widget, text=\"Add\")\nadd_button.pack(side=\"left\")\n\nfor i in range(9):\n add_widget()\n\nroot.mainloop()\n\n\n\n",
"Refactored code\nThe code in Bryan's answer works very well! But a little bit difficult to understand.\nThus I refactor the code\n\n\n\n import tkinter as tk\n \n def add_widget():\n widget = tk.Button(root, width=12, text=f\"Widget\", padx=4, pady=4)\n text.window_create(\"end\", window=widget, padx=10, pady=10)\n \n root = tk.Tk()\n \n add_button = tk.Button(root, command=add_widget, text=\"Add\")\n add_button.pack()\n \n text = tk.Text(root, wrap=\"word\", yscrollcommand=lambda *args: vscroll.set(*args))\n text.configure(state=\"disabled\") \n text.pack(side=\"left\",fill=\"both\", expand=True)\n \n vscroll = tk.Scrollbar(root, command=text.yview)\n vscroll.pack(side=\"right\", fill=\"both\")\n \n root.mainloop()\n\n\n"
] | [
5,
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0069846517_python_tkinter.txt |
Q:
cv2.error: OpenCV(4.5.2) :-1: error: (-5:Bad argument) in function 'rectangle'
Hello guys, I'm working on my project with EasyOCR for text detection. I got some errors, but I don't know how to fix them; it would be great if someone could help me.
Here are a few of the errors I get:
cv2.error: OpenCV(4.5.2) :-1: error: (-5:Bad argument) in function 'rectangle'
Overload resolution failed:
Can't parse 'pt1'. Sequence item with index 0 has a wrong type
Can't parse 'pt1'. Sequence item with index 0 has a wrong type
Can't parse 'rec'. Expected sequence length 4, got 2
Can't parse 'rec'. Expected sequence length 4, got 2
anyway this is my code
import cv2
import numpy as np
from matplotlib import pyplot as plt
import easyocr
cap = cv2.VideoCapture(0)
reader = easyocr.Reader(['en'], gpu = True)
while True:
    _, frame = cap.read()
    result = reader.readtext(frame)

    for detection in result:
        top_left = tuple(detection[0][0])
        bottom_right = tuple(detection[0][2])
        text = detection[1]
        print(text)
        img = cv2.rectangle(frame, top_left, bottom_right, (0, 255, 0), 2)

    cv2.imshow("Text Recognition", frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
A:
cv2.rectangle expects int values, while easyocr can return floats. Try:
top_left = (int(detection[0][0][0]), int(detection[0][0][1]))
bottom_right = (int(detection[0][2][0]), int(detection[0][2][1]))
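An equivalent, slightly more compact way to write the same cast, for reference (a sketch that assumes the same variables as the question's loop):
top_left = tuple(map(int, detection[0][0]))
bottom_right = tuple(map(int, detection[0][2]))
img = cv2.rectangle(frame, top_left, bottom_right, (0, 255, 0), 2)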
| cv2.error: OpenCV(4.5.2) :-1: error: (-5:Bad argument) in function 'rectangle' | hello guys i'm working on my project with easy ocr for text detection i got some errors, but i don't know how to fix it, it will be greatful if someone can help me?
here's few errors i get
cv2.error: OpenCV(4.5.2) :-1: error: (-5:Bad argument) in function 'rectangle'
Overload resolution failed:
Can't parse 'pt1'. Sequence item with index 0 has a wrong type
Can't parse 'pt1'. Sequence item with index 0 has a wrong type
Can't parse 'rec'. Expected sequence length 4, got 2
Can't parse 'rec'. Expected sequence length 4, got 2
anyway this is my code
import cv2
import numpy as np
from matplotlib import pyplot as plt
import easyocr
cap = cv2.VideoCapture(0)
reader = easyocr.Reader(['en'], gpu = True)
while True :
_, frame = cap.read()
result = reader.readtext(frame)
for detection in result:
top_left = tuple(detection[0][0])
bottom_right = tuple(detection[0][2])
text = detection[1]
print (text)
img = cv2.rectangle(frame,top_left,bottom_right,(0,255,0),2)
cv2.imshow("Text Recognition", frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
| [
"cv2.rectangle expects int values, while easyocr can return floats. Try:\n top_left = (int(detection[0][0][0]), int(detection[0][0][1]))\n bottom_right = (int(detection[0][2][0]), int(detection[0][2][1]))\n\n"
] | [
0
] | [] | [] | [
"detection",
"easyocr",
"opencv",
"python",
"text"
] | stackoverflow_0074637234_detection_easyocr_opencv_python_text.txt |
Q:
Unable to split data using python
I have data like below:
data = """1000
2000
3000

4000

5000
6000

7000
8000
9000

10000"""
Now, I want to sum up the elements of each group that appears before an empty line, and keep track of max_sum as the largest group sum. So for me, it should first be the sum of 1000, 2000, 3000 = 6000, compared with the initial max_sum of e.g. 0; then the next group is summed, i.e. 4000, and compared again with max_sum, which would be max(6000, 4000) = 6000; and so on, but the running sum needs to be reset whenever I encounter an empty line.
Below is my code:
max_num = 0
sum = 0
for line in data:
    # print(line)
    sum = sum + int(line)
    if line in ['\n', '\r\n']:
        sum=0
    max_num = max(max_num, sum)
This gives an error:
sum = sum + int(line)
ValueError: invalid literal for int() with base 10: '\n'
A:
You are trying to cast empty lines to int:
max_num = 0
sum = 0
for line in data:
    print(line)
    if line.strip():
        sum = sum + int(line)
    if line in ['\n', '\r\n']:
        sum=0
    max_num = max(max_num, sum)
A:
Here's a quick oneliner:
data = """1000
2000
3000

4000

5000
6000

7000
8000
9000

10000"""
max(
sum(
int(i) for i in l.split('\n')
) for l in data.split('\n\n')
)
which gives 24000
First it divides based on \n\n and then based on \n. Sums all elements in the groups and then chooses the biggest value.
A:
There are lines that are just composed of '\n', which you are trying to convert into int.
You should move your test for the line above the int conversion, and continue without casting to int if the line is '\n' or '\r\n'.
A:
Don't use builtin names like sum. Here you need to split the data on \n to get a list; then you can loop over it, remove whitespace using strip(), and if the line contains digits it is added to the running sum, otherwise the sum is reset to 0.
max_num = 0
sum_val = 0

for line in data.split("\n"):
    line = line.strip()
    sum_val = int(line) + sum_val if line and line.isdigit() else 0
    max_num = max(max_num, sum_val)
print(max_num)
A:
You can try:
data = """1000
2000
3000

4000

5000
6000

7000
8000
9000

10000
"""
data = data.splitlines()
max_sum = 0
group = []
for data_index, single_data in enumerate(data):
    single_data = single_data.replace(" ","")
    if single_data == "":
        if max_sum < sum(group):
            max_sum = sum(group)
        group = []
    else:
        group.append(int(single_data))
print(max_sum)
Output:
24000
A:
Note that int() is impervious to leading and trailing whitespace - e.g., int('\n99\n') will result in 99 without error. However, a string comprised entirely of whitespace will result in ValueError. That's what is happening here. You're trying to parse a string that just contains a newline character.
You can take advantage of ValueError for these data as follows:
data = """1000
2000
3000

4000

5000
6000

7000
8000
9000

10000"""
current_sum = 0
max_sum = float('-inf')
for t in data.splitlines():
    try:
        x = int(t)
        current_sum += x
    except ValueError:
        max_sum = max(max_sum, current_sum)
        current_sum = 0
print(f'Max sum = {max(max_sum, current_sum)}')
Output:
Max sum = 24000
| Unable to spilt data using python | I have a data like below:
data = """1000
2000
3000
4000
5000
6000
7000
8000
9000
10000"""
Now, I want to sum up the elements that appear before the space and maintain the max_sum track with the sum of the next elements that appear before the empty line. So for me, it should be the sum of 1000,2000,3000 = 6000 compared with the initial max_sum for eg 0, and now sum the next element i.e 4000, and keep comparing with the max_sum which could be like max(6000, 4000) = 6000 and keep on doing the same but need to reset the sum if I encounter a empty line.
Below is my code:
max_num = 0
sum = 0
for line in data:
# print(line)
sum = sum + int(line)
if line in ['\n', '\r\n']:
sum=0
max_num = max(max_num, sum)
This gives an error:
sum = sum + int(line)
ValueError: invalid literal for int() with base 10: '\n'
| [
"You are trying to cast empty lines to int:\nmax_num = 0\nsum = 0\nfor line in data:\n print(line)\n if line.strip():\n sum = sum + int(line)\n if line in ['\\n', '\\r\\n']:\n sum=0\n max_num = max(max_num, sum)\n\n",
"Here's a quick oneliner:\ndata = \"\"\"1000\n2000\n3000\n\n4000\n\n5000\n6000\n\n7000\n8000\n9000\n\n10000\"\"\"\n\nmax(\n sum(\n int(i) for i in l.split('\\n')\n ) for l in data.split('\\n\\n')\n)\n\nwhich gives 24000\nFirst it divides based on \\n\\n and then based on \\n. Sums all elements in the groups and then chooses the biggest value.\n",
"There are lines that are just composed of '\\n', which you are trying to convert into int.\nYou should move your test for line up the int conversion, and continue without casting to int if the line is '\\n' or '\\r\\n'\n",
"Don't use builtin names like sum, here you need to split the data in \\n you will get list then you can loop over and remove space using strip() then if line has some digits it will sum it else it will assign 0.\nmax_num = 0\nsum_val = 0\n\n\nfor line in data.split(\"\\n\"):\n line = line.strip()\n sum_val = int(line) + sum_val if line and line.isdigit() else 0\n max_num = max(max_num, sum_val)\nprint(max_num)\n\n",
"You can try:\ndata = \"\"\"1000\n 2000\n 3000\n \n 4000\n \n 5000\n 6000\n \n 7000\n 8000\n 9000\n \n 10000\n \"\"\"\n\ndata = data.splitlines()\n\nmax_sum = 0\ngroup = []\n\nfor data_index, single_data in enumerate(data):\n single_data = single_data.replace(\" \",\"\")\n if single_data == \"\":\n if max_sum < sum(group):\n max_sum = sum(group)\n group = []\n else:\n group.append(int(single_data))\n\nprint(max_sum)\n\nOutput:\n24000\n\n",
"Note that int() is impervious to leading and trailing whitespace - e.g., int('\\n99\\n') will result in 99 without error. However, a string comprised entirely of whitespace will result in ValueError. That's what is happening here. You're trying to parse a string that just contains a newline character.\nYou can take advantage of ValueError for these data as follows:\ndata = \"\"\"1000\n2000\n3000\n\n4000\n\n5000\n6000\n\n7000\n8000\n9000\n\n10000\"\"\"\n\ncurrent_sum = 0\nmax_sum = float('-inf')\n\nfor t in data.splitlines():\n try:\n x = int(t)\n current_sum += x\n except ValueError:\n max_sum = max(max_sum, current_sum)\n current_sum = 0\n\nprint(f'Max sum = {max(max_sum, current_sum)}')\n\nOutput:\nMax sum = 24000\n\n"
] | [
2,
2,
1,
1,
1,
1
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074642735_python_python_3.x.txt |
Q:
Exporting .kml or .kmz from RShiny (with Python)
I had this problem for a few weeks. I couldn't figure out how to get RShiny to allow export of .kml or .kmz files, specifically ones created from a sourced Python package.
I finally figured it out day before yesterday. I don't know how to just make an answer, so I will add it as the accepted answer for anyone else who might come across this issue. There wasn't anything I could find that helped during my troubleshooting... so hopefully this helps the next person.
A:
I searched pretty much everywhere I could think of to solve this problem, but nothing helped. I finally got it through trial and error of my code on both the R and Python sides. Here's what I found works (in RShiny):
In ui, just make sure there is a downloadButton with the id of whatever you are trying to export with the downloadHandler. In this example, there would need to be two: one for 'download_kml' and another for 'download_kmz'.
server <- function(input,output) {
function_object <- reactive({python_class(args)})
kml_show_object <- reactive({function_object()$show_kml})
#------------------------------------------
# TO EXPORT .KML
output$download_kml <- downloadHandler(
filename <- function() {
'export.kml'
},
content <- function(file) {
write(kml_show_object(), file)
},
contentType = 'application/vnd.google-earth.kml+xml'
)
#------------------------------------------
# TO EXPORT .KMZ
output$download_kmz <- downloadHandler(
filename <- function() {
'export.kmz'
},
content <- function(file) {
write(kml_show_object(), file)
},
contentType = 'application/vnd.google-earth.kmz'
)
}
shinyApp(ui = ui, server = server)
This is using the simplekml package in Python to create a KML object. To get it to function right (on the R side), you can't just create the KML object; R doesn't know what to do with that. You have to assign the output of kml.kml() to a variable inside the Python code. In this example, it is a class attribute named 'show_kml', which gets assigned locally in R to the kml_show_object.
If you need help with the Python code that makes the kml object, I can edit this to include it.
Hope that helps!
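For reference, a minimal sketch of what the Python side might look like (the class and argument names here are illustrative assumptions; the original post does not include this code):
import simplekml

class python_class:
    def __init__(self, name, lon, lat):
        kml = simplekml.Kml()
        kml.newpoint(name=name, coords=[(lon, lat)])
        # R cannot use the Kml object directly, so store its serialized text.
        self.show_kml = kml.kml()

obj = python_class("Site A", -105.0, 40.0)
print(obj.show_kml[:80])   # the KML string that R writes out in downloadHandler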
| Exporting .kml or .kmz from RShiny (with Python) | I had this problem for a few weeks. I couldn't figure out how to get RShiny to allow export of .kml or .kmz files, specifically ones created from a sourced Python package.
I finally figured it out day before yesterday. I don't know how to just make an answer, so I will add it as the accepted answer for anyone else who might come across this issue. There wasn't anything I could find that helped during my troubleshooting... so hopefully this helps the next person.
| [
"I searched pretty much everywhere I could think of to solve this problem, but nothing helped. I finally got it through trial and error of my code on both the R and Python sides. Here's what I found works (in RShiny):\n\nIn ui, just make sure there is a downloadButton with the id of whatever you are trying to export with the downloadHandler. In this example, there would need to be two: one for 'download_kml' and another for 'download_kmz'.\nserver <- function(input,output) {\n\n\nfunction_object <- reactive({python_class(args)})\n\nkml_show_object <- reactive({function_object()$show_kml})\n\n\n#------------------------------------------\n# TO EXPORT .KML\n\noutput$download_kml <- downloadHandler(\n\n filename <- function() {\n 'export.kml'\n },\n content <- function(file) {\n write(kml_show_object(), file)\n },\n contentType = 'application/vnd.google-earth.kml+xml'\n )\n\n#------------------------------------------\n# TO EXPORT .KMZ\n\noutput$download_kmz <- downloadHandler(\n filename <- function() {\n 'export.kmz'\n },\n content <- function(file) {\n write(kml_show_object(), file)\n },\n contentType = 'application/vnd.google-earth.kmz'\n )\n}\n\nshinyApp(ui = ui, server = server)\n\nThis is using the simplekml package in Python to create a kml object. To get it to function right (on the R side), you can't just create the kml object. R doesn't know what to do with that. You have to assign a variable to kml.kml() inside the Python code. In this example, it is a class variable named 'show_kml', which gets assigned locally in R to the kml_show_object.\n\nIf you need help with the Python code that makes the kml object, I can edit this to include it.\nHope that helps!\n"
] | [
0
] | [] | [] | [
"export",
"kml",
"python",
"r",
"shiny"
] | stackoverflow_0074642898_export_kml_python_r_shiny.txt |
Q:
Reading semicolon separated data with read_html
I want to read winequality-white.csv data using pandas.read_html() function.
Here is my code:
import pandas as pd
wine = pd.DataFrame(
pd.read_html(
"https://github.com/shrikant-temburwar/Wine-Quality-Dataset/blob/master/winequality-white.csv",
thousands=";",
header=0,
)[0]
)
... but the result is:
Unnamed: 0 "fixed acidity";"volatile acidity";"citric acid";"residual sugar";"chlorides";"free sulfur dioxide";"total sulfur dioxide";"density";"pH";"sulphates";"alcohol";"quality"
0 NaN 7;0.27;0.36;20.7;0.045;45;170;1.001;3;0.45;8.8;6
1 NaN 6.3;0.3;0.34;1.6;0.049;14;132;0.994;3.3;0.49;9...
2 NaN 8.1;0.28;0.4;6.9;0.05;30;97;0.9951;3.26;0.44;1...
3 NaN 7.2;0.23;0.32;8.5;0.058;47;186;0.9956;3.19;0.4...
4 NaN 7.2;0.23;0.32;8.5;0.058;47;186;0.9956;3.19;0.4...
Of course I can choose raw and then use read_csv, but in case of html reading, how can I fix it?
A:
Alright, here is an option using pd.read_html:
import pandas as pd
wine = pd.read_html(
"https://github.com/shrikant-temburwar/Wine-Quality-Dataset/blob/master/winequality-white.csv",
header=0
)[0]
wine.drop('Unnamed: 0', axis=1, inplace=True)
headers = wine.columns[0].replace('"', '').split(';')
wine.columns = ['data']
wine[headers] = wine.data.str.split(';', expand=True)
wine.drop('data', axis=1, inplace=True)
wine.head()
The code above will result in:
>>> wine.head()
fixed acidity volatile acidity citric acid residual sugar chlorides free sulfur dioxide total sulfur dioxide density pH sulphates alcohol quality
0 7 0.27 0.36 20.7 0.045 45 170 1.001 3 0.45 8.8 6
1 6.3 0.3 0.34 1.6 0.049 14 132 0.994 3.3 0.49 9.5 6
2 8.1 0.28 0.4 6.9 0.05 30 97 0.9951 3.26 0.44 10.1 6
3 7.2 0.23 0.32 8.5 0.058 47 186 0.9956 3.19 0.4 9.9 6
4 7.2 0.23 0.32 8.5 0.058 47 186 0.9956 3.19 0.4 9.9 6
>>>
But I would never exchange the simplicity of the following snippet for the above code:
import pandas as pd
wine = pd.read_csv(
'https://raw.githubusercontent.com/shrikant-temburwar/Wine-Quality-Dataset/master/winequality-white.csv',
header=0,
sep=';'
)
 | Reading semicolon separated data with read_html | I want to read the winequality-white.csv data using the pandas.read_html() function.
Here is my code:
import pandas as pd
wine = pd.DataFrame(
pd.read_html(
"https://github.com/shrikant-temburwar/Wine-Quality-Dataset/blob/master/winequality-white.csv",
thousands=";",
header=0,
)[0]
)
... but the result is:
Unnamed: 0 "fixed acidity";"volatile acidity";"citric acid";"residual sugar";"chlorides";"free sulfur dioxide";"total sulfur dioxide";"density";"pH";"sulphates";"alcohol";"quality"
0 NaN 7;0.27;0.36;20.7;0.045;45;170;1.001;3;0.45;8.8;6
1 NaN 6.3;0.3;0.34;1.6;0.049;14;132;0.994;3.3;0.49;9...
2 NaN 8.1;0.28;0.4;6.9;0.05;30;97;0.9951;3.26;0.44;1...
3 NaN 7.2;0.23;0.32;8.5;0.058;47;186;0.9956;3.19;0.4...
4 NaN 7.2;0.23;0.32;8.5;0.058;47;186;0.9956;3.19;0.4...
Of course I could switch to the raw file and use read_csv instead, but if I stick with HTML reading, how can I fix it?
| [
"Alright, here is an option using pd.read_html:\nimport pandas as pd\n\nwine = pd.read_html(\n \"https://github.com/shrikant-temburwar/Wine-Quality-Dataset/blob/master/winequality-white.csv\",\n header=0\n)[0]\n\nwine.drop('Unnamed: 0', axis=1, inplace=True)\nheaders = wine.columns[0].replace('\"', '').split(';')\nwine.columns = ['data']\nwine[headers] = wine.data.str.split(';', expand=True)\nwine.drop('data', axis=1, inplace=True)\nwine.head()\n\nThe code above will result in:\n>>> wine.head()\n fixed acidity volatile acidity citric acid residual sugar chlorides free sulfur dioxide total sulfur dioxide density pH sulphates alcohol quality\n0 7 0.27 0.36 20.7 0.045 45 170 1.001 3 0.45 8.8 6\n1 6.3 0.3 0.34 1.6 0.049 14 132 0.994 3.3 0.49 9.5 6\n2 8.1 0.28 0.4 6.9 0.05 30 97 0.9951 3.26 0.44 10.1 6\n3 7.2 0.23 0.32 8.5 0.058 47 186 0.9956 3.19 0.4 9.9 6\n4 7.2 0.23 0.32 8.5 0.058 47 186 0.9956 3.19 0.4 9.9 6\n>>> \n\nBut I would never exchange the simplicity of the following snippet for the above code:\nimport pandas as pd\n\nwine = pd.read_csv(\n 'https://raw.githubusercontent.com/shrikant-temburwar/Wine-Quality-Dataset/master/winequality-white.csv',\n header=0,\n sep=';'\n)\n\n"
] | [
0
] | [
"you could probably better to use the rawdatacontent address of github to remove the problem due to different html interface.\nhere is what you could do\nimport pandas as pd\nimport requests\nimport io\nurl = \"https://raw.githubusercontent.com/shrikant-temburwar/Wine-Quality-Dataset/master/winequality-white.csv\"\nr = requests.get(url)\nobj = io.BytesIO(r.content)\nwine = pd.read_csv(obj, delimiter=\";\")\nwine.head()\n\n"
] | [
-1
] | [
"pandas",
"python"
] | stackoverflow_0074640187_pandas_python.txt |
Q:
Add counter to Folium map of markers that meet a specific condition
I work for a small ISP and have written a script to collect data from the switches that is then fed into Folium to produce a map of subscribers and their operating status as online or offline.
I need to add a counter, maybe through a div, that would count markers of a given status and display the total in the map window, for example in the top left corner.
The cluster option in Folium has helped give an idea per area, but I would also like an overall total that is easily seen without having to add up the cluster totals.
Does Folium have something like this that I've just missed, or is there another Python-based dashboard I could use to create the number and add it as a div somewhere?
I've looked at some Python-based dashboards, but most of those are heavily graph-based. I really only need a square box with a number that changes based on the total number of markers/subscribers that are offline.
A:
Here is a description of how to add text or images to folium maps:
https://stackoverflow.com/a/65105474/13843906
text will be moving with the map, but image seems to be at a fixed position in map window
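A minimal sketch of that fixed-position approach, assuming offline_count is a hypothetical variable holding the computed total (folium.Element and map.get_root().html.add_child are standard folium calls):
import folium

offline_count = 42  # hypothetical total of offline subscribers

m = folium.Map(location=[45.0, 9.0], zoom_start=8)
# ... add the subscriber markers here ...

# a fixed-position div anchored to the map window rather than to coordinates
counter_html = f'''
<div style="position: fixed; top: 10px; left: 60px; z-index: 9999;
            background: white; border: 2px solid grey; padding: 6px 10px;">
    Offline subscribers: {offline_count}
</div>
'''
m.get_root().html.add_child(folium.Element(counter_html))
m.save("subscribers.html")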
 | Add counter to Folium map of markers that meet a specific condition | I work for a small ISP and have written a script to collect data from the switches that is then fed into Folium to produce a map of subscribers and their operating status as online or offline.
I need to add a counter, maybe through a div, that would count markers of a given status and display the total in the map window, for example in the top left corner.
The cluster option in Folium has helped give an idea per area, but I would also like an overall total that is easily seen without having to add up the cluster totals.
Does Folium have something like this that I've just missed, or is there another Python-based dashboard I could use to create the number and add it as a div somewhere?
I've looked at some Python-based dashboards, but most of those are heavily graph-based. I really only need a square box with a number that changes based on the total number of markers/subscribers that are offline.
| [
"Here is a description of how to add text or images to folium maps:\nhttps://stackoverflow.com/a/65105474/13843906\ntext will be moving with the map, but image seems to be at a fixed position in map window\n"
] | [
0
] | [] | [] | [
"dashboard",
"folium",
"html",
"maps",
"python"
] | stackoverflow_0074618598_dashboard_folium_html_maps_python.txt |
Q:
Python beginner - Can someone tell me why this loop doesn't finish?
def is_power_of_two(n):
# Check if the number can be divided by two without a remainder
while n % 2 == 0:
n = n / 2
# If after dividing by two the number is 1, it's a power of two
if n == 1:
return True
if n != 0:
return False
print(is_power_of_two(0)) # Should be False
print(is_power_of_two(1)) # Should be True
print(is_power_of_two(8)) # Should be True
print(is_power_of_two(9)) # Should be False
This is an exercise from Coursera's Python course. I don't know why it doesn't finish when n=0.
A:
while n % 2 == 0: turns into an infinite loop.
This happens because the first input number is 0:
while n % 2 == 0: ---> 0 % 2 == 0 ---> True
while True:
    # this loops forever, because n = n / 2 keeps n at 0
Code correction
To avoid this, check whether n is zero before the while loop.
def is_power_of_two(n):
if n ==0:
return False
# Check if the number can be divided by two without a remainder
while n % 2 == 0:
n = n / 2
# If after dividing by two the number is 1, it's a power of two
if n == 1:
return True
if n != 0:
return False
print(is_power_of_two(0)) # Should be False
print(is_power_of_two(1)) # Should be True
print(is_power_of_two(8)) # Should be True
print(is_power_of_two(9)) # Should be False
Gives #
False
True
True
False
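For reference, here is a loop-free sketch that relies on the fact that a positive power of two has exactly one bit set:
def is_power_of_two(n):
    # n & (n - 1) clears the lowest set bit; only powers of two (and 0) give 0
    return n > 0 and (n & (n - 1)) == 0

print(is_power_of_two(0))  # False
print(is_power_of_two(1))  # True
print(is_power_of_two(8))  # True
print(is_power_of_two(9))  # False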
 | Python beginner - Can someone tell me why this loop doesn't finish? | def is_power_of_two(n):
# Check if the number can be divided by two without a remainder
while n % 2 == 0:
n = n / 2
# If after dividing by two the number is 1, it's a power of two
if n == 1:
return True
if n != 0:
return False
print(is_power_of_two(0)) # Should be False
print(is_power_of_two(1)) # Should be True
print(is_power_of_two(8)) # Should be True
print(is_power_of_two(9)) # Should be False
This is an exercise from Coursera's Python course. I don't know why it doesn't finish when n=0.
| [
"while n % 2 == 0: will turns out to be a infinate loop.\nThis happening because\nFirst number input is 0\nwhile n % 2 == 0: ---> 0 ==0 --> True\nWhile True:\n #This is infinate loop.\n\nCode correction\nTo avoid this check if n is zero or not before while.\ndef is_power_of_two(n):\n\n if n ==0:\n return False\n # Check if the number can be divided by two without a remainder\n while n % 2 == 0:\n n = n / 2\n # If after dividing by two the number is 1, it's a power of two\n if n == 1:\n return True\n if n != 0:\n return False\n\n\n\nprint(is_power_of_two(0)) # Should be False\nprint(is_power_of_two(1)) # Should be True\nprint(is_power_of_two(8)) # Should be True\nprint(is_power_of_two(9)) # Should be False\n\nGives #\nFalse\nTrue\nTrue\nFalse\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074642951_python.txt |
Q:
Tkinter: use after five times and then stop
In my app I'm trying to have a blinking image. This image should blink just five times and then stay still in the frame for five seconds. Right now I've managed to make the image flash, but I don't know how to make it blink just five times and then stay still.
I've tried using a for loop but it did not solve it. This is my code:
import tkinter as tk
from PIL import ImageTk, Image, ImageGrab
class Blinking(tk.Frame):
def __init__(self, parent, *args, **kwargs):
tk.Frame.__init__(self, parent, *args, **kwargs)
self.first_img = ImageTk.PhotoImage(
Image.open("img_1.png")
)
self.first_img_label = tk.Label(self, image=self.first_img, foreground="black")
self.first_img_label.pack(padx=50, side="left", anchor="w")
self.button = tk.Button(self, text="start", command=self.blink_img)
self.button.pack()
def blink_img(self):
current_color = self.first_img_label.cget("foreground")
background_color = self.first_img_label.cget("background")
blink_clr = "black" if current_color == background_color else background_color
blink_img = "" if current_color == background_color else self.first_img
self.first_img_label.config(image=blink_img, foreground=blink_clr)
self.after_f = self.first_img_label.after(601, self.blink_img)
if __name__ == "__main__":
root = tk.Tk()
root.attributes("-fullscreen", True)
root.attributes("-topmost", True)
Blinking(root).pack(side="top", fill="both", expand=True)
root.mainloop()
How can I achieve this?
A:
Keep track of how many times you've 'blinked', then use after_cancel() to stop when you want.
def __init__(self, parent, *args, **kwargs):
... # code removed for brevity
self.blink_count = 0 # initialize blink counter
def blink_img(self):
current_color = self.first_img_label.cget("foreground")
background_color = self.first_img_label.cget("background")
blink_clr = "black" if current_color == background_color else background_color
blink_img = "" if current_color == background_color else self.first_img
self.first_img_label.config(image=blink_img, foreground=blink_clr)
if self.blink_count < 5:
self.after_f = self.first_img_label.after(601, self.blink_img)
self.blink_count += 1
else:
self.after_cancel(self.after_f)
self.blink_count = 0 # reset counter
 | Tkinter: use after five times and then stop | In my app I'm trying to have a blinking image. This image should blink just five times and then stay still in the frame for five seconds. Right now I've managed to make the image flash, but I don't know how to make it blink just five times and then stay still.
I've tried using a for loop but it did not solve it. This is my code:
import tkinter as tk
from PIL import ImageTk, Image, ImageGrab
class Blinking(tk.Frame):
def __init__(self, parent, *args, **kwargs):
tk.Frame.__init__(self, parent, *args, **kwargs)
self.first_img = ImageTk.PhotoImage(
Image.open("img_1.png")
)
self.first_img_label = tk.Label(self, image=self.first_img, foreground="black")
self.first_img_label.pack(padx=50, side="left", anchor="w")
self.button = tk.Button(self, text="start", command=self.blink_img)
self.button.pack()
def blink_img(self):
current_color = self.first_img_label.cget("foreground")
background_color = self.first_img_label.cget("background")
blink_clr = "black" if current_color == background_color else background_color
blink_img = "" if current_color == background_color else self.first_img
self.first_img_label.config(image=blink_img, foreground=blink_clr)
self.after_f = self.first_img_label.after(601, self.blink_img)
if __name__ == "__main__":
root = tk.Tk()
root.attributes("-fullscreen", True)
root.attributes("-topmost", True)
Blinking(root).pack(side="top", fill="both", expand=True)
root.mainloop()
How can I achieve this?
| [
"Keep track of how many times you've 'blinked', then use after_cancel() to stop when you want.\ndef __init__(self, parent, *args, **kwargs):\n ... # code removed for brevity\n self.blink_count = 0 # initialize blink counter\n\ndef blink_img(self):\n current_color = self.first_img_label.cget(\"foreground\")\n background_color = self.first_img_label.cget(\"background\")\n blink_clr = \"black\" if current_color == background_color else background_color\n blink_img = \"\" if current_color == background_color else self.first_img\n self.first_img_label.config(image=blink_img, foreground=blink_clr)\n\n if self.blink_count < 5:\n self.after_f = self.first_img_label.after(601, self.blink_img)\n self.blink_count += 1\n else:\n self.after_cancel(self.after_f)\n self.blink_count = 0 # reset counter\n\n"
] | [
1
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0074642888_python_tkinter.txt |
Q:
scikitlearn SVR giving same values for predictions (have tried scaling)
I have a battery dataframe with rows representing various cycles and a set of features for that cycle:
As an example row 1:
df = pd.DataFrame(columns=['Ecell_V', 'I_mA', 'EnergyCharge_W_h', 'QCharge_mA_h',
'EnergyDischarge_W_h', 'QDischarge_mA_h', 'Temperature__C',
'cycleNumber', 'SOH', 'Cell'])
df.loc[0] = [3.730646, 2988.8713, 0.185061, 49.724845, 0.0, 0.0, 27.5, 2, 0.99, 'VAH11']
There are 600,000 rows
I am trying to predict the value for SOH as follows:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression # for building a linear regression model
from sklearn.svm import SVR # for building SVR model
from sklearn.preprocessing import MinMaxScaler
train_data = pd.read_csv("train_data.csv")
train_cell = train_data.pop('Cell')
# reduce size of df train for comp purposes
train_data = train_data.iloc[::20, :]
train_data = train_data.reset_index(drop=True)
#remove unwanted features
train_data.pop('Ns')
train_data.pop('time_s')
#scale the data
scaler = MinMaxScaler()
train_data_scaled = scaler.fit_transform(train_data)
#return to df
train_data_scaled = pd.DataFrame(train_data_scaled, columns=['Ecell_V', 'I_mA', 'EnergyCharge_W_h', 'QCharge_mA_h',
'EnergyDischarge_W_h', 'QDischarge_mA_h', 'Temperature__C',
'cycleNumber', 'SOH'])
train_data_scaled
#unscale target
train_data_scaled['SOH'] = train_data['SOH']
train_data_scaled
#split target and input
X = train_data_scaled.drop('SOH', axis=1)
y = train_data_scaled['SOH'].values
#model
model = SVR(kernel='rbf', C=100, epsilon=1)
svr = model.fit(X, y)
#predict model
pred = model.predict(X)
Now returning pred gives the same prediction for each row:
array([0.89976814, 0.89976814, 0.89976814, ..., 0.89976814, 0.89976814,
0.89976814])
why is this happening?
A:
Using StandardScaler() on both the X and y data corrected this issue, with the inverse transform applied afterwards to return the predictions to their original scale.
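A minimal sketch of that fix, assuming X and y are the feature matrix and SOH target built in the question (epsilon is lowered here, since epsilon=1 is very large once y is standardized to unit variance):
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X, y as defined in the question above
x_scaler = StandardScaler()
y_scaler = StandardScaler()

X_scaled = x_scaler.fit_transform(X)
# StandardScaler expects 2D input, so reshape the 1-D target
y_scaled = y_scaler.fit_transform(y.reshape(-1, 1)).ravel()

model = SVR(kernel='rbf', C=100, epsilon=0.1)
model.fit(X_scaled, y_scaled)

# predictions come back in the scaled space; invert them to original SOH units
pred_scaled = model.predict(X_scaled)
pred = y_scaler.inverse_transform(pred_scaled.reshape(-1, 1)).ravel()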
| scikitlearn SVR giving same values for predictions (have tried scaling) | I have a battery dataframe with rows representing various cycles and a set of features for that cycle:
As an example row 1:
df = pd.DataFrame(columns=['Ecell_V', 'I_mA', 'EnergyCharge_W_h', 'QCharge_mA_h',
'EnergyDischarge_W_h', 'QDischarge_mA_h', 'Temperature__C',
'cycleNumber', 'SOH', 'Cell'])
df.loc[0] = [3.730646, 2988.8713, 0.185061, 49.724845, 0.0, 0.0, 27.5, 2, 0.99, 'VAH11']
There are 600,000 rows
I am trying to predict the value for SOH as follows:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression # for building a linear regression model
from sklearn.svm import SVR # for building SVR model
from sklearn.preprocessing import MinMaxScaler
train_data = pd.read_csv("train_data.csv")
train_cell = train_data.pop('Cell')
# reduce size of df train for comp purposes
train_data = train_data.iloc[::20, :]
train_data = train_data.reset_index(drop=True)
#remove unwanted features
train_data.pop('Ns')
train_data.pop('time_s')
#scale the data
scaler = MinMaxScaler()
train_data_scaled = scaler.fit_transform(train_data)
#return to df
train_data_scaled = pd.DataFrame(train_data_scaled, columns=['Ecell_V', 'I_mA', 'EnergyCharge_W_h', 'QCharge_mA_h',
'EnergyDischarge_W_h', 'QDischarge_mA_h', 'Temperature__C',
'cycleNumber', 'SOH'])
train_data_scaled
#unscale target
train_data_scaled['SOH'] = train_data['SOH']
train_data_scaled
#split target and input
X = train_data_scaled.drop('SOH', axis=1)
y = train_data_scaled['SOH'].values
#model
model = SVR(kernel='rbf', C=100, epsilon=1)
svr = model.fit(X, y)
#predict model
pred = model.predict(X)
Now returning pred gives the same prediction for each row:
array([0.89976814, 0.89976814, 0.89976814, ..., 0.89976814, 0.89976814,
0.89976814])
why is this happening?
| [
"Using StandardScaler() on the X and y data corrected this issue, with an inverse called to return it to original values.\n"
] | [
0
] | [] | [] | [
"python",
"regression",
"scikit_learn",
"svm"
] | stackoverflow_0074640628_python_regression_scikit_learn_svm.txt |
Q:
Errors with mlflow and machine learning project using XGBoost and hyperopt in python
I'm experiencing some problems with a machine learning project.
I use XGBoost to forecast warehouse item supply, and I'm trying to select the best hyperparameters with hyperopt and mlflow.
This is the code:
import pandas as pd
import glob
import holidays
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from sklearn import metrics,model_selection
from sklearn.model_selection import train_test_split
from typing import Dict,Union,Any,Tuple
import mlflow
import mlflow.xgboost
import xgboost as xgb
import hyperopt
from hyperopt.pyll.base import scope
import findspark
findspark.init()
import pyspark
import logging
import sys
class xgb_tune:
def __init__(self):
logging.basicConfig(format='%(levelname)s %(asctime)s %(message)s')
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.nomeFile = 'dati'
self.pathForecast = 'F:\\My-data\\prg\\machine-learning\\forecast\\'
self.pathWS = "F:\\My-data\\prg\\cq_webshaker\\"
self.test = None
self.train = None
print("Python v{}".format(sys.version))
def loadData(self,onlyweek=False,fromFile=True,nomeFile=None):
# method to load the data (hidden here for easy reading)
def reg_metrics(self,actual: pd.Series, pred: pd.Series) -> Dict:
return{
"MAE": metrics.mean_absolute_error(actual, pred),
"RMSE": np.sqrt(metrics.mean_squared_error(actual, pred))
}
def fit_and_log_cv(
self,
x_train: Union[pd.DataFrame, np.array],
y_train: Union[pd.DataFrame, np.array],
x_test: Union[pd.DataFrame, np.array],
y_test: Union[pd.DataFrame, np.array],
params: Dict[str,Any],
nested: bool = False
) -> Tuple[Dict[str,Any],Dict[str,Any]]:
with mlflow.start_run(nested=nested) as run:
print(type(params))
print(params)
model_cv = xgb.XGBRFRegressor(**params)
y_pred_cv = model_selection.cross_val_predict(model_cv, x_train, y_train)
metrics_cv = {
f"val_{metric}":value
for metric, value in self.reg_metrics(y_train, y_pred_cv).items()
}
            # fit and log the training run
try:
mlflow.xgboost.autolog()
dataset = xgb.DMatrix(x_train,label = y_train)
model = xgb.train(params=params, dtrain=dataset)
y_pred_test = model.predict(xgb.DMatrix(x_test))
metric_test = {
f"test_{metric}": value
for metric,value in self.reg_metrics(y_test, y_pred_test).items()
}
metrics = {**metric_test,**metrics_cv}
mlflow.log_metrics(metrics)
return metrics
except Exception as e:
print('autolog -> {}'.format(e))
def build_train_objective(
self,
x_train: Union[pd.DataFrame,np.array],
y_train: Union[pd.DataFrame,np.array],
x_test: Union[pd.DataFrame,np.array],
y_test: Union[pd.DataFrame,np.array],
metric: str
):
def train_func(params):
""" fa il train del modello e ritorna loss e metriche """
try:
metrics = self.fit_and_log_cv(x_train, y_train, x_test, y_test, params, nested = True)
return {'status': hyperopt.STATUS_OK, 'loss':metrics[metric]}
except Exception as e:
print('train_func -> {}'.format(e))
return train_func
def log_best(self,run: mlflow.entities.Run, metric: str) -> None:
runs = None
try:
client = mlflow.tracking.MlflowClient()
runs = client.search_runs(
[run.info.experiment_id],
"tags.mlflow.parentRunId = '{run_id}' ".format(run_id=run.info.run_id)
)
best_run = min(runs, key=lambda run: run.data.metrics[metric])
mlflow.set_tag("best_run", best_run.info.run_id)
mlflow.log_metric(f"best_{metric}", best_run.data.metrics[metric])
except Exception as e:
self.logger.error('log_best -> {}'.format(e))
if __name__ == '__main__':
mod = xgb_tune()
df = mod.loadData()
print(df)
    # split the data into train and test
X = df.loc[:,df.columns != 'qta']
y = df.loc[:,df.columns == 'qta']
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,shuffle=False)
MAX_EVALS = 1
METRIC = "val_RMSE"
# Number of experiments to run at once
PARALLELISM = 8
space = {
'learning_rate': hyperopt.hp.loguniform('learning_rate', 0, 0.3),
'max_depth': scope.int(hyperopt.hp.uniform('max_depth', 1, 100)),
'min_child_weight': hyperopt.hp.loguniform('min_child_weight', -2, 3),
'subsample': hyperopt.hp.uniform('subsample', 0.5, 1),
'colsample_bytree': hyperopt.hp.uniform('colsample_bytree', 0.5, 1),
'gamma': hyperopt.hp.loguniform('gamma', -10, 10),
'alpha': hyperopt.hp.loguniform('alpha', -10, 10),
'lambda': hyperopt.hp.loguniform('lambda', -10, 10),
'objective': 'reg:squarederror',
'eval_metric': 'rmse',
'seed': 123,
}
trials = hyperopt.SparkTrials(parallelism=PARALLELISM)
train_objective = mod.build_train_objective(
X_train, y_train, X_test, y_test, METRIC)
with mlflow.start_run() as run:
try:
hyperopt.fmin(fn=train_objective,
space=space,
algo=hyperopt.tpe.suggest,
max_evals=MAX_EVALS,
trials=trials)
except Exception as e:
mod.logger.error('main -> {}'.format(e))
mod.log_best(run, METRIC)
search_run_id = run.info.run_id
experiment_id = run.info.experiment_id
But when I run the script I get these errors, and I can't tell what the specific error is or how to log it.
2022/11/10 11:33:39 WARNING mlflow.utils.autologging_utils: MLflow autologging encountered a warning: "###########\Anaconda3\envs\machine-learning\lib\site-packages\_distutils_hack\__init__.py:33: UserWarning: Setuptools is replacing distutils."
trial task 0 failed, exception is An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (########### executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2293)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1021)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1020)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
1 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
.
None
ERROR 2022-11-10 11:33:39,931 trial task 0 failed, exception is An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (gsuzzi.passepartout.local executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2293)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1021)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1020)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
1 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
22/11/10 11:33:39 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
22/11/10 11:33:39 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (gsuzzi.passepartout.local executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
22/11/10 11:33:39 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Total Trials: 1: 0 succeeded, 1 failed, 0 cancelled.
INFO 2022-11-10 11:33:41,318 Total Trials: 1: 0 succeeded, 1 failed, 0 cancelled.
ERROR 2022-11-10 11:33:41,322 main -> There are no evaluation tasks, cannot return argmin of task losses.
ERROR 2022-11-10 11:33:45,578 log_best -> min() arg is an empty sequence
Any suggestions?
Thanks
A:
It seems that the problem is related to the number of parallel workers set. If I change PARALLELISM to a lower value (< 8), it works.
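As a minimal sketch of the only change involved (the exact safe value likely depends on the machine):
# lower the number of parallel Spark workers; anything below 8 worked in this setup
PARALLELISM = 4
trials = hyperopt.SparkTrials(parallelism=PARALLELISM)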
| Errors with mlflow and machine learning project using XGBoost and hyperopt in python | I'm experiencing some problems with a machine learning project.
I use XGBoost to forecast warehouse item supply, and I'm trying to select the best hyperparameters with hyperopt and mlflow.
This is the code:
import pandas as pd
import glob
import holidays
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from sklearn import metrics,model_selection
from sklearn.model_selection import train_test_split
from typing import Dict,Union,Any,Tuple
import mlflow
import mlflow.xgboost
import xgboost as xgb
import hyperopt
from hyperopt.pyll.base import scope
import findspark
findspark.init()
import pyspark
import logging
import sys
class xgb_tune:
def __init__(self):
logging.basicConfig(format='%(levelname)s %(asctime)s %(message)s')
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.nomeFile = 'dati'
self.pathForecast = 'F:\\My-data\\prg\\machine-learning\\forecast\\'
self.pathWS = "F:\\My-data\\prg\\cq_webshaker\\"
self.test = None
self.train = None
print("Python v{}".format(sys.version))
def loadData(self,onlyweek=False,fromFile=True,nomeFile=None):
# method to load the data (hidden here for easy reading)
def reg_metrics(self,actual: pd.Series, pred: pd.Series) -> Dict:
return{
"MAE": metrics.mean_absolute_error(actual, pred),
"RMSE": np.sqrt(metrics.mean_squared_error(actual, pred))
}
def fit_and_log_cv(
self,
x_train: Union[pd.DataFrame, np.array],
y_train: Union[pd.DataFrame, np.array],
x_test: Union[pd.DataFrame, np.array],
y_test: Union[pd.DataFrame, np.array],
params: Dict[str,Any],
nested: bool = False
) -> Tuple[Dict[str,Any],Dict[str,Any]]:
with mlflow.start_run(nested=nested) as run:
print(type(params))
print(params)
model_cv = xgb.XGBRFRegressor(**params)
y_pred_cv = model_selection.cross_val_predict(model_cv, x_train, y_train)
metrics_cv = {
f"val_{metric}":value
for metric, value in self.reg_metrics(y_train, y_pred_cv).items()
}
            # fit and log the training run
try:
mlflow.xgboost.autolog()
dataset = xgb.DMatrix(x_train,label = y_train)
model = xgb.train(params=params, dtrain=dataset)
y_pred_test = model.predict(xgb.DMatrix(x_test))
metric_test = {
f"test_{metric}": value
for metric,value in self.reg_metrics(y_test, y_pred_test).items()
}
metrics = {**metric_test,**metrics_cv}
mlflow.log_metrics(metrics)
return metrics
except Exception as e:
print('autolog -> {}'.format(e))
def build_train_objective(
self,
x_train: Union[pd.DataFrame,np.array],
y_train: Union[pd.DataFrame,np.array],
x_test: Union[pd.DataFrame,np.array],
y_test: Union[pd.DataFrame,np.array],
metric: str
):
def train_func(params):
""" fa il train del modello e ritorna loss e metriche """
try:
metrics = self.fit_and_log_cv(x_train, y_train, x_test, y_test, params, nested = True)
return {'status': hyperopt.STATUS_OK, 'loss':metrics[metric]}
except Exception as e:
print('train_func -> {}'.format(e))
return train_func
def log_best(self,run: mlflow.entities.Run, metric: str) -> None:
runs = None
try:
client = mlflow.tracking.MlflowClient()
runs = client.search_runs(
[run.info.experiment_id],
"tags.mlflow.parentRunId = '{run_id}' ".format(run_id=run.info.run_id)
)
best_run = min(runs, key=lambda run: run.data.metrics[metric])
mlflow.set_tag("best_run", best_run.info.run_id)
mlflow.log_metric(f"best_{metric}", best_run.data.metrics[metric])
except Exception as e:
self.logger.error('log_best -> {}'.format(e))
if __name__ == '__main__':
mod = xgb_tune()
df = mod.loadData()
print(df)
    # split the data into train and test
X = df.loc[:,df.columns != 'qta']
y = df.loc[:,df.columns == 'qta']
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,shuffle=False)
MAX_EVALS = 1
METRIC = "val_RMSE"
# Number of experiments to run at once
PARALLELISM = 8
space = {
'learning_rate': hyperopt.hp.loguniform('learning_rate', 0, 0.3),
'max_depth': scope.int(hyperopt.hp.uniform('max_depth', 1, 100)),
'min_child_weight': hyperopt.hp.loguniform('min_child_weight', -2, 3),
'subsample': hyperopt.hp.uniform('subsample', 0.5, 1),
'colsample_bytree': hyperopt.hp.uniform('colsample_bytree', 0.5, 1),
'gamma': hyperopt.hp.loguniform('gamma', -10, 10),
'alpha': hyperopt.hp.loguniform('alpha', -10, 10),
'lambda': hyperopt.hp.loguniform('lambda', -10, 10),
'objective': 'reg:squarederror',
'eval_metric': 'rmse',
'seed': 123,
}
trials = hyperopt.SparkTrials(parallelism=PARALLELISM)
train_objective = mod.build_train_objective(
X_train, y_train, X_test, y_test, METRIC)
with mlflow.start_run() as run:
try:
hyperopt.fmin(fn=train_objective,
space=space,
algo=hyperopt.tpe.suggest,
max_evals=MAX_EVALS,
trials=trials)
except Exception as e:
mod.logger.error('main -> {}'.format(e))
mod.log_best(run, METRIC)
search_run_id = run.info.run_id
experiment_id = run.info.experiment_id
But when I run the script I get these errors, and I can't tell what the specific error is or how to log it.
2022/11/10 11:33:39 WARNING mlflow.utils.autologging_utils: MLflow autologging encountered a warning: "###########\Anaconda3\envs\machine-learning\lib\site-packages\_distutils_hack\__init__.py:33: UserWarning: Setuptools is replacing distutils."
trial task 0 failed, exception is An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (########### executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2293)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1021)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1020)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
1 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
.
None
ERROR 2022-11-10 11:33:39,931 trial task 0 failed, exception is An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (gsuzzi.passepartout.local executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2293)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1021)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1020)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
1 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
22/11/10 11:33:39 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
22/11/10 11:33:39 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (gsuzzi.passepartout.local executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:581)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:770)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:747)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:755)
29 more
22/11/10 11:33:39 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Total Trials: 1: 0 succeeded, 1 failed, 0 cancelled.
INFO 2022-11-10 11:33:41,318 Total Trials: 1: 0 succeeded, 1 failed, 0 cancelled.
ERROR 2022-11-10 11:33:41,322 main -> There are no evaluation tasks, cannot return argmin of task losses.
ERROR 2022-11-10 11:33:45,578 log_best -> min() arg is an empty sequence
Any suggestions?
Thanks
| [
"It seems that the problem is related to the number of parallel worker setted. If i change PARALLELISM to some other values (< 8) it works.\n"
] | [
0
] | [] | [] | [
"hyperopt",
"mlflow",
"python",
"xgboost"
] | stackoverflow_0074387655_hyperopt_mlflow_python_xgboost.txt |
Q:
How to create multiple zip folders at once from a dataframe in Python
I have a dataframe consisting of users and a list of pdfs related to each of those users. The pdfs have no standard naming convention, there can be any number of pdfs in a list, and the number of users is much larger than in the example below.
import pandas as pd
from zipfile import ZipFile
data = {'name':['aaron', 'ben', 'charlie', 'daniel'],
'pdfs':[['aaron1.pdf', 'aaron2.pdf', 'aaron3.pdf'],
['ben1.pdf', 'ben2.pdf'],
['charlie3.pdf', 'charlie5.pdf'],
['dan_age.pdf', 'daniel1.pdf']]}
df = pd.DataFrame(data, columns = ['name', 'pdfs'])
Using the ZipFile package I want to run a loop to create a single zip folder for each user that contains within it the relevant pdf documents for only that individual user.
I can successfully create a zip folder for each user in the dataframe using the first two lines of the for loop, however I cannot map the pdf lists to each individual user so that only the pdfs related to each user appear in the correct zip file.
users = df['name']
pdfs = df['pdfs']
for user in users:
zipfiles = ZipFile(user + ".zip", 'w'),
for zip in zipfiles:
for lists in pdfs:
for pdf in lists:
zip.write(pdf)
Using a for loop I want to create separate zip folders named 'aaron.zip', 'ben.zip', 'charlie.zip', 'daniel.zip', with each folder only containing the pdfs related to that user.
A:
I think you're making 3 mistakes:
You try to iterate over a newly created, empty zip-archive (sidenote: don't use zip as variable name, you're overriding a builtin function):
zipfiles = ZipFile(user + ".zip", 'w'),
for zip in zipfiles:
You try to write every file in df["pdfs"] in the user-zip-archive, not only the ones from the user:
for lists in pdfs:
for pdf in lists:
You don't use the newly created zip-archive to write to:
zip.write(pdf)
You could try the following instead:
def zip_user_files(row):
user, files = row
with ZipFile(user + ".zip", "w") as archive:
for file in files:
archive.write(file)
df[["name", "pdfs"]].apply(zip_user_files, axis=1)
I have used df[["name", "pdfs"]] instead of df just in case the dataframe has actually more columns. If that's not the case just use df.
Alternative if you don't want to use .apply:
for user, files in zip(df["name"], df["pdfs"]):
with ZipFile(user + ".zip", "w") as archive:
for file in files:
archive.write(file)
| How to create multiple zip folders at once from a dataframe in Python | I have a dataframe consisting of users and a list of pdfs related to each of those users. The pdfs have no standard naming convention, there can be any number of pdfs to a list and the number of users is much longer than the example below.
import pandas as pd
from zipfile import ZipFile
data = {'name':['aaron', 'ben', 'charlie', 'daniel'],
'pdfs':[['aaron1.pdf', 'aaron2.pdf', 'aaron3.pdf'],
['ben1.pdf', 'ben2.pdf'],
['charlie3.pdf', 'charlie5.pdf'],
['dan_age.pdf', 'daniel1.pdf']]}
df = pd.DataFrame(data, columns = ['name', 'pdfs'])
Using the ZipFile package I want to run a loop to create a single zip folder for each user that contains within it the relevant pdf documents for only that individual user.
I can successfully create a zip folder for each user in the dataframe using the first two lines of the for loop, however I cannot map the pdf lists to each individual user so that only the pdfs related to each user appear in the correct zip file.
users = df['name']
pdfs = df['pdfs']
for user in users:
zipfiles = ZipFile(user + ".zip", 'w'),
for zip in zipfiles:
for lists in pdfs:
for pdf in lists:
zip.write(pdf)
Using a for loop I want to create seperate zip folders named 'aaron.zip', 'ben.zip', 'charlie.zip', 'daniel.zip' with each folder only containing the pdfs related to that user.
| [
"I think you're making 3 mistakes:\n\nYou try to iterate over a newly created, empty zip-archive (sidenote: don't use zip as variable name, you're overriding a builtin function):\n zipfiles = ZipFile(user + \".zip\", 'w'),\n for zip in zipfiles:\n\n\nYou try to write every file in df[\"pdfs\"] in the user-zip-archive, not only the ones from the user:\n for lists in pdfs:\n for pdf in lists:\n\n\nYou don't use the newly created zip-archive to write to:\n zip.write(pdf)\n\n\n\nYou could try the following instead:\ndef zip_user_files(row):\n user, files = row\n with ZipFile(user + \".zip\", \"w\") as archive:\n for file in files:\n archive.write(file)\n\ndf[[\"name\", \"pdfs\"]].apply(zip_user_files, axis=1)\n\nI have used df[[\"name\", \"pdfs\"]] instead of df just in case the dataframe has actually more columns. If that's not the case just use df.\nAlternative if you don't want to use .apply:\nfor user, files in zip(df[\"name\"], df[\"pdfs\"]):\n with ZipFile(user + \".zip\", \"w\") as archive:\n for file in files:\n archive.write(file)\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"python_zipfile"
] | stackoverflow_0074640725_dataframe_pandas_python_python_zipfile.txt |
Q:
fill one dataframe with values from another
I have this DataFrame, which has null values that haven't been populated correctly.
Unidad Precio Combustible Año_del_vehiculo Caballos \
49 1 1000 Gasolina 1998.0 50.0
63 1 800 Gasolina 1998.0 50.0
88 1 600 Gasolina 1999.0 54.0
107 1 3100 Diésel 2008.0 54.0
244 1 2000 Diésel 1995.0 60.0
... ... ... ... ... ...
46609 1 47795 Gasolina 2016.0 420.0
46770 1 26900 Gasolina 2011.0 450.0
46936 1 19900 Gasolina 2007.0 510.0
46941 1 24500 Gasolina 2006.0 514.0
47128 1 79600 Gasolina 2017.0 612.0
Comunidad_autonoma Marca_y_Modelo Año_Venta Año_Comunidad \
49 Islas Baleares CITROEN AX 2020 2020Islas Baleares
63 Islas Baleares SEAT Arosa 2021 2021Islas Baleares
88 Islas Baleares FIAT Seicento 2020 2020Islas Baleares
107 La Rioja TOYOTA Aygo 2020 2020La Rioja
244 Aragón PEUGEOT 205 2019 2019Aragón
... ... ... ... ...
46609 La Rioja PORSCHE Cayenne 2020 2020La Rioja
46770 Cataluña AUDI RS5 2020 2020Cataluña
46936 Islas Baleares MERCEDES-BENZ Clase M 2020 2020Islas Baleares
46941 La Rioja MERCEDES-BENZ Clase E 2020 2020La Rioja
47128 Islas Baleares MERCEDES-BENZ Clase E 2021 2021Islas Baleares
Fecha Año Super_95 Diesel Comunidad Salario en euros anuales
49 2020-12-01 NaN NaN NaN NaN NaN
63 2021-01-01 NaN NaN NaN NaN NaN
88 2020-12-01 NaN NaN NaN NaN NaN
107 2020-12-01 NaN NaN NaN NaN NaN
244 2019-03-01 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
46609 2020-12-01 NaN NaN NaN NaN NaN
46770 2020-07-01 NaN NaN NaN NaN NaN
46936 2020-10-01 NaN NaN NaN NaN NaN
46941 2020-11-01 NaN NaN NaN NaN NaN
47128 2021-01-01 NaN NaN NaN NaN NaN
I need to fill the gasoline, diesel and salary columns with the values from the following DataFrame:
Año Super_95 Diesel Comunidad Año_Comunidad Fecha \
0 2020 1.321750 1.246000 Navarra 2020Navarra 2020-01-01
1 2020 1.301000 1.207250 Navarra 2020Navarra 2020-02-01
2 2020 1.224800 1.126200 Navarra 2020Navarra 2020-03-01
3 2020 1.106667 1.020000 Navarra 2020Navarra 2020-04-01
4 2020 1.078750 0.986250 Navarra 2020Navarra 2020-05-01
.. ... ... ... ... ... ...
386 2021 1.416600 1.265000 La rioja 2021La rioja 2021-08-01
387 2021 1.431000 1.277000 La rioja 2021La rioja 2021-09-01
388 2021 1.474000 1.344000 La rioja 2021La rioja 2021-10-01
389 2021 1.510200 1.382000 La rioja 2021La rioja 2021-11-01
390 2021 1.481333 1.348667 La rioja 2021La rioja 2021-12-01
Salario en euros anuales
0 27.995,96
1 27.995,96
2 27.995,96
3 27.995,96
4 27.995,96
.. ...
386 21.535,29
387 21.535,29
388 21.535,29
389 21.535,29
390 21.535,29
It should fill the columns of the first DataFrame with values from the second wherever the Año_Comunidad column matches. For example, for a NaN in a row where 2020Islas Baleares appears, fill it in with the gasoline price from the row of the other table where 2020Islas Baleares appears; if it is 2020Aragón, use the 2020Aragón row, and so on. I had thought of something like this:
analisis['Super_95'].fillna(analisis2['Super_95'].apply(lambda x: x if x=='2020Islas Baleares' else np.nan), inplace=True)
The second dataframe is the result of doing a merge, and those null values have not been filled.
A:
df1 = df1.merge(df2, on='Año_Comunidad')

As a result you'll have one DataFrame where columns with the same names will have the suffix _x for the first DataFrame and _y for the second one.
Now to fill in the blanks you can do this for each column:
df1.loc[df1["Año_x"].isnull(),'Año_x'] = df1["Año_y"]
If a row in Año is empty, it will be filled with data from the second table that we merged earlier.
You can do it in a cycle for all the columns:
cols = ['Año', 'Super_95', 'Diesel', 'Comunidad', 'Salario en euros anuales']
for col in cols:
df1.loc[df1[col+"_x"].isnull(), col+'_x'] = df1[col+'_y']
And finally you can drop the merged columns:
for col in cols:
df1 = df1.drop(col+'_y', axis=1)
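As an alternative, here is a minimal sketch that skips the merge/suffix handling and fills the gaps directly with map and fillna. It assumes analisis and analisis2 are the two DataFrames from the question and that Año_Comunidad uniquely identifies a row in the second one:
cols = ['Año', 'Super_95', 'Diesel', 'Comunidad', 'Salario en euros anuales']

# build a lookup table keyed on Año_Comunidad from the reference frame
lookup = analisis2.drop_duplicates('Año_Comunidad').set_index('Año_Comunidad')

for col in cols:
    # map each row's key to the reference value and fill only the missing cells
    analisis[col] = analisis[col].fillna(analisis['Año_Comunidad'].map(lookup[col]))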
| fill one dataframe with values from another | I have this Dataframe, which is null values that haven't been populated right.
Unidad Precio Combustible Año_del_vehiculo Caballos \
49 1 1000 Gasolina 1998.0 50.0
63 1 800 Gasolina 1998.0 50.0
88 1 600 Gasolina 1999.0 54.0
107 1 3100 Diésel 2008.0 54.0
244 1 2000 Diésel 1995.0 60.0
... ... ... ... ... ...
46609 1 47795 Gasolina 2016.0 420.0
46770 1 26900 Gasolina 2011.0 450.0
46936 1 19900 Gasolina 2007.0 510.0
46941 1 24500 Gasolina 2006.0 514.0
47128 1 79600 Gasolina 2017.0 612.0
Comunidad_autonoma Marca_y_Modelo Año_Venta Año_Comunidad \
49 Islas Baleares CITROEN AX 2020 2020Islas Baleares
63 Islas Baleares SEAT Arosa 2021 2021Islas Baleares
88 Islas Baleares FIAT Seicento 2020 2020Islas Baleares
107 La Rioja TOYOTA Aygo 2020 2020La Rioja
244 Aragón PEUGEOT 205 2019 2019Aragón
... ... ... ... ...
46609 La Rioja PORSCHE Cayenne 2020 2020La Rioja
46770 Cataluña AUDI RS5 2020 2020Cataluña
46936 Islas Baleares MERCEDES-BENZ Clase M 2020 2020Islas Baleares
46941 La Rioja MERCEDES-BENZ Clase E 2020 2020La Rioja
47128 Islas Baleares MERCEDES-BENZ Clase E 2021 2021Islas Baleares
Fecha Año Super_95 Diesel Comunidad Salario en euros anuales
49 2020-12-01 NaN NaN NaN NaN NaN
63 2021-01-01 NaN NaN NaN NaN NaN
88 2020-12-01 NaN NaN NaN NaN NaN
107 2020-12-01 NaN NaN NaN NaN NaN
244 2019-03-01 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
46609 2020-12-01 NaN NaN NaN NaN NaN
46770 2020-07-01 NaN NaN NaN NaN NaN
46936 2020-10-01 NaN NaN NaN NaN NaN
46941 2020-11-01 NaN NaN NaN NaN NaN
47128 2021-01-01 NaN NaN NaN NaN NaN
I need to fill the gasoline, diesel and salary tables with the values of the following:
Año Super_95 Diesel Comunidad Año_Comunidad Fecha \
0 2020 1.321750 1.246000 Navarra 2020Navarra 2020-01-01
1 2020 1.301000 1.207250 Navarra 2020Navarra 2020-02-01
2 2020 1.224800 1.126200 Navarra 2020Navarra 2020-03-01
3 2020 1.106667 1.020000 Navarra 2020Navarra 2020-04-01
4 2020 1.078750 0.986250 Navarra 2020Navarra 2020-05-01
.. ... ... ... ... ... ...
386 2021 1.416600 1.265000 La rioja 2021La rioja 2021-08-01
387 2021 1.431000 1.277000 La rioja 2021La rioja 2021-09-01
388 2021 1.474000 1.344000 La rioja 2021La rioja 2021-10-01
389 2021 1.510200 1.382000 La rioja 2021La rioja 2021-11-01
390 2021 1.481333 1.348667 La rioja 2021La rioja 2021-12-01
Salario en euros anuales
0 27.995,96
1 27.995,96
2 27.995,96
3 27.995,96
4 27.995,96
.. ...
386 21.535,29
387 21.535,29
388 21.535,29
389 21.535,29
390 21.535,29
It would fill the columns of the first with the second when the year_community table matches. for example in the nan where 2020Islas Baleares appears in the same row. fill in with the value of the price of gasoline from the other table where 2020Islas Baleares appears in the same row. In the case that it is 2020aragon, it would be with 2020 aragon and so on. I had thought of something like this:
analisis['Super_95'].fillna(analisis2['Super_95'].apply(lambda x: x if x=='2020Islas Baleares' else np.nan), inplace=True)
the second dataframe is the result of doing a merge, and those null values have not worked
| [
"df1.merge(df2, on='Año_Comunidad')\n\nAs a result you'll have one DataFrame where columns with same names will have a suffix _x for first DataFrame and _y for the second one.\nNow to fill in the blanks you can do this for each column:\ndf1.loc[df1[\"Año_x\"].isnull(),'Año_x'] = df1[\"Año_y\"]\n\nIf a row in Año is empty, it will be filled with data from second table that we merged earlier.\nYou can do it in a cycle for all the columns:\ncols = ['Año', 'Super_95', 'Diesel', 'Comunidad', 'Salario en euros anuales']\nfor col in cols:\n df1.loc[df1[col+\"_x\"].isnull(), col+'_x'] = df1[col+'_y']\n\nAnd finally you can drop the merged columns:\nfor col in cols:\n df1 = df1.drop(col+'_y', axis=1)\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074642291_pandas_python.txt |
Q:
Login with python to scrape pages
I need to scrape a page that requires login to access. But I couldn't do it for this web page. How can I do that?
I want to login on 'https://web.tvplus.com.tr/giris' And then scrape pages like: 'https://web.tvplus.com.tr/kanallar'
I haven't tried anything for the question. I do not know how to do it.
A:
StackOverflow typically hates questions like these, but the book I used when I started my programming journey actually had a whole entire chapter on this:
https://automatetheboringstuff.com/2e/chapter12/
The module you're looking to use is "selenium"; even though many would claim that there are better modules to use in 2022, selenium will do the job fine for your purposes.
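As a rough sketch of what that looks like with Selenium — note that the form field names and the submit-button selector below are assumptions, so inspect the actual login form to find the real ones:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://web.tvplus.com.tr/giris")

# hypothetical field names -- check the page's HTML for the real ones
driver.find_element(By.NAME, "username").send_keys("my_user")
driver.find_element(By.NAME, "password").send_keys("my_password")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

# the browser session keeps the login cookies, so protected pages now load normally
driver.get("https://web.tvplus.com.tr/kanallar")
print(driver.page_source)

Once the page source is in hand, you can parse it with BeautifulSoup or keep using Selenium's find_element calls to pull out the data you need.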
| Login with python to scrape pages | I need to scrape a page that requires login to access. But I couldn't do it for this web page. How can I do that?
I want to login on 'https://web.tvplus.com.tr/giris' And then scrape pages like: 'https://web.tvplus.com.tr/kanallar'
I haven't tried anything for the question. I do not know how to do it.
| [
"StackOverflow typically hates questions like these, but the book I used when I started my programming journey actually had a whole entire chapter on this:\nhttps://automatetheboringstuff.com/2e/chapter12/\nThe module you're looking to use is \"selenium\"; even though many would claim that there's better modules to use in 2022, selenium will do the job fine for your purposes.\n"
] | [
0
] | [] | [] | [
"authentication",
"python",
"web_scraping"
] | stackoverflow_0074643063_authentication_python_web_scraping.txt |
Q:
Split 7 digit into separate columns in csv with python
Hi, I have 100 rows of data in a CSV file.
I want to split a 7-digit number into separate columns with Python.
My CSV file is like this:
A
1234567

Split into new columns:

B  C  D  E  F  G  H
1  2  3  4  5  6  7
I tried:
Splitdigit= df['A']>str.split(expand=true).add_perfix('A')
A:
If you have strings, you can use:
out = df['A'].astype(str).str.split('(?<=.)(?=.)', expand=True)
Output:
0 1 2 3 4 5 6
0 1 2 3 4 5 6 7
With the column names:
from string import ascii_uppercase
out = (df['A'].astype(str).str.split('(?<=.)(?=.)', expand=True)
.rename(columns=dict(enumerate(ascii_uppercase[1:])))
)
Output:
B C D E F G H
0 1 2 3 4 5 6 7
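If the goal is to go from the CSV file back to a CSV file, a minimal end-to-end sketch could look like this — the file names are placeholders, and it assumes every value in column A has exactly 7 digits:
import pandas as pd

df = pd.read_csv("data.csv", dtype={"A": str})  # read A as text so leading zeros survive

# one character per new column; requires every value to be exactly 7 digits long
digits = pd.DataFrame(df["A"].apply(list).tolist(),
                      columns=list("BCDEFGH"), index=df.index)

digits.to_csv("output.csv", index=False)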
| Split 7 digit into separate columns in csv with python | Hi I have 100 data in csv file
I want to split 7 digit number into seprate columns with python
My csv file is like this:
A
1234567
Split into new columns:
B
C
D
E
F
G
H
1
2
3
4
5
6
7
I try
Splitdigit= df['A']>str.split(expand=true).add_perfix('A')
| [
"If you have strings, you can use:\nout = df['A'].astype(str).str.split('(?<=.)(?=.)', expand=True)\n\nOutput:\n 0 1 2 3 4 5 6\n0 1 2 3 4 5 6 7\n\nWith the column names:\nfrom string import ascii_uppercase\n\nout = (df['A'].astype(str).str.split('(?<=.)(?=.)', expand=True)\n .rename(columns=dict(enumerate(ascii_uppercase[1:])))\n )\n\nOutput:\n B C D E F G H\n0 1 2 3 4 5 6 7\n\n"
] | [
0
] | [] | [] | [
"arrays",
"digits",
"pandas",
"python"
] | stackoverflow_0074643120_arrays_digits_pandas_python.txt |
Q:
How to check if the set with commas is infinite or finite?
I am trying to use the method math.isinf to find out if the set is infinite.
The set is
{...,-5,-4,-3,-2,-1,0,1,2,3,4,5,...}
import math
Infinte_set = {-math.inf,-5,-4,-3,-2,-1,0,1,2,3,4,5,math.inf}
print(math.isinf(Infinte_set))
I was expecting True or False but what I got is this:
TypeError Traceback (most recent call last)
<ipython-input-13-3d08b071af6f> in <module>
5 import math
6 Infinte_set = {-math.inf,-5,-4,-3,-2,-1,0,1,2,3,4,5,math.inf}
----> 7 print(math.isinf(Infinte_set))
TypeError: must be real number, not set
A:
You can pass the min value or max value of the set in math.isinf function to check for infinity.
print(math.isinf(min(Infinte_set)) or math.isinf(max(Infinte_set)))
math.isinf(min(Infinte_set)) -> min(Infinte_set) would be the minimum numerical value, in your case it would be -infinity.
math.isinf(max(Infinte_set)) -> max(Infinte_set) would be the maximum numerical value, in your case it would be +infinity.
In case you don't know how or works:
True or False -> True
False or True -> True
False or False -> False
True or True -> True
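An equivalent way to write the check, without relying on min and max, is to test every element of the set — a small self-contained sketch:
import math

def contains_infinity(values):
    # True if any element is +inf or -inf
    return any(math.isinf(v) for v in values)

infinite_set = {-math.inf, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, math.inf}
print(contains_infinity(infinite_set))  # True
print(contains_infinity({1, 2, 3}))     # False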
| How to check if the set with commas is infinite or finite? | I am trying to use the method math.isinf to find out if the set is infinite.
The set is
{...,-5,-4,-3,-2,-1,0,1,2,3,4,5,...}
import math
Infinte_set = {-math.inf,-5,-4,-3,-2,-1,0,1,2,3,4,5,math.inf}
print(math.isinf(Infinte_set))
I was expecting True or False but what I got is this:
TypeError Traceback (most recent call last)
<ipython-input-13-3d08b071af6f> in <module>
5 import math
6 Infinte_set = {-math.inf,-5,-4,-3,-2,-1,0,1,2,3,4,5,math.inf}
----> 7 print(math.isinf(Infinte_set))
TypeError: must be real number, not set
| [
"You can pass the min value or max value of the set in math.isinf function to check for infinity.\nprint(math.isinf(min(Infinte_set)) or math.isinf(max(Infinte_set)))\n\nmath.isinf(min(Infinte_set)) -> min(Infinte_set) would be the minimum numerical value, in your case it would be -infinity.\nmath.isinf(max(Infinte_set)) -> max(Infinte_set) would be the minimum numerical value, in your case it would be +infinity.\nIf in case you don't know how or works:\nTrue or False -> True\nFalse or True -> True\nFalse or False -> False\nTrue or True -> True\n\n"
] | [
0
] | [] | [] | [
"infinite",
"math",
"numbers",
"python",
"set"
] | stackoverflow_0074642968_infinite_math_numbers_python_set.txt |
Q:
time lapse of transport data in folium
I have data that resembles routes in the Netherlands. Now I want to show these routes on a folium map using some sort of timelapse.
The underneath code creates a folium map displaying all routes over the last few months.
However, I want some sort of slider which you can drag to show the routes of, for example, a specific day in the last few months.
(The slib geopandas DataFrame contains the routes of the last few months.)
for i in range(len(slibdata)):
slib = slib.add_child(folium.PolyLine(locations=[routes[i][0], routes[i][1]], weight=slibdata['weight'][i]/3, color=color_, tooltip = f"{slibdata['istOmschr'][i]} >>> {slibdata['ist2Omschr'][i]}" ))
for coord in verwerkers_info:
slib = slib.add_child(folium.CircleMarker(location = [coord[1],coord[2]], tooltip =(coord), radius = 5))
Anything would help!
Maybe this is not possible with folium? But with some other package?
A:
Look at this example, which demonstrates how markers/lines are added depending on their timestamp. At the bottom of the map are controls for sliding through the data:
https://nbviewer.org/github/python-visualization/folium/blob/main/examples/Plugins.ipynb#Timestamped-GeoJSON
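Applied to the route data from the question, a minimal sketch with folium's TimestampedGeoJson plugin could look like this — the 'date' column is an assumption (use whichever column in slibdata holds each route's day), and each route becomes a LineString feature whose 'times' list drives the slider:
import folium
from folium.plugins import TimestampedGeoJson

m = folium.Map(location=[52.1, 5.3], zoom_start=8)

features = []
for i in range(len(slibdata)):
    day = str(slibdata['date'][i])                    # hypothetical date column
    coords = [[lon, lat] for lat, lon in routes[i]]   # GeoJSON wants lon, lat order
    features.append({
        "type": "Feature",
        "geometry": {"type": "LineString", "coordinates": coords},
        "properties": {"times": [day] * len(coords)},
    })

TimestampedGeoJson(
    {"type": "FeatureCollection", "features": features},
    period="P1D",          # the time control steps one day at a time
    add_last_point=False,
).add_to(m)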
| time lapse of transport data in folium | I have data that resembles routes in the Netherlands. Now I want to show these routes on a folium map using some sort of timelapse.
The underneath code creates a folium map displaying all routes over the last few months.
I want however some sort of slide which you can drag to show the routes of for example a specific day in the last few months.
(The slib geopandas dataframe consists the routes of the last few months)
for i in range(len(slibdata)):
slib = slib.add_child(folium.PolyLine(locations=[routes[i][0], routes[i][1]], weight=slibdata['weight'][i]/3, color=color_, tooltip = f"{slibdata['istOmschr'][i]} >>> {slibdata['ist2Omschr'][i]}" ))
for coord in verwerkers_info:
slib = slib.add_child(folium.CircleMarker(location = [coord[1],coord[2]], tooltip =(coord), radius = 5))
Anything would help!
Maybe this is not possible with folium? But with some other package?
| [
"look at this example which demonstrates how markers / lines are added depending on their timestamp. At the bottom of the map are controls for sliding through data\nhttps://nbviewer.org/github/python-visualization/folium/blob/main/examples/Plugins.ipynb#Timestamped-GeoJSON\n"
] | [
0
] | [] | [] | [
"folium",
"geopandas",
"python",
"time",
"timelapse"
] | stackoverflow_0074641964_folium_geopandas_python_time_timelapse.txt |
Q:
How to remove sprites shadow?
I'm trying to load this image without its shadow
I've tried getting the color key by printing the color where my mouse is and then setting the image's color key to that color. I also tried adding convert_alpha() when loading the image, but it still didn't work. Am I just getting the color wrong, or is there another way of removing colors other than setting the color key? I tried searching Google, but it only shows setting the color key.
My current code
import pygame, sys
pygame.init()
display = pygame.display.set_mode((1280, 720))
clock = pygame.time.Clock()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
pygame.quit()
sys.exit()
if event.key == pygame.K_RETURN:
print(display.get_at(pygame.mouse.get_pos()))
display.fill('white')
image = pygame.image.load('mystic_woods_free_v0.2/sprites/characters/player.png')
image = pygame.transform.scale(image, (image.get_rect().width * 6, image.get_rect().height * 6))
image.set_colorkey((138, 141, 151))
display.blit(image, (0, 0))
pygame.display.update()
clock.tick(60)
A:
I found out the true color of the shadow by using
pygame.image.load(...).convert()
and then using that color as the colorkey (which then works even without .convert()).
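A minimal sketch of that idea — the pixel coordinates below are a placeholder, so pick any pixel that is actually part of the shadow:
import pygame

pygame.init()
display = pygame.display.set_mode((1280, 720))

# convert() matches the display's pixel format, so get_at() reports
# the shadow colour exactly as set_colorkey() will compare it
image = pygame.image.load(
    'mystic_woods_free_v0.2/sprites/characters/player.png').convert()

shadow_color = image.get_at((5, 20))   # hypothetical coordinates of a shadow pixel
image.set_colorkey(shadow_color)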
| How to remove sprites shadow? | I'm trying to load this image without its shadow
I've tried getting the color key by printing the color where my mouse is and and then setting the images color key to that color. I also tried adding convert_alpha() when loading the image but it still didn't work. Am I just getting the color wrong or is there another way of removing colors other than setting the color key. I tried searching google but it only shows setting the color key.
My current code
import pygame, sys
pygame.init()
display = pygame.display.set_mode((1280, 720))
clock = pygame.time.Clock()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
pygame.quit()
sys.exit()
if event.key == pygame.K_RETURN:
print(display.get_at(pygame.mouse.get_pos()))
display.fill('white')
image = pygame.image.load('mystic_woods_free_v0.2/sprites/characters/player.png')
image = pygame.transform.scale(image, (image.get_rect().width * 6, image.get_rect().height * 6))
image.set_colorkey((138, 141, 151))
display.blit(image, (0, 0))
pygame.display.update()
clock.tick(60)
| [
"I found out the true color of the shadow by using\npygame.image.load(...).convert()\nand then using that color as the colorkey (which works without .covert() )\n"
] | [
0
] | [] | [] | [
"pygame",
"python",
"python_3.x"
] | stackoverflow_0074642639_pygame_python_python_3.x.txt |
Q:
Use different Python version with virtualenv
How do I create a virtual environment for a specified version of Python?
A:
NOTE: For Python 3.3+, see The Aelfinn's answer below.
Use the --python (or short -p) option when creating a virtualenv instance to specify the Python executable you want to use, e.g.:
virtualenv --python="/usr/bin/python2.6" "/path/to/new/virtualenv/"
A:
Since Python 3, the documentation suggests creating the virtual environment using:
python3 -m venv "my_env_name"
Please note that venv does not permit creating virtual environments with other versions of Python. For that, install and use the virtualenv package.
Obsolete information
The pyvenv script can be used to create a virtual environment:
pyvenv "/path/to/new/virtual/environment"
Deprecated since Python 3.6.
A:
These are the steps you can follow when you are on a shared hosting environment and need to install & compile Python from source and then create a venv for your Python version. For Python 2.7.9, you would do something along these lines:
mkdir ~/src
wget http://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz
tar -zxvf Python-2.7.9.tgz
cd Python-2.7.9
mkdir ~/.localpython
./configure --prefix=$HOME/.localpython
make
make install
virtual env
cd ~/src
wget https://pypi.python.org/packages/5c/79/5dae7494b9f5ed061cff9a8ab8d6e1f02db352f3facf907d9eb614fb80e9/virtualenv-15.0.2.tar.gz#md5=0ed59863994daf1292827ffdbba80a63
tar -zxvf virtualenv-15.0.2.tar.gz
cd virtualenv-15.0.2/
~/.localpython/bin/python setup.py install
virtualenv ve -p $HOME/.localpython/bin/python2.7
source ve/bin/activate
Naturally, this can be applicable to any situation where you want to replicate the exact environment you work and deploy on.
A:
There is an easier way,
virtualenv venv --python=python2.7
Thanks to a comment, this only works if you have python2.7 installed at the system level (e.g. /usr/bin/python2.7).
Otherwise, if you are using homebrew you can use the path to give you what you want.
virtualenv venv --python=/usr/local/bin/python
You can find the path to your python installation with
which python
This will also work with python 3.
which python3
>> /usr/local/bin/python3
virtualenv venv --python=/usr/local/bin/python3
Ultimately condensing to:
virtualenv venv -p `which python`
virtualenv venv -p `which python3`
A:
virtualenv --python=/usr/bin/python2.6 <path/to/myvirtualenv>
A:
Under Windows for me this works:
virtualenv --python=c:\Python25\python.exe envname
without the python.exe I got WindowsError: [Error 5] Access is denied
I have Python2.7.1 installed with virtualenv 1.6.1, and I wanted python 2.5.2.
A:
Mac OSX 10.6.8 (Snow Leopard):
1) When you do pip install virtualenv, the pip command is associated with one of your python versions, and virtualenv gets installed into that version of python. You can do
$ which pip
to see what version of python that is. If you see something like:
$ which pip
/usr/local/bin/pip
then do:
$ ls -al /usr/local/bin/pip
lrwxrwxr-x 1 root admin 65 Apr 10 2015 /usr/local/bin/pip ->
../../../Library/Frameworks/Python.framework/Versions/2.7/bin/pip
You can see the python version in the output.
By default, that will be the version of python that is used for any new environment you create. However, you can specify any version of python installed on your computer to use inside a new environment with the -p flag:
$ virtualenv -p python3.2 my_env
Running virtualenv with interpreter /usr/local/bin/python3.2
New python executable in my_env/bin/python
Installing setuptools, pip...done.
virtualenv my_env will create a folder in the current directory which
will contain the Python executable files, and a copy of the pip
[command] which you can use to install other packages.
http://docs.python-guide.org/en/latest/dev/virtualenvs/
virtualenv just copies python from a location on your computer into the newly created my_env/bin/ directory.
2) The system python is in /usr/bin, while the various python versions I installed were, by default, installed into:
/usr/local/bin
3) The various pythons I installed have names like python2.7 or python3.2, and I can use those names rather than full paths.
========VIRTUALENVWRAPPER=========
1) I had some problems getting virtualenvwrapper to work. This is what I ended up putting in ~/.bash_profile:
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/django_projects #Not very important -- mkproject command uses this
#Added the following based on:
#http://stackoverflow.com/questions/19665327/virtualenvwrapper-installation-snow-leopard-python
export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python2.7
#source /usr/local/bin/virtualenvwrapper.sh
source /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenvwrapper.sh
2) The -p option works differently with virtualenvwrapper: I have to specify the full path to the python interpreter to be used in the new environment(when I do not want to use the default python version):
$ mkvirtualenv -p /usr/local/bin/python3.2 my_env
Running virtualenv with interpreter /usr/local/bin/python3
New python executable in my_env/bin/python
Installing setuptools, pip...done.
Usage: source deactivate
removes the 'bin' directory of the environment activated with 'source
activate' from PATH.
Unlike virtualenv, virtualenvwrapper will create the environment at the location specified by the $WORKON_HOME environment variable. That keeps all your environments in one place.
A:
[November 2019] I needed to install a Python 3.7 environment (env) on my Python 3.8-based Arch Linux system. Python 3.7 was no longer on the system, so I could not downgrade Python, to install a package that I needed.
Furthermore, I wanted to use that package / Python 3.7 inside a virtual environment (venv). This is how I did it.
Download Python version source files:
I downloaded the Python 3.7.4 source files from
https://www.python.org/downloads/source/
to
/mnt/Vancouver/apps/python_versions/src/Python-3.7.4.tgz
I then extracted that archive (source files) to
/mnt/Vancouver/apps/python_versions/src/Python-3.7.4/
Installation:
[Note: in my system env, not a venv.]
cd /mnt/Vancouver/apps/python_versions/src/Python-3.7.4/
time ./configure ## 17 sec
time make ## 1 min 51 sec
time sudo make install ## 18 sec
time make clean ## 0.3 sec
Examine installed Python versions:
$ which python
/usr/bin/python
$ python --version
Python 3.8.0
$ which python3.7
/usr/local/bin/python3.7
$ python ## Python 3.8 [system / env]
Python 3.8.0 (default, Oct 23 2019, 18:51:26)
[GCC 9.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
$ python3.7 ## newly-installed Python 3.7 package
Python 3.7.4 (default, Nov 20 2019, 11:36:53)
[GCC 9.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.version)
3.7.4 (default, Nov 20 2019, 11:36:53)
[GCC 9.2.0]
>>>
$ python3.7 --version
Python 3.7.4
How to create a venv for a specific Python version:
https://docs.python.org/3/tutorial/venv.html
12.2. CREATING VIRTUAL ENVIRONMENTS
The module used to create and manage virtual environments is called venv. venv will usually install the most recent version of Python that you have available. If you have multiple versions of Python on your system, you can select a specific Python version by running python3 or whichever version you want.
To create a virtual environment, decide upon a directory where you want to place it, and run the venv module as a script with the directory path:
python3 -m venv tutorial-env
This will create the tutorial-env directory if it doesn’t exist, and also create directories inside it containing a copy of the Python interpreter, the standard library, and various supporting files.
...
Create Python 3.7 venv [on a Python 3.8 operating env / system]:
python3.7 -m venv ~/venv/py3.7 ## create Python 3.7-based venv
source ~/venv/py3.7/bin/activate ## activate that venv
deactivate ## deactivate that venv (when done, there)
Added to ~/.bashrc:
alias p37='echo " [Python 3.7 venv (source ~/venv/py3.7/bin/activate)]" && source ~/venv/py3.7/bin/activate'
Test Python 3.7 venv:
$ p37
[Python 3.7 venv (source ~/venv/py3.7/bin/activate)]
(py3.7)$ python --version
Python 3.7.4
(py3.7)$ python
Python 3.7.4 (default, Nov 20 2019, 11:36:53)
[GCC 9.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.version)
3.7.4 (default, Nov 20 2019, 11:36:53)
[GCC 9.2.0]
>>>
A:
Suppose you currently have Python 2.7 installed in your virtualenv, but want to make use of Python 3.2. You would have to update this with:
$ virtualenv --python=/usr/bin/python3.2 name_of_your_virtualenv
Then activate your virtualenv by:
$ source activate name_of_your_virtualenv
and then do: python --version in shell to check whether your version is now updated.
A:
You should have that Python version installed. If you have it then basically,
With virtualenv,
virtualenv --python=python3.8 env/place/you/want/to/save/to
with venv
python3.8 -m venv env/place/you/want/to/save/to
The above examples are for python3.8, you can change it to have different versions of virtual environments given that they are installed in your computer.
A:
These two commands should work fine.
virtualenv -p python2 myenv (For python2)
virtualenv -p python3 myenv (For python3)
A:
You can call virtualenv with python version you want. For example:
python3 -m virtualenv venv
Or alternatively directly point to your virtualenv path. e.g. for windows:
c:\Python34\Scripts\virtualenv.exe venv
And by running:
venv/bin/python
Python 3.5.1 (v3.5.1:37a07cee5969, Dec 5 2015, 21:12:44)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
you can see the python version installed in virtual environment
A:
The -p approach works well, but you do have to remember to use it every time. If your goal is to switch to a newer version of Python generally, that's a pain and can also lead to mistakes.
Your other option is to set an environment variable that does the same thing as -p. Set this via your ~/.bashrc file or wherever you manage environment variables for your login sessions:
export VIRTUALENV_PYTHON=/path/to/desired/version
Then virtualenv will use that any time you don't specify -p on the command line.
A:
On the mac I use pyenv and virtualenvwrapper. I had to create a new virtualenv. You need homebrew which I'll assume you've installed if you're on a mac, but just for fun:
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew install pyenv
pyenv install 2.7.10
pyenv global 2.7.10
export PATH=/Users/{USERNAME}/.pyenv/versions/2.7.10/bin:$PATH
mkvirtualenv -p ~/.pyenv/versions/2.7.10/bin/python {virtual_env_name}
I also froze my requirements first so I could simply reinstall in the new virtualenv with:
pip install -r requirements.txt
A:
Even easier, by using command substitution to find python2 for you:
virtualenv -p $(which python2) <path/to/new/virtualenv/>
Or when using virtualenvwrapper :
mkvirtualenv -p $(which python2) <env_name>
A:
For Mac(High Sierra), install the virtualenv on python3 and create a virtualenv for python2:
$ python3 -m pip install virtualenv
$ python3 -m virtualenv --python=python2 vp27
$ source vp27/bin/activate
(vp27)$ python --version
Python 2.7.14
A:
As already mentioned in multiple answers, using virtualenv is a clean solution. However a small pitfall that everyone should be aware of is that if an alias for python is set in bash_aliases like:
python=python3.6
this alias will also be used inside the virtual environment. So in this scenario running python -V inside the virtual env will always output 3.6 regardless of what interpreter is used to create the environment:
virtualenv venv --python=pythonX.X
A:
These seem a little overcomplicated for Windows. If you're on Windows running python 3.3 or later, you can use the python launcher py to do this much more easily. Simply install the different python version, then run:
py -[my version] -m venv env
This will create a virtual environment called env in your current directory, using python [my version]. As an example:
py -3.7 -m venv env
./env/Scripts/activate
This creates a virtual environment called env using python3.7 and activates it. No paths or other complex stuff required.
A:
I utilized this answer for Windows
https://stackoverflow.com/a/22793687/15435022
py -3.4 -m venv c:\path\to\wherever\you\want\it
A:
On windows:
py -3.4x32 -m venv venv34
or
py -2.6.2 -m venv venv26
This uses the py launcher which will find the right python executable for you (assuming you have it installed).
A:
In windows subsystem for linux:
Create environment for python3:
virtualenv --python=/usr/bin/python3 env
Activate it:
source env/bin/activate
A:
I use pyenv to manage my python version.
pyenv install 3.7.3
pyenv local 3.7.3
Check your python version:
$ python --version
Python 3.7.3
Create the virtual environment with venv:
python -m venv .
Then activate the Virtual Environment:
source bin/activate
Check your python version:
$ python --version
Python 3.7.3
You may need to remove the previous virtual environment
rm -rf bin
A:
End of 2020:
The most seamless experience for using virtualenv (added benefit: with any possible python version) would be to use pyenv and its (bundled) pyenv-virtualenv plugin (cf https://realpython.com/intro-to-pyenv/#virtual-environments-and-pyenv)
Usage: pyenv virtualenv <python_version> <environment_name>
Installation:
first check that you've got all prerequisites (depending on your OS): https://github.com/pyenv/pyenv/wiki/Common-build-problems#prerequisites
curl https://pyenv.run | bash
exec $SHELL
cf https://github.com/pyenv/pyenv-installer
That being said, nowadays the best possible alternative instead of using virtualenv (and pip) would be Poetry (along with pyenv indicated above, to handle different python versions).
Another option, because it's supported directly by the PyPA (the org behind pip and the PyPI) and has restarted releasing since the end of May (didn't release since late 2018 prior to that...) would be Pipenv
A:
This worked for my usage in Windows 10, where I have Python 3.7 and want to downgrade for a project in Python 3.6.6:
I used "venv" to create a new environment called "venv", I downloaded from https://www.python.org/downloads/windows/ ; install "Download Windows x86-64 executable installer-" ; then I used the following command line in the directory where I want to create my environment
>C:\Users\...\Python\Python36\python.exe -m venv venv
Finally, I activated the environnent using the command line:
>venv\Scripts\activate.bat
And check the python version by calling:
>python --version
Python 3.6.6
A:
Yes, the above answers are correct and work fine on Unix-based systems like Linux and Mac OS X.
I tried to create virtualenv for Python2 & Python3 with the following commands.
Here I have used venv2 & venv3 as their names for Python2 & Python3 respectively.
Python2 »
MacBook-Pro-2:~ admin$ virtualenv venv2 --python=`which python2`
Running virtualenv with interpreter /usr/local/bin/python2
New python executable in /Users/admin/venv2/bin/python
Installing setuptools, pip, wheel...done.
MacBook-Pro-2:~ admin$
MacBook-Pro-2:~ admin$ ls venv2/bin/
activate easy_install pip2.7 python2.7
activate.csh easy_install-2.7 python wheel
activate.fish pip python-config
activate_this.py pip2 python2
MacBook-Pro-2:~ admin$
Python3 »
MacBook-Pro-2:~ admin$ virtualenv venv3 --python=`which python3`
Running virtualenv with interpreter /usr/local/bin/python3
Using base prefix '/Library/Frameworks/Python.framework/Versions/3.6'
New python executable in /Users/admin/venv3/bin/python3
Also creating executable in /Users/admin/venv3/bin/python
Installing setuptools, pip, wheel...done.
MacBook-Pro-2:~ admin$
MacBook-Pro-2:~ admin$ ls venv3/bin/
activate easy_install pip3.6 python3.6
activate.csh easy_install-3.6 python wheel
activate.fish pip python-config
activate_this.py pip3 python3
MacBook-Pro-2:~ admin$
Checking Python installation locations
MacBook-Pro-2:~ admin$ which python2
/usr/local/bin/python2
MacBook-Pro-2:~ admin$
MacBook-Pro-2:~ admin$ which python3
/usr/local/bin/python3
MacBook-Pro-2:~ admin$
A:
I use Windows, so I should use .exe in the Python path:
virtualenv -p=C:\Python27\python2.exe <envname>
A:
On Linux Ubuntu 21.04 (currently Python 3.9.5) I needed to get a virtualenv of Python 3.7.8. Full steps to get working:
Find the Python version source you want, for example 3.7.8 is here: https://www.python.org/downloads/release/python-378/
Download the Gzipped source tarball
Unzip it with tar zxvf Python-3.7.8.tgz (amend as required with your version number if different from 3.7.8)
Copy the unzipped folder to /usr/bin with: sudo cp -r Python-3.7.8 /usr/bin
cd /usr/bin/Python-3.7.8/
Check the contents if you wanted to see what you have so far: ls
sudo time ./configure
sudo time make
time sudo make install
time make clean
Check how your python is set up and reporting:
which python
python --version
Should be all relating to your primary install (Python 3.9.5 for me)
To check your new install:
which python3.7
python3.7 --version
Should be all relating to your 3.7.8 install
If you want to run it to check, do:
python3.7
exit()
Install venv:
sudo apt install venv
To create a venv (maybe in your repo, if so, add .venv to .gitignore):
python3.7 -m venv .venv
To activate your venv:
source .venv/bin/activate
Check your version:
python --version
A:
The answer to this question shouldn't be that complicated...
TL;DR:
install as many versions of Python as you prefer on your system and use:
/c/path/to/any/version/of/python -m venv my_venv
============================================
I use venv to install virtual environments with
python -m venv <where/to/and/name_of_venv>
If you try which python you will see which python you are referring to when saying "python". For example, for me it is:
which python
result:
/c/Program Files/Python36/python
So, now you have the answer!
you can install any version of python on your system and have multiple of them at the same time. So, for example I installed Python3.7 in this directory: "C:\Program Files\Python37".
So, instead of using 'python' now I specify which python by /c/Program\ Files/Python37/python:
/c/Program\ Files/Python37/python -m venv my_venv
(don't forget to escape the space in the path)
That's it!
A:
It worked for me
sudo apt-get install python3-minimal
virtualenv --no-site-packages --distribute -p /usr/bin/python3 ~/.virtualenvs/py3
A:
virtualenv -p python3 myenv
Link to Creating virtualenv
A:
This was a bug with virtualenv.
Just upgrading virtualenv with pip should be the fix.
pip install --upgrade virtualenv
A:
For Debian (debian 9) Systems in 2019, I discovered a simple solution that may solve the problem from within the virtual environment.
Suppose the virtual environment were created via:
python3.7 -m venv myenv
but only has versions of python2 and python2.7, and you need the recent features of python3.7.
Then, simply running the command:
(myvenv) $ python3.7 -m venv --upgrade /home/username/path/to/myvenv/
will add python3.7 packages if they are already available on your system.
A:
It worked for me on Windows with a Python 2 installation:
Step 1: Install a Python 3 version.
Step 2: Create an env folder for
the virtual environment.
Step 3 : c:\Python37\python -m venv
c:\path\to\env.
This is how I created a Python 3 virtual environment on my existing Python 2 installation.
A:
Yes you just need to install the other version of python, and define the location of your other version of python in your command like :
virtualenv /home/payroll/Documents/env -p /usr/bin/python3
A:
Here is the step-by-step way to create the virtual environment in a Visual Studio Code folder:
I used Powershell (Administrator mode):
1. I create a VSCode folder - "D:\Code_Python_VE" where I want to create Virtual environment.
2. Next I type the command - "pip3 install virtualenv". (D:\Code_Python_VE> pip3 install virtualenv)
3. D:\Code_Python_VE> python3 -m venv project_env
4. D:\Code_Python_VE>project_env\Scripts\activate.bat
5. D:\Code_Python_VE> ls - This will list a new directory "project_env".
6. D:\Code_Python_VE> code . This will start Visual Studio Code. Make sure the command is (code .).
7. Create launch.json with the following content:
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "python",
"request": "launch",
"name": "Python: Current File (Integrated Terminal 1)",
"program": "${file}"
},
{
"name": "Python: Current File (Integrated Terminal 2)",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
}
]
}
(Please search how to go to Debug window and Add new Configuration in VS Code).
Press F1 in Visual Studio Code and the command palette will open - Select Python Interpreter and select the virtual environment project_env.
Add test.py file with one statement print("Hello World").
Run this program.
In Visual studio Code terminal -
(project_env) d:\Code_Python_VE>python -m pip install --upgrade
I hope this helps.
A:
UBUNTU 19.04 / Global Python 3.7.
This worked for me, enabling a Python 3.8 environment using the recommended venv for python 3 development.
Install 3.8 and 3.8 venv module:
$ sudo apt install python3.8 python3.8-venv
plus any other modules you need
Create your Virtual Env using the python version you want in that env
$ /usr/bin/python3.8 -m venv python38-env
switch into your virtual env
$ source python38-env/bin/activate
python -V = python 3.8
A:
Surprised that no one has mentioned conda so far. I have found this is a lot more straightforward than the other methods mentioned here. Let's say I have python 3.9 and python 2.7 installed, and a project I am working on uses python 3.5.4: I could simply create the isolated virtual env for 3.5.4 with the conda command without downloading anything else.
To see a list of available python versions first, use the command
conda search "^python$"
To create the virtual environment for python version x.y.z, use the command
conda create -n yourenvname python=x.y.z
Activate venv with
conda activate yourenvname
Deactivate with
conda deactivate
To delete the virtual environment when done, use the command
conda remove -n yourenvname --all
A:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python_version (ex: sudo apt install python3.8)
python_version -m venv env (ex: python3.8 -m venv env)
. env/bin/activate
The above steps will solve your Python-version-for-env issue.
A:
Simple:
Linux
virtualenv venv --python=/usr/bin/python3.9
Windows
virtualenv venv --python=C:\Users\username\AppData\Local\Programs\Python\Python\python.exe
| Use different Python version with virtualenv | How do I create a virtual environment for a specified version of Python?
| [
"NOTE: For Python 3.3+, see The Aelfinn's answer below.\n\nUse the --python (or short -p) option when creating a virtualenv instance to specify the Python executable you want to use, e.g.:\nvirtualenv --python=\"/usr/bin/python2.6\" \"/path/to/new/virtualenv/\"\n\n",
"Since Python 3, the documentation suggests creating the virtual environment using:\npython3 -m venv \"my_env_name\"\n\nPlease note that venv does not permit creating virtual environments with other versions of Python. For that, install and use the virtualenv package.\n\nObsolete information\nThe pyvenv script can be used to create a virtual environment:\npyvenv \"/path/to/new/virtual/environment\"\n\nDeprecated since Python 3.6.\n",
"These are the steps you can follow when you are on a shared hosting environment and need to install & compile Python from source and then create venv from your Python version. For Python 2.7.9. you would do something along these lines:\nmkdir ~/src\nwget http://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz\ntar -zxvf Python-2.7.9.tgz\ncd Python-2.7.9\nmkdir ~/.localpython\n./configure --prefix=$HOME/.localpython\nmake\nmake install\n\nvirtual env\ncd ~/src\nwget https://pypi.python.org/packages/5c/79/5dae7494b9f5ed061cff9a8ab8d6e1f02db352f3facf907d9eb614fb80e9/virtualenv-15.0.2.tar.gz#md5=0ed59863994daf1292827ffdbba80a63\ntar -zxvf virtualenv-15.0.2.tar.gz\ncd virtualenv-15.0.2/\n~/.localpython/bin/python setup.py install\nvirtualenv ve -p $HOME/.localpython/bin/python2.7\nsource ve/bin/activate \n\nNaturally, this can be applicable to any situation where you want to replicate the exact environment you work and deploy on. \n",
"There is an easier way, \nvirtualenv venv --python=python2.7\n\nThanks to a comment, this only works if you have python2.7 installed at the system level (e.g. /usr/bin/python2.7).\nOtherwise, if you are using homebrew you can use the path to give you what you want.\nvirtualenv venv --python=/usr/local/bin/python\n\nYou can find the path to your python installation with \nwhich python\n\nThis will also work with python 3. \nwhich python3\n>> /usr/local/bin/python3\nvirtualenv venv --python=/usr/local/bin/python3\n\nUltimately condensing to:\nvirtualenv venv -p `which python`\nvirtualenv venv -p `which python3`\n\n",
"virtualenv --python=/usr/bin/python2.6 <path/to/myvirtualenv>\n\n",
"Under Windows for me this works:\nvirtualenv --python=c:\\Python25\\python.exe envname\n\nwithout the python.exe I got WindowsError: [Error 5] Access is denied \nI have Python2.7.1 installed with virtualenv 1.6.1, and I wanted python 2.5.2.\n",
"Mac OSX 10.6.8 (Snow Leopard):\n1) When you do pip install virtualenv, the pip command is associated with one of your python versions, and virtualenv gets installed into that version of python. You can do\n $ which pip \n\nto see what version of python that is. If you see something like:\n $ which pip\n /usr/local/bin/pip\n\nthen do:\n$ ls -al /usr/local/bin/pip\nlrwxrwxr-x 1 root admin 65 Apr 10 2015 /usr/local/bin/pip ->\n../../../Library/Frameworks/Python.framework/Versions/2.7/bin/pip\n\nYou can see the python version in the output.\nBy default, that will be the version of python that is used for any new environment you create. However, you can specify any version of python installed on your computer to use inside a new environment with the -p flag: \n$ virtualenv -p python3.2 my_env \nRunning virtualenv with interpreter /usr/local/bin/python3.2 \nNew python executable in my_env/bin/python \nInstalling setuptools, pip...done. \n\n\nvirtualenv my_env will create a folder in the current directory which\n will contain the Python executable files, and a copy of the pip\n [command] which you can use to install other packages.\n\nhttp://docs.python-guide.org/en/latest/dev/virtualenvs/\nvirtualenv just copies python from a location on your computer into the newly created my_env/bin/ directory. \n2) The system python is in /usr/bin, while the various python versions I installed were, by default, installed into:\n /usr/local/bin\n\n3) The various pythons I installed have names like python2.7 or python3.2, and I can use those names rather than full paths. \n========VIRTUALENVWRAPPER=========\n1) I had some problems getting virtualenvwrapper to work. This is what I ended up putting in ~/.bash_profile: \nexport WORKON_HOME=$HOME/.virtualenvs\nexport PROJECT_HOME=$HOME/django_projects #Not very important -- mkproject command uses this\n#Added the following based on: \n#http://stackoverflow.com/questions/19665327/virtualenvwrapper-installation-snow-leopard-python\nexport VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python2.7 \n#source /usr/local/bin/virtualenvwrapper.sh\nsource /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenvwrapper.sh\n\n2) The -p option works differently with virtualenvwrapper: I have to specify the full path to the python interpreter to be used in the new environment(when I do not want to use the default python version): \n$ mkvirtualenv -p /usr/local/bin/python3.2 my_env\nRunning virtualenv with interpreter /usr/local/bin/python3\nNew python executable in my_env/bin/python\nInstalling setuptools, pip...done.\nUsage: source deactivate\n\nremoves the 'bin' directory of the environment activated with 'source\nactivate' from PATH. \n\nUnlike virtualenv, virtualenvwrapper will create the environment at the location specified by the $WORKON_HOME environment variable. That keeps all your environments in one place.\n",
"[November 2019] I needed to install a Python 3.7 environment (env) on my Python 3.8-based Arch Linux system. Python 3.7 was no longer on the system, so I could not downgrade Python, to install a package that I needed.\nFurthermore, I wanted to use that package / Python 3.7 inside a virtual environment (venv). This is how I did it.\n\nDownload Python version source files:\nI downloaded the Python 3.7.4 source files from\n\nhttps://www.python.org/downloads/source/\n\nto\n/mnt/Vancouver/apps/python_versions/src/Python-3.7.4.tgz\nI then extracted that archive (source files) to\n/mnt/Vancouver/apps/python_versions/src/Python-3.7.4/\n\nInstallation:\n[Note: in my system env, not a venv.]\ncd /mnt/Vancouver/apps/python_versions/src/Python-3.7.4/\ntime ./configure ## 17 sec\ntime make ## 1 min 51 sec\ntime sudo make install ## 18 sec\ntime make clean ## 0.3 sec\n\n\nExamine installed Python versions:\n$ which python\n/usr/bin/python\n\n$ python --version\nPython 3.8.0\n\n$ which python3.7\n/usr/local/bin/python3.7\n\n$ python ## Python 3.8 [system / env]\nPython 3.8.0 (default, Oct 23 2019, 18:51:26) \n[GCC 9.2.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\n\n$ python3.7 ## newly-installed Python 3.7 package\nPython 3.7.4 (default, Nov 20 2019, 11:36:53) \n[GCC 9.2.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import sys\n>>> print(sys.version)\n3.7.4 (default, Nov 20 2019, 11:36:53) \n[GCC 9.2.0]\n>>>\n\n$ python3.7 --version \nPython 3.7.4\n\n\nHow to create a venv for a specific Python version:\n\nhttps://docs.python.org/3/tutorial/venv.html\n12.2. CREATING VIRTUAL ENVIRONMENTS\nThe module used to create and manage virtual environments is called venv. venv will usually install the most recent version of Python that you have available. If you have multiple versions of Python on your system, you can select a specific Python version by running python3 or whichever version you want.\nTo create a virtual environment, decide upon a directory where you want to place it, and run the venv module as a script with the directory path:\n\npython3 -m venv tutorial-env\n\nThis will create the tutorial-env directory if it doesn’t exist, and also create directories inside it containing a copy of the Python interpreter, the standard library, and various supporting files.\n ...\n\n\nCreate Python 3.7 venv [on a Python 3.8 operating env / system]:\npython3.7 -m venv ~/venv/py3.7 ## create Python 3.7-based venv\nsource ~/venv/py3.7/bin/activate ## activate that venv\ndeactivate ## deactivate that venv (when done, there)\n\nAdded to ~/.bashrc:\nalias p37='echo \" [Python 3.7 venv (source ~/venv/py3.7/bin/activate)]\" && source ~/venv/py3.7/bin/activate'\n\n\nTest Python 3.7 venv:\n$ p37 \n[Python 3.7 venv (source ~/venv/py3.7/bin/activate)]\n\n(py3.7)$ python --version\nPython 3.7.4\n\n(py3.7)$ python\nPython 3.7.4 (default, Nov 20 2019, 11:36:53) \n[GCC 9.2.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import sys\n>>> print(sys.version)\n3.7.4 (default, Nov 20 2019, 11:36:53) \n[GCC 9.2.0] \n>>>\n\n",
"Suppose you currently have python 2.7 installed in your virtualenv. But want to make use of python3.2, You would have to update this with:\n$ virtualenv --python=/usr/bin/python3.2 name_of_your_virtualenv\n\nThen activate your virtualenv by:\n$ source activate name_of_your_virtualenv\n\nand then do: python --version in shell to check whether your version is now updated.\n",
"You should have that Python version installed. If you have it then basically,\nWith virtualenv,\nvirtualenv --python=python3.8 env/place/you/want/to/save/to\n\nwith venv\npython3.8 -m venv env/place/you/want/to/save/to\n\nThe above examples are for python3.8, you can change it to have different versions of virtual environments given that they are installed in your computer.\n",
"These two commands should work fine.\nvirtualenv -p python2 myenv (For python2)\nvirtualenv -p python3 myenv (For python3)\n",
"You can call virtualenv with python version you want. For example:\npython3 -m virtualenv venv\n\nOr alternatively directly point to your virtualenv path. e.g. for windows:\nc:\\Python34\\Scripts\\virtualenv.exe venv\n\nAnd by running:\nvenv/bin/python\n\nPython 3.5.1 (v3.5.1:37a07cee5969, Dec 5 2015, 21:12:44) \n[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\n\nyou can see the python version installed in virtual environment\n",
"The -p approach works well, but you do have to remember to use it every time. If your goal is to switch to a newer version of Python generally, that's a pain and can also lead to mistakes.\nYour other option is to set an environment variable that does the same thing as -p. Set this via your ~/.bashrc file or wherever you manage environment variables for your login sessions:\nexport VIRTUALENV_PYTHON=/path/to/desired/version\n\nThen virtualenv will use that any time you don't specify -p on the command line.\n",
"On the mac I use pyenv and virtualenvwrapper. I had to create a new virtualenv. You need homebrew which I'll assume you've installed if you're on a mac, but just for fun:\nruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\"\n\n\nbrew install pyenv\npyenv install 2.7.10\npyenv global 2.7.10\nexport PATH=/Users/{USERNAME}/.pyenv/versions/2.7.10/bin:$PATH\nmkvirtualenv -p ~/.pyenv/versions/2.7.10/bin/python {virtual_env_name}\n\nI also froze my requirements first so i could simply reinstall in the new virtualenv with:\npip install -r requirements.txt\n\n",
"Even easier, by using command substitution to find python2 for you:\nvirtualenv -p $(which python2) <path/to/new/virtualenv/>\nOr when using virtualenvwrapper : \nmkvirtualenv -p $(which python2) <env_name>\n",
"For Mac(High Sierra), install the virtualenv on python3 and create a virtualenv for python2:\n $ python3 -m pip install virtualenv\n $ python3 -m virtualenv --python=python2 vp27\n $ source vp27/bin/activate\n (vp27)$ python --version\n Python 2.7.14\n\n",
"As already mentioned in multiple answers, using virtualenv is a clean solution. However a small pitfall that everyone should be aware of is that if an alias for python is set in bash_aliases like:\npython=python3.6\n\nthis alias will also be used inside the virtual environment. So in this scenario running python -V inside the virtual env will always output 3.6 regardless of what interpreter is used to create the environment: \nvirtualenv venv --python=pythonX.X\n\n",
"These seem a little overcomplicated for Windows. If you're on Windows running python 3.3 or later, you can use the python launcher py to do this much more easily. Simply install the different python version, then run:\npy -[my version] -m venv env\n\nThis will create a virtual environment called env in your current directory, using python [my version]. As an example:\npy -3.7 -m venv env\n./env/Scripts/activate\n\nThis creates a virtual environment called env using python3.7 and activates it. No paths or other complex stuff required.\n",
"I utilized this answer for Windows\nhttps://stackoverflow.com/a/22793687/15435022\npy -3.4 -m venv c:\\path\\to\\wherever\\you\\want\\it\n\n",
"On windows:\npy -3.4x32 -m venv venv34\n\nor \npy -2.6.2 -m venv venv26\n\nThis uses the py launcher which will find the right python executable for you (assuming you have it installed).\n",
"In windows subsystem for linux:\n\nCreate environment for python3:\nvirtualenv --python=/usr/bin/python3 env\n\nActivate it:\nsource env/bin/activate\n\n\n",
"I use pyenv to manage my python version.\npyenv install 3.7.3\npyenv local 3.7.3\n\nCheck your python version:\n$ python --version\nPython 3.7.3\n\nCreate the virtual environment with venv:\npython -m venv .\n\nThen activate the Virtual Environment:\nsource bin/activate\n\nCheck your python version:\n$ python --version\nPython 3.7.3\n\nYou may need to remove the previous virtual environment\nrm -rf bin\n\n",
"End of 2020:\nThe most seamless experience for using virtualenv (added benefit: with any possible python version) would be to use pyenv and its (bundled) pyenv-virtualenv plugin (cf https://realpython.com/intro-to-pyenv/#virtual-environments-and-pyenv)\nUsage: pyenv virtualenv <python_version> <environment_name>\nInstallation:\n\nfirst check that you've got all prerequisites (depending on your OS): https://github.com/pyenv/pyenv/wiki/Common-build-problems#prerequisites\ncurl https://pyenv.run | bash\nexec $SHELL\n\ncf https://github.com/pyenv/pyenv-installer\nThat being said, nowadays the best possible alternative instead of using virtualenv (and pip) would be Poetry (along with pyenv indicated above, to handle different python versions).\nAnother option, because it's supported directly by the PyPA (the org behind pip and the PyPI) and has restarted releasing since the end of May (didn't release since late 2018 prior to that...) would be Pipenv\n",
"This worked for my usage in Windows 10, where I have Python 3.7 and want to downgrade for a project in Python 3.6.6:\nI used \"venv\" to create a new environment called \"venv\", I downloaded from https://www.python.org/downloads/windows/ ; install \"Download Windows x86-64 executable installer-\" ; then I used the following command line in the directory where I want to create my environment\n>C:\\Users\\...\\Python\\Python36\\python.exe -m venv venv\nFinally, I activated the environnent using the command line:\n>venv\\Scripts\\activate.bat\nAnd check the python version by calling:\n>python --version\nPython 3.6.6\n",
"Yes, the above answers are correct and works fine on Unix based systems like Linux & MAC OS X.\nI tried to create virtualenv for Python2 & Python3 with the following commands.\nHere I have used venv2 & venv3 as their names for Python2 & Python3 respectively.\n\nPython2 »\n\nMacBook-Pro-2:~ admin$ virtualenv venv2 --python=`which python2`\nRunning virtualenv with interpreter /usr/local/bin/python2\nNew python executable in /Users/admin/venv2/bin/python\nInstalling setuptools, pip, wheel...done.\nMacBook-Pro-2:~ admin$ \nMacBook-Pro-2:~ admin$ ls venv2/bin/\nactivate easy_install pip2.7 python2.7\nactivate.csh easy_install-2.7 python wheel\nactivate.fish pip python-config\nactivate_this.py pip2 python2\nMacBook-Pro-2:~ admin$ \n\n\nPython3 »\n\nMacBook-Pro-2:~ admin$ virtualenv venv3 --python=`which python3`\nRunning virtualenv with interpreter /usr/local/bin/python3\nUsing base prefix '/Library/Frameworks/Python.framework/Versions/3.6'\nNew python executable in /Users/admin/venv3/bin/python3\nAlso creating executable in /Users/admin/venv3/bin/python\nInstalling setuptools, pip, wheel...done.\nMacBook-Pro-2:~ admin$ \nMacBook-Pro-2:~ admin$ ls venv3/bin/\nactivate easy_install pip3.6 python3.6\nactivate.csh easy_install-3.6 python wheel\nactivate.fish pip python-config\nactivate_this.py pip3 python3\nMacBook-Pro-2:~ admin$ \n\n\nChecking Python installation locations\n\nMacBook-Pro-2:~ admin$ which python2\n/usr/local/bin/python2\nMacBook-Pro-2:~ admin$ \nMacBook-Pro-2:~ admin$ which python3\n/usr/local/bin/python3\nMacBook-Pro-2:~ admin$ \n\n",
"I use Windows so I should use .exe on the pthon path\nvirtualenv -p=C:\\Python27\\python2.exe <envname>\n\n",
"On Linux Ubuntu 21.04 (currently Python 3.9.5) I needed to get a virtualenv of Python 3.7.8. Full steps to get working:\nFind the Python version source you want, for example 3.7.8 is here: https://www.python.org/downloads/release/python-378/\nDownload the Gzipped source tarball\nUnzip it with tar zxvf Python-3.7.8.tgz (amend as required with your version number if different from 3.7.8)\nCopy the unzipped folder to /usr/bin with: sudo cp -r Python-3.7.8 /usr/bin\ncd /usr/bin/Python-3.7.8/\n\nCheck the contents if you wanted to see what you have so far: ls\nsudo time ./configure\nsudo time make\ntime sudo make install\ntime make clean\n\nCheck how your python is set up and reporting:\nwhich python\npython --version\n\nShould be all relating to your primary install (Python 3.9.5 for me)\nTo check your new install:\nwhich python 3.7\npython3.7 --version\n\nShould be all relating to your 3.7.8 install\nIf you want to run it to check, do:\npython3.7\nexit()\n\nInstall venv:\nsudo apt install venv\n\nTo create a venv (maybe in your repo, if so, add .venv to .gitignore):\npython3.7 -m venv .venv\n\nTo activate your venv:\nsource .venv/bin/activate\n\nCheck your version:\npython --version\n\n",
"Answer to this question shouldn't be that complicated...\nTL,DR:\ninstall as many versions of python you prefer on your system and use:\n/c/path/to/any/version/of/python -m venv my_venv\n\n============================================\nI use venv to install virtual environments with\npython -m venv <where/to/and/name_of_venv>\n\nif you try which python you will see which python you are referring to, when saying \"python\". for example, for me it is:\nwhich python\n\nresult:\n/c/Program Files/Python36/python\nSo, now you have the answer!\nyou can install any version of python on your system and have multiple of them at the same time. So, for example I installed Python3.7 in this directory: \"C:\\Program Files\\Python37\".\nSo, instead of using 'python' now I specify which python by /c/Program\\ Files/Python37/python:\n /c/Program\\ Files/Python37/python -m venv my_venv\n\n(don't forget to escape the space in the path)\nThat's it!\n",
"It worked for me\nsudo apt-get install python3-minimal\n\nvirtualenv --no-site-packages --distribute -p /usr/bin/python3 ~/.virtualenvs/py3\n\n",
"virtualenv -p python3 myenv\n\nLink to Creating virtualenv\n",
"This was a bug with virtualenv.\nJust upgrading your pip should be the fix.\npip install --upgrade virtualenv\n",
"For Debian (debian 9) Systems in 2019, I discovered a simple solution that may solve the problem from within the virtual environment.\nSuppose the virtual environment were created via:\npython3.7 -m venv myenv\n\nbut only has versions of python2 and python2.7, and you need the recent features of python3.7. \nThen, simply running the command:\n(myvenv) $ python3.7 -m venv --upgrade /home/username/path/to/myvenv/\n\nwill add python3.7 packages if they are already available on your system.\n",
"It worked for me on windows with python 2 installation :\n\nStep 1: Install python 3 version . \nStep 2: create a env folder for\n the virtual environment.\nStep 3 : c:\\Python37\\python -m venv\n c:\\path\\to\\env.\n\nThis is how i created Python 3 virtual environment on my existing python 2 installation.\n",
"Yes you just need to install the other version of python, and define the location of your other version of python in your command like :\n\n\n\nvirtualenv /home/payroll/Documents/env -p /usr/bin/python3\n\n\n\n",
"Here is the stepbystep how to create the Virtual environment in Visual Studio Code folder:\nI used Powershell (Administrator mode):\n 1. I create a VSCode folder - \"D:\\Code_Python_VE\" where I want to create Virtual environment.\n 2. Next I type the command - \"pip3 install virtualenv\". (D:\\Code_Python_VE> pip3 install virtualenv) \n 3. D:\\Code_Python_VE> python3 -m venv project_env\n 4. D:\\Code_Python_VE>project_env\\Scripts\\activate.bat\n 5. D:\\Code_Python_VE> ls - This will list a new directory \"project_env\".\n 6. D:\\Code_Python_VE> code . This will start Visual Studio Code. Make sure the command is (code .).\n 7. Create launch.jason with following content:\n{\n // Use IntelliSense to learn about possible attributes.\n // Hover to view descriptions of existing attributes.\n // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"type\": \"python\",\n \"request\": \"launch\",\n \"name\": \"Python: Current File (Integrated Terminal 1)\",\n \"program\": \"${file}\"\n },\n {\n \"name\": \"Python: Current File (Integrated Terminal 2)\",\n \"type\": \"python\",\n \"request\": \"launch\",\n \"program\": \"${file}\",\n \"console\": \"integratedTerminal\"\n }\n ]\n}\n\n(Please search how to go to Debug window and Add new Configuration in VS Code). \n\nPress F1 in Visual studio code and the command pallet will open - Select Python Interpreter and select the virtual environment project_env. \nAdd test.py file with one statement print(\"Hello World\").\nRun this program.\nIn Visual studio Code terminal -\n(project_env) d:\\Code_Python_VE>python -m pip install --upgrade\nI hope this helps.\n\n",
"UBUNTU 19.04 / Global Python 3.7.\nThis worked for me, enabling a Python 3.8 environment using the recommended venv for python 3 development.\nInstall 3.8 and 3.8 venv module:\n$ sudo apt install python3.8 python3.8-venv\nplus any other modules you need\nCreate your Virtual Env using the python version you want in that env\n$ /usr/bin/python3.8 -m venv python38-env\n\nswitch into your virtual env\n$ source python38-env/bin/activate\n\npython -V = python 3.8\n\n",
"Surprised that no one has mentioned conda so far. I have found this is a lot more straightforward than the other methods mentioned here. Let's say I have python 3.9 and python 2.7 and a project I am working on was python 3.5.4, I could simply create the isolated virtual env for 3.5.4 with the conda command without downloading anything else.\nTo see a list of available python versions first, use the command\nconda search \"^python$\"\nTo create the virtual environment for python version x.y.z, use the command\nconda create -n yourenvname python=x.y.z\nActivate venv with\nconda activate yourenvname\nDeactivate with\nconda deactivate\nTo delete the virtual environment when done, use the command\nconda remove -n yourenvname --all\n",
"sudo add-apt-repository ppa:deadsnakes/ppa\nsudo apt update\nsudo apt install python_version (ex: sudo apt install python3.8)\npython_version -m venv env (ex: python3.8 -m venv env)\n. env/bin/activate\nThis Above steps will solve your python version for env issue.\n",
"Simple:\nLinux\nvirtualenv venv --python=/usr/bin/python3.9\n\nWindows\nvirtualenv venv --python=C:\\Users\\username\\AppData\\Local\\Programs\\Python\\Python\\python.exe\n\n"
] | [
1901,
502,
217,
181,
113,
83,
41,
34,
28,
28,
23,
18,
13,
10,
8,
7,
7,
6,
6,
5,
4,
4,
4,
4,
3,
3,
3,
3,
2,
2,
2,
1,
1,
1,
1,
1,
1,
0,
0
] | [
"Suppose I want to use python 3.8 and I'm using MacOS.\nbrew install [email protected]\n\nThen,\npython3.8 -m venv venv\n\n",
"for windows only\n\ninstall the specific version of python in your pc\ngo the directory where you want to create the virtual environment\ntype cmd in the location bar in file explorer\non cmd type ->pip install virtualenv\nthen create the virtual env using the virtualenv library by typing the below command in cmd.\n-> virtualenv -p=\"C:\\location of python\\python.exe\" <virtualenv_name>\n\n"
] | [
-1,
-1
] | [
"python",
"virtualenv",
"virtualenvwrapper"
] | stackoverflow_0001534210_python_virtualenv_virtualenvwrapper.txt |
Q:
Pandas: expand group variable to encompass the first n observations of the next group in a dataframe
Here is my df with an example of the two columns that I have to work with (group and value) and an example of the output that I am trying to achieve (output_wanted):
egdf = pd.DataFrame({'group': ['A']*6+['B']*6+['C']*5+['D']*6,
'value': list(range(1, 7))*2+list(range(1, 6))+list(range(1, 7)),
'output_wanted': ['A']*8+['B']*6+['C']*5+['D']*4})
The idea is that I would like to expand each group to then encompass the first n (in this example two) entries of the next group in the dataframe. I'm lost for ideas on where to begin here. Has anyone got any idea how one might do this? Note that as in the example, the groups are of unequal sizes... Thanks!
A:
Why not use shift and backfill the first two (n) rows?
egdf['output_wanted'] = egdf.group.shift(2).fillna(method='bfill')
Where 2 can be replaced by n of course. If needed DataFrame could be sorted by group first.
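For reference, here is a minimal runnable sketch of this approach using the example frame from the question (n is the number of leading rows of the next group to absorb):
import pandas as pd

n = 2
egdf = pd.DataFrame({'group': ['A']*6 + ['B']*6 + ['C']*5 + ['D']*6,
                     'value': list(range(1, 7))*2 + list(range(1, 6)) + list(range(1, 7))})

# Shift the group labels down by n rows, then backfill the first n missing values
egdf['output_wanted'] = egdf['group'].shift(n).bfill()
print(egdf)
This reproduces the output_wanted column from the question ('A' appears 8 times, 'B' 6 times, 'C' 5 times and 'D' 4 times).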
| Pandas: expand group variable to encompass the first n obervations of the next group in a dataframe | Here is my df with an example of the two columns that I have to work (groupand value) with and an example of the output that I am trying to achieve(output_wanted):
egdf = pd.DataFrame({'group': ['A']*6+['B']*6+['C']*5+['D']*6,
'value': list(range(1, 7))*2+list(range(1, 6))+list(range(1, 7)),
'output_wanted': ['A']*8+['B']*6+['C']*5+['D']*4)
The idea is that I would like to expand each group to then encompass the first n (in this example two) entries of the next group in the dataframe. I'm lost for ideas on where to begin here. Has anyone got any idea how one might do this? Note that as in the example, the groups are of unequal sizes... Thanks!
| [
"Why not use shift and backfill the first two (n) rows?\negdf['output_wanted'] = egdf.group.shift(2).fillna(method='bfill')\n\nWhere 2 can be replaced by n of course. If needed DataFrame could be sorted by group first.\n"
] | [
1
] | [] | [] | [
"dataframe",
"group_by",
"pandas",
"python"
] | stackoverflow_0074642270_dataframe_group_by_pandas_python.txt |
Q:
bot returning the wrong number
I'm trying to make a tax calculator but it's returning something strange...
Here's the function:
async def tax(args):
    args3 = 5
    protax = round(int(args) * args3 / 100)
    if protax == 0:
        protax = 1
    return protax
here is where I call the function:
c.execute("SELECT price FROM netflix ")
netfprice = c.fetchall()
netprice = netfprice[0][0]
newnet = netprice*amount
withtax = await tax(args=newnet)
embed = discord.Embed(
title="tax system",
description=f"tax:{netprice + withtax}")
embed.set_footer(text=f"Sidtho Host. | Requested by - {message.author}")
await message.respond(embed=embed)
For this example let's use amount = 2, netprice = 3999.
It returned 4199, but should've returned 7998 + 400, so 8398.
A:
I feel really dumb after seeing my mistake...
My mistake was in the embed description, here: description=f"tax:{netprice + withtax}")
I should've made it description=f"tax:{newnet + withtax}"), using the quantity-adjusted total newnet instead of netprice.
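For reference, a short sketch of the corrected embed construction inside the same command handler, with the variable names from the question (newnet is the pre-tax total, e.g. 3999 * 2 = 7998, and withtax is the 5% tax returned by the function above):
newnet = netprice * amount
withtax = await tax(args=newnet)

embed = discord.Embed(
    title="tax system",
    description=f"tax:{newnet + withtax}")  # 7998 + 400 = 8398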
| bot returning the wrong number | I'm Trying to make a tax calculator but it's returning something strange..
Here's the function:
async def tax(args):
args3 = 5
protax= round(int(args)*args3/100)
if protax == 0:
protax = 1
return protax
here is where I call the function:
c.execute("SELECT price FROM netflix ")
netfprice = c.fetchall()
netprice = netfprice[0][0]
newnet = netprice*amount
withtax = await tax(args=newnet)
embed = discord.Embed(
title="tax system",
description=f"tax:{netprice + withtax}")
embed.set_footer(text=f"Sidtho Host. | Requested by - {message.author}")
await message.respond(embed=embed)
For this example let's use amount = 2, netprice = 3999.
It returned 4199, but should've returned 7998 + 400, So 8398.
| [
"I feel really dumb after seeing my mistake...\nmy mistake was in the embed description.. Here:description=f\"tax:{netprice + withtax}\")\nI should've made it description=f\"tax:{newprice + withtax}\")\n"
] | [
0
] | [] | [] | [
"discord.py",
"pycord",
"python"
] | stackoverflow_0074642913_discord.py_pycord_python.txt |
Q:
Why are Django test cases checking the actual DB and raising IntegrityError instead of just running in-memory?
When I run my tests with the DB empty (the actual application DB), everything goes fine. But when the DB has data, Django raises an IntegrityError for basically every test. The stack trace looks like the following (but for every test):
======================================================================
ERROR: test_api_get_detail (core.projects.tests.test_project_api.ProjectConfidentialPrivateAPITests)
Getting confidential object via API (GET)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/ramonkcom/Desktop/management/management-backend/venv/lib/python3.10/site-packages/django/test/testcases.py", line 299, in _setup_and_call
self._post_teardown()
File "/Users/ramonkcom/Desktop/management/management-backend/venv/lib/python3.10/site-packages/django/test/testcases.py", line 1199, in _post_teardown
self._fixture_teardown()
File "/Users/ramonkcom/Desktop/management/management-backend/venv/lib/python3.10/site-packages/django/test/testcases.py", line 1461, in _fixture_teardown
connections[db_name].check_constraints()
File "/Users/ramonkcom/Desktop/management/management-backend/venv/lib/python3.10/site-packages/django/db/backends/sqlite3/base.py", line 383, in check_constraints
raise IntegrityError(
django.db.utils.IntegrityError: The row in table 'custom_fields_customfield' with primary key '1' has an invalid foreign key: custom_fields_customfield.definition_id contains a value '1' that does not have a corresponding value in custom_fields_customfielddefinition.id.
At first I thought this was a problem with my fixtures. But they work just fine to set up the application basic data. The DB seems to be the key issue: if it has data, the tests crash, if it's empty, the tests work.
The weirdest part is that the error being raised doesn't match reality, and by that I mean: the CustomFieldDefinition object with pk=1 exists in the DB. Anyway, it shouldn't matter, since I expect Django to build an in-memory DB for every test. It's just an additional piece of info for the mystery.
What could be making Django check for the actual DB instead of doing all its thing in-memory?
Additional pieces of info:
I'll work to provide a code example (I didn't provide it right from the start because there's a big test framework behind it, and I'll have to trim it). But for now, I'll be adding info, things that I get asked, and things I suspect might be causing the issue:
I'm on Python 3.10.8 and Django 4.0.5
Most of the test cases are inheriting from django.test.TestCase, but some are inheriting from django.test.TransactionTestCase.
I'm using TransactionTestCase in some cases because I need to create dummy models (and maybe this implementation is part of the problem somehow). Example:
# THE DUMMY CLASS
@localized
class DummyClass(models.Model):
    """Represents a dummy model for testing purpose"""

    class Meta:
        app_label = 'apps.localizer'

    name = models.CharField(max_length=255, blank=True, null=True)


# THE TEST CASE SETUP/TEARDOWN
class LocalizedModelsTests(TransactionTestCase):

    def setUp(self):
        with connection.schema_editor() as schema_editor:
            schema_editor.create_model(DummyClass)

    def tearDown(self):
        with connection.schema_editor() as schema_editor:
            schema_editor.delete_model(DummyClass)
| Why is Django test cases checking actual DB and raising IntegrityError instead of just running in-memory? | When I run my tests with the DB empty (the actual application DB), everything goes fine. But when the DB has data, Django raises an IntegrityError for basically every test. The stact trace looks like the following (but for every test):
======================================================================
ERROR: test_api_get_detail (core.projects.tests.test_project_api.ProjectConfidentialPrivateAPITests)
Getting confidential object via API (GET)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/ramonkcom/Desktop/management/management-backend/venv/lib/python3.10/site-packages/django/test/testcases.py", line 299, in _setup_and_call
self._post_teardown()
File "/Users/ramonkcom/Desktop/management/management-backend/venv/lib/python3.10/site-packages/django/test/testcases.py", line 1199, in _post_teardown
self._fixture_teardown()
File "/Users/ramonkcom/Desktop/management/management-backend/venv/lib/python3.10/site-packages/django/test/testcases.py", line 1461, in _fixture_teardown
connections[db_name].check_constraints()
File "/Users/ramonkcom/Desktop/management/management-backend/venv/lib/python3.10/site-packages/django/db/backends/sqlite3/base.py", line 383, in check_constraints
raise IntegrityError(
django.db.utils.IntegrityError: The row in table 'custom_fields_customfield' with primary key '1' has an invalid foreign key: custom_fields_customfield.definition_id contains a value '1' that does not have a corresponding value in custom_fields_customfielddefinition.id.
At first I thought this was a problem with my fixtures. But they work just fine to set up the application basic data. The DB seems to be the key issue: if it has data, the tests crash, if it's empty, the tests work.
The most weird part is that the error being raised doesn't match with reality, and by that I mean: the CustomFieldDefinition object with pk=1 exists in the DB. Anyway, it shouldn't matter, since I expect Django to build an in-memory DB for every test. It's just an additional piece of info for the mistery.
What could be making Django check for the actual DB instead of doing all its thing in-memory?
Additional pieces of info:
I'll work to provide some code example (didn't provide it right from the start because there's a big test framework behind it, and I'll have to trim it). But for now, I'be adding infos and things that I get asked and things I suspect might be causing the issue:
I'm on Python 3.10.8 and Django 4.0.5
Most of the test cases are inheriting from django.test.TestCase, but some are inheriting from django.test.TransactionTestCase.
I'm using TransactionTestCase in some cases because I need to create dummy models (and maybe this implementation is part of the problem somehow). Example:
# THE DUMMY CLASS
@localized
class DummyClass(models.Model):
"""Represents a dummy model for testing purpose"""
class Meta:
app_label = 'apps.localizer'
name = models.CharField(max_length=255, blank=True, null=True)
# THE TEST CASE SETUP/TEARDOWN
class LocalizedModelsTests(TransactionTestCase):
def setUp(self):
with connection.schema_editor() as schema_editor:
schema_editor.create_model(DummyClass)
def tearDown(self):
with connection.schema_editor() as schema_editor:
schema_editor.delete_model(DummyClass)
| [] | [] | [
"Are your tests classes childs of django.test.TestCase or unittest.TestCase?\nBecause with a db one should use django.test.TestCase...\n"
] | [
-2
] | [
"django",
"django_rest_framework",
"django_unittest",
"python",
"testing"
] | stackoverflow_0074607455_django_django_rest_framework_django_unittest_python_testing.txt |
Q:
zipfile and pandas failure mid-loop
I'm writing this on my phone, so a full code example is sorta out of the question at the moment, but I need some help.
I'm working on parsing a set of .csv files from a zipped infile, pulling out specific columns from each file, generating a new .csv with the chosen columns, and then exporting the new dataframes to a zipped outfile.
I am doing this through a series of loops, but can't get beyond 78% success on the parse process, and 73% on the parse combined with the compression process.
Somewhere along the way either zipfile.ZipFile is breaking, or pandas.to_csv... and I'm not sure why. I've been trying to figure it out for two weeks and I'm finally breaking down to ask for assistance.
Brief code snippets for now:
Export function:
def export(new_filename):
    os.chdir([import_file location])
    try:
        with zipfile.ZipFile(outfile_name, 'a', compression=zipfile.ZIP_DEFLATED, allowZip64=True) as outfile:
            try:
                outfile.write(new_filename)
                # random errors at runtime saying the writing handle is still open... Not sure why.
            except:
                # print statement to alert of failure at this step. I have tried NameError
                # and ValueError exceptions, but they don't help.
    except:
        # another statement to alert failure
Pandas function:
def infile_parser(filename, new_filename):
    # excluding code beyond making the dataframe and file generation
    df = pd.DataFrame(data, columns=useful_columns)
    df.to_csv(new_filename, index=False)
Thank you in advance. I can add more context if requested.
A:
I figured out where it was breaking. Sorry I forgot to update this question with the solution.
The issue was in the data of some of the files. Added automated badfile checking based on length of dataframe. Basically, the files causing issues only had 1 or 2 rows in column A but the good files had full tables of many rows. Pandas was assigning the string in the first cell to the header and basically breaking from there, since the columns being used in the other files did not exist in the badfiles.
Pre-parse file verification / data checking, thereby omitting the badfiles from the process, solved all issues.
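A rough sketch of such a pre-parse check (the required column names and the minimum row count are hypothetical placeholders, not taken from the original code):
import pandas as pd

def is_good_file(filename, required_columns, min_rows=3):
    # Returns True only if the CSV exposes the expected columns and has enough rows to parse
    try:
        df = pd.read_csv(filename, nrows=min_rows + 1)
    except Exception:
        return False
    return set(required_columns).issubset(df.columns) and len(df) >= min_rows
Files failing this check can simply be skipped before the parse/compress loop runs.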
| zipfile and pandas failure mid-loop | I'm writing this on my phone, so a full code example is sorta out of the question at the moment, but I need some help.
I'm working on parsing a set of .csv files from a zipped infile, pulling out specific columns from each file, generating a new .csv with the chosen columns, and then exporting the new dataframes to a zipped outfile.
I am doing this through a series of loops, but can't get beyond 78% success on the parse process, and 73% on the parse combined with the compression process.
Somewhere along the way either zipfile.ZipFile is breaking, or pandas.to_csv... and I'm not sure why. I've been trying to figure it out for two weeks and I'm finally breaking down to ask assistance.
Brief code snippets for now:
Export function:
def export(new_filename):
os.chdir([import_file location])
try:
with zipfile.ZipFile(outfile_name,'a',zipfile=ZIP_DEFLATED, allowZip64=true) as outfile:
try:
outfile.write(new_filename)
#random errors at runtime saying the writing handle is still open... Not sure why.
except:
#print statement to alert of failure at this step. I have tried NameError
#and ValueError exceptions, but they don't help.
except:
#another statement to alert failure
Pandas function:
def infile_parser(filename, new_filename):
#excluding code beyond making the dataframe and file generation
df = pd.dataframe(data,columns=useful_columns)
df.to_csv(new_filename,index=false)
Thank you in advance. I can add more context if requested.
| [
"I figured out where it was breaking. Sorry I forgot to update this question with the solution.\nThe issue was in the data of some of the files. Added automated badfile checking based on length of dataframe. Basically, the files causing issues only had 1 or 2 rows in column A but the good files had full tables of many rows. Pandas was assigning the string in the first cell to the header and basically breaking from there, since the columns being used in the other files did not exist in the badfiles.\nPre-parse file verification / data checking, thereby omitting the badfiles from the process, solved all issues.\n"
] | [
0
] | [] | [] | [
"data_science",
"file_conversion",
"pandas",
"python",
"python_zipfile"
] | stackoverflow_0070537550_data_science_file_conversion_pandas_python_python_zipfile.txt |
Q:
Extracting all data validation formula values(dropdown) from excel sheet
How can we extract the data validation dropdown values (not just formulas/references or a single-valued formula result) from a given Excel sheet using Python?
Currently openpyxl helps us to get the formulas but not the values directly. It even fails in cases where the file contains extensions (extLst) to the OOXML specification that are not supported. These data validation formulas can contain functions like OFFSET, SUBSTITUTE, VLOOKUP, INDIRECT etc. There are some libraries that support parsing/calculation of a subset of formulas (xlcalculator, pycel, formulas etc), but they fail in some cases (limitations). Is there a way we can get them using libraries like xlwings, win32com etc, or even using macros with Python? The data validation formulas can contain plain lists (values), defined names, database table references etc., and ideally the source file should not be modified. Is there a solution which works in all these cases?
A:
In order to obtain the results of a dynamic Excel formula you need to automate Excel itself, rather than just parse the worksheet file with an XML reader.
Hard to say this solution will fit all your cases, but it is a starting point. The Excel object model has a Validation Object, which contains all the information about the validation for a cell or range. One of the properties of the Validation might be a formula. The Worksheet/Application Evaluate
method can return the result of the formula.
As an example, cell C3 has a validation formula based on an offset from cell F3:
Using win32com, the formula for a validation range can be evaluated, as shown in the following minimal Python code:
import win32com.client as wc
#Create Excel object
xl = wc.gencache.EnsureDispatch('Excel.Application')
xl.Visible=True
#Open saved workbook, get the first worksheet, and a given cell
wb = xl.Workbooks.Open('MySheet.xlsx')
ws = wb.Worksheets[1]
rng = ws.Range('C3')
#Get the Validation object for the range
valid = rng.Validation
formula = valid.Formula1
if len(formula) >0: #If formula not empty
print('Formula for validation of cell',rng.Address,'is',formula)
#Evaluate will return different results depending on formula
#Offset returns a r x c array of cells (Range objects)
#Need to cut off the leading '=' character from the formula string
res = rng.Worksheet.Evaluate(formula[1:])
arr = [ [c.Value for c in r] for r in res]
print('Result of validation formula:',arr)
#Tidy up
wb.Close(False)
xl.Quit()
with the following output:
Formula for validation of cell $C$3 is =OFFSET($F$3,2,0,5)
Result of validation formula: [[3.0], [4.0], [5.0], [6.0], [7.0]]
You may need to experiment with the return values from Evaluate depending on the exact formulas you are using. Using win32com differs from file-reading methods (like openpyxl) in that the actual Excel application is launched, and its calculation engine used to return the results. This does however mean that you need the Excel application installed and accessible (which can be an issue for web/server based use cases).
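As a possible extension (not part of the original answer), Excel can also report every cell on a sheet that carries data validation through Range.SpecialCells with the xlCellTypeAllValidation constant. A rough sketch, assuming the same ws object as above and that EnsureDispatch has generated the Excel constants:
from win32com.client import constants
import pywintypes

try:
    validated = ws.Cells.SpecialCells(constants.xlCellTypeAllValidation)
except pywintypes.com_error:
    validated = None  # SpecialCells raises if no validated cells exist

if validated is not None:
    for cell in validated:
        f = cell.Validation.Formula1
        if f.startswith('='):
            # Dynamic formula: evaluate it on the worksheet
            print(cell.Address, '->', cell.Worksheet.Evaluate(f[1:]))
        else:
            # Plain in-cell list such as "red,green,blue"
            print(cell.Address, '->', f)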
| Extracting all data validation formula values(dropdown) from excel sheet | How can we extract the data validation dropdown values(not just formulas/references or a single valued formula result) from a given excel sheet using python?
Currently openpyxl helps us to get the formulas but not the values directly. It even fails in cases where file contain extensions(extLst) to the OOXML specifications that are not supported.These datavalidation formulas can contain functions like OFFSET, SUBSTITUTE, VLOOKUP, INDIRECT etc. There are some libraries that support parsing/calculation of a subset of formulas(xlcalculator, pycel, formulas etc); but it fails in some cases(limitations).Is there a way we can get them using libraries like xlwings, win32com etc or even using macros with python? The datavalidation formulas can contain plain lists(values), definednames, database table reference etc. and the source file should not be ideally modified. Is there a solution which works in all these cases?
| [
"In order to obtain the results of a dynamic Excel formula you need to automate Excel itself, rather than just parse the worksheet file with an XML reader.\nHard to say this solution will fit all your cases, but it is a starting point. The Excel object model has a Validation Object, which contains all the information about the validation for a cell or range. One of the properties of the Validation might be a formula. The Worksheet/Application Evaluate\nmethod can return the result of the formula.\nAs an example, cell C3 has a validation formula based on an offset from cell F3:\n\nUsing win32com, the formula for a validation range can be evaluated, as shown in the following minimal Python code:\nimport win32com.client as wc\n\n#Create Excel object\nxl = wc.gencache.EnsureDispatch('Excel.Application')\nxl.Visible=True\n\n#Open saved workbook, get the first worksheet, and a given cell\nwb = xl.Workbooks.Open('MySheet.xlsx')\nws = wb.Worksheets[1]\nrng = ws.Range('C3')\n\n#Get the Validation object for the range\nvalid = rng.Validation\nformula = valid.Formula1\n\nif len(formula) >0: #If formula not empty\n print('Formula for validation of cell',rng.Address,'is',formula)\n\n #Evaluate will return different results depending on formula\n #Offset returns a r x c array of cells (Range objects)\n #Need to cut off the leading '=' character from the formula string\n res = rng.Worksheet.Evaluate(formula[1:])\n arr = [ [c.Value for c in r] for r in res]\n print('Result of validation formula:',arr)\n\n#Tidy up\nwb.Close(False)\nxl.Quit()\n\nwith the following output:\nFormula for validation of cell $C$3 is =OFFSET($F$3,2,0,5)\nResult of validation formula: [[3.0], [4.0], [5.0], [6.0], [7.0]]\n\nYou may need to experiment with the return values from Evaluate depending on the exact formulas you are using. Using win32com differs from file-reading methods (like openpyxl) in that the actual Excel application is launched, and its calculation engine used to return the results. This does however mean that you need the Excel application installed and accessible (which can be an issue for web/server based use cases).\n"
] | [
0
] | [] | [] | [
"excel",
"openpyxl",
"python",
"win32com",
"xlwings"
] | stackoverflow_0074633059_excel_openpyxl_python_win32com_xlwings.txt |
Q:
read files without hardcoding path in python
I am writing PySpark code to read Parquet files from my local machine and process them. My directories and file paths look like this:
.
├── customer
│ └── day=20220815
│ └── part-00000-4dff7e82-411b-4940-bdb6-33acf5a189b4-c000.snappy.parquet
└── customer_interaction
└── day=20220815
└── part-00000-7b3ee7fd-c515-41c0-96a6-2f2dbbc0c9cf-c000.snappy.parquet
4 directories, 2 files
There are two different files in two different folders that I want to read in my code. As of now, I am using hardcoded values to pass the paths of these files like this:
customer_df = spark.read.parquet('customer/day=20220815/part-00000-4dff7e82-411b-4940-bdb6-33acf5a189b4-c000.snappy.parquet')
customer_interaction_df = spark.read.parquet('customer_interaction/day=20220815/part-00000-7b3ee7fd-c515-41c0-96a6-2f2dbbc0c9cf-c000.snappy.parquet')
But this is not what I want. Is there any other way that I can use for reading the files?
A:
You can use the glob module, see https://www.geeksforgeeks.org/how-to-use-glob-function-to-find-files-recursively-in-python/ for examples.
You can also use pathlib which also as a glob module.
In your case, the command is:
import glob
# Returns a list of names in list files.
print("Using glob.glob()")
files = glob.glob('/main_directory/**/*.snappy.parquet',
recursive = True)
for file in files:
print(file)
A:
A:
import glob
import cv2

X_data_no = []
no_images = glob.glob("./Dataset/brain_tumor_dataset/no/" + '*')
for myFile in no_images:
    image_no = cv2.imread(myFile)  # read each matched file with OpenCV
    print(image_no)
| read files without hardcoding path in python | I am writing a PySpark code to read PARQUET files from my local machine and process them. My directories and file paths look like this:
.
├── customer
│ └── day=20220815
│ └── part-00000-4dff7e82-411b-4940-bdb6-33acf5a189b4-c000.snappy.parquet
└── customer_interaction
└── day=20220815
└── part-00000-7b3ee7fd-c515-41c0-96a6-2f2dbbc0c9cf-c000.snappy.parquet
4 directories, 2 files
There are two different files in two different folders that I want to read in my code. As of now, I am using hardcoded values to pass the paths of these files like this:
customer_df = spark.read.parquet('customer/day=20220815/part-00000-4dff7e82-411b-4940-bdb6-33acf5a189b4-c000.snappy.parquet')
customer_interaction_df = spark.read.parquet('customer_interaction/day=20220815/part-00000-7b3ee7fd-c515-41c0-96a6-2f2dbbc0c9cf-c000.snappy.parquet')
But this is not what I want. Is there any other way that I can use for reading the files?
| [
"You can use the glob module, see https://www.geeksforgeeks.org/how-to-use-glob-function-to-find-files-recursively-in-python/ for examples.\nYou can also use pathlib which also as a glob module.\nIn your case, the command is:\nimport glob\n\n# Returns a list of names in list files.\nprint(\"Using glob.glob()\")\nfiles = glob.glob('/main_directory/**/*.snappy.parquet', \n recursive = True)\nfor file in files:\n print(file)\n\n",
"X_data_no=[]\nno_images = glob.glob(\"./Dataset/brain_tumor_dataset/no/\"+'*')\nfor myFile in no_images:\n image_no = cv2.imread(myFile)\n print(image_no) \n\n\n"
] | [
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0074642657_python.txt |
Q:
apply function on two columns in python pandas with 2 arguments (convert GPS type to another)
So I have a database of X and Y coordinates in the ITM format and I want them in the WGS-84 format.
I found a function from the pyproj library that converts coordinates and it works great, but now I'm having trouble applying this function to two separate columns.
For example, I want to convert this data:
Column x | Column y
643234   | 234562
634352   | 434534
to something like this:
Column X | Column Y
33.04647 | 35.56525
25.34533 | 23.43532
So my function accepts two arguments (X from column x and Y from column y) and needs to replace all the values with the new coordinate data.
Does anyone have an idea how to use the apply function to work on both columns?
I tried to use the apply function on both columns but it didn't work out and I couldn't find a solution online. I also tried to change my function to work separately on one column, but that's impossible because it's a built-in function of a certain library.
A:
https://github.com/geopandas/geopandas/issues/1400
from pyproj import Transformer
trans = Transformer.from_crs(
"epsg:4326",
"+proj=utm +zone=10 +ellps=WGS84",
always_xy=True,
)
xx, yy = trans.transform(My_data["LON"].values, My_data["LAT"].values)
My_data["X"] = xx
My_data["Y"] = yy
| apply function on two columns in python pandas with 2 arguments (convert GPS type to another) | so i have a database of X and Y coordinates as the ITM type and i want it as WGS-84 type.
i found a function from pyproj library thats convert and it works great but now i'm having troubles to apply this function on two separate columns.
for example i want to convert this data :
Column x
Column y
643234
234562
634352
434534
to something like that
Column X
Column Y
33.04647
35.56525
25.34533
23.43532
so my function accepts two arguments(X from Column x and Y from column y) and needs to replace all the values to the new coordinates data
any one has an idea how to use the apply function to work on both columns
i tried to use apply function on both column but it didn't worked out and i couldn't find a solution online and i also tried to change my Function to work separately on one column but its impossible because its a built in function of a certain library
| [
"https://github.com/geopandas/geopandas/issues/1400\nfrom pyproj import Transformer\n\ntrans = Transformer.from_crs(\n \"epsg:4326\",\n \"+proj=utm +zone=10 +ellps=WGS84\",\n always_xy=True,\n)\nxx, yy = trans.transform(My_data[\"LON\"].values, My_data[\"LAT\"].values)\nMy_data[\"X\"] = xx\nMy_data[\"Y\"] = yy\n\n"
] | [
0
] | [] | [] | [
"geolocation",
"pandas",
"pyproj",
"python"
] | stackoverflow_0074629073_geolocation_pandas_pyproj_python.txt |
Q:
VScode adding tick to dockerRun command
I'm trying to get the below to run a docker container command via VScode.
Within tasks.json I have:
{
"label": "docker-run: debug",
"type": "docker-run",
"dependsOn": [
"docker-build"
],
"python": {
"module": "simply.py"
},
"dockerRun": {
"command": "python3 simply.py",
}
},
Which I would expect to be run as
docker container run [unrelated arguments] python3 simply.py
Where python3 simply.py is the command I want to run
Instead I am getting
docker container run [unrelated arguments] python3` simply.py
I'm not sure where VScode is generating this tick after python3 but the result is my code doesn't run.
An example of dockerRun with python3 within command might help indicate what I'm doing wrong.
Also running docker container run [unrelated arguments] python3 simply.py in the terminal works fine.
Thanks
extra:
Not sure if relevant but my launch.json is:
{
"name": "Docker: DEV",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"python": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "/app"
}
],
},
},
A:
This fixed itself upon closing and opening VScode
| VScode adding tick to dockerRun command | I'm trying to get the below to run a docker container command via VScode.
Within tasks.json I have:
{
"label": "docker-run: debug",
"type": "docker-run",
"dependsOn": [
"docker-build"
],
"python": {
"module": "simply.py"
},
"dockerRun": {
"command": "python3 simply.py",
}
},
Which I would expect to be run as
docker container run [unrelated arguments] python3 simply.py
Where python3 simply.py is the command I want to run
Instead I am getting
docker container run [unrelated arguments] python3` simply.py
I'm not sure where VScode is generating this tick after python3 but the result is my code doesn't run.
An example of dockerRun with python3 within command might help indicate what I'm doing wrong.
Also running docker container run [unrelated arguments] python3 simply.py in the terminal works fine.
Thanks
extra:
Not sure if relevant but my launch.json is:
{
"name": "Docker: DEV",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"python": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "/app"
}
],
},
},
| [
"This fixed itself upon closing and opening VScode\n"
] | [
0
] | [] | [] | [
"docker",
"docker_container",
"python",
"vscode_tasks"
] | stackoverflow_0074631992_docker_docker_container_python_vscode_tasks.txt |
Q:
Jupyter Notebook's terminal command not using correct conda environment
I have 2 conda environments installed:
- env1: base environment where jupyter-notebook is installed and started from
- env2: project environment with ipykernel installed
I manually added kernelspecs for the 2 environments following this guide.
Everything works fine. sys.executable in the 2 kernels shows separate, correct paths. But for terminal commands (i.e. !which python), no matter which kernel I'm running in, the environment defaults to env1.
Is there any way to have the notebook automatically change this to the kernel's environment?
P.S. I already tried installing nb_conda, nb_conda_kernels
A:
install nb_conda and nb_conda_kernels into your base.
conda install nb_conda nb_conda_kernels -n env1
This should give you the ability to change kernel in jupyter, and use the env2 kernel.
A:
I would install jupyter notebook in the base env (not env1, not env2)
Then install nb_conda_kernels in the base
in env1 and env2, install ipykernel
in env1 and env2, run this:
python -m ipykernel install --user --name env1 --display-name "env1 env"
Check this out for more info:
New Conda environment with latest Python Version for Jupyter Notebook
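As a workaround inside the notebook itself, you can build shell commands from the running kernel's interpreter path instead of relying on whatever environment the terminal picks up — a small sketch using IPython's ! syntax:
import sys

# both lines run against the interpreter of the active kernel (env2's python),
# regardless of which conda env the notebook server was started from
!{sys.executable} -c "import sys; print(sys.executable)"
!{sys.executable} -m pip list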
| Jupyter Notebook's terminal command not using correct conda environment | I have 2 conda environments installed:
- env1: base environment where jupyter-notebook is installed and started from
- env2: project environment with ipykernel installed
I manually added kernelspecs for the 2 environments following this guide.
Everything works fine. sys.executable in 2 kernels show separate, correct paths. But for terminal commands (i.e. !which python), no matter which kernel I'm running in the environment defaults to env1.
Is there any way to have the notebook automatically change this to the kernel's environment?
P.S. I already tried installing nb_conda, nb_conda_kernels
| [
"install nb_conda and nb_conda_kernels into your base.\nconda install nb_conda nb_conda_kernels -n env1\n\nThis should give you the ability to change kernel in jupyter, and use the env2 kernel.\n",
"I would install jupyter notebook in the base env (not env1, not env2)\nThen install nb_conda_kernels in the base\nin env1 and env2, install ipykernel\nin env1 and env2, run this:\npython -m ipykernel install --user --name env1 --display-name \"env1 env\"\n\nCheck this out for more info:\nNew Conda environment with latest Python Version for Jupyter Notebook\n"
] | [
1,
0
] | [] | [] | [
"conda",
"jupyter_notebook",
"python"
] | stackoverflow_0053440940_conda_jupyter_notebook_python.txt |
Q:
How to remove a single tick label on a plot, leaving the tick itself
I'd like to remove all but the first and last tick labels, but keep their ticks on a plot.
However, using the below code, all labels get removed
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(1, 1)
ax.plot(np.arange(10))
locs = ax.get_xticks()
labels = ax.get_xticklabels()
labels[1].set_text('') # Should only remove the 2nd label
ax.set_xticks(locs)
ax.set_xticklabels(labels)
results in:
Here all my labels are removed, rather than just the 2nd. How can I remove just that single label?
I'd expect the following plot to show up (rendered by BigBen using the code above in a comment, using matplotlib 3.6.0 and Python 3.9.7):
Preferably in a method capable of removing all but the first and last label. For that I tried:
for label_idx in range(1, len(labels)-1):
labels[label_idx].set_text('')
which also removed all labels. I want all ticks but only the labels 0 and 8 to show up, i.e. remove the labels 2, 4 and 6.
I'm using Python 3.9.12 with matplotlib 3.5.2 on a Spyder 5.3.2 IDE. After upgrading to Python 3.9.15 and matplotlib 3.5.3 the issue remains.
The code in Bhargav's answer generates the same issue on my system: it removes all labels for me. I'm going to presume this is a bug in Python, matplotlib or Spyder.
A:
In
Python - 3.9.10
matplotlib: 3.5.1
Using a tick formatter:
plt.xticks(np.arange(min(locs), max(locs)+1, 2))
ax.xaxis.get_major_ticks()[1].draw = lambda *args:None
Gives #
if you want to keep ticks
plt.xticks(np.arange(min(locs), max(locs)+1, 2))
labels = [item.get_text() for item in ax.get_xticklabels()]
labels[1] = ''
ax.set_xticklabels(labels)
Gives #
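For the stated goal of keeping every tick mark but only the first and last labels, a minimal sketch along the same lines (note the first/last tick positions may sit slightly outside the data range depending on the axis margins):
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(1, 1)
ax.plot(np.arange(10))

locs = ax.get_xticks()
# keep all tick marks, blank out every label except the first and last
labels = [f'{locs[0]:g}'] + [''] * (len(locs) - 2) + [f'{locs[-1]:g}']
ax.set_xticks(locs)
ax.set_xticklabels(labels)
plt.show()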
| How to remove a single tick label on a plot, leaving the tick itself | I'd like to remove all but the first and last tick labels, but keep their ticks on a plot.
However, using the below code, all labels get removed
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(1, 1)
ax.plot(np.arange(10))
locs = ax.get_xticks()
labels = ax.get_xticklabels()
labels[1].set_text('') # Should only remove the 2nd label
ax.set_xticks(locs)
ax.set_xticklabels(labels)
results in:
Where are all my labels are removed, rather than just the 2. How can I remove just that single number?
I'd expect the following plot to show up (rendered by BigBen using the code above in a comment, using matplotlib 3.6.0 and Python 3.9.7):
Preferably in a method capable of removing all but the first and last label. For that I tried:
for label_idx in range(1, len(labels)-1):
labels[label_idx].set_text('')
which also removed all labels. I want all ticks but only the labels 0 and 8 to show up, i.e. remove the labels 2, 4 and 6.
I'm using Python 3.9.12 with matplotlib 3.5.2 on a Spyder 5.3.2 IDE. After upgrading to Python 3.9.15 and matplotlib 3.5.3 the issue remains.
The code in Bhargav's answer generates the same issue on my system: it removes all labels for me. I'm going to presume this is a bug in Python, matplotlib or Spyder.
| [
"In\nPython - 3.9.10\nmatplotlib: 3.5.1\n\nUisng Tick_formatter\nplt.xticks(np.arange(min(locs), max(locs)+1, 2))\nax.xaxis.get_major_ticks()[1].draw = lambda *args:None\n\nGives #\n\nif you want to keep ticks\nplt.xticks(np.arange(min(locs), max(locs)+1, 2))\nlabels = [item.get_text() for item in ax.get_xticklabels()]\nlabels[1] = ''\nax.set_xticklabels(labels)\n\nGives #\n\n"
] | [
1
] | [] | [] | [
"matplotlib",
"python",
"xticks"
] | stackoverflow_0074643201_matplotlib_python_xticks.txt |
Q:
Improving small images for data extraction
In OpenCV or with the Pillow library, how can we improve the below images for tesseract?
I tried the below code with multiple options like thresholding, blur, and enhance, however I was not able to improve them.
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(pytesseract.image_to_string(img))
img_medianBlur = cv2.blur(img, (3 , 3))
print(pytesseract.image_to_string(img_medianBlur))
blur = cv2.GaussianBlur(img,(5,5),0)
ret3,th3 = cv2.threshold(img,150,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
print(pytesseract.image_to_string(th3))
A:
To improve the images for tesseract, you can try using some of the following techniques:
Increase the contrast of the image by using histogram equalization or
stretching the intensity range of the image.
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
equ = cv2.equalizeHist(img)
print(pytesseract.image_to_string(equ))
Remove any noise or artifacts from the image using a median or
Gaussian blur filter
img_medianBlur = cv2.medianBlur(img, 3)
print(pytesseract.image_to_string(img_medianBlur))
Use adaptive thresholding to segment the text from the background.
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
print(pytesseract.image_to_string(th))
Sharpen the image to make the text more distinct and easier to read.
sharpened = cv2.filter2D(img, -1, np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]))
print(pytesseract.image_to_string(sharpened))
Use morphological operations such as dilation or erosion to fill in gaps in the text and remove small objects that may be interfering with the text recognition.
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(img, kernel, iterations=1)
eroded = cv2.erode(dilated, kernel, iterations=1)
print(pytesseract.image_to_string(eroded))
It may also be useful to try combining these techniques and experimenting with different parameter values to find the optimal configuration for your specific image.
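A rough sketch of combining several of these steps into one preprocessing pipeline — the input path, upscale factor, block size, and page-segmentation mode are assumptions to tune for your particular images:
import cv2
import pytesseract

img = cv2.imread("input.png")                       # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, None, fx=3, fy=3,
                  interpolation=cv2.INTER_CUBIC)    # enlarge small text
gray = cv2.medianBlur(gray, 3)                      # drop salt-and-pepper noise
th = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 31, 10)
print(pytesseract.image_to_string(th, config="--psm 7"))  # --psm 7: assume a single text line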
A:
It looks like the first two images are the result of motion blur.
If denoising filters, such as Gaussian, don't work, I suggest research on two topics, motion deblur and deconvolution.
You might want to check out this OpenCV tutorial on deblurring filter
and motion deblur filter.
Here explains mathematical aspect of the algorithm.
I'll come back to improve my answer if time permits.
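For a concrete starting point, here is a minimal Wiener-deconvolution sketch with scikit-image; the kernel length and direction are assumptions you would need to estimate from the actual blur:
import numpy as np
from skimage import io, restoration

img = io.imread("blurred.png", as_gray=True)  # hypothetical input path
psf = np.zeros((15, 15))
psf[7, :] = 1.0                               # assumed horizontal motion-blur kernel
psf /= psf.sum()
deblurred = restoration.wiener(img, psf, balance=0.1)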
| Improving small images for data extraction | In Open CV or with Pillow library how can we improve below images for tesseract.
I tried below code with multiple options like thresholding, blur, enchance, however not able to improve.
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(pytesseract.image_to_string(img))
img_medianBlur = cv2.blur(img, (3 , 3))
print(pytesseract.image_to_string(img_medianBlur))
blur = cv2.GaussianBlur(img,(5,5),0)
ret3,th3 = cv2.threshold(img,150,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
print(pytesseract.image_to_string(th3))
| [
"To improve the images for tesseract, you can try using some of the following techniques:\n\nIncrease the contrast of the image by using histogram equalization or\nstretching the intensity range of the image.\n\nimg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\nequ = cv2.equalizeHist(img)\nprint(pytesseract.image_to_string(equ))\n\n\nRemove any noise or artifacts from the image using a median or\nGaussian blur filter\n\nimg_medianBlur = cv2.medianBlur(img, 3)\nprint(pytesseract.image_to_string(img_medianBlur))\n\n\nUse adaptive thresholding to segment the text from the background.\n\nth = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)\nprint(pytesseract.image_to_string(th))\n\n\nSharpen the image to make the text more distinct and easier to read.\n\nsharpened = cv2.filter2D(img, -1, np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]))\nprint(pytesseract.image_to_string(sharpened))\n\n\nUse morphological operations such as dilation or erosion to fill in gaps in the text and remove small objects that may be interfering with the text recognition.\n\nkernel = np.ones((3, 3), np.uint8)\ndilated = cv2.dilate(img, kernel, iterations=1)\neroded = cv2.erode(dilated, kernel, iterations=1)\nprint(pytesseract.image_to_string(eroded))\n\nIt may also be useful to try combining these techniques and experimenting with different parameter values to find the optimal configuration for your specific image.\n",
"It looks like the first two images are the result of motion blur.\nIf denoising filters, such as Gaussian, don't work, I suggest research on two topics, motion deblur and deconvolution.\nYou might want to check out this OpenCV tutorial on deblurring filter\nand motion deblur filter.\nHere explains mathematical aspect of the algorithm.\nI'll come back to improve my answer if time permits.\n"
] | [
0,
0
] | [] | [] | [
"image_processing",
"ocr",
"python",
"python_tesseract",
"tesseract"
] | stackoverflow_0074582794_image_processing_ocr_python_python_tesseract_tesseract.txt |
Q:
How do I pass a user-agent to panda's pd.read_html()?
some websites automatically decline requests due to lack of user-agent, and it's a hassle using bs4 to scrape many different types of tables.
This issue was resolved before through this code:
url = 'http://finance.yahoo.com/quote/A/key-statistics?p=A'
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
response = opener.open(url)
tables = pd.read_html(response.read())
However urllib2 has been deprecated and urllib3 doesn't have a build_opener() attribute, and I could not find an equivalent attribute either even though I'm sure it has one.
A:
read_html() accepts a URL and a string, so you can set headers on the request, and pandas will read the response as text:
import pandas as pd
import requests
url = 'http://finance.yahoo.com/quote/A/key-statistics?p=A'
response = requests.get(url, headers={'User-agent': 'Mozilla/5.0'})
tables = pd.read_html(response.text)
print(tables)
If you open read_html(), none of the options accept headers as an argument, so just set the headers on the request.
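On newer pandas versions, passing literal HTML to read_html may raise a deprecation warning; if you run into that, wrapping the response text in a StringIO keeps the same pattern working — a small sketch:
from io import StringIO
import pandas as pd
import requests

url = 'http://finance.yahoo.com/quote/A/key-statistics?p=A'
response = requests.get(url, headers={'User-agent': 'Mozilla/5.0'})
tables = pd.read_html(StringIO(response.text))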
| How do I pass a user-agent to panda's pd.read_html()? | some websites automatically decline requests due to lack of user-agent, and it's a hassle using bs4 to scrape many different types of tables.
This issue was resolved before through this code:
url = 'http://finance.yahoo.com/quote/A/key-statistics?p=A'
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
response = opener.open(url)
tables = pd.read_html(response.read())
However urllib2 has been deprecated and urllib3 doesn't have a build_opener() attribute, and I could not find an equivalent attribute either even though I'm sure it has one.
| [
"read_html() accepts a URL and string, so u can set headers on request, and pandas ll read this resoponse like a text:\nimport pandas as pd\nimport requests\n\n\nurl = 'http://finance.yahoo.com/quote/A/key-statistics?p=A'\nresponse = requests.get(url, headers={'User-agent': 'Mozilla/5.0'})\ntables = pd.read_html(response.text)\nprint(tables)\n\nIf u open read_html() none of the options accept headers as an argument, so just set headers in request\n"
] | [
0
] | [] | [] | [
"python",
"urllib2",
"urllib3",
"user_agent"
] | stackoverflow_0074642461_python_urllib2_urllib3_user_agent.txt |
Q:
Create a dataframe out of dbutils.fs.ls output in Databricks
So, I'm a beginner and learning spark programming (pyspark) on Databricks -
What am I trying to do ?
List all the files in a directory and save it into a dataframe so that I am able to apply filter, sort etc on this list of files. Why ? Because I am trying to find the biggest file in my directory.
Why doesn't below work ? What am I missing ?
from pyspark.sql.types import StringType
sklist = dbutils.fs.ls(sourceFile)
df = spark.createDataFrame(sklist,StringType())
A:
ok, actually, I figured it out :). Just wanna leave the question here in case someone benefits from it.
So basically, the problem was with the schema. Not all the elements in the list was of String Type. So I explicitly created a schema and used it in createDataFrame function.
Working code -
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
ddlSchema = StructType([
StructField('path',StringType()),
StructField('name',StringType()),
StructField('size',IntegerType())
])
sklist = dbutils.fs.ls(sourceFile)
df = spark.createDataFrame(sklist,ddlSchema)
A:
Updating the answer by @skrprince. The schema has a new field "modtime" now that uses Unix epoch values. It's best to use LongType() for both size and modtime, since IntegerType will fail on larger values.
fslsSchema = StructType(
[
StructField('path', StringType()),
StructField('name', StringType()),
StructField('size', LongType()),
StructField('modtime', LongType())
]
)
filelist = dbutils.fs.ls('<your path here>')
df_files = spark.createDataFrame(filelist, fslsSchema)
You can also create a temporary view to execute SQL queries against your dataframe data:
df_files.createTempView("files_view")
Then you can run queries in the same notebook like the example below:
%sql
SELECT name, size, modtime
FROM files_view
WHERE name LIKE '<your file pattern>%.parq'
ORDER BY modtime
A:
You don't need to set the schema:
df = spark.createDataFrame(dbutils.fs.ls(sourceFile))
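And for the original goal of finding the biggest file, you can sort the resulting DataFrame by size — a sketch, assuming spark and dbutils are available as in a Databricks notebook:
from pyspark.sql import functions as F

df = spark.createDataFrame(dbutils.fs.ls(sourceFile))
biggest = df.orderBy(F.col("size").desc()).first()
print(biggest["name"], biggest["size"])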
| Create a dataframe out of dbutils.fs.ls output in Databricks | So, I'm a beginner and learning spark programming (pyspark) on Databricks -
What am I trying to do ?
List all the files in a directory and save it into a dataframe so that I am able to apply filter, sort etc on this list of files. Why ? Because I am trying to find the biggest file in my directory.
Why doesn't below work ? What am I missing ?
from pyspark.sql.types import StringType
sklist = dbutils.fs.ls(sourceFile)
df = spark.createDataFrame(sklist,StringType())
| [
"ok, actually, I figured it out :). Just wanna leave the question here incase some one benefits from it.\nSo basically, the problem was with the schema. Not all the elements in the list was of String Type. So I explicitly created a schema and used it in createDataFrame function.\nWorking code -\nfrom pyspark.sql.types import StructType, StructField, IntegerType, StringType\n\nddlSchema = StructType([\nStructField('path',StringType()),\nStructField('name',StringType()),\nStructField('size',IntegerType())\n])\n\nsklist = dbutils.fs.ls(sourceFile)\ndf = spark.createDataFrame(sklist,ddlSchema)\n\n",
"Updating the answer by @skrprince. The schema has a new field \"modtime\" now that uses Unix epoch values. It's best to use LongType() for both size and modtime, since IntegerType will fail on larger values.\nfslsSchema = StructType(\n [\n StructField('path', StringType()),\n StructField('name', StringType()),\n StructField('size', LongType()),\n StructField('modtime', LongType())\n ]\n)\n\nfilelist = dbutils.fs.ls('<your path here>')\ndf_files = spark.createDataFrame(filelist, fslsSchema)\n\nYou can also create a temporary view to execute SQL queries against your dataframe data:\ndf_files.createTempView(\"files_view\")\n\nThen you can run queries in the same notebook like the example below:\n%sql\nSELECT name, size, modtime\nFROM files_view\nWHERE name LIKE '<your file pattern>%.parq'\nORDER BY modtime\n\n",
"You don't need to set the schema:\ndf = spark.createDataFrame(dbutils.fs.ls(sourceFile))\n\n"
] | [
4,
1,
0
] | [] | [] | [
"apache_commons_dbutils",
"databricks",
"pyspark",
"python"
] | stackoverflow_0066166411_apache_commons_dbutils_databricks_pyspark_python.txt |
Q:
What is the best way to combine dataframes that have been created through a for loop?
I am trying to combine dataframes with 2 columns into a single dataframe. The initial dataframes are generated through a for loop and stored in a list. I am having trouble getting the data from the list of dataframes into a single dataframe. Right now when I run my code, it treats each full dataframe as a row.
def linear_reg_function(category):
    df = pd.read_csv(file)
    df = df[df['category_column'] == category]
    df1 = df[['category_column', 'value_column']]
    df_export.append(df1)
df_export = []
for category in category_list:
linear_reg_function(category)
when I run this block of code I get a list of dataframes that have 2 columns. When I try to convert df_export to a dataframe, it ends up with 12 rows (the number of categories in category_list). I tried:
df_export = pd.DataFrame()
but the result was:
_
I would like to have a single dataframe with 2 columns, [Category, Value] that includes the values of all 12 categories generated in the for loop.
A:
You can use pd.concat to merge a list of DataFrames into a single big DataFrame.
import glob
import pandas as pd

appended_data = []
for infile in glob.glob("*.xlsx"):
    data = pd.read_excel(infile)
    # store each DataFrame in the list
    appended_data.append(data)
# see pd.concat documentation for more info
appended_data = pd.concat(appended_data)
# write the combined DataFrame to an excel sheet
appended_data.to_excel('appended.xlsx')
You can adapt this to your own use case.
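Applied to the loop from the question, one way is to have the function return each filtered frame and concatenate once at the end — a sketch (file and category_list come from the question's context):
import pandas as pd

def linear_reg_function(category):
    df = pd.read_csv(file)
    df = df[df['category_column'] == category]
    return df[['category_column', 'value_column']]

df_export = pd.concat(
    [linear_reg_function(category) for category in category_list],
    ignore_index=True,
)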
| What is the best way to combine dataframes that have been created through a for loop? | I am trying to combine dataframes with 2 columns into a single dataframe. The initial dataframes are generated through a for loop and stored in a list. I am having trouble getting the data from the list of dataframes into a single dataframe. Right now when I run my code, it treats each full dataframe as a row.
def linear_reg_function(category):
df = pd.read_csv(file)
df = df[df['category_column'] == category]`
df1 = df[['category_column', 'value_column']]
df_export.append(df1)
df_export = []
for category in category_list:
linear_reg_function(category)
when I run this block of code I get a list of dataframes that have 2 columns. When I try to convert df_export to a dataframe, it ends up with 12 rows (the number of categories in category_list). I tried:
df_export = pd.DataFrame()
but the result was:
_
I would like to have a single dataframe with 2 columns, [Category, Value] that includes the values of all 12 categories generated in the for loop.
| [
"You can use pd.concat to merge a list of DataFrames into a single big DataFrame.\nappended_data = []\nfor infile in glob.glob(\"*.xlsx\"):\n data = pandas.read_excel(infile)\n # store DataFrame in list\n appended_data.append(data)\n# see pd.concat documentation for more info\nappended_data = pd.concat(appended_data)\n# write DataFrame to an excel sheet \nappended_data.to_excel('appended.xlsx')\n\nyou can manipulate it to your proper demande\n"
] | [
1
] | [] | [] | [
"dataframe",
"for_loop",
"jupyter_notebook",
"python"
] | stackoverflow_0074643318_dataframe_for_loop_jupyter_notebook_python.txt |
Q:
How can I remove duplicate words in a string with Python?
Following example:
string1 = "calvin klein design dress calvin klein"
How can I remove the second two duplicates "calvin" and "klein"?
The result should look like
string2 = "calvin klein design dress"
only the second duplicates should be removed and the sequence of the words should not be changed!
A:
string1 = "calvin klein design dress calvin klein"
words = string1.split()
print (" ".join(sorted(set(words), key=words.index)))
This sorts the set of all the (unique) words in your string by the word's index in the original list of words.
A:
def unique_list(l):
ulist = []
[ulist.append(x) for x in l if x not in ulist]
return ulist
a="calvin klein design dress calvin klein"
a=' '.join(unique_list(a.split()))
A:
In Python 2.7+, you could use collections.OrderedDict for this:
from collections import OrderedDict
s = "calvin klein design dress calvin klein"
print ' '.join(OrderedDict((w,w) for w in s.split()).keys())
A:
Cut and paste from the itertools recipes
from itertools import ifilterfalse
def unique_everseen(iterable, key=None):
"List unique elements, preserving order. Remember all elements ever seen."
# unique_everseen('AAAABBBCCDAABBB') --> A B C D
# unique_everseen('ABBCcAD', str.lower) --> A B C D
seen = set()
seen_add = seen.add
if key is None:
for element in ifilterfalse(seen.__contains__, iterable):
seen_add(element)
yield element
else:
for element in iterable:
k = key(element)
if k not in seen:
seen_add(k)
yield element
I really wish they could go ahead and make a module out of those recipes soon. I'd very much like to be able to do from itertools_recipes import unique_everseen instead of using cut-and-paste every time I need something.
Use like this:
def unique_words(string, ignore_case=False):
key = None
if ignore_case:
key = str.lower
return " ".join(unique_everseen(string.split(), key=key))
string2 = unique_words(string1)
A:
string2 = ' '.join(set(string1.split()))
Explanation:
.split() - a method that splits a string into a list (without params it splits on whitespace)
set() - an unordered collection type that excludes duplicates (note that it does not preserve the original word order)
'separator'.join(list) - joins the elements of the list into a string with 'separator' between them
A:
string = 'calvin klein design dress calvin klein'
def uniquify(string):
output = []
seen = set()
for word in string.split():
if word not in seen:
output.append(word)
seen.add(word)
return ' '.join(output)
print uniquify(string)
A:
You can use a set to keep track of already processed words.
words = set()
result = ''
for word in string1.split():
if word not in words:
result = result + word + ' '
words.add(word)
print result
A:
Several answers are pretty close to this but haven't quite ended up where I did:
def uniques( your_string ):
seen = set()
return ' '.join( seen.add(i) or i for i in your_string.split() if i not in seen )
Of course, if you want it a tiny bit cleaner or faster, we can refactor a bit:
def uniques( your_string ):
words = your_string.split()
seen = set()
seen_add = seen.add
def add(x):
seen_add(x)
return x
return ' '.join( add(i) for i in words if i not in seen )
I think the second version is about as performant as you can get in a small amount of code. (More code could be used to do all the work in a single scan across the input string but for most workloads, this should be sufficient.)
A:
Question: Remove the duplicates in a string
from _collections import OrderedDict
a = "Gina Gini Gini Protijayi"
aa = OrderedDict().fromkeys(a.split())
print(' '.join(aa))
# output => Gina Gini Protijayi
A:
Use a numpy function.
Make the import; it's better to have an alias for the import (as np):
import numpy as np
Then you can use it like this. For removing duplicates from an array:
no_duplicates_array = np.unique(your_array)
For your case, if you want the result as a string, you can use:
no_duplicates_string = ' '.join(np.unique(your_string.split()))
(Note that np.unique returns the unique words sorted, not in their original order.)
A:
11 and 2 work perfectly:
s="the sky is blue very blue"
s=s.lower()
slist = s.split()
print " ".join(sorted(set(slist), key=slist.index))
and 2
s="the sky is blue very blue"
s=s.lower()
slist = s.split()
print " ".join(sorted(set(slist), key=slist.index))
A:
You can remove duplicate or repeated words from a text file or string using following codes -
from collections import Counter
for lines in all_words:
line=''.join(lines.lower())
new_data1=' '.join(lemmatize_sentence(line))
new_data2 = word_tokenize(new_data1)
new_data3=nltk.pos_tag(new_data2)
# below code is for removal of repeated words
for i in range(0, len(new_data3)):
new_data3[i] = "".join(new_data3[i])
UniqW = Counter(new_data3)
new_data5 = " ".join(UniqW.keys())
print (new_data5)
new_data.append(new_data5)
print (new_data)
P.S. - Do indentations as required.
Hope this helps!!!
A:
Without using the split function (will help in interviews)
def unique_words2(a):
words = []
spaces = ' '
length = len(a)
i = 0
while i < length:
if a[i] not in spaces:
word_start = i
while i < length and a[i] not in spaces:
i += 1
words.append(a[word_start:i])
i += 1
words_stack = []
for val in words: #
if val not in words_stack: # We can replace these three lines with this one -> [words_stack.append(val) for val in words if val not in words_stack]
words_stack.append(val) #
print(' '.join(words_stack)) # or return, your choice
unique_words2('calvin klein design dress calvin klein')
A:
initializing list
listA = [ 'xy-xy', 'pq-qr', 'xp-xp-xp', 'dd-ee']
print("Given list : ",listA)
using set() and split()
res = [set(sub.split('-')) for sub in listA]
Result
print("List after duplicate removal :", res)
A:
To remove duplicate words from sentence and preserve the order of the words you can use dict.fromkeys method.
string1 = "calvin klein design dress calvin klein"
words = string1.split()
result = " ".join(list(dict.fromkeys(words)))
print(result)
| How can I remove duplicate words in a string with Python? | Following example:
string1 = "calvin klein design dress calvin klein"
How can I remove the second two duplicates "calvin" and "klein"?
The result should look like
string2 = "calvin klein design dress"
only the second duplicates should be removed and the sequence of the words should not be changed!
| [
"string1 = \"calvin klein design dress calvin klein\"\nwords = string1.split()\nprint (\" \".join(sorted(set(words), key=words.index)))\n\nThis sorts the set of all the (unique) words in your string by the word's index in the original list of words.\n",
"def unique_list(l):\n ulist = []\n [ulist.append(x) for x in l if x not in ulist]\n return ulist\n\na=\"calvin klein design dress calvin klein\"\na=' '.join(unique_list(a.split()))\n\n",
"In Python 2.7+, you could use collections.OrderedDict for this:\nfrom collections import OrderedDict\ns = \"calvin klein design dress calvin klein\"\nprint ' '.join(OrderedDict((w,w) for w in s.split()).keys())\n\n",
"Cut and paste from the itertools recipes\nfrom itertools import ifilterfalse\n\ndef unique_everseen(iterable, key=None):\n \"List unique elements, preserving order. Remember all elements ever seen.\"\n # unique_everseen('AAAABBBCCDAABBB') --> A B C D\n # unique_everseen('ABBCcAD', str.lower) --> A B C D\n seen = set()\n seen_add = seen.add\n if key is None:\n for element in ifilterfalse(seen.__contains__, iterable):\n seen_add(element)\n yield element\n else:\n for element in iterable:\n k = key(element)\n if k not in seen:\n seen_add(k)\n yield element\n\nI really wish they could go ahead and make a module out of those recipes soon. I'd very much like to be able to do from itertools_recipes import unique_everseen instead of using cut-and-paste every time I need something.\nUse like this:\ndef unique_words(string, ignore_case=False):\n key = None\n if ignore_case:\n key = str.lower\n return \" \".join(unique_everseen(string.split(), key=key))\n\nstring2 = unique_words(string1)\n\n",
"string2 = ' '.join(set(string1.split()))\n\nExplanation:\n.split() - it is a method to split string to list (without params it split by spaces)\nset() - it is type of unordered collections that exclude dublicates\n'separator'.join(list) - mean that you want to join list from params to string with 'separator' between elements\n",
"string = 'calvin klein design dress calvin klein'\n\ndef uniquify(string):\n output = []\n seen = set()\n for word in string.split():\n if word not in seen:\n output.append(word)\n seen.add(word)\n return ' '.join(output)\n\nprint uniquify(string)\n\n",
"You can use a set to keep track of already processed words.\nwords = set()\nresult = ''\nfor word in string1.split():\n if word not in words:\n result = result + word + ' '\n words.add(word)\nprint result\n\n",
"Several answers are pretty close to this but haven't quite ended up where I did:\ndef uniques( your_string ): \n seen = set()\n return ' '.join( seen.add(i) or i for i in your_string.split() if i not in seen )\n\nOf course, if you want it a tiny bit cleaner or faster, we can refactor a bit:\ndef uniques( your_string ): \n words = your_string.split()\n\n seen = set()\n seen_add = seen.add\n\n def add(x):\n seen_add(x) \n return x\n\n return ' '.join( add(i) for i in words if i not in seen )\n\nI think the second version is about as performant as you can get in a small amount of code. (More code could be used to do all the work in a single scan across the input string but for most workloads, this should be sufficient.)\n",
"Question: Remove the duplicates in a string\n from _collections import OrderedDict\n\n a = \"Gina Gini Gini Protijayi\"\n\n aa = OrderedDict().fromkeys(a.split())\n print(' '.join(aa))\n # output => Gina Gini Protijayi\n\n",
"Use numpy function\nmake an import its better to have an alias for the import (as np)\nimport numpy as np\n\nand then you can bing it like this\nfor removing duplicates from array you can use it this way\nno_duplicates_array = np.unique(your_array)\n\nfor your case if you want result in string you can use\nno_duplicates_string = ' '.join(np.unique(your_string.split()))\n\n",
"11 and 2 work perfectly:\n s=\"the sky is blue very blue\"\n s=s.lower()\n slist = s.split()\n print \" \".join(sorted(set(slist), key=slist.index))\n\nand 2\n s=\"the sky is blue very blue\"\n s=s.lower()\n slist = s.split()\n print \" \".join(sorted(set(slist), key=slist.index))\n\n",
"You can remove duplicate or repeated words from a text file or string using following codes - \nfrom collections import Counter\nfor lines in all_words:\n\n line=''.join(lines.lower())\n new_data1=' '.join(lemmatize_sentence(line))\n new_data2 = word_tokenize(new_data1)\n new_data3=nltk.pos_tag(new_data2)\n\n # below code is for removal of repeated words\n\n for i in range(0, len(new_data3)):\n new_data3[i] = \"\".join(new_data3[i])\n UniqW = Counter(new_data3)\n new_data5 = \" \".join(UniqW.keys())\n print (new_data5)\n\n\n new_data.append(new_data5)\n\n\nprint (new_data)\n\nP.S. -Do identations as per required.\nHope this helps!!!\n",
"Without using the split function (will help in interviews)\ndef unique_words2(a):\n words = []\n spaces = ' '\n length = len(a)\n i = 0\n while i < length:\n if a[i] not in spaces:\n word_start = i\n while i < length and a[i] not in spaces:\n i += 1\n words.append(a[word_start:i])\n i += 1\n words_stack = []\n for val in words: #\n if val not in words_stack: # We can replace these three lines with this one -> [words_stack.append(val) for val in words if val not in words_stack]\n words_stack.append(val) #\n print(' '.join(words_stack)) # or return, your choice\n\n\nunique_words2('calvin klein design dress calvin klein') \n\n",
"initializing list\nlistA = [ 'xy-xy', 'pq-qr', 'xp-xp-xp', 'dd-ee']\n\nprint(\"Given list : \",listA)\n\nusing set() and split()\nres = [set(sub.split('-')) for sub in listA]\n\nResult\nprint(\"List after duplicate removal :\", res) \n\n",
"To remove duplicate words from sentence and preserve the order of the words you can use dict.fromkeys method.\nstring1 = \"calvin klein design dress calvin klein\"\n\nwords = string1.split()\n\nresult = \" \".join(list(dict.fromkeys(words)))\n\nprint(result)\n\n"
] | [
53,
25,
12,
7,
7,
5,
2,
2,
1,
1,
0,
0,
0,
0,
0
] | [
"You can do that simply by getting the set associated to the string, which is a mathematical object containing no repeated elements by definition. It suffices to join the words in the set back into a string:\ndef remove_duplicate_words(string):\n x = string.split()\n x = sorted(set(x), key = x.index)\n return ' '.join(x)\n\n"
] | [
-1
] | [
"duplicates",
"python",
"string"
] | stackoverflow_0007794208_duplicates_python_string.txt |
Q:
Unexpected result when concatenating strings of cells in (geo)pandas
I am working the data provided here https://www.opengeodata.nrw.de/produkte/transport_verkehr/unfallatlas/
I am trying to create a concatenated string like this
import geopandas as gp
accidents2020 = gp.read_file("Unfallorte2020_LinRef.shp")
accidents2020['joined'] = f"{accidents2020['ULAND']}{accidents2020['UREGBEZ']}{accidents2020['UKREIS']}{accidents2020['UGEMEINDE']}"
However, this gives me some weird string
0 0 12\n1 12\n2 12\n3 ...
1 0 12\n1 12\n2 12\n3 ...
2 0 12\n1 12\n2 12\n3 ...
3 0 12\n1 12\n2 12\n3 ...
4 0 12\n1 12\n2 12\n3 ...
which is unexpected. When I do a accidents2020['ULAND']
0 12
1 12
2 12
3 12
4 12
there are no \n1. Where are the \n1 etc. coming from?
A:
accidents2020['ULAND'] is a Series, if you convert this Series to a string, it also includes the index and the linefeeds at the end of each line:
print(repr(f"{accidents2020.loc[0:1, 'ULAND']}"))
# '0 12\n1 12\nName: ULAND, dtype: object'
print(f"{accidents2020.loc[0:1, 'ULAND']}")
# 0 12
# 1 12
# Name: ULAND, dtype: object
When I do a accidents2020['ULAND'] there are no \n1.
No, they are there - you simply don't see them as \n representations but as linefeeds in the output.
Where are the \n1 etc. coming from?
\n is the newline character and 1 is the row index.
So what you need is simply accidents2020['joined'] = accidents2020['ULAND'] + accidents2020['UREGBEZ'] + accidents2020['UKREIS'] + accidents2020['UGEMEINDE'], without any f strings.
An alternative is cat where you can optionally specify a separator: accidents2020['joined'] = accidents2020['ULAND'].str.cat(accidents2020[['UREGBEZ', 'UKREIS', 'UGEMEINDE']])
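For example, to keep the parts distinguishable in the joined key you could pass a separator — a small sketch:
accidents2020['joined'] = accidents2020['ULAND'].str.cat(
    accidents2020[['UREGBEZ', 'UKREIS', 'UGEMEINDE']], sep='-'
)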
| Unexpected result when concatenating strings of cells in (geo)pandas | I am working the data provided here https://www.opengeodata.nrw.de/produkte/transport_verkehr/unfallatlas/
I am trying to create a concatenated string like this
import geopandas as gp
accidents2020 = gp.read_file("Unfallorte2020_LinRef.shp")
accidents2020['joined'] = f"{accidents2020['ULAND']}{accidents2020['UREGBEZ']}{accidents2020['UKREIS']}{accidents2020['UGEMEINDE']}"
However, this gives me some weird string
0 0 12\n1 12\n2 12\n3 ...
1 0 12\n1 12\n2 12\n3 ...
2 0 12\n1 12\n2 12\n3 ...
3 0 12\n1 12\n2 12\n3 ...
4 0 12\n1 12\n2 12\n3 ...
which is unexpected. When I do a accidents2020['ULAND']
0 12
1 12
2 12
3 12
4 12
there are no \n1. Where are the \n1 etc. coming from?
| [
"accidents2020['ULAND'] is a Series, if you convert this Series to a string, it also includes the index and the linefeeds at the end of each line:\nprint(repr(f\"{accidents2020.loc[0:1, 'ULAND']}\"))\n# '0 12\\n1 12\\nName: ULAND, dtype: object'\n\nprint(f\"{accidents2020.loc[0:1, 'ULAND']}\")\n# 0 12\n# 1 12\n# Name: ULAND, dtype: object\n\n\nWhen I do a accidents2020['ULAND'] there are no \\n1.\n\nNo, they are there - you simply don't see them as \\n representations but as linefeeds in the output.\n\nWhere are the \\n1 etc. coming from?\n\n\\n is the newline character and 1 is the row index.\n\nSo what you need is simply accidents2020['joined'] = accidents2020['ULAND'] + accidents2020['UREGBEZ'] + accidents2020['UKREIS'] + accidents2020['UGEMEINDE'], without any f strings.\nAn alternative is cat where you can optionally specify a separator: accidents2020['joined'] = accidents2020['ULAND'].str.cat(accidents2020[['UREGBEZ', 'UKREIS', 'UGEMEINDE']])\n"
] | [
1
] | [] | [] | [
"geopandas",
"pandas",
"python",
"string"
] | stackoverflow_0074642979_geopandas_pandas_python_string.txt |
Q:
cannot scrape ratings
My issue is that I cannot use bs4 to scrape sub ratings in its reviews.
Below is an example:
So far, I have discovered where these stars are, but their codes are the same regardless of the color (i.e., green or grey)... I need to be able to identify the color to identify the ratings, not just scrape the stars. Below is my code:
import requests
from bs4 import BeautifulSoup

url='https://www.glassdoor.com/Reviews/Walmart-Reviews-E715_P2.htm?filter.iso3Language=eng'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
com = soup.find(class_ = "ratingNumber mr-xsm")
com1 = soup.find(class_ = "gdReview")
com1_1 = com1.find(class_ = "content")
A:
For getting the star rating breakdown (which seems to have no numeric display or meta value), I don't think there's any very simple-and-straight-forward short method since it's done by css in a style tag connected by a class of the container element.
You could use something like soup.select('style:-soup-contains(".css-1nuumx7")') [ the css-1nuumx7 part is specific to rating mentioned above], but :-soup-contains needs html5lib parser and can be a bit slow, so it's better to figure out the data-emotion-css attribute of the style tag instead:
def getDECstars(starCont, mSoup, outOf=5, isv=False):
classList = starCont.get('class', [])
if type(classList) != list: classList = [classList]
classList = [str(c) for c in classList if str(c).startswith('css-')]
if not classList:
if isv: print('Stars container has no "css-" class')
return None
demc = classList[0].replace('css-', '', 1)
demc_sel = f'style[data-emotion-css="{demc}"]'
cssStyle = mSoup.select_one(demc_sel)
if not cssStyle:
if isv: print(f'Nothing found with selector {demc_sel}')
return None
cssStyle = cssStyle.get_text()
errMsg = ''
if '90deg,#0caa41 ' not in cssStyle: errMsg += 'No #0caa41'
if '%' not in cssStyle.split('90deg,#0caa41 ', 1)[-1][:20]:
errMsg += ' No %'
if not errMsg:
rPerc = cssStyle.split('90deg,#0caa41 ', 1)[-1]
rPerc = rPerc.split('%')[0]
try:
rPerc = float(rPerc)
if 0 <= rPerc <= 100:
if type(outOf) == int and outOf > 0: rPerc = (rPerc/100)*outOf
return float(f'{float(rPerc):.3}')
errMsg = f'{demc_sel} --> "{rPerc}" is out of range'
except: errMsg = f'{demc_sel} --> cannot convert to float "{rPerc}"'
if isv: print(f'{demc_sel} --> unexpected format {errMsg}')
return None
OR, if you don't care so much about why there's a missing rating:
def getDECstars(starCont, mSoup, outOf=5, isv=False):
try:
demc = [c for c in starCont.get('class', []) if c[:4]=='css-'][0].replace('css-', '', 1)
demc_sel = f'style[data-emotion-css="{demc}"]'
rPerc = float(mSoup.select_one(demc_sel).get_text().split('90deg,#0caa41 ', 1)[1].split('%')[0])
return float(f'{(rPerc/100)*outOf if type(outOf) == int and outOf > 0 else rPerc:.3}')
except: return None
Here's an example of how you might use it:
pcCon = 'div.px-std:has(h2 > a.reviewLink) + div.px-std'
pcDiv = f'{pcCon} div.v2__EIReviewDetailsV2__fullWidth'
refDict = {
'rating_num': 'span.ratingNumber',
'emp_status': 'div:has(> div > span.ratingNumber) + span',
'header': 'h2 > a.reviewLink',
'subheader': 'h2:has(> a.reviewLink) + span',
'pros': f'{pcDiv}:first-of-type > p.pb',
'cons': f'{pcDiv}:nth-of-type(2) > p.pb'
}
subRatSel = 'div:has(> .ratingNumber) ~ aside ul > li:has(div ~ div)'
empRevs = []
for r in soup.select('li[id^="empReview_"]'):
rDet = {'reviewId': r.get('id')}
for sr in r.select(subRatSel):
k = sr.select_one('div:first-of-type').get_text(' ').strip()
sval = getDECstars(sr.select_one('div:nth-of-type(2)'), soup)
rDet[f'[rating] {k}'] = sval
for k, sel in refDict.items():
sval = r.select_one(sel)
if sval: sval = sval.get_text(' ').strip()
rDet[k] = sval
empRevs.append(rDet)
If empRevs is viewed as a table:
| reviewId | [rating] Work/Life Balance | [rating] Culture & Values | [rating] Diversity & Inclusion | [rating] Career Opportunities | [rating] Compensation and Benefits | [rating] Senior Management | rating_num | emp_status | header | subheader | pros | cons |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| empReview_71400593 | 5 | 4 | 4 | 4 | 5 | 3 | 3 |  | great pay but bit of obnoxious enviornment | Nov 26, 2022 - Sales Associate/Cashier in Bensalem, PA | -Walmart's fair pay policy is ... | -some locations wont build emp... |
| empReview_70963705 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | Former Employee | Walmart Employees Trained Thrown to the Wolves | Nov 10, 2022 - Data Entry | Getting a snack at break was e... | I worked at Walmart for a very... |
| empReview_71415031 | 4 | 4 | 4 | 4 | 4 | 4 | 5 | Current Employee, more than 1 year | Work | Nov 27, 2022 - Warehouse Associate in Springfield, GA | The money there is good during... | It can get stressful at times ... |
| empReview_69136451 | nan | nan | nan | nan | nan | nan | 4 | Current Employee | Walmart | Sep 16, 2022 - Sales Associate/Cashier | I'm a EXPERIENCED WORKER. I ✨... | In my opinion I believe that W... |
| empReview_71398525 | 4 | 3 | 4 | 3 | 4 | 3 | 4 | Current Employee | Depends heavily on your team | Nov 26, 2022 - Personal Digital Shopper | I have a generally excellent t... | Generally, departments are sho... |
| empReview_71227029 | 1 | 1 | 1 | 1 | 3 | 1 | 1 | Former Employee, less than 1 year | Managers are treated like a slave. | Nov 19, 2022 - Auto Care Center Manager (ACCM) in Cottonwood, AZ | Great if you like working with... | you only get to work in your a... |
| empReview_71329467 | 1 | 3 | 3 | 3 | 4 | 1 | 1 | Current Employee, more than 3 years | No more values | Nov 23, 2022 - GM Coach in Houston, TX | Pay compare to other retails a... | Walmart is not a bad company t... |
| empReview_71512609 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | Former Employee | Walmart midnight stocker | Nov 30, 2022 - Midnight Stocker in Taylor, MI | 2 paid 15 min breaks and 1 hou... | Honestly nothing that I can th... |
| empReview_70585957 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | Former Employee | Lots of Opportunity | Oct 28, 2022 - Human Resources People Lead | Plenty of opportunities if one... | As with any job, management is... |
| empReview_71519435 | 3 | 4 | 4 | 5 | 4 | 4 | 5 | Current Employee, more than 3 years | Lot of work but worth it | Nov 30, 2022 - People Lead | I enjoy making associates live... | Sometimes an overwhelming amou... |
Markdown for the table above was printed with pandas:
erdf = pandas.DataFrame(empRevs).set_index('reviewId')
erdf['pros'] = [p[:30] + '...' if len(p) > 33 else p for p in erdf['pros']]
erdf['cons'] = [p[:30] + '...' if len(p) > 33 else p for p in erdf['cons']]
print(erdf.to_markdown())
| cannot scrape ratings | My issue is that I cannot use bs4 to scrape sub ratings in its reviews.
Below is an example:
So far, I have discovered where these stars are, but their codes are the same regardless of the color (i.e., green or grey)... I need to be able to identify the color to identify the ratings, not just scrape the stars. Below is my code:
url='https://www.glassdoor.com/Reviews/Walmart-Reviews-E715_P2.htm?filter.iso3Language=eng'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
com = soup.find(class_ = "ratingNumber mr-xsm")
com1 = soup.find(class_ = "gdReview")
com1_1 = com1.find(class_ = "content")
| [
"For getting the star rating breakdown (which seems to have no numeric display or meta value), I don't think there's any very simple-and-straight-forward short method since it's done by css in a style tag connected by a class of the container element.\nYou could use something like soup.select('style:-soup-contains(\".css-1nuumx7\")') [ the css-1nuumx7 part is specific to rating mentioned above], but :-soup-contains needs html5lib parser and can be a bit slow, so it's better to figure out the data-emotion-css attribute of the style tag instead:\ndef getDECstars(starCont, mSoup, outOf=5, isv=False):\n classList = starCont.get('class', [])\n if type(classList) != list: classList = [classList]\n classList = [str(c) for c in classList if str(c).startswith('css-')] \n if not classList: \n if isv: print('Stars container has no \"css-\" class')\n return None\n \n demc = classList[0].replace('css-', '', 1)\n demc_sel = f'style[data-emotion-css=\"{demc}\"]'\n cssStyle = mSoup.select_one(demc_sel)\n if not cssStyle:\n if isv: print(f'Nothing found with selector {demc_sel}')\n return None\n \n cssStyle = cssStyle.get_text()\n errMsg = ''\n if '90deg,#0caa41 ' not in cssStyle: errMsg += 'No #0caa41'\n if '%' not in cssStyle.split('90deg,#0caa41 ', 1)[-1][:20]: \n errMsg += ' No %'\n if not errMsg:\n rPerc = cssStyle.split('90deg,#0caa41 ', 1)[-1]\n rPerc = rPerc.split('%')[0]\n try: \n rPerc = float(rPerc)\n if 0 <= rPerc <= 100:\n if type(outOf) == int and outOf > 0: rPerc = (rPerc/100)*outOf\n return float(f'{float(rPerc):.3}')\n errMsg = f'{demc_sel} --> \"{rPerc}\" is out of range'\n except: errMsg = f'{demc_sel} --> cannot convert to float \"{rPerc}\"' \n if isv: print(f'{demc_sel} --> unexpected format {errMsg}')\n return None\n\nOR, if you don't care so much about why there's a missing rating:\ndef getDECstars(starCont, mSoup, outOf=5, isv=False):\n try:\n demc = [c for c in starCont.get('class', []) if c[:4]=='css-'][0].replace('css-', '', 1)\n demc_sel = f'style[data-emotion-css=\"{demc}\"]'\n rPerc = float(mSoup.select_one(demc_sel).get_text().split('90deg,#0caa41 ', 1)[1].split('%')[0])\n return float(f'{(rPerc/100)*outOf if type(outOf) == int and outOf > 0 else rPerc:.3}')\n except: return None\n\n\nHere's an example of how you might use it:\npcCon = 'div.px-std:has(h2 > a.reviewLink) + div.px-std'\npcDiv = f'{pcCon} div.v2__EIReviewDetailsV2__fullWidth'\nrefDict = {\n 'rating_num': 'span.ratingNumber',\n 'emp_status': 'div:has(> div > span.ratingNumber) + span',\n 'header': 'h2 > a.reviewLink',\n 'subheader': 'h2:has(> a.reviewLink) + span',\n 'pros': f'{pcDiv}:first-of-type > p.pb',\n 'cons': f'{pcDiv}:nth-of-type(2) > p.pb'\n}\n\nsubRatSel = 'div:has(> .ratingNumber) ~ aside ul > li:has(div ~ div)'\nempRevs = []\nfor r in soup.select('li[id^=\"empReview_\"]'):\n rDet = {'reviewId': r.get('id')}\n for sr in r.select(subRatSel):\n k = sr.select_one('div:first-of-type').get_text(' ').strip()\n sval = getDECstars(sr.select_one('div:nth-of-type(2)'), soup)\n rDet[f'[rating] {k}'] = sval\n \n for k, sel in refDict.items():\n sval = r.select_one(sel)\n if sval: sval = sval.get_text(' ').strip()\n rDet[k] = sval\n \n empRevs.append(rDet)\n\nIf empRevs is viewed as a table:\n\n\n\n\nreviewId\n[rating] Work/Life Balance\n[rating] Culture & Values\n[rating] Diversity & Inclusion\n[rating] Career Opportunities\n[rating] Compensation and Benefits\n[rating] Senior Management\nrating_num\nemp_status\nheader\nsubheader\npros\ncons\n\n\n\n\nempReview_71400593\n5\n4\n4\n4\n5\n3\n3\n\ngreat pay but bit of 
obnoxious enviornment\nNov 26, 2022 - Sales Associate/Cashier in Bensalem, PA\n-Walmart's fair pay policy is ...\n-some locations wont build emp...\n\n\nempReview_70963705\n3\n3\n2\n2\n2\n2\n2\nFormer Employee\nWalmart Employees Trained Thrown to the Wolves\nNov 10, 2022 - Data Entry\nGetting a snack at break was e...\nI worked at Walmart for a very...\n\n\nempReview_71415031\n4\n4\n4\n4\n4\n4\n5\nCurrent Employee, more than 1 year\nWork\nNov 27, 2022 - Warehouse Associate in Springfield, GA\nThe money there is good during...\nIt can get stressful at times ...\n\n\nempReview_69136451\nnan\nnan\nnan\nnan\nnan\nnan\n4\nCurrent Employee\nWalmart\nSep 16, 2022 - Sales Associate/Cashier\nI'm a EXPERIENCED WORKER. I ✨...\nIn my opinion I believe that W...\n\n\nempReview_71398525\n4\n3\n4\n3\n4\n3\n4\nCurrent Employee\nDepends heavily on your team\nNov 26, 2022 - Personal Digital Shopper\nI have a generally excellent t...\nGenerally, departments are sho...\n\n\nempReview_71227029\n1\n1\n1\n1\n3\n1\n1\nFormer Employee, less than 1 year\nManagers are treated like a slave.\nNov 19, 2022 - Auto Care Center Manager (ACCM) in Cottonwood, AZ\nGreat if you like working with...\nyou only get to work in your a...\n\n\nempReview_71329467\n1\n3\n3\n3\n4\n1\n1\nCurrent Employee, more than 3 years\nNo more values\nNov 23, 2022 - GM Coach in Houston, TX\nPay compare to other retails a...\nWalmart is not a bad company t...\n\n\nempReview_71512609\n5\n5\n5\n5\n5\n5\n5\nFormer Employee\nWalmart midnight stocker\nNov 30, 2022 - Midnight Stocker in Taylor, MI\n2 paid 15 min breaks and 1 hou...\nHonestly nothing that I can th...\n\n\nempReview_70585957\n3\n4\n4\n4\n4\n4\n4\nFormer Employee\nLots of Opportunity\nOct 28, 2022 - Human Resources People Lead\nPlenty of opportunities if one...\nAs with any job, management is...\n\n\nempReview_71519435\n3\n4\n4\n5\n4\n4\n5\nCurrent Employee, more than 3 years\nLot of work but worth it\nNov 30, 2022 - People Lead\nI enjoy making associates live...\nSometimes an overwhelming amou...\n\n\n\n\n\nMarkdown for the table above was printed with pandas:\nerdf = pandas.DataFrame(empRevs).set_index('reviewId')\nerdf['pros'] = [p[:30] + '...' if len(p) > 33 else p for p in erdf['pros']]\nerdf['cons'] = [p[:30] + '...' if len(p) > 33 else p for p in erdf['cons']]\nprint(erdf.to_markdown())\n\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python",
"selenium",
"web_scraping"
] | stackoverflow_0074587603_beautifulsoup_python_selenium_web_scraping.txt |
Q:
Remove minus/dash from date with strftime in Python
Having this:
import datetime
my_date = datetime.date.today()
my_str = my_date.strftime('%Y%m%d')
print(my_date)
I am getting this with the minuses in between:
2022-12-01
I want to get this:
2022121
But just by using some fancy stuff with strftime and not referring to the . and formatting a string like in the case here:
how_i_want_it = f'{my_date.year}{my_date.month}{my_date.day}'
print(how_i_want_it)
Is it possible?
A:
You're parsing the datetime correctly - you're just printing the original datetime instead of your string!
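A minimal corrected version of the snippet from the question (note that %m and %d are zero-padded, so December 1st prints as 20221201 rather than 2022121):
import datetime

my_date = datetime.date.today()
my_str = my_date.strftime('%Y%m%d')
print(my_str)   # e.g. 20221201 -- no dashes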
| Remove minus/dash from date with strftime in Python | Having this:
import datetime
my_date = datetime.date.today()
my_str = my_date.strftime('%Y%m%d')
print(my_date)
I am getting this with the minuses in between:
2022-12-01
I want to get this:
2022121
But just by using some fancy stuff with strftime and not referring to the . and formatting a string like in the case here:
how_i_want_it = f'{my_date.year}{my_date.month}{my_date.day}'
print(how_i_want_it)
Is it possible?
| [
"You're parsing the datetime correctly - you're just printing the original datetime instead of your string!\n"
] | [
1
] | [] | [] | [
"datetime",
"python",
"strftime"
] | stackoverflow_0074643526_datetime_python_strftime.txt |