content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35-137 chars)
---|---|---|---|---|---|---|---|---
Q:
I can't figure out this input
num = 0
def calculate1(player1, num):
if player1 == 1:
num = num + player1
print(f"The number is {num}")
return (num)
elif player1 == 2:
num = num + player1
print(f"The number is {num}")
return (num)
elif player1 == 3:
num = num + player1
print(f"The number is {num}")
return (num)
else:
#yrn = yes or no
yrn = input("Are you going to play game? (Y/N) : ").upper()
if yrn == "Y":
player1 = int(input("How many numbers are you going to add? : "))
num = calculate1(player1, num)
I want to make it so that if I type a number greater than 3, the program asks me one more time to re-enter the number. Please help me.
A:
If you only want to ask once:
#yrn = yes or no
yrn = input("Are you going to play game? (Y/N) : ").upper()
if yrn == "Y":
player1 = int(input("How many numbers are you going to add? : "))
if player1 > 3:
player1 = int(input("How many numbers are you going to add? : "))
num = calculate1(player1, num)
But if you want to keep asking:
#yrn = yes or no
yrn = input("Are you going to play game? (Y/N) : ").upper()
player1 = 10 # just larger than 3
if yrn == "Y":
while player1 > 3:
player1 = int(input("How many numbers are you going to add? : "))
num = calculate1(player1, num)
If you want to make it more foolproof (i.e. your code should never break), you can also update the function:
def calculate1(player1, num):
if player1 == 1:
num = num + player1
print(f"The number is {num}")
return (num)
elif player1 == 2:
num = num + player1
print(f"The number is {num}")
return (num)
elif player1 == 3:
num = num + player1
print(f"The number is {num}")
return (num)
else:
return None
#yrn = yes or no
yrn = input("Are you going to play game? (Y/N) : ").upper()
if yrn == "Y":
num = 0
while True:
player1 = input("How many numbers are you going to add? : ")
try:
player1 = int(player1)
num = calculate1(player1, num)
except:
if player1.lower() == 'quit' or player1.lower() == 'q':
print('Bye')
break
num = None
if num is not None:
break # break while loop
A:
You can use a while loop and check the condition inside it each time the user enters input.
num = 0
def calculate1(player1, num):
if player1 == 1:
num = num + player1
print(f"The number is {num}")
return (num)
elif player1 == 2:
num = num + player1
print(f"The number is {num}")
return (num)
elif player1 == 3:
num = num + player1
print(f"The number is {num}")
return (num)
#yrn = yes or no
yrn = input("Are you going to play game? (Y/N) : ").upper()
if yrn == "Y":
while 1:
player1 = int(input("How many numbers are you going to add? : "))
if player1 >3:
print("Should be less than 3 enter again")
pass
else:
num = calculate1(player1, num)
break
Sample outputs:
Are you going to play game? (Y/N) : y
How many numbers are you going to add? : 4
Should be less than 3 enter again
How many numbers are you going to add? : 3
The number is 3
answers_scores: [0, 0] | tags: [input, loops, numbers, python] | name: stackoverflow_0074625424_input_loops_numbers_python.txt
Q:
Is there any way to concatenate tuples with a maximum length?
I'm concatenating three tuples from a csv, but I'm wondering if there is any way to do it with a maximum length.
I'm doing this:
df = pd.read_csv(FILE_NAME, header = 0)
df['all'] = df['Header'] + df['Subtitle'] + df['Text']
I want df['all'] to be at most 500 characters.
Thank you in advance
A:
You can slice the concatenation:
df['all'] = (df['Header'] + df['Subtitle'] + df['Text']).str[:500]
A:
You can use the df.head() function to get the first rows of a dataframe:
df = pd.read_csv(FILE_NAME, header = 0)
df_short = df.head(500)
df_short['all'] = df_short['Header'] + df_short['Subtitle'] + df_short['Text']
answers_scores: [0, 0] | tags: [pandas, python] | name: stackoverflow_0074625535_pandas_python.txt
Q:
Append value to each array in a numpy array
I have a numpy array of arrays, for example:
x = np.array([[1,2,3],[10,20,30]])
Now let's say I want to extend each array with [4,40], to generate the following resulting array:
[[1,2,3,4],[10,20,30,40]]
How can I do this without making a copy of the whole array? I tried to change the shape of the array in place but it throws a ValueError:
x[0] = np.append(x[0],4)
x[1] = np.append(x[1],40)
ValueError : could not broadcast input array from shape (4) into shape (3)
A:
You can't do this. Numpy arrays allocate contiguous blocks of memory, if at all possible. Any change to the array size will force an inefficient copy of the whole array. You should use Python lists to grow your structure if possible, then convert the end result back to an array.
However, if you know the final size of the resulting array, you could instantiate it with something like np.empty() and then assign values by index, rather than appending. This does not change the size of the array itself, only reassigns values, so should not require copying.
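As a rough sketch of that preallocation idea (the variable names here are only illustrative, not from the question), assuming the final shape is known up front:
import numpy as np

x = np.array([[1, 2, 3], [10, 20, 30]])
extra = np.array([4, 40])

# Allocate the final-sized array once, then fill it by index (no repeated appends).
out = np.empty((2, 4), dtype=x.dtype)
out[:, :3] = x        # copy the existing values
out[:, 3] = extra     # set the new last column
print(out)            # [[ 1  2  3  4] [10 20 30 40]]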
A:
While @roganjosh is right that you cannot modify the numpy arrays without making a copy (in the underlying process), there is a simpler way of appending each value of an ndarray to the end of each numpy array in a 2d ndarray, by using numpy.column_stack
x = np.array([[1,2,3],[10,20,30]])
array([[ 1, 2, 3],
[10, 20, 30]])
stack_y = np.array([4,40])
array([ 4, 40])
numpy.column_stack((x, stack_y))
array([[ 1, 2, 3, 4],
[10, 20, 30, 40]])
A:
Create a new matrix
Insert the values of your old matrix
Then, insert your new values in the last positions
x = np.array([[1,2,3],[10,20,30]])
new_X = np.zeros((2, 4))
new_X[:2,:3] = x
new_X[0][-1] = 4
new_X[1][-1] = 40
x=new_X
Or use np.reshape() or np.resize() instead
answers_scores: [3, 1, 0] | tags: [arrays, numpy, python] | name: stackoverflow_0053418727_arrays_numpy_python.txt
Q:
How to feed a modified value back into the modification loop in Python
I have a list of string values (Telegram posts). Many of those individual values include string patterns I want to remove (JSON formatting).
An example string value would be, "['Оппозиционный российский политик Алесей Навальный впал в кому. Его соратники считают, что его отравили.\n\nСейчас Навальный находится в омской больнице скорой медицинской помощи №\u202f1. Посетителей к нему не пускают. ', {'type': 'bold', 'text': 'Очень странные дела.'}, '']"
Examples seen here of the string patterns I want to remove include,
\n
u202f1
', {'type': 'bold', 'text':
}, ''
I have a list of the string patterns I want to remove, in a xslx spreadsheet.
For just a few corrections I would manually get there using Python's replace function. In this case, for a single string value I want to loop through the 'corrections list' and replace against each of these (replacing with blank, ie "").
But each time the string goes through a replace action, the result then needs to be fed into the next replace action - I'm unsure how to do this.
I suspect maybe a 'while True' loop, but not sure how to craft it.
This is where I am with my code playing ...
# GET THE 'CORRECTIONS' TO FIND & REPLACE (WITH BLANK) IN THE TARGET STRING (TELEGRAM POST)
def load_corrections(filepath):
corrections = []
wb = openpyxl.load_workbook(filepath)
ws = wb.active
rows = list(ws.rows) # convert the openpyxl generator object into a list
for row in rows[1:]: # skip the heading
corrections.append(row[0].value)
return corrections
# FUNCTION TO TAKE 'DIRTY' STRING VALUE, SUBJECT TO LIST OF 'CORRECTIONS', RETURN CLEAN STRING VALUE
def clean_message_text(dirty_text):
corrections_data = load_corrections(corrections_filepath) # get the list of 'corrections'
for c in corrections_data:
clean_text = dirty_text.replace(c[0], "")
# ⬆⬆⬆⬆⬆⬆⬆⬆⬆⬆ this is the issue - I need this new clean_text to be fed back into the loop to be subject to the next correction list
return clean_text
Hope that all makes sense. Thanks in advance
A:
Looks good, you only need to apply one replacement after the other. There is no need for a separate clean-text variable: calling replace on a string returns a new object (like call-by-value, if you know the term).
def clean_message_text(dirty_text):
corrections_data = load_corrections(corrections_filepath)
for c in corrections_data:
dirty_text = dirty_text.replace(c, "") # note that you probably need to replace c and not c[0]?
return dirty_text
P.S.: Maybe you want to rename dirty_text to something like message to demonstrate that the string is not dirty at the end.
answers_scores: [2] | tags: [python] | name: stackoverflow_0074625411_python.txt
Q:
How do I fix this error? self.category_id.set(row[1]), IndexError: string index out of range
I'm adding data into an sqlite table, and when I try to update the table I get the error 'string index out of range'.
Also, when I execute the update command, all the records get updated (every column except the identity column), but my intention is to update only the selected row.
What am I not doing right in the code below?
Your assistance will be highly appreciated.
The error is coming from the function update_record(self)
Below is the code:
from tkinter import *
from tkinter import ttk
import tkinter.messagebox
import sqlite3
root =Tk()
root.title('Accounting App')
root.config(bg='#3d6466')
root.geometry("520x400")
root.resizable(False, False)
style = ttk.Style()
style.theme_use('alt')
style.configure("TCombobox", fieldbackground="Grey", background="Grey")
class Backend():
def __init__(self):
self.conn = sqlite3.connect('accounting.db')
self.cur = self.conn.cursor()
#self.conn.execute("""DROP TABLE IF EXISTS account_type""")
self.conn.execute("""CREATE TABLE IF NOT EXISTS account_type(
id INTEGER PRIMARY KEY,
category_id INTEGER NOT NULL,
category_type TEXT NOT NULL
)"""),
self.conn.commit()
# elf.conn.close()
# =========Account Type======
class Account_type():
def insert_account_type(self, category_id, category_type):
self.conn = sqlite3.connect('accounting.db')
self.cur = self.conn.cursor()
self.cur.execute("""INSERT INTO account_type(category_id,category_type) VALUES(?,?);""",
(category_id, category_type,))
self.conn.commit()
self.conn.close()
def view_account_type(self):
self.conn = sqlite3.connect('accounting.db')
self.cur = self.conn.cursor()
self.cur.execute("SELECT * FROM account_type")
rows = self.cur.fetchall()
self.conn.close()
return rows
acc_type = Backend.Account_type()
tb = Backend()
class Front_end():
def __init__(self, master):
# Frames
global cur
global conn
conn = sqlite3.connect('accounting.db')
cur = conn.cursor()
# Frames
self.left_frame = LabelFrame(master,bg='#3d6466', relief=SUNKEN,width=200)
self.left_frame.pack(fill = 'both',expand = YES , padx = 5,side=LEFT,anchor=NW)
self.right_frame = LabelFrame(master, bg='#3d6466', relief=SUNKEN)
self.right_frame.pack(fill = 'both',expand = YES ,side=LEFT,anchor=NW)
self.top_right_frame = LabelFrame(self.right_frame, bg='#3d6466', relief=SUNKEN,text='Details',fg='maroon')
self.top_right_frame.pack(fill=BOTH,side=TOP, anchor=NW,expand=YES)
self.top_r_inner_frame = LabelFrame(self.right_frame, bg='#3d6466', relief=SUNKEN, text='...', fg='maroon',height=10)
self.top_r_inner_frame.pack(fill=BOTH, side=TOP, anchor=SW, expand=YES)
self.bottom_right_frame = LabelFrame(self.right_frame, bg='#3d6466', relief=SUNKEN, text='Field View', fg='maroon')
self.bottom_right_frame.pack(fill=BOTH,side=TOP, anchor=SW, expand=YES)
self.my_canvas = Canvas(self.top_right_frame,bg='#3d6466')
self.my_canvas.pack(side=LEFT,fill='both', expand=YES)
# vertical configuration of scrollbar
self.yscrollbar = ttk.Scrollbar(self.top_right_frame, orient=VERTICAL, command = self.my_canvas.yview)
self.yscrollbar.pack(side=RIGHT,fill='both')
self.my_canvas.config(yscrollcommand = self.yscrollbar.set)
self.top_right_frame = Frame(self.my_canvas, bg='#3d6466', relief=SUNKEN)
self.my_canvas.create_window((0,0),window=self.top_right_frame, anchor=NW)
self.my_canvas.bind('<Configure>',lambda e:self.my_canvas.configure(scrollregion = self.my_canvas.bbox('all')))
self.side_frame = LabelFrame(self.left_frame,bg='#3d6466',relief=SUNKEN,text='Menu Buttons',fg='maroon',)
self.side_frame.pack(side=TOP,anchor=NW,expand=YES )
# Side Buttons
self.btn1 = Button(self.side_frame, text='Main Account Types', bg='#3d6466', font=('cambria', 12), anchor=W,
fg='white', width=18,height=2,command=self.main_account)
self.btn1.grid(row=0, column=0, sticky=W)
def main_account(self):
# variables
self.category_id = StringVar()
self.category_type = StringVar()
self.category_search =StringVar()
# functions
def add_main_accounts(self):
if self.category_id.get() == "":
tkinter.messagebox.showinfo('All fields are required')
else:
Backend.Account_type.insert_account_type(self,
self.category_id.get(),self.category_type.get())
tkinter.messagebox.showinfo('Entry successful')
def display_account_types(self):
self.trv.delete(*self.trv.get_children())
for rows in Backend.Account_type.view_account_type(self):
self.trv.insert("", END, values=rows)
def get_account_type(e):
selected_row = self.trv.focus()
data = self.trv.item(selected_row)
row = data["values"]
"""Grab items and send them to entry fields"""
self.category_id.set(row[1])
self.category_type.set(row[2])
def clear(self):
self.category_id.set("")
self.category_type.set("")
def update_record(self):
selected = self.trv.focus()
self.trv.item(selected, values=(
self.category_id.get(), self.category_type.get()))
conn = sqlite3.connect("accounting.db")
cur = conn.cursor()
if self.category_id.get() == "" or self.category_type.get() == "" :
tkinter.messagebox.showinfo('All fields are required!')
return
update_record = tkinter.messagebox.askyesno('Confirm please',
'Do you want to update records?')
if update_record > 0:
cur.execute(
"UPDATE account_type SET category_id=:cat_id, category_type=:type",
{'cat_id': self.category_id.get(), 'type': self.category_type.get()})
tkinter.messagebox.showinfo('Record update successful!')
conn.commit()
# call the function for Clearing the fields
clear(self)
conn.close()
"""=================TreeView==============="""
# Scrollbars
ttk.Style().configure("Treeview", background = "#3d6466", foreground = "white", fieldbackground = "grey")
scroll_x = Scrollbar(self.bottom_right_frame, orient = HORIZONTAL)
scroll_x.pack(side = BOTTOM, fill = X)
scroll_y = Scrollbar(self.bottom_right_frame, orient = VERTICAL)
scroll_y.pack(side = RIGHT, fill = Y)
# Treeview columns & setting scrollbars
self.trv = ttk.Treeview(self.bottom_right_frame, height=5, columns=
('id','category_id', 'category_type'), xscrollcommand = scroll_x.set, yscrollcommand = scroll_y.set)
# Treeview style configuration
ttk.Style().configure("Treeview", background = "#3d6466", foreground = "white", fieldbackground = "#3d6466")
# Configure vertical and Horizontal scroll
scroll_x.config(command = self.trv.xview)
scroll_y.config(command = self.trv.yview)
# Treeview Headings/columns
self.trv.heading('id', text = 'NO')
self.trv.heading('category_id', text = 'Category ID')
self.trv.heading('category_type', text = 'Category Type')
self.trv['show'] = 'headings'
# Treeview columns width
self.trv.column('id', width = 50)
self.trv.column('category_id', width = 70)
self.trv.column('category_type', width = 90)
self.trv.pack(fill = BOTH, expand = YES,anchor = NW)
# Binding Treeview with data
self.trv.bind('<<TreeviewSelect>>',get_account_type) # trv.bind('<Double-1>',"")
# Account Types Labels
self.lbl1 = Label(self.top_right_frame,text = 'Category ID',anchor = W,width=12,font = ('cambria',13,),bg = '#3d6466')
self.lbl1.grid(row = 0,column = 0,pady = 5)
self.lbl1 = Label(self.top_right_frame, text='Category Type', anchor=W, width=12, font=('cambria', 13,), bg='#3d6466')
self.lbl1.grid(row=1, column=0, pady=5)
self.lbl2 = Label(self.top_right_frame, text='Search Account', anchor=W, width=12, font=('cambria', 13,),
bg='#3d6466')
self.lbl2.grid(row=6, column=0, pady=5)
# Account Type Entries
self.entry1 = Entry(self.top_right_frame,textvariable = self.category_id,font = ('cambria',13,),bg = 'Grey',width=14)
self.entry1.grid(row = 0,column=1,sticky = W,padx = 4,columnspan=2)
self.entry1 = Entry(self.top_right_frame, textvariable=self.category_type, font=('cambria', 13,), bg='Grey', width=14)
self.entry1.grid(row=1, column=1, sticky=W, padx=4, columnspan=2)
self.entry2 = Entry(self.top_right_frame, textvariable=self.category_search, font=('cambria', 13,), bg='Grey', width=14)
self.entry2.grid(row=6, column=1, sticky=W, padx=4, columnspan=2)
# Buttons
self.btn_1 = Button(self.top_right_frame,text='Add',font=('cambria',12,'bold'),bg='#3d6466',
activebackground='green', fg = 'white',width=12,height = 1,relief=RAISED,
command = lambda :[add_main_accounts(self),display_account_types(self),clear(self)])
self.btn_1.grid(row = 3,column = 0,pady=6, padx=6)
self.btn_2 = Button(self.top_right_frame, text = 'View',command=lambda :[display_account_types(self),clear(self)],
font=('cambria', 12, 'bold'), bg = '#3d6466', activebackground='green',
fg ='white', width=12, height = 1, relief=RAISED)
self.btn_2.grid(row = 3, column=1,padx=0)
self.btn_3 = Button(self.top_right_frame, text = 'Update', command= lambda :[update_record(self),
display_account_types(self)],font=('cambria', 12, 'bold'), bg = '#3d6466',
activebackground = 'green', fg='white', width = 12, height = 1, relief=RAISED)
self.btn_3.grid(row = 4, column = 0,pady=6,padx=10)
# calling the class
app = Front_end(root)
root.mainloop()
A:
Your update function updates all the records that exist. To avoid this you should use WHERE. Here is a fixed version:
def update_record(self):
selected = self.trv.focus()
oldValues = self.trv.item(selected)["values"]
self.trv.item(selected, values=(oldValues[0],
self.category_id.get(), self.category_type.get()))
conn = sqlite3.connect("accounting.db")
cur = conn.cursor()
if self.category_id.get() == "" or self.category_type.get() == "" :
tkinter.messagebox.showinfo('All fields are required!')
return
update_record = tkinter.messagebox.askyesno('Confirm please',
'Do you want to update employee records?')
if update_record > 0:
cur.execute(
"UPDATE account_type SET category_id=:cat_id, category_type=:type WHERE id=:id_value",
{'cat_id': self.category_id.get(), 'type': self.category_type.get(), "id_value": oldValues[0]})
tkinter.messagebox.showinfo('Record update successful!')
conn.commit()
# call the function for Clearing the fields
clear(self)
conn.close()
We use the selected row's id from the treeview.
However, your error string index out of range is not about this. The function above will fix your record update problem. The error occurs because the code tries to read a record's values, but after the table is refreshed nothing is selected, so it returns an empty value.
def get_account_type(e):
selected_row = self.trv.focus()
data = self.trv.item(selected_row)
row = data["values"]
if(row == ""):
return
"""Grab items and send them to entry fields"""
self.category_id.set(row[1])
self.category_type.set(row[2])
Now if row is just an empty string, the function stops executing and you get no errors.
answers_scores: [0] | tags: [python, sqlite, tkinter] | name: stackoverflow_0074625300_python_sqlite_tkinter.txt
Q:
why do I receive these errors "WARNING: Ignoring invalid distribution -yproj " while installing any python module in cmd
WARNING: Ignoring invalid distribution -yproj (c:\users\space_junk\appdata\local\programs\python\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -yproj (c:\users\space_junk\appdata\local\programs\python\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -yproj (c:\users\space_junk\appdata\local\programs\python\python310\lib\site-packages)
A:
I was getting a similar message that turned out to be caused by a previous failed pip upgrade. I had attempted to upgrade pip from a user account that didn't have the proper rights. There was a temp directory left behind in site-packages that began with ~ip which was causing pip to complain every time it ran. I removed the directory and was able to re-upgrade pip using an account that had proper permissions. No more warnings from pip.
Did you have a problem installing something like pyproj by any chance? The temp directory seems to be named by replacing the first letter of the library with a ~.
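As a small illustrative sketch (not from the original answer), you can list any leftover ~-prefixed folders in site-packages like this:
import os
import site

# Leftover "~..." folders are the temp directories pip leaves behind after a failed install.
for sp in site.getsitepackages():
    if os.path.isdir(sp):
        for name in os.listdir(sp):
            if name.startswith("~"):
                print(os.path.join(sp, name))  # candidates to delete manually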
A:
I had the same problem with matplotlib. It looked like I wanted to install a package from some sort of unauthorized source or something. The only thing you have to do is go to the site-packages folder and delete the folder that caused the problem. In your case, it is ~yproj (in my case, it was ~atplotlib). Then you are good to go.
STEP BY STEP:
STEP 1: find the site-packages folder -> Type "pip show pyproj" or any other library you want!
STEP 2: delete the folder mentioned in the warning (it has "~" in the beginning) -> in your case, it would be ~yproj.
DONE!
answers_scores: [6, 0] | tags: [fiona, geopandas, osgeo, python, torch] | name: stackoverflow_0072547834_fiona_geopandas_osgeo_python_torch.txt
Q:
Logic Python task with bitwise operators
I have a very specific task to complete and I am honestly lost in it. The goal is to define function in Python, that would remove all 1s in binary input that do not have any 1 next to it. I will show you in example.
Let's have input 0b11010 -> the output of this would be 0b11000. Another example: 0b10101 -> the output would be 0b00000.
The real twist is that I cannot use any for/while loop, import any library or use zip, lists, map etc. The function needs to be defined purely using bitwise operations and nothing else.
I have already tried putting some operations together, but those were only blind shots that got nowhere. Any help would be appreciated, thanks!
A:
To break down the condition mathematically, the i-th bit of the output should be 1 if and only if:
The i-th bit of the input is 1.
And either the (i-1)-th bit or the (i+1)-th bit of the input is also 1.
Logically the condition is input[i] and (input[i-1] or input[i+1]) if the input is a bit vector. If the input is simply a number, indexing can be emulated with bit shifting and masking, giving this code:
def remove_lonely_ones(b):
return b & ((b << 1) | (b >> 1))
Testing shows that it works both on your examples and on edge cases:
print("{: 5b}".format(remove_lonely_ones(0b11111))) # prints 11111
print("{: 5b}".format(remove_lonely_ones(0b11010))) # prints 11000
print("{: 5b}".format(remove_lonely_ones(0b11011))) # prints 11011
print("{: 5b}".format(remove_lonely_ones(0b10101))) # prints 0
print("{: 5b}".format(remove_lonely_ones(0b00000))) # prints 0
answers_scores: [3] | tags: [algorithm, bit, bitwise_operators, logic, python] | name: stackoverflow_0074625271_algorithm_bit_bitwise_operators_logic_python.txt
Q:
Initialising variables in an init_vars() function
This code doesn't initialize the variables that I expect it to initialize.
a,b,c = [None]*3
def __init_abc():
a="a"
b="b"
c="c"
def print_abc():
__init_abc()
print(a,b,c)
print_abc()
Output is:
None None None
A:
Within the __init_abc function you need to declare a, b, c as global; otherwise the variables are presumed to be local to the function.
a,b,c = [None]*3
def __init_abc():
global a,b,c
a="a"
b="b"
c="c"
def print_abc():
__init_abc()
print(a,b,c)
print_abc()
Output:
a b c
answers_scores: [0] | tags: [global_variables, python] | name: stackoverflow_0074577178_global_variables_python.txt
Q:
The function np.dot multiplies the GF4 field matrices for a very long time
Multiplying large matrices takes a very long time. How can this problem be solved? I use the galois library and numpy, and I think it should still work stably. I tried to implement my own GF4 arithmetic and multiply the matrices using numpy, but it takes even longer. Thank you for your reply.
When r = 2, 3, 4, 5, 6 it multiplies quickly; after that it takes a long time. As for me, these are not very large matrix sizes. This is just a code snippet: I get the sizes n, k of matrices of a certain family given r, and I need to multiply matrices with those parameters.
import numpy as np
import galois
def family_Hamming(q,r):
n = int((q**r-1)/(q-1))
k = int((q**r-1)/(q-1)-r)
res = (n,k)
return res
q = 4
r = 7
n,k = family_Hamming(q,r)
GF = galois.GF(2**2)
#(5461,5461)
a = GF(np.random.randint(4, size=(k, k)))
#(5454,5461)
b = GF(np.random.randint(4, size=(k, n)))
c = np.dot(a,b)
print(c)
A:
I'm not sure if it is actually faster, but np.dot should be used for the dot product of two vectors; for matrix multiplication use A @ B. That's as efficient as you can get with Python, as far as I know.
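As a tiny sketch of that suggestion applied to the arrays above (assuming the galois field arrays support the @ operator like ordinary numpy arrays):
# Matrix product over GF(4); for 2-D arrays this is equivalent to np.dot(a, b).
c = a @ b
print(c)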
A:
Try using jax on a CUDA runtime. For example, you can try it out on Google Colab's free GPU. (Open a notebook -> Runtime -> Change runtime type -> GPU).
import jax.numpy as jnp
from jax import device_put
a = GF(np.random.randint(4, size=(k, k)))
b = GF(np.random.randint(4, size=(k, n)))
a, b = device_put(a), device_put(b)
c = jnp.dot(a, b)
c = np.asarray(c)
Timing test:
%timeit jnp.dot(a, b).block_until_ready()
# 765 ms ± 96.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
answers_scores: [0, 0] | tags: [galois_field, linear_algebra, math, numpy, python] | name: stackoverflow_0074625066_galois_field_linear_algebra_math_numpy_python.txt
Q:
Django download a file
I'm quite new to using Django and I am trying to develop a website where the user is able to upload a number of excel files; these files are then stored in a media folder, Webproject/project/media.
def upload(request):
if request.POST:
form = FileForm(request.POST, request.FILES)
if form.is_valid():
form.save()
return render_to_response('project/upload_successful.html')
else:
form = FileForm()
args = {}
args.update(csrf(request))
args['form'] = form
return render_to_response('project/create.html', args)
The document is then displayed in a list along with any other documents they have uploaded, which you can click into, and it displays basic info about them and the name of the excel file they have uploaded. From here I want to be able to download the same excel file again using the link:
<a href="/project/download"> Download Document </a>
My urls are
urlpatterns = [
url(r'^$', ListView.as_view(queryset=Post.objects.all().order_by("-date")[:25],
template_name="project/project.html")),
url(r'^(?P<pk>\d+)$', DetailView.as_view(model=Post, template_name="project/post.html")),
url(r'^upload/$', upload),
url(r'^download/(?P<path>.*)$', serve, {'document root': settings.MEDIA_ROOT}),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
but I get the error serve() got an unexpected keyword argument 'document root'. Can anyone explain how to fix this?
OR
Explain how I can get the uploaded files to be selected and served using
def download(request):
file_name = #get the filename of desired excel file
path_to_file = #get the path of desired excel file
response = HttpResponse(mimetype='application/force-download')
response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(file_name)
response['X-Sendfile'] = smart_str(path_to_file)
return response
A:
You missed the underscore in the argument document_root. But it's a bad idea to use serve in production. Use something like this instead:
import os
from django.conf import settings
from django.http import HttpResponse, Http404
def download(request, path):
file_path = os.path.join(settings.MEDIA_ROOT, path)
if os.path.exists(file_path):
with open(file_path, 'rb') as fh:
response = HttpResponse(fh.read(), content_type="application/vnd.ms-excel")
response['Content-Disposition'] = 'inline; filename=' + os.path.basename(file_path)
return response
raise Http404
A:
You can add "download" attribute inside your tag to download files.
<a href="/project/download" download> Download Document </a>
https://www.w3schools.com/tags/att_a_download.asp
A:
Reference:
In views.py, implement a function like this:
def download(request, id):
obj = your_model_name.objects.get(id=id)
filename = obj.model_attribute_name.path
response = FileResponse(open(filename, 'rb'))
return response
A:
When you upload a file using FileField, the file will have a URL that you can use to point to it, and you can use the HTML download attribute to download that file. You can simply do this.
models.py
The models.py looks like this:
class CsvFile(models.Model):
csv_file = models.FileField(upload_to='documents')
views.py
#csv upload
class CsvUploadView(generic.CreateView):
model = CsvFile
fields = ['csv_file']
template_name = 'upload.html'
#csv download
class CsvDownloadView(generic.ListView):
model = CsvFile
fields = ['csv_file']
template_name = 'download.html'
Then in your templates.
#Upload template
upload.html
<div class="container">
<form action="#" method="POST" enctype="multipart/form-data">
{% csrf_token %}
{{ form.media }}
{{ form.as_p }}
<button class="btn btn-primary btn-sm" type="submit">Upload</button>
</form>
#download template
download.html
{% for document in object_list %}
<a href="{{ document.csv_file.url }}" download class="btn btn-dark float-right">Download</a>
{% endfor %}
I did not use forms, just rendered the model, but either way FileField is there and it will work the same.
A:
I've found Django's FileField to be really helpful for letting users upload and download files. The Django documentation has a section on managing files. You can store some information about the file in a table, along with a FileField that points to the file itself. Then you can list the available files by searching the table.
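A minimal sketch of that idea (the model and field names below are placeholders, not taken from the question):
from django.db import models

class Document(models.Model):
    title = models.CharField(max_length=200)
    uploaded_at = models.DateTimeField(auto_now_add=True)
    file = models.FileField(upload_to='documents/')

# Listing the available files is then an ordinary queryset, and each row's
# file.url can be placed directly in a download link in the template.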
A:
@Biswadp's solution worked great for me.
In your static folder, make sure to have the desired files you would like the user to download.
In your HTML template, your code should look like this:
<a href="{% static 'Highlight.docx' %}"> Download </a>
A:
Using the below approach makes everything less secure since any user can access any user's file.
<a href="/project/download" download> Download Document </a>
Using the below approach makes no sense since Django only handles one request at a time (unless you are using gunicorn or something else), and believe me, the below approach takes a lot of time to complete.
def download(request, path):
file_path = os.path.join(settings.MEDIA_ROOT, path)
if os.path.exists(file_path):
with open(file_path, 'rb') as fh:
response = HttpResponse(fh.read(), content_type="application/vnd.ms-excel")
response['Content-Disposition'] = 'inline; filename=' + os.path.basename(file_path)
return response
raise Http404
So what is the optimum solution?
Use Nginx authenticated routes. When requesting a file from Nginx you can make a request to a route, and depending on the HTTP response Nginx allows or denies that request. This makes it very secure and also scalable and performant.
You can read more about it here.
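One common way to implement that pattern is the X-Accel-Redirect header; the sketch below assumes Nginx has a matching internal location (e.g. /protected/ mapped to the media directory), which is not shown here, and the view name is only an example:
from django.http import HttpResponse, HttpResponseForbidden

def nginx_download(request, path):
    if not request.user.is_authenticated:
        return HttpResponseForbidden()
    # Django only authorizes the request; Nginx streams the file from its internal location.
    response = HttpResponse()
    response['X-Accel-Redirect'] = '/protected/' + path
    response['Content-Disposition'] = 'attachment; filename=' + path.rsplit('/', 1)[-1]
    return response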
A:
<a href='/your-download-view/' download>Download</a>
In your view:
def download(request):
# pre-processing, authorizations, etc.
# ...
return FileResponse(open(path_to_file, 'rb'), as_attachment=True)
A:
Simply using HTML like this downloads the mentioned file via the static keyword:
<a href="{% static 'bt.docx' %}" class="btn btn-secondary px-4 py-2 btn-sm">Download CV</a>
A:
1.settings.py:
MEDIA_DIR = os.path.join(BASE_DIR,'media')
#Media
MEDIA_ROOT = MEDIA_DIR
MEDIA_URL = '/media/'
2.urls.py:
from django.conf.urls.static import static
urlpatterns += static(settings.MEDIA_URL,document_root=settings.MEDIA_ROOT)
3.in template:
<a href="{{ file.url }}" download>Download File.</a>
Works and tested in Django >= 3
for more detail use this link:
https://youtu.be/MpDZ34mEJ5Y
A:
I use this method:
{% if quote.myfile %}
<div class="">
<a role="button"
href="{{ quote.myfile.url }}"
download="{{ quote.myfile.url }}"
class="btn btn-light text-dark ml-0">
Download attachment
</a>
</div>
{% endif %}
A:
If you have uploaded your file in media, then:
media
example-input-file.txt
views.py
def download_csv(request):
file_path = os.path.join(settings.MEDIA_ROOT, 'example-input-file.txt')
if os.path.exists(file_path):
with open(file_path, 'rb') as fh:
response = HttpResponse(fh.read(), content_type="application/vnd.ms-excel")
response['Content-Disposition'] = 'inline; filename=' + os.path.basename(file_path)
return response
urls.py
path('download_csv/', views.download_csv, name='download_csv'),
download.html
a href="{% url 'download_csv' %}" download=""
A:
import mimetypes
from django.http import HttpResponse, Http404
mime_type, _ = mimetypes.guess_type(json_file_path)
if os.path.exists(json_file_path):
with open(json_file_path, 'r') as fh:
response = HttpResponse(fh, content_type=mime_type)
response['Content-Disposition'] = "attachment; filename=%s" % 'config.json'
return response
raise Http404
"I've found Django's FileField to be really helpful for letting users upload and download files. The Django documentation has a section on managing files. You can store some information about the file in a table, along with a FileField that points to the file itself. Then you can list the available files by searching the table.\n",
"@Biswadp's solution worked greatly for me\nIn your static folder, make sure to have the desired files you would like the user to download\nIn your HTML template, your code should look like this :\n<a href=\"{% static 'Highlight.docx' %}\"> Download </a>\n\n",
"Using the below approach makes everything less secure since any user can access any user's file.\n<a href=\"/project/download\" download> Download Document </a>\n\nUsing the below approach makes no sense since Django only handles one requests at the time (unless you are using gunicorn or something else), and believe me, the below approach takes a lot of time to complete.\ndef download(request, path):\n file_path = os.path.join(settings.MEDIA_ROOT, path)\n if os.path.exists(file_path):\n with open(file_path, 'rb') as fh:\n response = HttpResponse(fh.read(), content_type=\"application/vnd.ms-excel\")\n response['Content-Disposition'] = 'inline; filename=' + os.path.basename(file_path)\n return response\n raise Http404\n\nSo what is the optimum solution?\nUse Nginx authenticated routes. When requesting a file from Nginx you can make a request to a route and depending on the HTTP response Nginx allows to denies that request. This makes it very secure and also scalable and performant.\nYou can ready about more here\n",
"\n<a href='/your-download-view/' download>Download</a>\n\nIn your view:\n\n\ndef download(request):\n # pre-processing, authorizations, etc.\n # ...\n return FileResponse(open(path_to_file, 'rb'), as_attachment=True)\n\n\n",
"Simple using html like this downloads the file mentioned using static keyword\n<a href=\"{% static 'bt.docx' %}\" class=\"btn btn-secondary px-4 py-2 btn-sm\">Download CV</a>\n\n",
"1.settings.py:\nMEDIA_DIR = os.path.join(BASE_DIR,'media')\n#Media\nMEDIA_ROOT = MEDIA_DIR\nMEDIA_URL = '/media/'\n\n2.urls.py:\nfrom django.conf.urls.static import static\nurlpatterns += static(settings.MEDIA_URL,document_root=settings.MEDIA_ROOT)\n\n3.in template:\n<a href=\"{{ file.url }}\" download>Download File.</a>\n\nWork and test in django >=3\nfor more detail use this link:\nhttps://youtu.be/MpDZ34mEJ5Y\n",
"I use this method:\n{% if quote.myfile %}\n <div class=\"\">\n <a role=\"button\" \n href=\"{{ quote.myfile.url }}\"\n download=\"{{ quote.myfile.url }}\"\n class=\"btn btn-light text-dark ml-0\">\n Download attachment\n </a>\n </div>\n{% endif %}\n\n",
"If you hafe upload your file in media than:\nmedia \nexample-input-file.txt\nviews.py\ndef download_csv(request): \n file_path = os.path.join(settings.MEDIA_ROOT, 'example-input-file.txt') \n if os.path.exists(file_path): \n with open(file_path, 'rb') as fh: \n response = HttpResponse(fh.read(), content_type=\"application/vnd.ms-excel\") \n response['Content-Disposition'] = 'inline; filename=' + os.path.basename(file_path) \n return response\n\nurls.py\npath('download_csv/', views.download_csv, name='download_csv'),\n\ndownload.html\na href=\"{% url 'download_csv' %}\" download=\"\"\n\n",
"import mimetypes\nfrom django.http import HttpResponse, Http404\n\nmime_type, _ = mimetypes.guess_type(json_file_path)\n \nif os.path.exists(json_file_path):\n with open(json_file_path, 'r') as fh:\n response = HttpResponse(fh, content_type=mime_type)\n response['Content-Disposition'] = \"attachment; filename=%s\" % 'config.json'\n return response\n raise Http404\n\n"
] | [
129,
59,
34,
7,
4,
2,
2,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0036392510_django_python.txt |
Q:
How could I use 'assert' and a variable 'actual' to write a test code for a user input code for the conversion of time?
`
def conversion():
options = print('Would you like to convert hours to mins, or mins to hours?')
choice = input()
if choice == 'hours to mins':
hours = int(input('How many hours? '))
mins = hours * 60
print(mins, 'Minutes')
elif choice == 'mins to hours':
mins = int(input('How many minutes? '))
hours = mins/60
print(hours, 'Hours')
else:
print('An error has occured')
conversion()
This is the production code which is meant to be used to write a corresponding test code. `
I am unsure how to go about using 'siminput', 'assert' and a variable 'actual' to write a working test for the code above so that it runs properly in unittest.
A:
You can use pytest with the pytest-mock extension. Install them via pip or conda, or whatever you use.
Quick Fix
First I made a small change to your code to make it a bit easier to test: I added a return statement. Now the code will also return the result.
# conversion.py
def conversion():
print('Would you like to convert hours to mins, or mins to hours?')
choice = input()
if choice == 'hours to mins':
hours = int(input('How many hours? '))
mins = hours * 60
print(mins, 'Minutes')
return mins
elif choice == 'mins to hours':
mins = int(input('How many minutes? '))
hours = mins/60
print(hours, 'Hours')
return hours
else:
print('An error has occured')
return False
Ok, now we create a test
# conversion_test.py
def test_hrs_to_min(mocker):
input_provider = mocker.patch('builtins.input')
# The following line is crucial: You configure the
# values each call to `Input` will return in order.
input_provider.side_effect = ['hours to mins', '3']
result = conversion()
assert result == 3*60
When we run this now with pytest -s from the command line, we see the expected print output and a green dot for the passed test. Try to add the other scenarios and error cases on your own (e.g. what happens if the hour input is not an int).
You can also mock the built-in print and check that it was called with the right arguments (mock_print.assert_called_with(3*60, "Minutes")).
See Mocking examples for further details.
Better Solution
As already mentioned it'd be a good idea to separate concerns in your code.
def conversion():
print('Would you like to convert hours to mins, or mins to hours?')
choice = input()
if choice == 'hours to mins':
hours = int(input('How many hours? '))
print(hrs2mins(hours), 'Minutes')
elif choice == 'mins to hours':
mins = int(input('How many minutes? '))
print(min2hrs(mins), 'Hours')
    else:
        print('An error has occurred')
        return False
def hrs2mins(hrs: int) -> int:
return hrs * 60
def min2hrs(mins: int) -> float:
return mins/60
Now you can test the "business logic" (the conversion) separately from the user interface.
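For example, a couple of plain pytest tests for those helpers need no input mocking at all (a sketch, assuming the functions live in conversion.py as above):
# test_conversion_logic.py
import pytest

from conversion import hrs2mins, min2hrs


def test_hrs2mins():
    assert hrs2mins(3) == 180


def test_min2hrs():
    assert min2hrs(90) == pytest.approx(1.5)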
A:
test_input.py:
def conversion():
print("Would you like to conver...")
choice = input()
if choice == 'hour to mins':
hours = int(input("How many hours?"))
mins = hours * 60
print(mins, "Minutes")
else:
print('An error has occured')
test_conversion.py:
from unittest import mock
from unittest import TestCase
from test_input import conversion
from io import StringIO
class ConversionTest(TestCase):
@mock.patch('test_input.input', create=True)
def test_minutes(self, mocked_input):
mocked_input.side_effect = ["hour to mins", 4]
with mock.patch('sys.stdout', new=StringIO()) as fake_out:
conversion()
output = fake_out.getvalue()
self.assertEqual(output.replace("\n", ""), 'Would you like to conver...240 Minutes')
| How could I use 'assert' and a variable 'actual' to write a test code for a user input code for the conversion of time? | `
def conversion():
options = print('Would you like to convert hours to mins, or mins to hours?')
choice = input()
if choice == 'hours to mins':
hours = int(input('How many hours? '))
mins = hours * 60
print(mins, 'Minutes')
elif choice == 'mins to hours':
mins = int(input('How many minutes? '))
hours = mins/60
print(hours, 'Hours')
else:
print('An error has occured')
conversion()
This is the production code which is meant to be used to write a corresponding test code. `
I am unsure on how to go about writing a test code using 'siminput' 'assert' and the a variable 'actual' to write a working test code for the line of code above for it to properly run in unittest.
| [
"You can use pytest with the pytest-mock extension. Install them via pip or conda, or whatever you use.\n\nQuick Fix\nFirst I made a small change to your code to make it a bit easier to test: I added a return statement. Now the code will also return the result.\n# conversion.py\ndef conversion():\n print('Would you like to convert hours to mins, or mins to hours?')\n choice = input()\n\n if choice == 'hours to mins':\n hours = int(input('How many hours? '))\n mins = hours * 60\n print(mins, 'Minutes')\n return mins\n elif choice == 'mins to hours':\n mins = int(input('How many minutes? '))\n hours = mins/60\n print(hours, 'Hours')\n return hours\n else:\n print('An error has occured')\n return False\n\nOk, now we create a test\n# conversion_test.py\ndef test_hrs_to_min(mocker):\n input_provider = mocker.patch('builtins.input')\n # The following line is crucial: You configure the \n # values each call to `Input` will return in order. \n input_provider.side_effect = ['hours to mins', '3']\n result = conversion()\n assert result == 3*60\n\nwhen we run this now with pytest -s from the command line, we see the expected print result and a green dot for the passed test. Try to add the other scenarios and error cases on your own (e.g. what happens if hour input is not an int)\nYou can also mock the builtin.print and check if it was called with the right arguments (mock_print.assert_called_with(3*60, \"Minutes\").\nSee Mocking examples for further details.\n\nBetter Solution\nAs already mentioned it'd be a good idea to separate concerns in your code.\ndef conversion():\n print('Would you like to convert hours to mins, or mins to hours?')\n choice = input()\n if choice == 'hours to mins':\n hours = int(input('How many hours? '))\n print(hrs2mins(hours), 'Minutes')\n elif choice == 'mins to hours':\n mins = int(input('How many minutes? '))\n print(min2hrs(mins), 'Hours')\n\n print('An error has occurred')\n return False\n\n\ndef hrs2mins(hrs: int) -> int:\n return hrs * 60\n\n\ndef min2hrs(mins: int) -> float:\n return mins/60\n\nnow you can test the \"business logic\" (the conversion) separately from the User interface...\n",
"test_input.py:\ndef conversion():\n print(\"Would you like to conver...\")\n choice = input()\n\n if choice == 'hour to mins':\n hours = int(input(\"How many hours?\"))\n mins = hours * 60\n print(mins, \"Minutes\")\n else:\n print('An error has occured')\n\ntest_conversion.py:\nfrom unittest import mock\nfrom unittest import TestCase\nfrom test_input import conversion\nfrom io import StringIO\n\n\nclass ConversionTest(TestCase):\n @mock.patch('test_input.input', create=True)\n def test_minutes(self, mocked_input):\n mocked_input.side_effect = [\"hour to mins\", 4]\n with mock.patch('sys.stdout', new=StringIO()) as fake_out:\n conversion()\n output = fake_out.getvalue()\n self.assertEqual(output.replace(\"\\n\", \"\"), 'Would you like to conver...240 Minutes')\n\n"
] | [
1,
0
] | [] | [] | [
"jupyter",
"jupyter_notebook",
"python",
"python_unittest",
"ubuntu"
] | stackoverflow_0074625146_jupyter_jupyter_notebook_python_python_unittest_ubuntu.txt |
Q:
Conway's Game of Life in Python - Competitive Programming - how to optimize
I am solving Game of Life problem on csacademy and I can't manage to beat the time on larger inputs. Any help on optimizing the code?
I tried changing things, like using np.array() instead of list, and not converting the original input to 1s and 0s (original is '*' and '-', and needs to be printed that way).
from copy import deepcopy
import numpy as np
aliveToDead = {0, 1, 4, 5, 6, 7, 8}
deadToAlive = {3}
def countNeighbors(M, n, m, i, j):
s = M[i, (j + 1) % m] + M[i, j - 1] + M[(i + 1) % n, j] + M[i - 1, j]
s += M[i - 1, j - 1] + M[(i + 1) % n, (j + 1) % m] + M[i - 1, (j + 1) % m] + M[(i + 1) % n, j - 1]
return s
def gameOfLife(mat, n, m, C):
cells = deepcopy(mat)
for c in range(C):
for i in range(n):
for j in range(m):
neighbors = countNeighbors(mat, n, m, i, j)
if mat[i, j] == 1 and neighbors in aliveToDead:
cells[i, j] = 0
elif mat[i, j] == 0 and neighbors in deadToAlive:
cells[i, j] = 1
mat = deepcopy(cells)
return mat
def buildList(n):
return np.array([[0 if x == '-' else 1 for x in input()] for i in range(n)])
def printResult(mat):
mat = mat.astype(str)
mat[mat == "1"] = '*'
mat[mat == "0"] = '-'
for row in mat:
print(*row, sep="")
def main():
n, m, c = map(int, input().split())
mat = buildList(n)
result = gameOfLife(mat, n, m, c)
printResult(result)
if __name__ == "__main__":
main()
A:
This mathematical solution seems to help pass more tests, but there are still a few that fail.
def gameOfLife(mat, n, m, C):
cells = deepcopy(mat)
loop = n*m*4*3*5
while loop % 16:
loop *= 2
num_iter = C % loop
for c in range(num_iter):
for i in range(n):
for j in range(m):
neighbors = countNeighbors(mat, n, m, i, j)
if mat[i, j] == 1:
cells[i, j] = aliveToDead[neighbors]
else:
if neighbors==3:
cells[i, j] = 1
else:
cells[i, j] = 0
mat,cells = cells,mat
return mat
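Note that the code above implicitly redefines aliveToDead as a lookup table from neighbour count to next state, not the set used in the question. Since the question is really about speed, a different direction (a sketch, not part of the original answer) is to drop the per-cell Python loops and count all neighbours at once with NumPy; np.roll reproduces the wrap-around indexing of countNeighbors:
import numpy as np


def step(mat):
    # One generation on a 0/1 array with wrap-around edges.
    neighbors = sum(
        np.roll(np.roll(mat, di, axis=0), dj, axis=1)
        for di in (-1, 0, 1)
        for dj in (-1, 0, 1)
        if (di, dj) != (0, 0)
    )
    # Alive next step: exactly 3 neighbours, or 2 neighbours and currently alive.
    return ((neighbors == 3) | ((neighbors == 2) & (mat == 1))).astype(mat.dtype)


def game_of_life(mat, c):
    for _ in range(c):
        mat = step(mat)
    return mat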
| Conway's Game of Life in Python - Competitive Programming - how to optimize | I am solving Game of Life problem on csacademy and I can't manage to beat the time on larger inputs. Any help on optimizing the code?
I tried changing things, like using np.array() instead of list, and not converting the original input to 1s and 0s (original is '*' and '-', and needs to be printed that way).
from copy import deepcopy
import numpy as np
aliveToDead = {0, 1, 4, 5, 6, 7, 8}
deadToAlive = {3}
def countNeighbors(M, n, m, i, j):
s = M[i, (j + 1) % m] + M[i, j - 1] + M[(i + 1) % n, j] + M[i - 1, j]
s += M[i - 1, j - 1] + M[(i + 1) % n, (j + 1) % m] + M[i - 1, (j + 1) % m] + M[(i + 1) % n, j - 1]
return s
def gameOfLife(mat, n, m, C):
cells = deepcopy(mat)
for c in range(C):
for i in range(n):
for j in range(m):
neighbors = countNeighbors(mat, n, m, i, j)
if mat[i, j] == 1 and neighbors in aliveToDead:
cells[i, j] = 0
elif mat[i, j] == 0 and neighbors in deadToAlive:
cells[i, j] = 1
mat = deepcopy(cells)
return mat
def buildList(n):
return np.array([[0 if x == '-' else 1 for x in input()] for i in range(n)])
def printResult(mat):
mat = mat.astype(str)
mat[mat == "1"] = '*'
mat[mat == "0"] = '-'
for row in mat:
print(*row, sep="")
def main():
n, m, c = map(int, input().split())
mat = buildList(n)
result = gameOfLife(mat, n, m, c)
printResult(result)
if __name__ == "__main__":
main()
| [
"This mathematical solution seems to help pass more tests, but there are still a few that fail.\ndef gameOfLife(mat, n, m, C):\n cells = deepcopy(mat)\n loop = n*m*4*3*5\n while loop % 16:\n loop *= 2\n num_iter = C % loop\n for c in range(num_iter):\n for i in range(n):\n for j in range(m):\n neighbors = countNeighbors(mat, n, m, i, j)\n if mat[i, j] == 1:\n cells[i, j] = aliveToDead[neighbors]\n else:\n if neighbors==3:\n cells[i, j] = 1\n else:\n cells[i, j] = 0\n mat,cells = cells,mat\n return mat\n\n"
] | [
0
] | [] | [] | [
"conways_game_of_life",
"numpy",
"optimization",
"performance",
"python"
] | stackoverflow_0071280998_conways_game_of_life_numpy_optimization_performance_python.txt |
Q:
How to fix Key error "Item" dynamodb, if item does not exist?
def func(name):
ddb = session.resource(service_name="dynamodb")
table = ddb.Table("TABLE_X")
response = table.get_item(Key={"employee": user})
data = response["Item"]
for item in data.items():
if data["employee"] == name:
manager = data["manager"]
return name, manager
return False
ddb table has:
employee    manager
Jane        John
Ben         Mike
I want to be able to say, if user does not exist, return no user found.
I understand that it will return Key error if it doesn't exist, so how can I achieve that in a way that it will return me an output no user found instead of the key error?
A:
You can check if a key is in a dictionary with 'foo' in bar. So judging from the comments you want to return False and print an error message if something is not found.
explicit check
For each key you want, check if it exists:
def func(name):
ddb = session.resource(service_name="dynamodb")
table = ddb.Table("TABLE_X")
response = table.get_item(Key={"employee": user})
if not 'Item' in response:
print('User not found')
return False
data = response["Item"]
for item in data.items():
if "employee" in data and data["employee"] == name:
manager = data["manager"]
return name, manager
return False
Implicit check (catch the error)*
The method above has a high likelihood of missing an occasion to check... There is a Python built-in way to deal with expected errors:
def func(name):
try:
ddb = session.resource(service_name="dynamodb")
table = ddb.Table("TABLE_X")
response = table.get_item(Key={"employee": user})
data = response["Item"]
for item in data.items():
if data["employee"] == name:
manager = data["manager"]
return name, manager
return False
except KeyError as e:
        print('User could not be found. Original error:', e)
return False
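A slightly more compact variant of the explicit check (a sketch reusing the names from the question, and assuming the name argument is what should be looked up, since the question's code passes an undefined user) leans on dict.get, which returns None when "Item" is absent:
def func(name):
    ddb = session.resource(service_name="dynamodb")
    table = ddb.Table("TABLE_X")

    response = table.get_item(Key={"employee": name})
    data = response.get("Item")   # None when no such employee exists
    if data is None:
        print("No user found")
        return False
    return data["employee"], data["manager"]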
| How to fix Key error "Item" dynamodb, if item does not exist? | def func(name):
ddb = session.resource(service_name="dynamodb")
table = ddb.Table("TABLE_X")
response = table.get_item(Key={"employee": user})
data = response["Item"]
for item in data.items():
if data["employee"] == name:
manager = data["manager"]
return name, manager
return False
ddb table has:
employee
manager
Jane
John
Ben
Mike
I want to be able to say, if user does not exist, return no user found.
I understand that it will return Key error if it doesn't exist, so how can I achieve that in a way that it will return me an output no user found instead of the key error?
| [
"You can check if a key is in a dictionary with 'foo' in bar. So judging from the comments you want to return False and print an error message if something is not found.\n\nexplicit check\nFor each key you want, check if it exists:\n\n\ndef func(name):\n ddb = session.resource(service_name=\"dynamodb\")\n table = ddb.Table(\"TABLE_X\")\n\n response = table.get_item(Key={\"employee\": user})\n if not 'Item' in response:\n print('User not found')\n return False \n\n data = response[\"Item\"]\n\n for item in data.items():\n if \"employee\" in data and data[\"employee\"] == name:\n manager = data[\"manager\"]\n return name, manager\n return False\n\n\n\nImplicit check (catch the error)*\nThe method above has a high likelyhood of missing an occetion to check ... There is a python build in way to deal with expected errors...\ndef func(name):\n try:\n ddb = session.resource(service_name=\"dynamodb\")\n table = ddb.Table(\"TABLE_X\")\n\n response = table.get_item(Key={\"employee\": user})\n data = response[\"Item\"]\n\n for item in data.items():\n if data[\"employee\"] == name:\n manager = data[\"manager\"]\n return name, manager\n return False\n except KeyError as e:\n print('User could not be found. Original error:', e.message)\n return False\n\n"
] | [
0
] | [] | [] | [
"amazon_dynamodb",
"python"
] | stackoverflow_0074625105_amazon_dynamodb_python.txt |
Q:
How to pass a variable from one function to another function
I am making a Python-based email broadcasting tool in which I have created entries such as email and password. There is also a CSV browse button which opens an email-list file, and a submit button which calls a send-mail function to send bulk email along with an attachment. The problem is that when browse grabs the emails from the CSV it stores them in a variable and returns from the function, but when I call this variable in the send-mail function it is not available there. The same happens with the attachment function: its value never reaches the send-mail function either.
I have tried using global, and also
newvar = browse()
and then calling newvar, but this calls the whole function again and pops up a new window to open another file, which does not make sense.
Help me out, guys.
from tkinter import *
import tkinter.messagebox as msg
import smtplib as smtp
import csv
from itertools import chain
#browse function which stores value from csv file
def browse():
from itertools import chain
file_path=filedialog.askopenfilename(title="Open CSV file")
with open(file_path) as csvfile:
read = csv.reader(csvfile)
for row in read:
ini_list.append(row)
flatten_list = list(chain.from_iterable(ini_list))
rcvr_emails =list(flatten_list)
# print(rcvr_emails)
file_label = Label(window,text=file_path, border=0, bg='#BAE1E3',font="inter 10", fg="grey").place(x=330,y=230)
recemail = rcvr_emails
#what i want is submit function to grab a variable from browse function as email list
def submit():
try:
email = login_email.get()
pass_word = login_pass.get()
subject = email_subject.get()
body = email_body.get()
server = smtp.SMTP("smtp.gmail.com",587)
server.starttls()
server.ehlo()
server.login(email,pass_word)
massage = "subject:{}\n\n{}".format(subject,body)
server.sendmail(email,recemail,massage)
server.quit()
msg.showinfo("Status","Mails have been sent to the Targatted Email's List.\nThank You for using our services.")
except:
msg.showwarning("ERROR","SMTP API could not login the credentials,\nPlease check Email & Password then try again.")
A:
Just return recemail from the browse function, then pass it as an argument to the submit function:
def browse():
from itertools import chain
file_path=filedialog.askopenfilename(title="Open CSV file")
with open(file_path) as csvfile:
read = csv.reader(csvfile)
for row in read:
ini_list.append(row)
flatten_list = list(chain.from_iterable(ini_list))
rcvr_emails =list(flatten_list)
# print(rcvr_emails)
file_label = Label(window,text=file_path, border=0, bg='#BAE1E3',font="inter 10", fg="grey").place(x=330,y=230)
recemail = rcvr_emails
return recemail
def submit(email_list):
    # your code
Then in your main program:
received_email = browse()
submit(received_email)
Or in one line:
submit(browse())
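In a Tkinter GUI the two calls usually happen on different button clicks, so one common pattern (a sketch building on the question's imports, browse() and the answer's submit(email_list); the window and widget names are assumptions) is to keep the browsed list in a small piece of shared state:
# Shared state between the two button callbacks
state = {"emails": []}

def on_browse():
    state["emails"] = browse()              # store the returned list

def on_submit():
    if not state["emails"]:
        msg.showwarning("ERROR", "Please browse a CSV file first.")
        return
    submit(state["emails"])                 # pass the list on explicitly

browse_btn = Button(window, text="Browse", command=on_browse)
submit_btn = Button(window, text="Send", command=on_submit)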
| How to pass a variable from one function to another function | I am making python based Email broadcasting in which i have created entries like email, pass, there is csv browse as well which will brose a Email_list_container file and a submit button which will call a send mail function to send bulk email along with attachment, problem is when browse is used to grab emails from csv it stores to a variable and then return to function but when I call this variable in send mail function is does not allow me to use it there. same with attachment function is is not coming in send mail function either.
i have tried Global
newvar = browse()
and calling new var but this calls whole function to pop-up again new window to open another file which does not make any sense.
help me guys.
from tkinter import *
import tkinter.messagebox as msg
import smtplib as smtp
import csv
from itertools import chain
#browse function which stores value from csv file
def browse():
from itertools import chain
file_path=filedialog.askopenfilename(title="Open CSV file")
with open(file_path) as csvfile:
read = csv.reader(csvfile)
for row in read:
ini_list.append(row)
flatten_list = list(chain.from_iterable(ini_list))
rcvr_emails =list(flatten_list)
# print(rcvr_emails)
file_label = Label(window,text=file_path, border=0, bg='#BAE1E3',font="inter 10", fg="grey").place(x=330,y=230)
recemail = rcvr_emails
#what i want is submit function to grab a variable from browse function as email list
def submit():
try:
email = login_email.get()
pass_word = login_pass.get()
subject = email_subject.get()
body = email_body.get()
server = smtp.SMTP("smtp.gmail.com",587)
server.starttls()
server.ehlo()
server.login(email,pass_word)
massage = "subject:{}\n\n{}".format(subject,body)
server.sendmail(email,recemail,massage)
server.quit()
msg.showinfo("Status","Mails have been sent to the Targatted Email's List.\nThank You for using our services.")
except:
msg.showwarning("ERROR","SMTP API could not login the credentials,\nPlease check Email & Password then try again.")
| [
"Just return recemail from browse function then pass it as argument to submit function:\n def browse():\n from itertools import chain\n file_path=filedialog.askopenfilename(title=\"Open CSV file\")\n with open(file_path) as csvfile:\n read = csv.reader(csvfile)\n for row in read:\n ini_list.append(row)\n flatten_list = list(chain.from_iterable(ini_list))\n rcvr_emails =list(flatten_list)\n # print(rcvr_emails)\n file_label = Label(window,text=file_path, border=0, bg='#BAE1E3',font=\"inter 10\", fg=\"grey\").place(x=330,y=230)\n recemail = rcvr_emails\n return recemail\ndef submit(email_list):\n// your code\n\nThen in your main program:\nreceived_email = browse()\n\nsubmit(received_email)\n\nOr in one line:\nsubmit(browse())\n\n"
] | [
1
] | [] | [] | [
"function",
"python"
] | stackoverflow_0074618553_function_python.txt |
Q:
Selenium TimeoutException: Message using selenium
from selenium import webdriver
import time
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
import pandas as pd
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1920x1080")
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
URL = 'https://gemelnet.cma.gov.il/views/dafmakdim.aspx'
driver.get(URL)
time.sleep(2)
review=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//input[@id='knisa']")))
review.click()
table=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='Aaaa89455bbfe4387b92529246ea52dc6114']//font"))).text()
print(table)
I am trying to extract the table, but I get raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message:. How do I solve this error? Any recommendation is welcome.
Kindly tell me what mistake I am making. This is the page link: https://gemelnet.cma.gov.il/views/dafmakdim.aspx
table
A:
There are several issues you need to improve here:
The Aaaa89455bbfe4387b92529246ea52dc6114 class you are trying to use is a dynamically generated value, so it can't be used as a locator.
For the first element you click to enter the system, you should wait for element clickability, not only visibility. These conditions are almost similar, but since you are going to click the element, clickability should be checked. Visibility is normally used when we are going to extract the text from that element.
No need to add time.sleep(2) before review=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//input[@id='knisa']")))
You can apply click directly on the web element object returned by WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//input[@id='knisa']"))), no need to store it into review temporary variable.
The table you are trying to print initially shows "Loading" content. So, to overcome this problem, I wait for one of its columns to appear, add some more delay, and then get the entire table text.
Not ideal, but the following works:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://gemelnet.cma.gov.il/views/dafmakdim.aspx"
driver.get(url)
wait.until(EC.element_to_be_clickable((By.XPATH, "//input[@id='knisa']"))).click()
wait.until(EC.visibility_of_element_located((By.XPATH, "//td[contains(.,'קרנות השתלמות')]")))
time.sleep(2)
table = wait.until(EC.visibility_of_element_located((By.XPATH, "//table[@id='ReportViewer1_fixedTable']"))).text
print(table)
The output is:
30/11/2022
(30/11/2022)
סה"כ נכסי הקופות - לפי סוג קופה
(במיליוני ש"ח)
נכון לסוף אוקטובר 2022
תשואה שנתית
סה"כ נכסים
קופ"ג להשקעה- חסכון לילד
קופ"ג להשקעה
מטרה אחרת
מרכזית לפיצויים
קרנות השתלמות
תגמולים ואישית לפיצויים
שנת דיווח
---
648,227
14,480
34,775
946
10,806
303,352
283,869
2022
12.33%
688,304
14,441
34,409
1,043
12,370
321,477
304,565
2021
4.58%
579,438
10,997
20,172
950
12,022
272,631
262,666
2020
11.77%
511,987
---
---
933
13,463
250,174
247,416
2019
התשואה הממוצעת בענף קופות גמל
-6.66%
ב- 12 חודשים האחרונים עמדה על
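Since the question already imports pandas, the rendered table can also be parsed into a DataFrame instead of printing raw text. A sketch (pandas needs an HTML parser such as lxml or bs4 installed, and picking the largest table on the page is just a heuristic):
import pandas as pd

# Parse every <table> in the rendered page source.
tables = pd.read_html(driver.page_source)
df = max(tables, key=len)      # the report is the biggest table on the page
print(df.head())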
| Selenium TimeoutException: Message using selenium | from selenium import webdriver
import time
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
import pandas as pd
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1920x1080")
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
URL = 'https://gemelnet.cma.gov.il/views/dafmakdim.aspx'
driver.get(URL)
time.sleep(2)
review=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//input[@id='knisa']")))
review.click()
table=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='Aaaa89455bbfe4387b92529246ea52dc6114']//font"))).text()
print(table)
I am trying to extract the table but they give me raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message: how I solve these error any recommendation.
Kindly tell me what mistake I will be doing this is page link https://gemelnet.cma.gov.il/views/dafmakdim.aspx
table
| [
"There are several issues you need to improve here:\n\nThe Aaaa89455bbfe4387b92529246ea52dc6114 class you trying to use is a dynamic value. This can't be used as a locator.\nThe first element you clicking to enter the system - you should wait for element clickability, not only visibility. These conditions are almost similar, but since you are going to click the element clickability should be checked. Visibility is normally used when we are going to extract the text form that element.\nNo need to add time.sleep(2) before review=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, \"//input[@id='knisa']\")))\nYou can apply click directly on the web element object returned by WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, \"//input[@id='knisa']\"))), no need to store it into review temporary variable.\nThe table you trying to print initially presents \"Loading\" content. So, to overcome this problem I'm waiting for one of it columns to appear , add some more delay and then get the entire table text.\n\nNot ideally, but the following is worked:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://gemelnet.cma.gov.il/views/dafmakdim.aspx\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//input[@id='knisa']\"))).click()\nwait.until(EC.visibility_of_element_located((By.XPATH, \"//td[contains(.,'קרנות השתלמות')]\")))\ntime.sleep(2)\ntable = wait.until(EC.visibility_of_element_located((By.XPATH, \"//table[@id='ReportViewer1_fixedTable']\"))).text\nprint(table)\n\nThe output is:\n30/11/2022\n(30/11/2022)\nסה\"כ נכסי הקופות - לפי סוג קופה\n(במיליוני ש\"ח)\nנכון לסוף אוקטובר 2022\nתשואה שנתית\nסה\"כ נכסים\nקופ\"ג להשקעה- חסכון לילד\nקופ\"ג להשקעה\nמטרה אחרת\nמרכזית לפיצויים\nקרנות השתלמות\nתגמולים ואישית לפיצויים\nשנת דיווח\n --- \n648,227\n14,480\n34,775\n946\n10,806\n303,352\n283,869\n2022\n12.33%\n688,304\n14,441\n34,409\n1,043\n12,370\n321,477\n304,565\n2021\n4.58%\n579,438\n10,997\n20,172\n950\n12,022\n272,631\n262,666\n2020\n11.77%\n511,987\n---\n---\n933\n13,463\n250,174\n247,416\n2019\nהתשואה הממוצעת בענף קופות גמל\n-6.66%\nב- 12 חודשים האחרונים עמדה על\n \n\n\n"
] | [
0
] | [] | [] | [
"python",
"selenium",
"web_scraping",
"webdriverwait",
"xpath"
] | stackoverflow_0074625443_python_selenium_web_scraping_webdriverwait_xpath.txt |
Q:
How to import GDAL embedded in new Fiona wheels
Since october 2022, Fiona's wheels include GDAL (according to the releases documentations). Many packages refer to GDAL using this command, but it won't work :
from osgeo import gdal
For instance, I've just loaded an environment using poetry (and python 3.9.15 on Linux) :
poetry new dummy
cd dummy
poetry add geopandas richdem
Calling this script will throw an error:
import richdem as rd
dem = rd.LoadGDAL("dummy")
Exception: richdem.LoadGDAL() requires GDAL.
I don't think this trouble arises specifically from richdem; it will affect any third-party package using from osgeo import gdal. Indeed, this is the import executed in richdem's __init__.py file.
I also tried to load Fiona at first (as it patches GDAL environment variables at launch), but it doesn't change anything.
Note :
If I now install gdal (same version as the one included in my 1.8.22 Fiona, ie. gdal 3.4.3), then :
rd.LoadGDAL("dummy") triggers a correct RuntimeError: dummy: No such file or directory
rd.LoadGDAL("a/real/tif_file.tif") triggers a ModuleNotFoundError: No module named '_gdal_array'
I think this is a distinct problem; besides, I now have two (same) distributions of GDAL on my pc, which can't be good.
So the question is: how could I fix from osgeo import gdal for third-party packages calling for GDAL (if GDAL is embedded in Fiona)?
A:
Long story short : you can't simply switch from osgeo to Fiona.
In fact, Fiona doesn't includes GDAL python package (ie. the GDAL python bindings) but the shared library :
From Sean Gillies :
Fiona's wheels contain a GDAL shared library (libgdal.dll or .so or .dylib) and its own library dependencies (libproj, etc). By GDAL, we mean the C library.
Beware also of trying to use both GDAL (ie. python bindings and Fiona) at the same time :
From Sean Gillies :
It's not a great solution, I agree, and not only because you'll have two copies of libgdal. The Fiona project is not promoting it as a solution at all. In https://rasterio.readthedocs.io/en/latest/topics/switch.html#mutual-incompatibilities, I've warned users against this and I should probably do the same for Fiona to make it more clear.
More detailed info on fiona.groups.io; following that discussion, an issue was opened on github Fiona's repo.
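If you nevertheless install the GDAL Python bindings next to Fiona (with the caveats quoted above), it helps to pin them to the libgdal version bundled in the wheel. A sketch, assuming your Fiona release exposes __gdal_version__ (recent wheels do):
import fiona

# e.g. "3.4.3"; then install matching bindings with: pip install "gdal==3.4.3"
print(fiona.__gdal_version__)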
| How to import GDAL embedded in new Fiona wheels | Since october 2022, Fiona's wheels include GDAL (according to the releases documentations). Many packages refer to GDAL using this command, but it won't work :
from osgeo import gdal
For instance, I've just loaded an environment using poetry (and python 3.9.15 on Linux) :
poetry new dummy
cd dummy
poetry add geopandas richdem
Calling this script will throw an error:
import richdem as rd
dem = rd.LoadGDAL("dummy")
Exception: richdem.LoadGDAL() requires GDAL.
I don't think this troubles arises specifically from richdem, but it will be linked to any third package using from osgeo import gdal. Indeed, this is the command which is loaded in richdem's __init__.py file.
I also tried to load Fiona at first (as it patches GDAL environment variables at launch), but it doesn't change anything.
Note :
If I now install gdal (same version as the one included in my 1.8.22 Fiona, ie. gdal 3.4.3), then :
rd.LoadGDAL("dummy") triggers a correct RuntimeError: dummy: No such file or directory
rd.LoadGDAL("a/real/tif_file.tif") triggers a ModuleNotFoundError: No module named '_gdal_array'
I think this is a distinct problem; besides, I now have two (same) distributions of GDAL on my pc, which can't be good.
So the question is : how could I fix the from osgeo import gdal from third party packages calling for GDAL (if GDAL is embedded in Fiona) ?
| [
"Long story short : you can't simply switch from osgeo to Fiona.\nIn fact, Fiona doesn't includes GDAL python package (ie. the GDAL python bindings) but the shared library :\nFrom Sean Gillies :\n\nFiona's wheels contain a GDAL shared library (libgdal.dll or .so or .dylib) and its own library dependencies (libproj, etc). By GDAL, we mean the C library.\n\nBeware also of trying to use both GDAL (ie. python bindings and Fiona) at the same time :\nFrom Sean Gillies :\n\nIt's not a great solution, I agree, and not only because you'll have two copies of libgdal. The Fiona project is not promoting it as a solution at all. In https://rasterio.readthedocs.io/en/latest/topics/switch.html#mutual-incompatibilities, I've warned users against this and I should probably do the same for Fiona to make it more clear.\n\nMore detailed info on fiona.groups.io; following that discussion, an issue was opened on github Fiona's repo.\n"
] | [
0
] | [] | [] | [
"fiona",
"gdal",
"python"
] | stackoverflow_0074559182_fiona_gdal_python.txt |
Q:
ERROR in CNN Pytorch; shape '[-1, 192]' is invalid for input of size 300000
I want to change the kernel size to 3 and the output channels of the convolutional layers to 8 and 16, respectively. But when I try to change them I get an error message. The following code works fine, but when I change the kernel size and output channels like this:
self.conv1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)
self.conv2 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3)
self.fc1 = nn.Linear(in_features=16*2*2, out_features=128)
it generates an error for an invalid input size.
working code
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
self.fc1 = nn.Linear(in_features=12*4*4,out_features=128)
self.fc2 = nn.Linear(in_features=128,out_features=64)
self.out = nn.Linear(in_features=64,out_features=10)
def forward(self,x):
#input layer
x = x
#first hidden layer
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#second hidden layer
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#third hidden layer
x = x.reshape(-1,12*4*4)
x = self.fc1(x)
x = F.relu(x)
#fourth hidden layer
x = self.fc2(x)
x = F.relu(x)
#output layer
x = self.out(x)
return x
batch_size = 1000
train_dataset = FashionMNIST(
'../data', train=True, download=True,
transform=transforms.ToTensor())
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dataset = FashionMNIST(
'../data', train=False, download=True,
transform=transforms.ToTensor())
testloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
model = Network()
losses = []
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
epochs = 1
for i in range(epochs):
batch_loss = []
for j, (data, targets) in enumerate(trainloader):
optimizer.zero_grad()
ypred = model(data)
loss = criterion(ypred, targets.reshape(-1))
loss.backward()
optimizer.step()
batch_loss.append(loss.item())
if i>10:
optimizer.lr = 0.0005
losses .append(sum(batch_loss) / len(batch_loss))
print('Epoch {}:\tloss {:.4f}'.format(i, losses [-1]))
A:
By changing your kernel size and output size in intermediate filters, you also change the size of your intermediate activations.
I suppose your input data is of size (1,28,28) (the usual size for FashionMNIST).
In your original code, before the layer self.fc1, after two 2D convolutional layers and two maxpools, the shape of your activations will be (12, 4, 4). However, if you change your kernel size to 3 and the output channels of the convolutional layers to 8 and 16, this shape will change. It will now be (16, 5, 5). Thus, you have to change your network. Try the following:
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
self.conv1 = nn.Conv2d(in_channels=1,out_channels=8,kernel_size=3)
self.conv2 = nn.Conv2d(in_channels=8,out_channels=16,kernel_size=3)
self.fc1 = nn.Linear(in_features=16*5*5,out_features=128)
self.fc2 = nn.Linear(in_features=128,out_features=64)
self.out = nn.Linear(in_features=64,out_features=10)
def forward(self,x):
#input layer
x = x
#first hidden layer
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#second hidden layer
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#third hidden layer
x = x.reshape(-1,16*5*5)
x = self.fc1(x)
x = F.relu(x)
#fourth hidden layer
x = self.fc2(x)
x = F.relu(x)
#output layer
x = self.out(x)
return x
Check Pytorch's documentation for the Conv2D and Maxpool layers.
The output size after a Conv2D layer is:
H_out = ⌊[H_in + 2*padding[0] − dilation[0]*(kernel_size[0]−1)−1]/stride[0] +1⌋
W_out = ⌊[W_in + 2*padding[1] - dilation[1]*(kernel_size[1]-1)-1]/stride[1] +1⌋
As you use the default values, the output size after the first convolutional layer will be:
H_out = W_out = 28+0-2-1+1=26
The maxpool following will divide this size by 2, and after the second convolutional layer the size will be:
13+0-2-1+1=11
The second maxpool will divide this by 2 again, taking the floor value, which is 5. Thus, the output shape after the second layer will be (n, 16, 5, 5). Before the first fully connected layer, this has to be flattened. This is why the number of input features of self.fc1 is 16*5*5.
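Rather than redoing this arithmetic by hand every time a kernel size changes, one sketch is to push a dummy batch through the convolutional part once and read the flattened size off the result (the (1, 28, 28) FashionMNIST input shape is assumed, as above):
import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(1, 8, kernel_size=3)
conv2 = nn.Conv2d(8, 16, kernel_size=3)

with torch.no_grad():
    x = torch.zeros(1, 1, 28, 28)                  # dummy FashionMNIST image
    x = F.max_pool2d(F.relu(conv1(x)), 2, 2)
    x = F.max_pool2d(F.relu(conv2(x)), 2, 2)

flat_features = x.numel()                          # 16 * 5 * 5 = 400
fc1 = nn.Linear(in_features=flat_features, out_features=128)
print(x.shape, flat_features)                      # torch.Size([1, 16, 5, 5]) 400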
| ERROR in CNN Pytorch; shape '[-1, 192]' is invalid for input of size 300000 | I want to change kernal size to 3, output channels of convolutional layers to 8 and 16 respectively. But when i try to change it i get an error message The following code is working fine but when I change kernal size and output channels like this:
self.conv1 = nn.Conv2d(in_channels=1,out_channels=**8**,kernel_size=**3**)
self.conv2 = nn.Conv2d(in_channels=**8**,out_channels=**16**,kernel_size=**3**)
self.fc1 = nn.Linear(in_features=**16*2*2**,out_features=128)
It generate an error for invalid input size.
working code
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
self.fc1 = nn.Linear(in_features=12*4*4,out_features=128)
self.fc2 = nn.Linear(in_features=128,out_features=64)
self.out = nn.Linear(in_features=64,out_features=10)
def forward(self,x):
#input layer
x = x
#first hidden layer
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#second hidden layer
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#third hidden layer
x = x.reshape(-1,12*4*4)
x = self.fc1(x)
x = F.relu(x)
#fourth hidden layer
x = self.fc2(x)
x = F.relu(x)
#output layer
x = self.out(x)
return x
batch_size = 1000
train_dataset = FashionMNIST(
'../data', train=True, download=True,
transform=transforms.ToTensor())
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dataset = FashionMNIST(
'../data', train=False, download=True,
transform=transforms.ToTensor())
testloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
model = Network()
losses = []
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
epochs = 1
for i in range(epochs):
batch_loss = []
for j, (data, targets) in enumerate(trainloader):
optimizer.zero_grad()
ypred = model(data)
loss = criterion(ypred, targets.reshape(-1))
loss.backward()
optimizer.step()
batch_loss.append(loss.item())
if i>10:
optimizer.lr = 0.0005
losses .append(sum(batch_loss) / len(batch_loss))
print('Epoch {}:\tloss {:.4f}'.format(i, losses [-1]))
| [
"By changing your kernel size and output size in intermediate filters, you also change the size of your intermediate activations.\nI suppose your input data is of size (1,28,28) (the usual size for FashionMNIST).\nIn your original code, before the layer self.fc1, after two 2D convolutionnal layers and two maxpools, the shape of your activations will be (12, 4, 4). However, if you change your kernel size to 3 and output channels of convolutional layers to 8 and 16, this shape will change. It will now be (16, 5, 5). Thus, you have to change your network. Try the following:\nclass Network(nn.Module):\n def __init__(self):\n super(Network,self).__init__()\n self.conv1 = nn.Conv2d(in_channels=1,out_channels=8,kernel_size=3)\n self.conv2 = nn.Conv2d(in_channels=8,out_channels=16,kernel_size=3)\n self.fc1 = nn.Linear(in_features=16*5*5,out_features=128)\n self.fc2 = nn.Linear(in_features=128,out_features=64)\n self.out = nn.Linear(in_features=64,out_features=10)\n \n def forward(self,x):\n #input layer\n x = x\n \n #first hidden layer\n x = self.conv1(x)\n x = F.relu(x)\n x = F.max_pool2d(x,kernel_size=2,stride=2)\n \n #second hidden layer\n x = self.conv2(x)\n x = F.relu(x)\n x = F.max_pool2d(x,kernel_size=2,stride=2)\n \n #third hidden layer\n x = x.reshape(-1,16*5*5)\n x = self.fc1(x)\n x = F.relu(x)\n \n #fourth hidden layer\n x = self.fc2(x)\n x = F.relu(x)\n\n #output layer\n x = self.out(x)\n return x\n\nCheck Pytorch's documentation for the Conv2D and Maxpool layers.\nThe output size after a Conv2D layer is:\nH_out = ⌊[H_in + 2*padding[0] − dilation[0]*(kernel_size[0]−1)−1]/stride[0] +1⌋\n\nW_out = ⌊[W_in + 2*padding[1] - dilation[1]*(kernel_size[1]-1)-1]/stride[1] +1⌋\n\nAs you use the default values, the output size after the first convolutionnal layer will be :\nH_out = W_out = 28+0-2-1+1=26\n\nThe maxpool following will divide this size by 2, and after the second convolutionnal layer the size will be:\n13+0-2-1+1=11\n\nThe second maxpool will divide this by 2 again, taking the floor value, which is 5. Thus, the output shape after the second layer will be (n, 16, 5, 5). Before the first fully connected layer, this has to be flattened. This is why the input features of self.fc1 is 16*5*5.\n"
] | [
0
] | [] | [] | [
"conv_neural_network",
"python",
"pytorch"
] | stackoverflow_0074625420_conv_neural_network_python_pytorch.txt |
Q:
Get number from textbox
I have a textbox which is displayed in my window. I want to get the number from this textbox (inputted from the user) and use it for calculations
n=Text(window,width=6,height=2,bg="white").place(x=20,y=80)
num1=n.get(1.0,END)
A:
Try
num1=n.get("1.0","end")
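Note that in the question's snippet n is actually None, because .place() returns None rather than the widget. A likely additional fix (a sketch) is to create and place the Text widget in two steps and convert the returned string before calculating:
# Create the widget first, then place it; chaining .place() would leave n unusable.
n = Text(window, width=6, height=2, bg="white")
n.place(x=20, y=80)

num1 = float(n.get("1.0", "end").strip())   # the widget returns a string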
| Get number from textbox | I have a textbox which is displayed in my window. I want to get the number from this textbox (inputted from the user) and use it for calculations
n=Text(window,width=6,height=2,bg="white").place(x=20,y=80)
num1=n.get(1.0,END)
| [
"Try\nnum1=n.get(\"1.0\",\"end\")\n"
] | [
1
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0074625727_python_tkinter.txt |
Q:
scikit learn output metrics.classification_report into CSV/tab-delimited format
I'm doing a multiclass text classification in Scikit-Learn. The dataset is being trained using the Multinomial Naive Bayes classifier having hundreds of labels. Here's an extract from the Scikit Learn script for fitting the MNB model
from __future__ import print_function
# Read **`file.csv`** into a pandas DataFrame
import pandas as pd
path = 'data/file.csv'
merged = pd.read_csv(path, error_bad_lines=False, low_memory=False)
# define X and y using the original DataFrame
X = merged.text
y = merged.grid
# split X and y into training and testing sets;
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# import and instantiate CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
# create document-term matrices using CountVectorizer
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(X_test)
# import and instantiate MultinomialNB
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
# fit a Multinomial Naive Bayes model
nb.fit(X_train_dtm, y_train)
# make class predictions
y_pred_class = nb.predict(X_test_dtm)
# generate classification report
from sklearn import metrics
print(metrics.classification_report(y_test, y_pred_class))
And a simplified output of the metrics.classification_report on command line screen looks like this:
precision recall f1-score support
12 0.84 0.48 0.61 2843
13 0.00 0.00 0.00 69
15 1.00 0.19 0.32 232
16 0.75 0.02 0.05 965
33 1.00 0.04 0.07 155
4 0.59 0.34 0.43 5600
41 0.63 0.49 0.55 6218
42 0.00 0.00 0.00 102
49 0.00 0.00 0.00 11
5 0.90 0.06 0.12 2010
50 0.00 0.00 0.00 5
51 0.96 0.07 0.13 1267
58 1.00 0.01 0.02 180
59 0.37 0.80 0.51 8127
7 0.91 0.05 0.10 579
8 0.50 0.56 0.53 7555
avg/total 0.59 0.48 0.45 35919
I was wondering if there was any way to get the report output into a standard csv file with regular column headers
When I send the command line output into a csv file or try to copy/paste the screen output into a spreadsheet - Openoffice Calc or Excel, It lumps the results in one column. Looking like this:
A:
As of scikit-learn v0.20, the easiest way to convert a classification report to a pandas Dataframe is by simply having the report returned as a dict:
report = classification_report(y_test, y_pred, output_dict=True)
and then construct a Dataframe and transpose it:
df = pandas.DataFrame(report).transpose()
From here on, you are free to use the standard pandas methods to generate your desired output formats (CSV, HTML, LaTeX, ...).
See the documentation.
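For instance, writing it out as a comma- or tab-delimited file is then a one-liner:
df.to_csv("classification_report.csv", index=True)
# or tab-delimited:
df.to_csv("classification_report.tsv", sep="\t", index=True)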
A:
If you want the individual scores this should do the job just fine.
import pandas as pd
def classification_report_csv(report):
report_data = []
lines = report.split('\n')
for line in lines[2:-3]:
row = {}
row_data = line.split(' ')
row['class'] = row_data[0]
row['precision'] = float(row_data[1])
row['recall'] = float(row_data[2])
row['f1_score'] = float(row_data[3])
row['support'] = float(row_data[4])
report_data.append(row)
dataframe = pd.DataFrame.from_dict(report_data)
dataframe.to_csv('classification_report.csv', index = False)
report = classification_report(y_true, y_pred)
classification_report_csv(report)
A:
Just import pandas as pd and make sure that you set the output_dict parameter which by default is False to True when computing the classification_report. This will result in an classification_report dictionary which you can then pass to a pandas DataFrame method. You may want to transpose the resulting DataFrame to fit the fit the output format that you want. The resulting DataFrame may then be written to a csv file as you wish.
clsf_report = pd.DataFrame(classification_report(y_true = your_y_true, y_pred = your_y_preds5, output_dict=True)).transpose()
clsf_report.to_csv('Your Classification Report Name.csv', index= True)
A:
We can get the actual values from the precision_recall_fscore_support function and then put them into data frames.
the below code will give the same result, but now in a pandas dataframe:
clf_rep = metrics.precision_recall_fscore_support(true, pred)
out_dict = {
"precision" :clf_rep[0].round(2)
,"recall" : clf_rep[1].round(2)
,"f1-score" : clf_rep[2].round(2)
,"support" : clf_rep[3]
}
out_df = pd.DataFrame(out_dict, index = nb.classes_)
avg_tot = (out_df.apply(lambda x: round(x.mean(), 2) if x.name!="support" else round(x.sum(), 2)).to_frame().T)
avg_tot.index = ["avg/total"]
out_df = out_df.append(avg_tot)
print(out_df)
A:
While the previous answers are probably all working I found them a bit verbose. The following stores the individual class results as well as the summary line in a single dataframe. Not very sensitive to changes in the report but did the trick for me.
#init snippet and fake data
from io import StringIO
import re
import pandas as pd
from sklearn import metrics
true_label = [1,1,2,2,3,3]
pred_label = [1,2,2,3,3,1]
def report_to_df(report):
report = re.sub(r" +", " ", report).replace("avg / total", "avg/total").replace("\n ", "\n")
report_df = pd.read_csv(StringIO("Classes" + report), sep=' ', index_col=0)
return(report_df)
#txt report to df
report = metrics.classification_report(true_label, pred_label)
report_df = report_to_df(report)
#store, print, copy...
print (report_df)
Which gives the desired output:
Classes precision recall f1-score support
1 0.5 0.5 0.5 2
2 0.5 0.5 0.5 2
3 0.5 0.5 0.5 2
avg/total 0.5 0.5 0.5 6
A:
It's obviously a better idea to just output the classification report as dict:
sklearn.metrics.classification_report(y_true, y_pred, output_dict=True)
But here's a function I made to convert all classes (only classes) results to a pandas dataframe.
def report_to_df(report):
report = [x.split(' ') for x in report.split('\n')]
header = ['Class Name']+[x for x in report[0] if x!='']
values = []
for row in report[1:-5]:
row = [value for value in row if value!='']
if row!=[]:
values.append(row)
df = pd.DataFrame(data = values, columns = header)
return df
A:
As mentioned in one of the posts here, precision_recall_fscore_support is analogous to classification_report.
Then it suffices to use pandas to easily format the data in a columnar format, similar to what classification_report does. Here is an example:
import numpy as np
import pandas as pd
from sklearn.metrics import classification_report
from sklearn.metrics import precision_recall_fscore_support
np.random.seed(0)
y_true = np.array([0]*400 + [1]*600)
y_pred = np.random.randint(2, size=1000)
def pandas_classification_report(y_true, y_pred):
metrics_summary = precision_recall_fscore_support(
y_true=y_true,
y_pred=y_pred)
avg = list(precision_recall_fscore_support(
y_true=y_true,
y_pred=y_pred,
average='weighted'))
metrics_sum_index = ['precision', 'recall', 'f1-score', 'support']
class_report_df = pd.DataFrame(
list(metrics_summary),
index=metrics_sum_index)
support = class_report_df.loc['support']
total = support.sum()
avg[-1] = total
class_report_df['avg / total'] = avg
return class_report_df.T
With classification_report You'll get something like:
print(classification_report(y_true=y_true, y_pred=y_pred, digits=6))
Output:
precision recall f1-score support
0 0.379032 0.470000 0.419643 400
1 0.579365 0.486667 0.528986 600
avg / total 0.499232 0.480000 0.485248 1000
Then with our custom function pandas_classification_report:
df_class_report = pandas_classification_report(y_true=y_true, y_pred=y_pred)
print(df_class_report)
Output:
precision recall f1-score support
0 0.379032 0.470000 0.419643 400.0
1 0.579365 0.486667 0.528986 600.0
avg / total 0.499232 0.480000 0.485248 1000.0
Then just save it to CSV format (refer here for other separator formatting options like sep=';'):
df_class_report.to_csv('my_csv_file.csv', sep=',')
I open my_csv_file.csv with LibreOffice Calc (although you could use any tabular/spreadsheet editor like excel):
A:
I also found some of the answers a bit verbose. Here is my three line solution, using precision_recall_fscore_support as others have suggested.
import pandas as pd
from sklearn.metrics import precision_recall_fscore_support
report = pd.DataFrame(list(precision_recall_fscore_support(y_true, y_pred)),
index=['Precision', 'Recall', 'F1-score', 'Support']).T
# Now add the 'Avg/Total' row
report.loc['Avg/Total', :] = precision_recall_fscore_support(y_true, y_pred,
average='weighted')
report.loc['Avg/Total', 'Support'] = report['Support'].sum()
A:
The simplest and best way I found is:
classes = ['class 1','class 2','class 3']
report = classification_report(Y[test], Y_pred, target_names=classes)
report_path = "report.txt"
text_file = open(report_path, "w")
n = text_file.write(report)
text_file.close()
A:
Another option is to calculate the underlying data and compose the report on your own. All the statistics you will get by
precision_recall_fscore_support
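A minimal sketch of that approach (assuming y_true and y_pred already exist; the column names and filename are just examples):
import pandas as pd
from sklearn.metrics import precision_recall_fscore_support

precision, recall, fscore, support = precision_recall_fscore_support(y_true, y_pred)
report = pd.DataFrame({
    'precision': precision,
    'recall': recall,
    'f1-score': fscore,
    'support': support,
})
report.to_csv('report.csv', index=False)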
A:
Here's another function, metrics_report_to_df(), along with example input and output. Implementing it with precision_recall_fscore_support from sklearn.metrics should do the job:
# Generates classification metrics using precision_recall_fscore_support:
from sklearn import metrics
import pandas as pd
import numpy as np; from numpy import random
# Simulating true and predicted labels as test dataset:
np.random.seed(10)
y_true = np.array([0]*300 + [1]*700)
y_pred = np.random.randint(2, size=1000)
# Here's the custom function returning classification report dataframe:
def metrics_report_to_df(ytrue, ypred):
precision, recall, fscore, support = metrics.precision_recall_fscore_support(ytrue, ypred)
classification_report = pd.concat(map(pd.DataFrame, [precision, recall, fscore, support]), axis=1)
classification_report.columns = ["precision", "recall", "f1-score", "support"] # Add row w "avg/total"
classification_report.loc['avg/Total', :] = metrics.precision_recall_fscore_support(ytrue, ypred, average='weighted')
classification_report.loc['avg/Total', 'support'] = classification_report['support'].sum()
return(classification_report)
# Provide input as true_label and predicted label (from classifier)
classification_report = metrics_report_to_df(y_true, y_pred)
# Here's the output (metrics report transformed to dataframe )
In [1047]: classification_report
Out[1047]:
precision recall f1-score support
0 0.300578 0.520000 0.380952 300.0
1 0.700624 0.481429 0.570703 700.0
avg/Total 0.580610 0.493000 0.513778 1000.0
A:
I have modified @kindjacket's answer.
Try this:
import collections
def classification_report_df(report):
report_data = []
lines = report.split('\n')
del lines[-5]
del lines[-1]
del lines[1]
for line in lines[1:]:
row = collections.OrderedDict()
row_data = line.split()
row_data = list(filter(None, row_data))
row['class'] = row_data[0] + " " + row_data[1]
row['precision'] = float(row_data[2])
row['recall'] = float(row_data[3])
row['f1_score'] = float(row_data[4])
row['support'] = int(row_data[5])
report_data.append(row)
df = pd.DataFrame.from_dict(report_data)
df.set_index('class', inplace=True)
return df
You can just export that df to csv using pandas
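For example (a minimal sketch; the filename is arbitrary):
df = classification_report_df(report)
df.to_csv('classification_report.csv')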
A:
The function below can be used to get the classification report as a pandas DataFrame, which can then be dumped as a csv file. The resulting csv file will look exactly like when we print the classification report.
import pandas as pd
from sklearn import metrics
def classification_report_df(y_true, y_pred):
report = metrics.classification_report(y_true, y_pred, output_dict=True)
df_report = pd.DataFrame(report).transpose()
df_report = df_report.round(3)
df_report = df_report.astype({'support': int})
df_report.loc['accuracy',['precision','recall','support']] = [None,None,df_report.loc['macro avg']['support']]
return df_report
report = classification_report_df(y_true, y_pred)
report.to_csv("<Full Path to Save CSV>")
A:
def to_table(report):
report = report.splitlines()
res = []
res.append(['']+report[0].split())
for row in report[2:-2]:
res.append(row.split())
lr = report[-1].split()
res.append([' '.join(lr[:3])]+lr[3:])
return np.array(res)
This returns a NumPy array, which can be turned into a pandas DataFrame or simply saved as a csv file.
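For example, a minimal sketch of turning the returned array into a DataFrame (the filename is arbitrary; report is the string returned by classification_report):
import numpy as np
import pandas as pd

table = to_table(report)
df = pd.DataFrame(table[1:], columns=table[0])
df.to_csv('classification_report.csv', index=False)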
A:
This is my code for 2-class (pos, neg) classification:
report = metrics.precision_recall_fscore_support(true_labels,predicted_labels,labels=classes)
rowDicionary["precision_pos"] = report[0][0]
rowDicionary["recall_pos"] = report[1][0]
rowDicionary["f1-score_pos"] = report[2][0]
rowDicionary["support_pos"] = report[3][0]
rowDicionary["precision_neg"] = report[0][1]
rowDicionary["recall_neg"] = report[1][1]
rowDicionary["f1-score_neg"] = report[2][1]
rowDicionary["support_neg"] = report[3][1]
writer = csv.DictWriter(file, fieldnames=fieldnames)
writer.writerow(rowDicionary)
A:
I have written the code below to extract the classification report and save it to an Excel file:
def classifcation_report_processing(model_to_report):
tmp = list()
for row in model_to_report.split("\n"):
parsed_row = [x for x in row.split(" ") if len(x) > 0]
if len(parsed_row) > 0:
tmp.append(parsed_row)
# Store in dictionary
measures = tmp[0]
D_class_data = defaultdict(dict)
for row in tmp[1:]:
class_label = row[0]
for j, m in enumerate(measures):
D_class_data[class_label][m.strip()] = float(row[j + 1].strip())
save_report = pd.DataFrame.from_dict(D_class_data).T
path_to_save = os.getcwd() +'/Classification_report.xlsx'
save_report.to_excel(path_to_save, index=True)
return save_report.head(5)
To call the function, the line below can be used anywhere in the program:
saving_CL_report_naive_bayes = classifcation_report_processing(classification_report(y_val, prediction))
The output looks like below:
A:
I had the same problem. What I did was paste the string output of metrics.classification_report into Google Sheets or Excel, and split the text into columns by a custom delimiter of 5 whitespaces.
A:
Definitely worth using:
sklearn.metrics.classification_report(y_true, y_pred, output_dict=True)
But a slightly revised version of the function by Yash Nag is as follows. The function includes the accuracy, macro accuracy and weighted accuracy rows along with the classes:
def classification_report_to_dataframe(str_representation_of_report):
split_string = [x.split(' ') for x in str_representation_of_report.split('\n')]
column_names = ['']+[x for x in split_string[0] if x!='']
values = []
for table_row in split_string[1:-1]:
table_row = [value for value in table_row if value!='']
if table_row!=[]:
values.append(table_row)
for i in values:
for j in range(len(i)):
if i[1] == 'avg':
i[0:2] = [' '.join(i[0:2])]
if len(i) == 3:
i.insert(1,np.nan)
i.insert(2, np.nan)
else:
pass
report_to_df = pd.DataFrame(data=values, columns=column_names)
return report_to_df
The output for a test classification report may be found here
| scikit learn output metrics.classification_report into CSV/tab-delimited format | I'm doing a multiclass text classification in Scikit-Learn. The dataset is being trained using the Multinomial Naive Bayes classifier having hundreds of labels. Here's an extract from the Scikit Learn script for fitting the MNB model
from __future__ import print_function
# Read **`file.csv`** into a pandas DataFrame
import pandas as pd
path = 'data/file.csv'
merged = pd.read_csv(path, error_bad_lines=False, low_memory=False)
# define X and y using the original DataFrame
X = merged.text
y = merged.grid
# split X and y into training and testing sets;
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# import and instantiate CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
# create document-term matrices using CountVectorizer
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(X_test)
# import and instantiate MultinomialNB
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
# fit a Multinomial Naive Bayes model
nb.fit(X_train_dtm, y_train)
# make class predictions
y_pred_class = nb.predict(X_test_dtm)
# generate classification report
from sklearn import metrics
print(metrics.classification_report(y_test, y_pred_class))
And a simplified output of the metrics.classification_report on command line screen looks like this:
precision recall f1-score support
12 0.84 0.48 0.61 2843
13 0.00 0.00 0.00 69
15 1.00 0.19 0.32 232
16 0.75 0.02 0.05 965
33 1.00 0.04 0.07 155
4 0.59 0.34 0.43 5600
41 0.63 0.49 0.55 6218
42 0.00 0.00 0.00 102
49 0.00 0.00 0.00 11
5 0.90 0.06 0.12 2010
50 0.00 0.00 0.00 5
51 0.96 0.07 0.13 1267
58 1.00 0.01 0.02 180
59 0.37 0.80 0.51 8127
7 0.91 0.05 0.10 579
8 0.50 0.56 0.53 7555
avg/total 0.59 0.48 0.45 35919
I was wondering if there was any way to get the report output into a standard csv file with regular column headers
When I send the command line output into a csv file or try to copy/paste the screen output into a spreadsheet - Openoffice Calc or Excel, It lumps the results in one column. Looking like this:
| [
"As of scikit-learn v0.20, the easiest way to convert a classification report to a pandas Dataframe is by simply having the report returned as a dict:\nreport = classification_report(y_test, y_pred, output_dict=True)\n\nand then construct a Dataframe and transpose it:\ndf = pandas.DataFrame(report).transpose()\n\nFrom here on, you are free to use the standard pandas methods to generate your desired output formats (CSV, HTML, LaTeX, ...).\nSee the documentation.\n",
"If you want the individual scores this should do the job just fine.\nimport pandas as pd\n\ndef classification_report_csv(report):\n report_data = []\n lines = report.split('\\n')\n for line in lines[2:-3]:\n row = {}\n row_data = line.split(' ')\n row['class'] = row_data[0]\n row['precision'] = float(row_data[1])\n row['recall'] = float(row_data[2])\n row['f1_score'] = float(row_data[3])\n row['support'] = float(row_data[4])\n report_data.append(row)\n dataframe = pd.DataFrame.from_dict(report_data)\n dataframe.to_csv('classification_report.csv', index = False)\n\nreport = classification_report(y_true, y_pred)\nclassification_report_csv(report)\n\n",
"Just import pandas as pd and make sure that you set the output_dict parameter which by default is False to True when computing the classification_report. This will result in an classification_report dictionary which you can then pass to a pandas DataFrame method. You may want to transpose the resulting DataFrame to fit the fit the output format that you want. The resulting DataFrame may then be written to a csv file as you wish.\nclsf_report = pd.DataFrame(classification_report(y_true = your_y_true, y_pred = your_y_preds5, output_dict=True)).transpose()\nclsf_report.to_csv('Your Classification Report Name.csv', index= True)\n\n",
"We can get the actual values from the precision_recall_fscore_support function and then put them into data frames.\nthe below code will give the same result, but now in a pandas dataframe:\nclf_rep = metrics.precision_recall_fscore_support(true, pred)\nout_dict = {\n \"precision\" :clf_rep[0].round(2)\n ,\"recall\" : clf_rep[1].round(2)\n ,\"f1-score\" : clf_rep[2].round(2)\n ,\"support\" : clf_rep[3]\n }\nout_df = pd.DataFrame(out_dict, index = nb.classes_)\navg_tot = (out_df.apply(lambda x: round(x.mean(), 2) if x.name!=\"support\" else round(x.sum(), 2)).to_frame().T)\navg_tot.index = [\"avg/total\"]\nout_df = out_df.append(avg_tot)\nprint out_df\n\n",
"While the previous answers are probably all working I found them a bit verbose. The following stores the individual class results as well as the summary line in a single dataframe. Not very sensitive to changes in the report but did the trick for me.\n#init snippet and fake data\nfrom io import StringIO\nimport re\nimport pandas as pd\nfrom sklearn import metrics\ntrue_label = [1,1,2,2,3,3]\npred_label = [1,2,2,3,3,1]\n\ndef report_to_df(report):\n report = re.sub(r\" +\", \" \", report).replace(\"avg / total\", \"avg/total\").replace(\"\\n \", \"\\n\")\n report_df = pd.read_csv(StringIO(\"Classes\" + report), sep=' ', index_col=0) \n return(report_df)\n\n#txt report to df\nreport = metrics.classification_report(true_label, pred_label)\nreport_df = report_to_df(report)\n\n#store, print, copy...\nprint (report_df)\n\nWhich gives the desired output:\nClasses precision recall f1-score support\n1 0.5 0.5 0.5 2\n2 0.5 0.5 0.5 2\n3 0.5 0.5 0.5 2\navg/total 0.5 0.5 0.5 6\n\n",
"It's obviously a better idea to just output the classification report as dict:\nsklearn.metrics.classification_report(y_true, y_pred, output_dict=True)\n\nBut here's a function I made to convert all classes (only classes) results to a pandas dataframe.\ndef report_to_df(report):\n report = [x.split(' ') for x in report.split('\\n')]\n header = ['Class Name']+[x for x in report[0] if x!='']\n values = []\n for row in report[1:-5]:\n row = [value for value in row if value!='']\n if row!=[]:\n values.append(row)\n df = pd.DataFrame(data = values, columns = header)\n return df\n\n",
"As mentioned in one of the posts in here, precision_recall_fscore_support is analogous to classification_report.\nThen it suffices to use pandas to easily format the data in a columnar format, similar to what classification_report does. Here is an example:\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import precision_recall_fscore_support\n\nnp.random.seed(0)\n\ny_true = np.array([0]*400 + [1]*600)\ny_pred = np.random.randint(2, size=1000)\n\ndef pandas_classification_report(y_true, y_pred):\n metrics_summary = precision_recall_fscore_support(\n y_true=y_true, \n y_pred=y_pred)\n \n avg = list(precision_recall_fscore_support(\n y_true=y_true, \n y_pred=y_pred,\n average='weighted'))\n\n metrics_sum_index = ['precision', 'recall', 'f1-score', 'support']\n class_report_df = pd.DataFrame(\n list(metrics_summary),\n index=metrics_sum_index)\n \n support = class_report_df.loc['support']\n total = support.sum() \n avg[-1] = total\n \n class_report_df['avg / total'] = avg\n\n return class_report_df.T\n\nWith classification_report You'll get something like:\nprint(classification_report(y_true=y_true, y_pred=y_pred, digits=6))\n\nOutput:\n precision recall f1-score support\n\n 0 0.379032 0.470000 0.419643 400\n 1 0.579365 0.486667 0.528986 600\n\navg / total 0.499232 0.480000 0.485248 1000\n\nThen with our custom funtion pandas_classification_report:\ndf_class_report = pandas_classification_report(y_true=y_true, y_pred=y_pred)\nprint(df_class_report)\n\nOutput:\n precision recall f1-score support\n0 0.379032 0.470000 0.419643 400.0\n1 0.579365 0.486667 0.528986 600.0\navg / total 0.499232 0.480000 0.485248 1000.0\n\nThen just save it to csv format (refer to here for other separator formating like sep=';'):\ndf_class_report.to_csv('my_csv_file.csv', sep=',')\n\nI open my_csv_file.csv with LibreOffice Calc (although you could use any tabular/spreadsheet editor like excel):\n\n",
"I also found some of the answers a bit verbose. Here is my three line solution, using precision_recall_fscore_support as others have suggested.\nimport pandas as pd\nfrom sklearn.metrics import precision_recall_fscore_support\n\nreport = pd.DataFrame(list(precision_recall_fscore_support(y_true, y_pred)),\n index=['Precision', 'Recall', 'F1-score', 'Support']).T\n\n# Now add the 'Avg/Total' row\nreport.loc['Avg/Total', :] = precision_recall_fscore_support(y_true, y_test,\n average='weighted')\nreport.loc['Avg/Total', 'Support'] = report['Support'].sum()\n\n",
"The simplest and best way I found is:\nclasses = ['class 1','class 2','class 3']\n\nreport = classification_report(Y[test], Y_pred, target_names=classes)\n\nreport_path = \"report.txt\"\n\ntext_file = open(report_path, \"w\")\nn = text_file.write(report)\ntext_file.close()\n\n",
"Another option is to calculate the underlying data and compose the report on your own. All the statistics you will get by\nprecision_recall_fscore_support\n\n",
"Along with example input-output, here's the other function metrics_report_to_df(). Implementing precision_recall_fscore_support from Sklearn metrics should do:\n# Generates classification metrics using precision_recall_fscore_support:\nfrom sklearn import metrics\nimport pandas as pd\nimport numpy as np; from numpy import random\n\n# Simulating true and predicted labels as test dataset: \nnp.random.seed(10)\ny_true = np.array([0]*300 + [1]*700)\ny_pred = np.random.randint(2, size=1000)\n\n# Here's the custom function returning classification report dataframe:\ndef metrics_report_to_df(ytrue, ypred):\n precision, recall, fscore, support = metrics.precision_recall_fscore_support(ytrue, ypred)\n classification_report = pd.concat(map(pd.DataFrame, [precision, recall, fscore, support]), axis=1)\n classification_report.columns = [\"precision\", \"recall\", \"f1-score\", \"support\"] # Add row w \"avg/total\"\n classification_report.loc['avg/Total', :] = metrics.precision_recall_fscore_support(ytrue, ypred, average='weighted')\n classification_report.loc['avg/Total', 'support'] = classification_report['support'].sum() \n return(classification_report)\n\n# Provide input as true_label and predicted label (from classifier)\nclassification_report = metrics_report_to_df(y_true, y_pred)\n\n# Here's the output (metrics report transformed to dataframe )\nIn [1047]: classification_report\nOut[1047]: \n precision recall f1-score support\n0 0.300578 0.520000 0.380952 300.0\n1 0.700624 0.481429 0.570703 700.0\navg/Total 0.580610 0.493000 0.513778 1000.0\n\n",
"I have modified @kindjacket's answer.\nTry this:\nimport collections\ndef classification_report_df(report):\n report_data = []\n lines = report.split('\\n')\n del lines[-5]\n del lines[-1]\n del lines[1]\n for line in lines[1:]:\n row = collections.OrderedDict()\n row_data = line.split()\n row_data = list(filter(None, row_data))\n row['class'] = row_data[0] + \" \" + row_data[1]\n row['precision'] = float(row_data[2])\n row['recall'] = float(row_data[3])\n row['f1_score'] = float(row_data[4])\n row['support'] = int(row_data[5])\n report_data.append(row)\n df = pd.DataFrame.from_dict(report_data)\n df.set_index('class', inplace=True)\n return df\n\nYou can just export that df to csv using pandas\n",
"Below function can be used to get the classification report as a pandas dataframe which then can be dumped as a csv file. The resulting csv file will look exactly like when we print the classification report.\nimport pandas as pd\nfrom sklearn import metrics\n\n\ndef classification_report_df(y_true, y_pred):\n report = metrics.classification_report(y_true, y_pred, output_dict=True)\n df_report = pd.DataFrame(report).transpose()\n df_report.round(3) \n df_report = df_report.astype({'support': int}) \n df_report.loc['accuracy',['precision','recall','support']] = [None,None,df_report.loc['macro avg']['support']]\n return df_report\n\n\nreport = classification_report_df(y_true, y_pred)\nreport.to_csv(\"<Full Path to Save CSV>\")\n\n",
"def to_table(report):\n report = report.splitlines()\n res = []\n res.append(['']+report[0].split())\n for row in report[2:-2]:\n res.append(row.split())\n lr = report[-1].split()\n res.append([' '.join(lr[:3])]+lr[3:])\n return np.array(res)\n\nreturns a numpy array which can be turned to pandas dataframe or just be saved as csv file.\n",
"This is my code for 2 classes(pos,neg) classification \nreport = metrics.precision_recall_fscore_support(true_labels,predicted_labels,labels=classes)\n rowDicionary[\"precision_pos\"] = report[0][0]\n rowDicionary[\"recall_pos\"] = report[1][0]\n rowDicionary[\"f1-score_pos\"] = report[2][0]\n rowDicionary[\"support_pos\"] = report[3][0]\n rowDicionary[\"precision_neg\"] = report[0][1]\n rowDicionary[\"recall_neg\"] = report[1][1]\n rowDicionary[\"f1-score_neg\"] = report[2][1]\n rowDicionary[\"support_neg\"] = report[3][1]\n writer = csv.DictWriter(file, fieldnames=fieldnames)\n writer.writerow(rowDicionary)\n\n",
"I have written below code to extract the classification report and save it to an excel file:\ndef classifcation_report_processing(model_to_report):\n tmp = list()\n for row in model_to_report.split(\"\\n\"):\n parsed_row = [x for x in row.split(\" \") if len(x) > 0]\n if len(parsed_row) > 0:\n tmp.append(parsed_row)\n\n # Store in dictionary\n measures = tmp[0]\n\n D_class_data = defaultdict(dict)\n for row in tmp[1:]:\n class_label = row[0]\n for j, m in enumerate(measures):\n D_class_data[class_label][m.strip()] = float(row[j + 1].strip())\n save_report = pd.DataFrame.from_dict(D_class_data).T\n path_to_save = os.getcwd() +'/Classification_report.xlsx'\n save_report.to_excel(path_to_save, index=True)\n return save_report.head(5)\n\nTo call the function below line can be used anywhere in the program:\nsaving_CL_report_naive_bayes = classifcation_report_processing(classification_report(y_val, prediction))\n\nThe output looks like below:\n\n",
"I had the same problem what i did was, paste the string output of metrics.classification_report into google sheets or excel and split the text into columns by custom 5 whitespaces.\n",
"Definitely worth using:\nsklearn.metrics.classification_report(y_true, y_pred, output_dict=True)\n\nBut a slightly revised version of the function by Yash Nag is as follows. The function includes the accuracy, macro accuracy and weighted accuracy rows along with the classes:\ndef classification_report_to_dataframe(str_representation_of_report):\n split_string = [x.split(' ') for x in str_representation_of_report.split('\\n')]\n column_names = ['']+[x for x in split_string[0] if x!='']\n values = []\n for table_row in split_string[1:-1]:\n table_row = [value for value in table_row if value!='']\n if table_row!=[]:\n values.append(table_row)\n for i in values:\n for j in range(len(i)):\n if i[1] == 'avg':\n i[0:2] = [' '.join(i[0:2])]\n if len(i) == 3:\n i.insert(1,np.nan)\n i.insert(2, np.nan)\n else:\n pass\n report_to_df = pd.DataFrame(data=values, columns=column_names)\n return report_to_df\n\nThe output for a test classification report may be found here\n"
] | [
110,
21,
13,
10,
6,
6,
4,
3,
3,
2,
2,
1,
1,
0,
0,
0,
0,
0
] | [
"The way I have always solved output problems is like what I've mentioned in my previous comment, I've converted my output to a DataFrame. Not only is it incredibly easy to send to files (see here), but Pandas is really easy to manipulate the data structure. The other way I have solved this is writing the output line-by-line using CSV and specifically using writerow.\nIf you manage to get the output into a dataframe it would be\ndataframe_name_here.to_csv()\n\nor if using CSV it would be something like the example they provide in the CSV link.\n"
] | [
-2
] | [
"classification",
"csv",
"python",
"scikit_learn",
"text"
] | stackoverflow_0039662398_classification_csv_python_scikit_learn_text.txt |
Q:
Running a Python function in BASH
I usually run Python on Google Colab, however I need to run a script in the terminal in Ubuntu.
I have the following script
test.py:
#!/usr/bin/env python
# testing a func
def hello(x):
if x > 5:
return "good"
else:
return "bad"
hello(2)
When executed it fails to return anything. Now I could just replace the return statements with a print statement. However, for other scripts I have, a return statement is needed.
I tried:
python test.py
You see, on Google Colab, I can simply call the function (hello(2)) and it will execute.
Desired output:
> python test.py
> bad
A:
You don't print anything to STDOUT so you won't see the good/bad in your terminal.
You should change the hello(2) line to print(hello(2)) in your code (in this case the return value of the hello(2) function call will be printed to the STDOUT file descriptor); then you will see your result in your terminal.
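For example, a minimal version of test.py with that change applied:
#!/usr/bin/env python

def hello(x):
    if x > 5:
        return "good"
    else:
        return "bad"

print(hello(2))  # running `python test.py` now prints "bad"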
A:
In case you want to send the argument when calling the script you could do it like this:
#!/usr/bin/env python
import sys
def hello(x):
if x > 5:
return "good"
else:
return "bad"
print(hello(int(sys.argv[1])))
And so you could call the function like so:
python test.py 6
Then the output would be:
> python test.py 6
> good
| Running a Python function in BASH | I usually run Python on Google Colab, however I need to run a script in the terminal in Ubuntu.
I have the following script
test.py:
#!/usr/bin/env python
# testing a func
def hello(x):
if x > 5:
return "good"
else:
return "bad"
hello(2)
When executed it fails to return anything. Now I could just replace the return statements with a print statement. However, for other scripts I have, a return statement is needed.
I tried:
python test.py
You see, on Google Colab, I can simply call the function (hello(2)) and it will execute.
Desired output:
> python test.py
> bad
| [
"You don't print anything to STDOUT so you won't see the good/bad in your terminal.\nYou should change hello(2) line to print(hello(2)) in your code (In this case the return value of hello(2) function call will be printed to STDOUT file descriptor) then you will see your result in your terminal.\n",
"In case you want to sent the argument when calling the script you could do it like this:\n#!/usr/bin/env python\nimport sys\n\ndef hello(x):\n if x > 5:\n return \"good\"\n else:\n return \"bad\"\n\n print(hello(int(sys.argv[1])))\n\nAnd so you could call the function like so:\npython test.py 6\n\nThen the output would be:\n> python test.py 6\n> good\n\n"
] | [
2,
0
] | [] | [] | [
"bash",
"python",
"terminal"
] | stackoverflow_0074624724_bash_python_terminal.txt |
Q:
Problem triggering nested dependencies in Azure Function
I have a problem using the videohash package for python when deployed to Azure function.
My deployed azure function does not seem to be able to use a nested dependency properly. Specifically, I am trying to use the package “videohash” and the function VideoHash from it. The
input to VideoHash is a SAS url token for a video placed on an Azure blob storage.
In the monitor of my output it prints:
Accessing the sas url token directly takes me to the video, so that part seems to be working.
Looking at the source code for videohash this error seems to occur in the process of downloading the video from a given url (link:
https://github.com/akamhy/videohash/blob/main/videohash/downloader.py).
.. where self.yt_dlp_path = str(which("yt-dlp")). This to me indicates, that after deploying the function, the package yt-dlp isn’t properly activated. This is a dependency from the videohash
module, but adding yt-dlp directly to the requirements file of the azure function also does not solve the issue.
Any ideas on what is happening?
I deployed the code to the Azure function, which resulted in the details highlighted in the issue description.
A:
I have a workaround where you download the video file on your own, instead of letting videohash do it, using azure.storage.blob
To download you will need a BlobServiceClient , ContainerClient and connection string of azure storage account.
Please create two files called v1.mp3 and v2.mp3 before downloading the video.
file structure:
Complete Code:
import logging
from videohash import VideoHash
import azure.functions as func
import subprocess
import tempfile
import os
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
def main(req: func.HttpRequest) -> func.HttpResponse:
# local file path on the server
local_path = tempfile.gettempdir()
filepath1 = os.path.join(local_path, "v1.mp3")
filepath2 = os.path.join(local_path,"v2.mp3")
# Reference to Blob Storage
client = BlobServiceClient.from_connection_string("<Connection String >")
# Reference to Container
container = client.get_container_client(container= "test")
# Downloading the file
with open(file=filepath1, mode="wb") as download_file:
download_file.write(container.download_blob("v1.mp3").readall())
with open(file=filepath2, mode="wb") as download_file:
download_file.write(container.download_blob("v2.mp3").readall())
# video hash code
videohash1 = VideoHash(path=filepath1)
videohash2 = VideoHash(path=filepath2)
t = videohash2.is_similar(videohash1)
return func.HttpResponse(f"Hello, {t}. This HTTP triggered function executed successfully.")
Output :
Now here I am getting an ffmpeg error which is related to my test file and not related to the error you are facing.
This workaround, as far as I know, will not affect performance, as in both scenarios you are downloading blobs anyway.
| Problem triggering nested dependencies in Azure Function | I have a problem using the videohash package for python when deployed to Azure function.
My deployed azure function does not seem to be able to use a nested dependency properly. Specifically, I am trying to use the package “videohash” and the function VideoHash from it. The
input to VideoHash is a SAS url token for a video placed on an Azure blob storage.
In the monitor of my output it prints:
Accessing the sas url token directly takes me to the video, so that part seems to be working.
Looking at the source code for videohash this error seems to occur in the process of downloading the video from a given url (link:
https://github.com/akamhy/videohash/blob/main/videohash/downloader.py).
.. where self.yt_dlp_path = str(which("yt-dlp")). This to me indicates, that after deploying the function, the package yt-dlp isn’t properly activated. This is a dependency from the videohash
module, but adding yt-dlp directly to the requirements file of the azure function also does not solve the issue.
Any ideas on what is happening?
Deploying code to Azure function, which resulted in the details highlighted in the issue description.
| [
"\nI have a work around where you download the video file on you own instead of the videohash using azure.storage.blob\n\nTo download you will need a BlobServiceClient , ContainerClient and connection string of azure storage account.\n\nPlease create two files called v1.mp3 and v2.mp3 before downloading the video.\n\n\nfile structure:\n\nComplete Code:\nimport logging\n\nfrom videohash import VideoHash\n\nimport azure.functions as func\n\nimport subprocess\n\nimport tempfile\n\nimport os\n\nfrom azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient\n\n \n \n\ndef main(req: func.HttpRequest) -> func.HttpResponse:\n\n # local file path on the server\n \n local_path = tempfile.gettempdir()\nfilepath1 = os.path.join(local_path, \"v1.mp3\")\n\nfilepath2 = os.path.join(local_path,\"v2.mp3\")\n\n\n # Reference to Blob Storage\n client = BlobServiceClient.from_connection_string(\"<Connection String >\")\n\n # Reference to Container\n container = client.get_container_client(container= \"test\")\n\n # Downloading the file \n\n with open(file=filepath1, mode=\"wb\") as download_file:\n download_file.write(container.download_blob(\"v1.mp3\").readall())\n\n with open(file=filepath2, mode=\"wb\") as download_file:\n download_file.write(container.download_blob(\"v2.mp3\").readall())\n\n // video hash code . \n videohash1 = VideoHash(path=filepath1)\n videohash2 = VideoHash(path=filepath2)\n t = videohash2.is_similar(videohash1)\n return func.HttpResponse(f\"Hello, {t}. This HTTP triggered function executed successfully.\")\n\nOutput :\n\nNow here I am getting the ffmpeg error which related to my test file and not related to error you are facing.\nThis work around as far as I know will not affect performance as in both scenario you are downloading blobs anyway\n"
] | [
0
] | [] | [] | [
"azure",
"azure_functions",
"python",
"yt_dlp"
] | stackoverflow_0074552478_azure_azure_functions_python_yt_dlp.txt |
Q:
Selecting links within a div tag using beautiful soup
I am trying to run the following code
headers = {
'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
}
params = {
'q': 'Machine learning',
'hl': 'en'
}
html = requests.get('https://scholar.google.com/scholar', headers=headers,
params=params).text
soup = BeautifulSoup(html, 'lxml')
for result in soup.select('.gs_r.gs_or.gs_scl'):
profiles=result.select('.gs_a a')['href']
The following output (error) is being shown
"TypeError: list indices must be integers or slices, not str"
What is it I am doing wrong?
A:
The following is tested and works:
import requests
from bs4 import BeautifulSoup as bs
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'
}
params = {
'q': 'Machine learning',
'hl': 'en'
}
html = requests.get('https://scholar.google.com/scholar', headers=headers,
params=params).text
soup = bs(html, 'lxml')
for result in soup.select('.gs_r.gs_or.gs_scl'):
profiles=result.select('.gs_a a')
for p in profiles:
print(p.get('href'))
Result in terminal:
/citations?user=rSVIHasAAAAJ&hl=en&oi=sra
/citations?user=MnfzuPYAAAAJ&hl=en&oi=sra
/citations?user=09kJn28AAAAJ&hl=en&oi=sra
/citations?user=yxUduqMAAAAJ&hl=en&oi=sra
/citations?user=MnfzuPYAAAAJ&hl=en&oi=sra
/citations?user=9Vdfc2sAAAAJ&hl=en&oi=sra
/citations?user=lXYKgiYAAAAJ&hl=en&oi=sra
/citations?user=xzss3t0AAAAJ&hl=en&oi=sra
/citations?user=BFdcm_gAAAAJ&hl=en&oi=sra
/citations?user=okf5bmQAAAAJ&hl=en&oi=sra
/citations?user=09kJn28AAAAJ&hl=en&oi=sra
In your code, you were trying to obtain the href attribute from a list (soup.select returns a list, and soup.select_one returns a single element).
See BeautifulSoup documentation here
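To illustrate the difference, here is a small sketch (the selectors follow the question's markup):
for result in soup.select('.gs_r.gs_or.gs_scl'):
    # select() returns a list of Tags, so iterate (or index) before reading attributes
    for a in result.select('.gs_a a'):
        print(a.get('href'))

    # select_one() returns a single Tag (or None), so attribute access works directly
    first = result.select_one('.gs_a a')
    if first is not None:
        print(first['href'])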
| Selecting links within a div tag using beautiful soup | I am trying to run the following code
headers = {
'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
}
params = {
'q': 'Machine learning,
'hl': 'en'
}
html = requests.get('https://scholar.google.com/scholar', headers=headers,
params=params).text
soup = BeautifulSoup(html, 'lxml')
for result in soup.select('.gs_r.gs_or.gs_scl'):
profiles=result.select('.gs_a a')['href']
The following output (error) is being shown
"TypeError: list indices must be integers or slices, not str"
What is it I am doing wrong?
| [
"The following is tested and works:\nimport requests\nfrom bs4 import BeautifulSoup as bs\n\nheaders = {\n'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'\n}\n\nparams = {\n 'q': 'Machine learning',\n 'hl': 'en'\n }\nhtml = requests.get('https://scholar.google.com/scholar', headers=headers, \nparams=params).text\nsoup = bs(html, 'lxml')\nfor result in soup.select('.gs_r.gs_or.gs_scl'):\n profiles=result.select('.gs_a a')\n for p in profiles:\n print(p.get('href'))\n\nResult in terminal:\n/citations?user=rSVIHasAAAAJ&hl=en&oi=sra\n/citations?user=MnfzuPYAAAAJ&hl=en&oi=sra\n/citations?user=09kJn28AAAAJ&hl=en&oi=sra\n/citations?user=yxUduqMAAAAJ&hl=en&oi=sra\n/citations?user=MnfzuPYAAAAJ&hl=en&oi=sra\n/citations?user=9Vdfc2sAAAAJ&hl=en&oi=sra\n/citations?user=lXYKgiYAAAAJ&hl=en&oi=sra\n/citations?user=xzss3t0AAAAJ&hl=en&oi=sra\n/citations?user=BFdcm_gAAAAJ&hl=en&oi=sra\n/citations?user=okf5bmQAAAAJ&hl=en&oi=sra\n/citations?user=09kJn28AAAAJ&hl=en&oi=sra\n\nIn your code, you were trying to obtain the href attribute from a list (soup.select returns a list, and soup.select_one return a single element).\nSee BeautifulSoup documentation here\n"
] | [
1
] | [] | [] | [
"beautifulsoup",
"html",
"python",
"web_scraping"
] | stackoverflow_0074624730_beautifulsoup_html_python_web_scraping.txt |
Q:
reading pdf file using tabula
I have a pdf file with tables in it and would like to read it as a dataframe using tabula. But only the first page has column header. While reading using
tabula.read_pdf(pdf_file, pages='all', lattice = 'True')
the data is coming in desired format and all the pages are extracted properly however while using
pd.DataFrame(tabula.read_pdf(pdf_file, pages='all', lattice = 'True')
showing only some rows.
A:
You should actually do it this way (assumming your pdf doesn't contain both text and tables)
table = tabula.read_pdf(pdf_file, pages='all',output_format="dataframe" ,lattice = 'True')
| reading pdf file using tabula | I have a pdf file with tables in it and would like to read it as a dataframe using tabula. But only the first page has column header. While reading using
tabula.read_pdf(pdf_file, pages='all', lattice = 'True')
the data is coming in desired format and all the pages are extracted properly however while using
pd.DataFrame(tabula.read_pdf(pdf_file, pages='all', lattice = 'True')
showing only some rows.
| [
"You should actually do it this way (assumming your pdf doesn't contain both text and tables)\ntable = tabula.read_pdf(pdf_file, pages='all',output_format=\"dataframe\" ,lattice = 'True')\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"tabula"
] | stackoverflow_0074624251_dataframe_pandas_python_tabula.txt |
Q:
Type Narrowing of Class Attributes in Python (TypeGuard) without Subclassing
Consider I have a Python class that has attributes (i.e. a dataclass, pydantic, attrs, django model, ...) that consist of a union, i.e. None and a state.
Now I have a complex checking function that checks some values.
If I use this checking function, I want to tell the type checker that some of my class attributes are narrowed.
For instance see this simplified example:
import dataclasses
from typing import TypeGuard
@dataclasses.dataclass
class SomeDataClass:
state: tuple[int, int] | None
name: str
# Assume many more data attributes
class SomeDataClassWithSetState(SomeDataClass):
state: tuple[int, int]
def complex_check(data: SomeDataClass) -> TypeGuard[SomeDataClassWithSetState]:
# Assume some complex checks here, for simplicity it is only:
return data.state is not None and data.name.startswith("SPECIAL")
def get_sum(data: SomeDataClass) -> int:
if complex_check(data):
return data.state[0] + data.state[1]
return 0
Explore on mypy Playground
As seen it is possible to do this with subclasses, which for various reason is not an option for me:
it introduces a lot of duplication
some possible libraries used for dataclasses are not happy with being subclasses without side condition
there could be some Metaclass or __subclasses__ magic that handles all subclass specially, i.e. creating database for the dataclasses
So is there an option to type narrow a(n) attribute(s) of a class without introducing a solely new class, as proposed here?
A:
TL;DR: You cannot narrow the type of an attribute. You can only narrow the type of an object.
As I already mentioned in my comment, for typing.TypeGuard to be useful it relies on two distinct types T and S. Then, depending on the returned bool, the type guard function tells the type checker to assume the object to be either T or S.
You say, you don't want to have another class/subclass alongside SomeDataClass for various (vaguely valid) reasons. But if you don't have another type, then TypeGuard is useless. So that is not the route to take here.
I understand that you want to reduce the type-safety checks like if obj.state is None because you may need to access the state attribute in multiple different places in your code. You must have some place in your code, where you create/mutate a SomeDataClass instance in a way that ensures its state attribute is not None. One solution then is to have a getter for that attribute that performs the type-safety check and only ever returns the narrower type or raises an error. I typically do this via @property for improved readability. Example:
from dataclasses import dataclass
@dataclass
class SomeDataClass:
name: str
optional_state: tuple[int, int] | None = None
@property
def state(self) -> tuple[int, int]:
if self.optional_state is None:
raise RuntimeError("or some other appropriate exception")
return self.optional_state
def set_state(obj: SomeDataClass, value: tuple[int, int]) -> None:
obj.optional_state = value
if __name__ == "__main__":
foo = SomeDataClass(optional_state=(1, 2), name="foo")
bar = SomeDataClass(name="bar")
baz = SomeDataClass(name="baz")
set_state(bar, (2, 3))
print(foo.state)
print(bar.state)
try:
print(baz.state)
except RuntimeError:
print("baz has no state")
I realize you mean there are many more checks happening in complex_check, but either that function doesn't change the type of data or it does. If the type remains the same, you need to introduce type-safety for attributes like state in some other place, which is why I suggest a getter method.
Another option is obviously to have a separate class, which is what is typically done with FastAPI/Pydantic/SQLModel for example and use clever inheritance to reduce code duplication. You mentioned this may cause problems because of subclassing magic. Well, if it does, use the other approach, but I can't think of an example that would cause the problems you mentioned. Maybe you can be more specific and show a case where subclassing would lead to problems.
| Type Narrowing of Class Attributes in Python (TypeGuard) without Subclassing | Consider I have a python class that has a attributes (i.e. a dataclass, pydantic, attrs, django model, ...) that consist of a union, i.e. None and and a state.
Now I have a complex checking function that checks some values.
If I use this checking function, I want to tell the type checker, that some of my class attributes are narrowed.
For instance see this simplified example:
import dataclasses
from typing import TypeGuard
@dataclasses.dataclass
class SomeDataClass:
state: tuple[int, int] | None
name: str
# Assume many more data attributes
class SomeDataClassWithSetState(SomeDataClass):
state: tuple[int, int]
def complex_check(data: SomeDataClass) -> TypeGuard[SomeDataClassWithSetState]:
# Assume some complex checks here, for simplicity it is only:
return data.state is not None and data.name.startswith("SPECIAL")
def get_sum(data: SomeDataClass) -> int:
if complex_check(data):
return data.state[0] + data.state[1]
return 0
Explore on mypy Playground
As seen it is possible to do this with subclasses, which for various reason is not an option for me:
it introduces a lot of duplication
some possible libraries used for dataclasses are not happy with being subclasses without side condition
there could be some Metaclass or __subclasses__ magic that handles all subclass specially, i.e. creating database for the dataclasses
So is there an option to type narrow a(n) attribute(s) of a class without introducing a solely new class, as proposed here?
| [
"TL;DR: You cannot narrow the type of an attribute. You can only narrow the type of an object.\nAs I already mentioned in my comment, for typing.TypeGuard to be useful it relies on two distinct types T and S. Then, depending on the returned bool, the type guard function tells the type checker to assume the object to be either T or S.\nYou say, you don't want to have another class/subclass alongside SomeDataClass for various (vaguely valid) reasons. But if you don't have another type, then TypeGuard is useless. So that is not the route to take here.\nI understand that you want to reduce the type-safety checks like if obj.state is None because you may need to access the state attribute in multiple different places in your code. You must have some place in your code, where you create/mutate a SomeDataClass instance in a way that ensures its state attribute is not None. One solution then is to have a getter for that attribute that performs the type-safety check and only ever returns the narrower type or raises an error. I typically do this via @property for improved readability. Example:\nfrom dataclasses import dataclass\n\n\n@dataclass\nclass SomeDataClass:\n name: str\n optional_state: tuple[int, int] | None = None\n\n @property\n def state(self) -> tuple[int, int]:\n if self.optional_state is None:\n raise RuntimeError(\"or some other appropriate exception\")\n return self.optional_state\n\n\ndef set_state(obj: SomeDataClass, value: tuple[int, int]) -> None:\n obj.optional_state = value\n\n\nif __name__ == \"__main__\":\n foo = SomeDataClass(optional_state=(1, 2), name=\"foo\")\n bar = SomeDataClass(name=\"bar\")\n baz = SomeDataClass(name=\"baz\")\n set_state(bar, (2, 3))\n print(foo.state)\n print(bar.state)\n try:\n print(baz.state)\n except RuntimeError:\n print(\"baz has no state\")\n\nI realize you mean there are many more checks happening in complex_check, but either that function doesn't change the type of data or it does. If the type remains the same, you need to introduce type-safety for attributes like state in some other place, which is why I suggest a getter method.\nAnother option is obviously to have a separate class, which is what is typically done with FastAPI/Pydantic/SQLModel for example and use clever inheritance to reduce code duplication. You mentioned this may cause problems because of subclassing magic. Well, if it does, use the other approach, but I can't think of an example that would cause the problems you mentioned. Maybe you can be more specific and show a case where subclassing would lead to problems.\n"
] | [
1
] | [] | [] | [
"python",
"python_typing",
"type_narrowing",
"typeguards"
] | stackoverflow_0074624626_python_python_typing_type_narrowing_typeguards.txt |
Q:
WebDriverWait by class name when there are more class with the same name
I'm trying to click on a button that has the same class as 5 other buttons.
This code works, but it clicks on the first button it finds with that class.
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".com-ex-5"))).click()
How can I click on the 5th button?
This ain’t working :
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".com-ex-5")))[5].click()
A:
presence_of_element_located returns a single element. You need to use presence_of_all_elements_located.
So that your code would look like:
WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".com-ex-5"))
)[4].click()
P.S. - If you need the 5th button then you need to pick the element at index 4, since indexing starts from 0 in Python.
A:
In case you want to click on an element you have to use element_to_be_clickable expected_conditions, not presence_of_element_located.
In case there is no unique locator for that element (that's strange) you can use XPath to locate that element.
So, this should work:
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "(//*[contains(@class,'com-ex-5')])[5]"))).click()
And in case there are 5 elements matching this locator and this is the last of them, this can be used:
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "(//*[contains(@class,'com-ex-5')])[last()]"))).click()
| WebDriverWait by class name when there are more class with the same name | I’m trying to click on a button that has the same class as other 5 buttons.
This code is working but clicks on the first button that finds the class.
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".com-ex-5"))).click()
How can I click on the 5th button?
This ain’t working :
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".com-ex-5")))[5].click()
| [
"presence_of_element_located returns single element. You need to use presence_of_all_elements_located.\nSo that your code would look like:\nWebDriverWait(driver, 10)\n .until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, \".com-ex-5\")))[4]\n .click()\n\nP.S. - If you need 5th button then you need to pick element by index 4 since indexing starts from 0 in Python\n",
"\nIn case you want to click on an element you have to use element_to_be_clickable expected_conditions, not presence_of_element_located.\nIn case there is no unique locator for that element (that's strange) you can use XPath to locate that element.\nSo, this should work:\n\nWebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, \"(//*[contains(@class,'com-ex-5')])[5]\"))).click()\n\nAnd in case there are 5 elements matching this locator and this is the last of them, this can be used:\nWebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, \"(//*[contains(@class,'com-ex-5')])[last()]\"))).click()\n\n"
] | [
0,
0
] | [] | [] | [
"css_selectors",
"python",
"selenium",
"webdriverwait",
"xpath"
] | stackoverflow_0074623486_css_selectors_python_selenium_webdriverwait_xpath.txt |
Q:
how to make a system in python that support limited users? for instance, it should support 200 users
How do I limit the number of user registrations for a system that should support only a limited number of users? How can we do that? Please suggest a tutorial, website, or any other good source.
I tried to search for it but I couldn't find anything useful on the internet
A:
The information you provided is a little insufficient, but do you mean checking the number of registered users before a new registration?
Something like this?
def check_max_users(db_connection):
users_count = db_connection.query("SELECT COUNT(*) AS users_count FROM users").first().users_count
if users_count >= 200:
raise MaxUsersError()
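A hypothetical registration flow using that check (the db_connection API here is an assumption, mirroring the sketch above):
def register_user(db_connection, username, password):
    check_max_users(db_connection)  # raises MaxUsersError once the cap is reached
    db_connection.execute(
        "INSERT INTO users (username, password) VALUES (?, ?)",
        (username, password),
    )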
| how to make a system in python that support limited users? for instance, it should support 200 users | How to Limit the amount of user registration? a system that should support a limited amount of users? how can we do that? please someone suggest a tutorial, website, or any good source.
I try to search about it but I couldn't find any useful thing on the internet
| [
"Information you provided is little insufficient, but do you mean check of registered user before new registration?\nSomething like this?\ndef check_max_users(db_connection):\n users_count = db_connection.query(\"SELECT COUNT(*) AS users_count FROM users\").first().users_count\n if users_count > 200:\n raise MaxUsersError()\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074625898_python.txt |
Q:
exploding a multi dictionnary columns
I have a DataFrame that contains 15+ columns, all of them with dictionaries as values. All of the dictionaries have the same keys but different values depending on the column and the key, of course. I need to explode them into one DataFrame that has the keys as index; this is a part of the data
I've tried this code, but it only works on one column. I have to do it for all 15 columns and merge them.
data = pd.DataFrame([[i, k, v] for i, d in df[['halstead_vol', 'cyclomatic_complexity']].values for k, v in d.items()],
columns=['halstead_vol', 'cyclomatic_complexity', 'h1'])
A:
If you check the explode function documentation in pandas, to explode multiple columns you pass a list of column names:
DataFrame.explode(['col1', 'col2', 'col3', ...])
for your case:
df.explode(['halstead_vol', 'cyclomatic_complexity', 'h1'], ignore_index=True)
For dictionary values try this:
new_df = df['halstead_vol'].apply(pd.Series)
Also check this thread on SO for more info.
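A sketch of extending that idea to several dictionary columns at once (the column names are just the ones visible in the question; add the rest as needed):
import pandas as pd

dict_cols = ['halstead_vol', 'cyclomatic_complexity']  # ...plus the other dict columns
expanded = pd.concat(
    {col: df[col].apply(pd.Series) for col in dict_cols},
    axis=1,
)
# expanded now has a (column, key) MultiIndex on its columns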
| exploding a multi dictionnary columns | I have a data that contains +15 columns all of them with dictionnary as values. all of the dictionnary has the same keys but different values depending on th column and the key of course. i need to explode them into on data that has the keys as index;this a part of the data
i ve tried this code ! but it only work on one column. i have to do it for all 15 columns and merge them.
data = pd.DataFrame([[i, k, v] for i, d in df[['halstead_vol', 'cyclomatic_complexity']].values for k, v in d.items()],
columns=['halstead_vol', 'cyclomatic_complexity', 'h1'])
| [
"If you check explode function documentation in pandas, to explode multiple column you can achieve that in this format:\nDataFrame.explode(list(col1col2col3...))\n\nfor your case:\ndf.explode(list('halstead_volcyclomatic_complexityh1'), ignore_index=True)\n\nFor dictionary values try this:\nnew_df = df['halstead_vol'].apply(pd.Series)\n\nAlso check this thread on SO for more info.\n"
] | [
0
] | [] | [] | [
"pandas",
"pandas_explode",
"python"
] | stackoverflow_0074625840_pandas_pandas_explode_python.txt |
Q:
AttributeError: 'str' object has no attribute 'readline' when trying to get all lines from a file
trying to get all lines in a file and print them out
topic = 1
if topic == 1:
allquestions = open("quizquestions1.txt","r")
allquestions = allquestions.read()
print(allquestions.readfile())
A:
Just use allquestions in the print; you already read the file before, so there is no need to read it again.
topic = 1
if topic == 1:
allquestions = open("quizquestions1.txt","r")
allquestions = allquestions.read()
print(allquestions)
A:
The read method for a file returns the file content. You can use the help function to see its documentation:
read(size=-1, /) method of _io.TextIOWrapper instance
Read at most n characters from stream.
Read from underlying buffer until we have n characters or we hit EOF.
If n is negative or omitted, read until EOF.
It returns the content as a single string containing the whole file content.
topic = 1
if topic == 1:
with open("quizquestions1.txt","r") as file:
allquestions = file.read()
print(allquestions)
You can also use the readlines method to get the file content split into a list of lines based on the \n character.
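For example, a minimal sketch using readlines (the filename follows the question):
topic = 1
if topic == 1:
    with open("quizquestions1.txt", "r") as file:
        lines = file.readlines()  # list of lines, each still ending with "\n"
    for line in lines:
        print(line.rstrip("\n"))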
| AttributeError: 'str' object has no attribute 'readline' when trying to get all lines from a file | trying to get all lines in a file and print them out
topic = 1
if topic == 1:
allquestions = open("quizquestions1.txt","r")
allquestions = allquestions.read()
print(allquestions.readfile())
| [
"It's only allquestions in print..You alredy read lines before no need to read again.\ntopic = 1\nif topic == 1:\n allquestions = open(\"quizquestions1.txt\",\"r\")\n allquestions = allquestions.read()\n print(allquestions)\n\n",
"The read method for a file will return the file content. You can use help function for it:\nread(size=-1, /) method of _io.TextIOWrapper instance\n Read at most n characters from stream.\n \n Read from underlying buffer until we have n characters or we hit EOF.\n If n is negative or omitted, read until EOF.\n\nIt will return the content as a string containing whole file content.\ntopic = 1\nif topic == 1:\n with open(\"quizquestions1.txt\",\"r\") as file:\n allquestions = file.read()\n print(allquestions)\n\nYou can also use readlines method to get a splitted content of file based on \\n character.\n"
] | [
1,
0
] | [] | [] | [
"error_handling",
"python"
] | stackoverflow_0074625876_error_handling_python.txt |
Q:
Aggregate and concatenate multiple columns
I want to groupby my dataframe and concatenate the values/strings from the other columns together.
Year Letter Number Note Text
0 2022 a 1 8 hi
1 2022 b 1 7 hello
2 2022 a 1 6 bye
3 2022 b 3 5 joe
To this:
Column
Year Letter
2022 a 1|8|hi; 1|6|bye
b 1|7|hello; 3|5|joe
I tried some things with groupby, apply() and agg() but I can't get it to work:
df.groupby(['Year', 'Letter']).agg(lambda x: '|'.join(x))
Output:
Text
Year Letter
2022 a hi|bye
b hello|joe
A:
You can first join the values per row after converting them to strings with DataFrame.astype and DataFrame.agg, and then aggregate with a join in GroupBy.agg:
df1 = (df.assign(Text= df[['Number','Note','Text']].astype(str).agg('|'.join, axis=1))
.groupby(['Year', 'Letter'])['Text']
.agg('; '.join)
.to_frame())
print (df1)
Text
Year Letter
2022 a 1|8|hi; 1|6|bye
b 1|7|hello; 3|5|joe
Or create a custom lambda function in GroupBy.apply:
f = lambda x: '; '.join('|'.join(y) for y in x.astype(str).to_numpy())
df1 = (df.groupby(['Year', 'Letter'])[['Number','Note','Text']].apply(f)
.to_frame(name='Text')
)
print (df1)
Text
Year Letter
2022 a 1|8|hi; 1|6|bye
b 1|7|hello; 3|5|joe
If you need to join all columns except the grouping columns:
grouped = ['Year','Letter']
df1 = (df.assign(Text= df[df.columns.difference(grouped, sort=False)]
.astype(str).agg('|'.join, axis=1))
.groupby(['Year', 'Letter'])['Text']
.agg('; '.join)
.to_frame())
grouped = ['Year','Letter']
f = lambda x: '; '.join('|'.join(y) for y in x.astype(str).to_numpy())
df1 = (df.groupby(grouped)[df.columns.difference(grouped, sort=False)].apply(f)
.to_frame(name='Text')
)
A:
Using groupby.apply:
cols = ['Year', 'Letter']
(df.groupby(cols)
.apply(lambda d: '; '.join(d.drop(columns=cols) # or slice the columns here
.astype(str)
.agg('|'.join, axis=1)))
.to_frame(name='Column')
)
Output:
Column
Year Letter
2022 a 1|8|hi; 1|6|bye
b 1|7|hello; 3|5|joe
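For reference, a minimal self-contained setup of the sample data so either approach above can be run as-is (values copied from the question):
import pandas as pd

df = pd.DataFrame({
    'Year': [2022, 2022, 2022, 2022],
    'Letter': ['a', 'b', 'a', 'b'],
    'Number': [1, 1, 1, 3],
    'Note': [8, 7, 6, 5],
    'Text': ['hi', 'hello', 'bye', 'joe'],
})

out = (df.assign(Column=df[['Number', 'Note', 'Text']].astype(str).agg('|'.join, axis=1))
         .groupby(['Year', 'Letter'])['Column']
         .agg('; '.join)
         .to_frame())
print(out)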
| Aggregate and concatenate multiple columns | I want to groupby my dataframe and concatenate the values/strings from the other columns together.
Year Letter Number Note Text
0 2022 a 1 8 hi
1 2022 b 1 7 hello
2 2022 a 1 6 bye
3 2022 b 3 5 joe
To this:
Column
Year Letter
2022 a 1|8|hi; 1|6|bye
b 1|7|hello; 3|5|joe
I tried some things with groupby, apply() and agg() but I can't get it work:
df.groupby(['Year', 'Letter']).agg(lambda x: '|'.join(x))
Output:
Text
Year Letter
2022 a hi|bye
b hello|joe
| [
"You can first join values per rows converted to strings by DataFrame.astype and DataFrame.agg and then aggregate join in GroupBy.agg:\ndf1 = (df.assign(Text= df[['Number','Note','Text']].astype(str).agg('|'.join, axis=1))\n .groupby(['Year', 'Letter'])['Text']\n .agg('; '.join)\n .to_frame())\nprint (df1)\n Text\nYear Letter \n2022 a 1|8|hi; 1|6|bye\n b 1|7|hello; 3|5|joe\n\nOr create custom lambda function in GroupBy.apply:\nf = lambda x: '; '.join('|'.join(y) for y in x.astype(str).to_numpy())\ndf1 = (df.groupby(['Year', 'Letter'])[['Number','Note','Text']].apply(f)\n .to_frame(name='Text')\n )\nprint (df1)\n Text\nYear Letter \n2022 a 1|8|hi; 1|6|bye\n b 1|7|hello; 3|5|joe\n\nIf need join all columns without grouping columns:\ngrouped = ['Year','Letter']\n\ndf1 = (df.assign(Text= df[df.columns.difference(grouped, sort=False)]\n .astype(str).agg('|'.join, axis=1))\n .groupby(['Year', 'Letter'])['Text']\n .agg('; '.join)\n .to_frame())\n\n\ngrouped = ['Year','Letter']\n\nf = lambda x: '; '.join('|'.join(y) for y in x.astype(str).to_numpy())\ndf1 = (df.groupby(grouped)[df.columns.difference(grouped, sort=False)].apply(f)\n .to_frame(name='Text')\n )\n\n",
"Using groupby.apply:\ncols = ['Year', 'Letter']\n(df.groupby(cols)\n .apply(lambda d: '; '.join(d.drop(columns=cols) # or slice the columns here\n .astype(str)\n .agg('|'.join, axis=1)))\n .to_frame(name='Column')\n)\n\nOutput:\n Column\nYear Letter \n2022 a 1|8|hi; 1|6|bye\n b 1|7|hello; 3|5|joe\n\n"
] | [
2,
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074625959_pandas_python.txt |
Q:
DateTime to quarter end in Pandas
My current date is in this format "202003", "202006", "202009". I want to change it to "2020-03-31", "2020-06-30", "2020-09-30".
Here is my code:
df6['Date'] = pd.to_datetime(df6['Date'], format = "%Y%m").dt.strftime('%Y-%m-%d')
df6['Date'] = df6['Date'] + pd.offsets.QuarterEnd(0)
TypeError: unsupported operand type(s) for +: 'pandas._libs.tslibs.offsets.QuarterEnd' and 'str'
How can I fix this?
A:
Add the date offset before converting to a string with strftime. For example:
import pandas as pd
df = pd.DataFrame({"Date": ["202003", "202006", "202009"]})
df['Date'] = pd.to_datetime(df['Date'], format="%Y%m") + pd.offsets.QuarterEnd(0)
df['Date_str'] = df['Date'].dt.strftime('%Y-%m-%d')
df.head()
Date Date_str
0 2020-03-31 2020-03-31
1 2020-06-30 2020-06-30
2 2020-09-30 2020-09-30
df.dtypes
Date datetime64[ns]
Date_str object
dtype: object
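Purely as an alternative sketch (not from the answer above), the same result can be reached by going through quarterly periods; the end-of-period timestamp carries a time component, so it is normalized away:
import pandas as pd

df = pd.DataFrame({"Date": ["202003", "202006", "202009"]})

dates = pd.to_datetime(df["Date"], format="%Y%m")
# convert to quarterly periods, take the end timestamp, drop the time part
df["QuarterEnd"] = dates.dt.to_period("Q").dt.to_timestamp(how="end").dt.normalize()
print(df)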
| DateTime to quarter end in Pandas | My current date is in this format "202003", "202006", "202009". I want to change it to "2020-03-31", "2020-06-30", "2020-09-30".
Here is my code:
df6['Date'] = pd.to_datetime(df6['Date'], format = "%Y%m").dt.strftime('%Y-%m-%d')
df6['Date'] = df6['Date'] + pd.offsets.QuarterEnd(0)
TypeError: unsupported operand type(s) for +: 'pandas._libs.tslibs.offsets.QuarterEnd' and 'str'
How can I fix this?
| [
"Add the date offset before converting to string with strftime, Ex:\nimport pandas as pd\n\ndf = pd.DataFrame({\"Date\": [\"202003\", \"202006\", \"202009\"]})\n\ndf['Date'] = pd.to_datetime(df['Date'], format=\"%Y%m\") + pd.offsets.QuarterEnd(0)\n\ndf['Date_str'] = df['Date'].dt.strftime('%Y-%m-%d')\n\ndf.head()\n Date Date_str\n0 2020-03-31 2020-03-31\n1 2020-06-30 2020-06-30\n2 2020-09-30 2020-09-30\n\ndf.dtypes\nDate datetime64[ns]\nDate_str object\ndtype: object\n\n"
] | [
0
] | [] | [] | [
"datetime",
"datetimeoffset",
"pandas",
"python",
"quarter"
] | stackoverflow_0074612190_datetime_datetimeoffset_pandas_python_quarter.txt |
Q:
Converting list of string to string variable doesn't retain the order of elements in python
I have a list of strings like ["a", "b"]. When I convert it to a string separated by ", ", it works fine when I test the test cases on my local machine via debugging and tox. It doesn't work in the pipeline once I commit the code to GitLab: the order of elements gets reversed. Sometimes it is retained and sometimes not.
I am using the below code to convert it to a string variable:
drop_account_names = ", ".join(drop for drop in set(to_drop))
The output on my local machine when tested via tox and debugging of tests is correct i.e 'a', 'b'
However, on the Gitlab pipeline, I get it as 'b', 'a' sometimes. Why is it so?
A:
Sets are unordered, so you can get different results every time. You can use a dict to retain the order:
to_drop = ["a", "b", "a"]
to_drop = dict.fromkeys(to_drop)
drop_account_names = ", ".join(to_drop)
print(drop_account_names) # a, b
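The underlying reason is that Python randomizes string hashing per process (PYTHONHASHSEED), so set iteration order for strings can differ between your machine and the CI runner. If any reproducible order is acceptable, sorting the deduplicated values also works; a small sketch:
to_drop = ["b", "a", "b"]
drop_account_names = ", ".join(sorted(set(to_drop)))
print(drop_account_names)  # a, b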
| Converting list of string to string variable doesn't retain the order of elements in python | I have an a list of strings like ["a", "b"]. When I convert it to the string variable separated by ", " it works fine when I test the test cases on the local machine via debugging and tox. It doesn't work fine in the pipeline once I commit the code on GitLab. The order of elements gets reversed. Sometimes it is retained and sometimes not.
I am using the below code to convert it to a string variable:
drop_account_names = ", ".join(drop for drop in set(to_drop))
The output on my local machine when tested via tox and debugging of tests is correct i.e 'a', 'b'
However, on the Gitlab pipeline, I get it as 'b', 'a' sometimes. Why is it so?
| [
"sets are unordered, so you will get different results every time. You can use dict to retain the order\nto_drop = [\"a\", \"b\", \"a\"]\nto_drop = dict.fromkeys(to_drop)\ndrop_account_names = \", \".join(to_drop)\nprint(drop_account_names) # a, b\n\n"
] | [
0
] | [] | [] | [
"gitlab",
"python",
"python_3.x"
] | stackoverflow_0074626021_gitlab_python_python_3.x.txt |
Q:
Trying to use df.groupby function to group new dataframe according to year information
I need to use the groupby function to group a new dataframe by year. I have seen other topics on this issue; however, they don't read the data from a CSV file. I'm wondering whether I am already doing this right, and if I am wrong, what the right way to do this is.
I tried using
df = pd.read_csv('data.csv', usecols= ['price','year'])
df.groupby('price')
print(df)
But this gives me back information that is in the image ->
A:
You could do that in this way:
df = df.groupby('year')
To print the first value in each group:
df.first()
To get the highest price for each year group:
df.groupby('year').max()
Or:
df.groupby('year')['price'].max()
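A small end-to-end sketch with inline data in place of data.csv (column names taken from the question):
import pandas as pd

df = pd.DataFrame({
    'year': [2019, 2019, 2020, 2020],
    'price': [100, 150, 120, 200],
})

# highest and average price per year
summary = df.groupby('year')['price'].agg(['max', 'mean'])
print(summary)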
| Trying to use df.groupby function to group new dataframe according to year information | I need to use the groupby function to group new dataframe according to year. I have seen other topics on this issue however they don't have it reading from a csv file. I'm wondering am I already doing this right or if I am wrong what is the right way to do this
I tried using
df = pd.read_csv('data.csv', usecols= ['price','year'])
df.groupby('price')
print(df)
But this gives me back information that is in the image ->
| [
"You could do that in this way:\ndf = df.groupby('year')\n\nPrint first value in each group:\ndf.first()\n\nto get highest price for each year group:\ndf.groupby('year').max()\n\nOr:\n df.groupby('year')['price'].max()\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"group_by",
"python"
] | stackoverflow_0074626077_dataframe_group_by_python.txt |
Q:
Selenium can not find elements in dynamic web page, page source does not be loaded completely
I am trying to retrieve some elements from a web page with Selenium, but the page_source I'm getting does not have those elements loaded.
find_element returns an element whose text is empty, and driver.page_source does not contain the id titulotramitedocu.
What am I missing?
Code:
URL = "https://seu.conselldemallorca.net/fitxa?key=91913"
driver = webdriver.Chrome()
driver.get(URL)
try:
driver.implicitly_wait(20)
elem = driver.find_element(By.ID,"titulotramitedocu")
print(elem.text)
finally:
driver.quit()
I also tried with a wait..
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "titulotramitedocu"))
)
A:
To locate and print the text from the visible element, instead of presence_of_element_located() you need to induce a WebDriverWait for visibility_of_element_located(), and you can use either of the following locator strategies:
Using CSS_SELECTOR:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.titulotramitedocu#titulotramitedocu > h1"))).text)
Using XPATH:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='titulotramitedocu' and @id='titulotramitedocu']//h1"))).text)
Note : You have to add the following imports :
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Console Output:
Concurs de mèrits de personal funcionari del Consell de Mallorca per a la categoria d'enginyer-a tècnic-a industrial d'administració especial (codi CFCEA2/024)
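Putting the pieces together, a minimal end-to-end sketch (same URL and selector as above; assumes chromedriver is set up as in the question):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

URL = "https://seu.conselldemallorca.net/fitxa?key=91913"
driver = webdriver.Chrome()
try:
    driver.get(URL)
    # wait until the heading is actually rendered by the page's JavaScript
    elem = WebDriverWait(driver, 20).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "div#titulotramitedocu > h1"))
    )
    print(elem.text)
finally:
    driver.quit()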
| Selenium can not find elements in dynamic web page, page source does not be loaded completely | I try to recover some elements in a web page with selenium but the page_source I'm getting it does not have that elements loaded.
Find element returns elem.text empty and driver.page_source does not have the id titulotramitedocu.
What am I missing?
Code:
URL = "https://seu.conselldemallorca.net/fitxa?key=91913"
driver = webdriver.Chrome()
driver.get(URL)
try:
driver.implicitly_wait(20)
elem = driver.find_element(By.ID,"titulotramitedocu")
print(elem.text)
finally:
driver.quit()
I also tried with a wait..
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "titulotramitedocu"))
)
| [
"To locate and print the text from the visible element instead of presence_of_element_located() you need to induce WebDriverWait for the visibility_of_element_located() and you can use either of the following Locator Strategies:\n\nUsing CSS_SELECTOR:\nprint(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, \"div.titulotramitedocu#titulotramitedocu > h1\"))).text)\n\n\nUsing XPATH:\nprint(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, \"//div[@class='titulotramitedocu' and @id='titulotramitedocu']//h1\"))).text)\n\n\nNote : You have to add the following imports :\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\n\nConsole Output:\nConcurs de mèrits de personal funcionari del Consell de Mallorca per a la categoria d'enginyer-a tècnic-a industrial d'administració especial (codi CFCEA2/024)\n\n\n\n"
] | [
2
] | [] | [] | [
"dynamic",
"python",
"selenium",
"selenium_chromedriver",
"web_scraping"
] | stackoverflow_0074626150_dynamic_python_selenium_selenium_chromedriver_web_scraping.txt |
Q:
Iterating over data to combine
I'm fairly new to Python and just getting started with working with data. I'm attempting to combine different objects to display the data in a more readable way so the comparison is easier to see.
Here is the data I'm working with:
{
"flowDefinitionArn": "arn:aws:sagemaker:us-east-1:2345:flow-definition/definition_name",
"humanAnswers": [
{
"acceptanceTime": "2022-11-15T18:37:50.085Z",
"answerContent": {
"extracted1_1": "Italy",
"extracted1_2": "Rome",
"extracted1_3": "5555",
"extracted2_1": "Czech",
"extracted2_2": "Prague",
"extracted2_3": "3333",
"reportDate": "2022-06-01T08:30",
"reportOwner": "John Smith"
},
"submissionTime": "2022-11-15T18:38:32.791Z",
"timeSpentInSeconds": 42.706,
"workerId": "1234",
"workerMetadata": {
"identityData": {
"identityProviderType": "Cognito",
"issuer": "https://cognito-idp.us-east-1.amazonaws.com/",
"sub": "c"
}
}
}
],
"humanLoopName": "test",
"inputContent": {
"document": {
"documentType": "countryReport",
"fields": [
{
"id": "reportOwner",
"type": "string",
"validation": "",
"value": "John Smith"
},
{
"id": "reportDate",
"type": "date",
"validation": "",
"value": "2022-06-01T08:30"
},
{
"id": "locationList",
"type": "table",
"value": {
"columns": [
{
"id": "country",
"type": "string"
},
{
"id": "capital",
"type": "string"
},
{
"id": "population",
"type": "number"
}
],
"rows": [
[
"UK",
"London",
1234
],
[
"France",
"Paris",
321
]
]
}
}
]
},
"document_types": [
{
"displayName": "Email",
"id": "email"
},
{
"displayName": "Invoice",
"id": "invoice"
},
{
"displayName": "Other",
"id": "other"
}
],
"input_s3_uri": "s3://my-input-bucket/file1.pdf"
}
}
I would like for the data to come out to look something like this:
Input info: country, Original answer: UK, Human answer: extracted1_1: Italy
Input info: capital, Original answer: London, Human answer: extracted1_2: Rome
Input info: population, Original answer: 1234, Human answer: extracted1_3: 5555
Input info: country, Original answer: France, Human answer: extracted2_1: Czech
Input info: capital, Original answer: Paris, Human answer: extracted2_2: Prague
Input info: population, Original answer: 321, Human answer: extracted2_3: 3333
This is a sample of the code i've written so far:
s3_client = boto3.client('s3')
response = s3_client.get_object(Bucket=f'{config["bucket"]}', Key=f'{config["file_name"]}')
data = response['Body'].read()
d = json.loads(data)
column = d['inputContent']['document']['fields'][2]['value']['columns']
row = d['inputContent']['document']['fields'][2]['value']['rows']
answers = d['humanAnswers'][0]['answerContent']
str_row = str(row)
iter_col = iter(column)
iter_row = iter(str_row)
combined = ''
for a in answers.items():
nxt_col = next(iter_col)
for list in row:
for values in list:
v = values
combined += str(v + ", ")
print(f'Input info: {nxt_col}, Original Answer: {str_row}, Human Answer: {a}')
I'm kind of stuck now and looking for some guidance on how to combine the columns (input info), row (original answer), and answerContent (human answers) with the corresponding values.
A:
You can try something like this:
d = json.loads(data)
cols=[i['id'] for i in d['inputContent']['document']['fields'][2]['value']['columns']] # ['country', 'capital', 'population']
extracted=d['humanAnswers'][0]['answerContent']
extracted_vals=list(dict(filter(lambda e:e[0].startswith('extra'), extracted.items())).values())
# output -- > ['Italy', 'Rome', '5555', 'Czech', 'Prague', '3333']
datacol_rows =[i for i in d['inputContent']['document']['fields'][2]['value']['rows']]
datacol_rows = [item for sublist in datacol_rows for item in sublist]
# output -- > ['UK', 'London', 1234, 'France', 'Paris', 321]
final=pd.DataFrame({k: extracted_vals[i::3] for i, k in enumerate(['extracted_' + i for i in cols])})
'''
extracted_country extracted_capital extracted_population
0 Italy Rome 5555
1 Czech Prague 3333
'''
final2=pd.DataFrame({k: datacol_rows[i::3] for i, k in enumerate(cols)})
'''
country capital population
0 UK London 1234
1 France Paris 321
'''
final=final.join(final2)
final=final[['country','extracted_country','capital','extracted_capital','population','extracted_population']]
print(final)
'''
| | country | extracted_country | capital | extracted_capital | population | extracted_population |
|---:|:----------|:--------------------|:----------|:--------------------|-------------:|-----------------------:|
| 0 | UK | Italy | London | Rome | 1234 | 5555 |
| 1 | France | Czech | Paris | Prague | 321 | 3333 |
'''
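If the goal is the exact line-by-line output from the question rather than a DataFrame, a plain-Python sketch also works (assuming the extracted<row>_<col> keys follow the row/column order visible in the sample data):
table = d['inputContent']['document']['fields'][2]['value']
answers = d['humanAnswers'][0]['answerContent']
cols = [c['id'] for c in table['columns']]

for r_idx, row in enumerate(table['rows'], start=1):
    for c_idx, (col, original) in enumerate(zip(cols, row), start=1):
        key = f'extracted{r_idx}_{c_idx}'
        print(f'Input info: {col}, Original answer: {original}, '
              f'Human answer: {key}: {answers[key]}')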
| Iterating over data to combine | I'm fairly new to python and just now getting started with working with data. I'm attempting to combine different objects to display the data in a more readable way to view the comparison.
Here is the data i'm working with:
{
"flowDefinitionArn": "arn:aws:sagemaker:us-east-1:2345:flow-definition/definition_name",
"humanAnswers": [
{
"acceptanceTime": "2022-11-15T18:37:50.085Z",
"answerContent": {
"extracted1_1": "Italy",
"extracted1_2": "Rome",
"extracted1_3": "5555",
"extracted2_1": "Czech",
"extracted2_2": "Prague",
"extracted2_3": "3333",
"reportDate": "2022-06-01T08:30",
"reportOwner": "John Smith"
},
"submissionTime": "2022-11-15T18:38:32.791Z",
"timeSpentInSeconds": 42.706,
"workerId": "1234",
"workerMetadata": {
"identityData": {
"identityProviderType": "Cognito",
"issuer": "https://cognito-idp.us-east-1.amazonaws.com/",
"sub": "c"
}
}
}
],
"humanLoopName": "test",
"inputContent": {
"document": {
"documentType": "countryReport",
"fields": [
{
"id": "reportOwner",
"type": "string",
"validation": "",
"value": "John Smith"
},
{
"id": "reportDate",
"type": "date",
"validation": "",
"value": "2022-06-01T08:30"
},
{
"id": "locationList",
"type": "table",
"value": {
"columns": [
{
"id": "country",
"type": "string"
},
{
"id": "capital",
"type": "string"
},
{
"id": "population",
"type": "number"
}
],
"rows": [
[
"UK",
"London",
1234
],
[
"France",
"Paris",
321
]
]
}
}
]
},
"document_types": [
{
"displayName": "Email",
"id": "email"
},
{
"displayName": "Invoice",
"id": "invoice"
},
{
"displayName": "Other",
"id": "other"
}
],
"input_s3_uri": "s3://my-input-bucket/file1.pdf"
}
}
I would like for the data to come out to look something like this:
Input info: country, Original answer: UK, Human answer: extracted1_1: Italy
Input info: capital, Original answer: London, Human answer: extracted1_2: Rome
Input info: population, Original answer: 1234, Human answer: extracted1_3: 5555
Input info: country, Original answer: France, Human answer: extracted2_1: Czech
Input info: capital, Original answer: Paris, Human answer: extracted2_2: Prague
Input info: population, Original answer: 321, Human answer: extracted2_3: 3333
This is a sample of the code i've written so far:
s3_client = boto3.client('s3')
response = s3_client.get_object(Bucket=f'{config["bucket"]}', Key=f'{config["file_name"]}')
data = response['Body'].read()
d = json.loads(data)
column = d['inputContent']['document']['fields'][2]['value']['columns']
row = d['inputContent']['document']['fields'][2]['value']['rows']
answers = d['humanAnswers'][0]['answerContent']
str_row = str(row)
iter_col = iter(column)
iter_row = iter(str_row)
combined = ''
for a in answers.items():
nxt_col = next(iter_col)
for list in row:
for values in list:
v = values
combined += str(v + ", ")
print(f'Input info: {nxt_col}, Original Answer: {str_row}, Human Answer: {a}')
I'm kind of stuck now and looking for some guidance on how to combine the columns (input info), row (original answer), and answerContent (human answers) with the corresponding values.
| [
"You can try something like this:\nd = json.loads(data)\ncols=[i['id'] for i in d['inputContent']['document']['fields'][2]['value']['columns']] # ['country', 'capital', 'population']\n\nextracted=d['humanAnswers'][0]['answerContent']\nextracted_vals=list(dict(filter(lambda e:e[0].startswith('extra'), extracted.items())).values()) \n# output -- > ['Italy', 'Rome', '5555', 'Czech', 'Prague', '3333']\n\ndatacol_rows =[i for i in d['inputContent']['document']['fields'][2]['value']['rows']]\ndatacol_rows = [item for sublist in datacol_rows for item in sublist]\n# output -- > ['UK', 'London', 1234, 'France', 'Paris', 321]\n\nfinal=pd.DataFrame({k: extracted_vals[i::3] for i, k in enumerate(['extracted_' + i for i in cols])})\n'''\n extracted_country extracted_capital extracted_population\n0 Italy Rome 5555\n1 Czech Prague 3333\n\n'''\nfinal2=pd.DataFrame({k: datacol_rows[i::3] for i, k in enumerate(cols)})\n'''\n country capital population\n0 UK London 1234\n1 France Paris 321\n\n'''\nfinal=final.join(final2)\nfinal=final[['country','extracted_country','capital','extracted_capital','population','extracted_population']]\nprint(final)\n'''\n| | country | extracted_country | capital | extracted_capital | population | extracted_population |\n|---:|:----------|:--------------------|:----------|:--------------------|-------------:|-----------------------:|\n| 0 | UK | Italy | London | Rome | 1234 | 5555 |\n| 1 | France | Czech | Paris | Prague | 321 | 3333 |\n'''\n\n"
] | [
0
] | [] | [] | [
"json",
"loops",
"python"
] | stackoverflow_0074619933_json_loops_python.txt |
Q:
my afk system works on its own, but not when i insert it to the main
I am in the middle of making a demo of a game, and as part of it I made a system to check whether the player is AFK:
while lives != 0:
countdown.start()
while clicked != True:
if int(f"{time.perf_counter() - countdown.time_passed:0.0f}") == 5:
print("time passed")
Break = True
break
wn.update()
clicked = False
if Break:
Break = False
else:
print("you clicked the screen")
here's the code for the timer so far:
class timer():
def start(self):
self.time_passed = time.perf_counter()
def stop(self):
self.time_passed = time.perf_counter() - self.time_passed
(I haven't used stop yet, but it has a purpose in a different part of the game.)
Also, clicked occurs every time I click on an object.
I tested this system on its own in this code:
class timer():
def start(self):
self.stop_time = False
self.lengh_of_time = time.perf_counter()
def stop(self, x, y):
self.stop_time = True
self.time_passed = time.perf_counter() - self.lengh_of_time
countdown = timer()
wn.onclick(countdown.stop)
def main():
while True:
wn.update()
if int(f"{time.perf_counter() - countdown.lengh_of_time:0.0f}") >= 2:
print("time has passed")
break
elif countdown.stop_time == True:
print("you stopped time")
break
while True:
countdown.start()
main()
By the way, wn is just the turtle.Screen I made.
The issue is that whenever I press the screen in the tests, it works, but whenever I press the screen in the main code, it doesn't do anything.
I tried to make an AFK check system.
I want it to either tell me a player is AFK or tell me that a player has pressed the screen.
It works when I separate the system from the code, but not inside the code. Can anybody tell me why?
A:
I found out what the problem was. This occurred because I had a global variable in the code, while it was a local variable during testing.
If anybody has a similar issue, make sure that you handle your global variables with caution. Typically, they are constants and should never be changed; otherwise, be very intentional about changing anything outside the function.
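For anyone hitting the same class of bug, a tiny illustrative sketch (the names are made up, not the original game code):
clicked = False  # module-level flag

def on_click_wrong(x, y):
    clicked = True        # creates a new local variable; the module-level flag never changes

def on_click_right(x, y):
    global clicked        # rebind the module-level flag on purpose
    clicked = True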
| my afk system works on its own, but not when i insert it to the main | i am in the middle of making a demo of a game, and as part of it i made a system to check if the player is afk:
while lives != 0:
countdown.start()
while clicked != True:
if int(f"{time.perf_counter() - countdown.time_passed:0.0f}") == 5:
print("time passed")
Break = True
break
wn.update()
clicked = False
if Break:
Break = False
else:
print("you clicked the screen")
here's the code for the timer so far:
class timer():
def start(self):
self.time_passed = time.perf_counter()
def stop(self):
self.time_passed = time.perf_counter() - self.time_passed
(i havent used stop yet, but it has a purpose in a diffrent part of the game)
also, clicked occures every time i click on an object.
i tested this system on its own in this code:
class timer():
def start(self):
self.stop_time = False
self.lengh_of_time = time.perf_counter()
def stop(self, x, y):
self.stop_time = True
self.time_passed = time.perf_counter() - self.lengh_of_time
countdown = timer()
wn.onclick(countdown.stop)
def main():
while True:
wn.update()
if int(f"{time.perf_counter() - countdown.lengh_of_time:0.0f}") >= 2:
print("time has passed")
break
elif countdown.stop_time == True:
print("you stopped time")
break
while True:
countdown.start()
main()
btw wn is just the turtle.Screen i made
the issue is that whenever i press on the screen in the tests, it works. but whenever i press the screen in the main, it doesnt do anything
i tried to make an afk check system
i want it to either tell me a player is afk or tell me if a player has pressed the screen
it works when i separate the system from the code, but not inside the code, can anybody tell me why?
| [
"I found out what the problem was. This occurred because I had a global variable in the code, while it was a local variable during testing.\nIf anybody has a similar issue, make sure that you handle your global variables with caution. Typically, they are constants and should neveer be changed, otherwise, be very intentional about changing anything outside the function.\n"
] | [
0
] | [] | [] | [
"python",
"python_turtle"
] | stackoverflow_0074559183_python_python_turtle.txt |
Q:
How to handle list of string stored in a map in python?
I am from a C++ background and I am porting one of our tools to Python. I am fairly new to Python and I am looking for a way to store this data structure in Python.
I have a map (or array) with string keys, and each value will hold objects or JSON like so:
map["key1"] = {
{ 'name': 'user1', 'email': 'something@something'},
{ 'name': 'user1', 'email': 'something@something'},
...
}
map["key2"] = {
{ 'name': 'user1', 'email': 'something@something'},
{ 'name': 'user1', 'email': 'something@something'},
...
}
...
Of course, the map should keep growing whenever there is a new key.
How do I do this in Python?
A:
You want to use a Python dictionary, which is denoted by curly brackets ({ and }). The data structure inside each dictionary entry is a list, denoted by square brackets ([ and ]).
# Declare 'my_map' dictionary
my_map = {}
# Add list of dictionaries to 'key_1'
my_map["key1"] = [
{ 'name': 'user1', 'email': 'something@something'},
{ 'name': 'user1', 'email': 'something@something'}
]
# Add list of dictionaries to 'key_2'
my_map["key2"] = [
{ 'name': 'user1', 'email': 'something@something'},
{ 'name': 'user1', 'email': 'something@something'}
]
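Since the question also asks for the map to keep growing whenever a new key appears, appending via collections.defaultdict (or dict.setdefault) is a common pattern; a small sketch:
from collections import defaultdict

my_map = defaultdict(list)

# append an entry; the list is created automatically for a new key
my_map["key1"].append({"name": "user1", "email": "something@something"})
my_map["key3"].append({"name": "user2", "email": "other@something"})

# equivalent with a plain dict
plain_map = {}
plain_map.setdefault("key1", []).append({"name": "user1", "email": "something@something"})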
| How to handle list of string stored in a map in python? | I am from C++ background and i am porting one of our tool to python, i am fairly a begginer with python and i am looking for way to store this data structure to python
i have a map or array with string key and it will have a content of an object or a json like so..
map["key1"] = {
{ 'name': 'user1', 'email': 'something@something'},
{ 'name': 'user1', 'email': 'something@something'},
...
}
map["key2"] = {
{ 'name': 'user1', 'email': 'something@something'},
{ 'name': 'user1', 'email': 'something@something'},
...
}
...
ofcourse the map list should be growing whenevern there is a new key.
how to do this in python?
| [
"You want to use a Python dictionary which is denoted by squiggly brackets ({ and }). The data structure inside the dictionary entry is a list denoted by the square brackets ([ and ]).\n\n# Declare 'my_map' dictionary\nmy_map = {}\n\n# Add list of dictionaries to 'key_1'\nmy_map[\"key1\"] = [\n { 'name': 'user1', 'email': 'something@something'},\n { 'name': 'user1', 'email': 'something@something'}\n]\n\n# Add list of dictionaries to 'key_2'\nmy_map[\"key2\"] = [\n { 'name': 'user1', 'email': 'something@something'},\n { 'name': 'user1', 'email': 'something@something'}\n]\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074626166_python.txt |
Q:
Argument 2 to "join" has incompatible type "Optional[str]"; expected "str"
I'm running a mypy pre-commit hook to check for any possible type issues, and it keeps giving me the error Argument 2 to "join" has incompatible type "Optional[str]"; expected "str" for the code below:
else:
renamed_paths_dict: CustomConnectorRenameDict = {
"old_path": os.path.join(
self.temp_dir, change["file_path"]
),
"new_path": os.path.join(
self.temp_dir,
change["new_file_path"], -> this is the line mypy is talking about
),
}
change["new_file_path"] can be either a string or None but in this specific else block, it'll be never None.
How can I fix this issue?
Thanks
A:
You have multiple options:
ignore errors from mypy for this line by adding the comment # type: ignore:
else:
renamed_paths_dict: CustomConnectorRenameDict = {
"old_path": os.path.join(
self.temp_dir, change["file_path"]
),
"new_path": os.path.join(
self.temp_dir,
change["new_file_path"], # type: ignore
),
}
give a default value to the variable:
else:
renamed_paths_dict: CustomConnectorRenameDict = {
"old_path": os.path.join(
self.temp_dir, change["file_path"]
),
"new_path": os.path.join(
self.temp_dir,
change["new_file_path"] or "",
),
}
add an assertion at the beginning of the else statement (this will trigger a warning from bandit if you use it):
else:
assert change["new_file_path"] is not None
renamed_paths_dict: CustomConnectorRenameDict = {
"old_path": os.path.join(
self.temp_dir, change["file_path"]
),
"new_path": os.path.join(
self.temp_dir,
change["new_file_path"],
),
}
A:
Allow me to rewrite your question in such a way that gives a proper minimal reproducible example, throws out all the irrelevant things (unrelated to the actual problem) and keeps only the essentials.
Question
If the values in a dictionary are of the type str | None, but I know for certain that one of them is definitely a str (not None), how can I tell a static type checker? The following code produces an error with mypy:
import os
temp_dir = "tmp"
paths: dict[str, str | None] = {}
...
paths["new_file_path"] = "foo"
...
new_path = os.path.join(temp_dir, paths["new_file_path"])
The error:
Argument 2 to "join" has incompatible type "Optional[str]"; expected "str" [arg-type]
Answer
You tell the type checker to expect the value corresponding to the key "new_file_path" to be a str:
...
paths["new_file_path"] = "foo"
...
assert paths["new_file_path"] is not None
new_path = os.path.join(temp_dir, paths["new_file_path"])
Alternatively:
...
assert isinstance(paths["new_file_path"], str)
new_path = os.path.join(temp_dir, paths["new_file_path"])
If you don't want to write that extra type guard, you can always use a type: ignore, but you should always try and make those as narrow as possible by using the correct error code to silence:
new_path = os.path.join(temp_dir, paths["new_file_path"]) # type: ignore[arg-type]
But I would not go that route. The assertion has the added benefit of also giving you a clean and immediately obvious error, if you make a mistake somewhere and the new_file_path value happens to be None.
I would also absolutely not go the route of short-circuiting with paths["new_file_path"] or "some string". This is even more dangerous because it may introduce silent bugs into your code since you said that you expect the new_file_path value to be a string. If you make a mistake, the code would give you a path to tmp/some string without raising an error.
PS
Thanks to @SUTerliakov for pointing out that assertions about specific dictionary values are not entirely safe. If you want to be really precise and safe, you should use an intermediary variable for this:
...
new_file_path = paths["new_file_path"]
assert new_file_path is not None # isinstance(new_file_path, str)
new_path = os.path.join(temp_dir, new_file_path)
For the sake of completeness, you could also use typing.cast like this:
from typing import cast
...
new_path = os.path.join(temp_dir, cast(str, paths["new_file_path"]))
But this has essentially the same effect as a well placed and specific type: ignore, so I would still recommend the assert.
| Argument 2 to "join" has incompatible type "Optional[str]"; expected "str" | I'm running mypy pre commit hook to check for any possible type issues and it's keep giving me this error Argument 2 to "join" has incompatible type "Optional[str]"; expected "str" for the code below:
else:
renamed_paths_dict: CustomConnectorRenameDict = {
"old_path": os.path.join(
self.temp_dir, change["file_path"]
),
"new_path": os.path.join(
self.temp_dir,
change["new_file_path"], -> this is the line mypy is talking about
),
}
change["new_file_path"] can be either a string or None but in this specific else block, it'll be never None.
How can I fix this issue?
Thanks
| [
"You have multiple options:\n\nignore errors from mypy for this line by adding the comment # type: ignore:\n\nelse:\n renamed_paths_dict: CustomConnectorRenameDict = {\n \"old_path\": os.path.join(\n self.temp_dir, change[\"file_path\"]\n ),\n \"new_path\": os.path.join(\n self.temp_dir,\n change[\"new_file_path\"], # type: ignore\n ),\n }\n\n\ngive a default value to the variable:\n\nelse:\n renamed_paths_dict: CustomConnectorRenameDict = {\n \"old_path\": os.path.join(\n self.temp_dir, change[\"file_path\"]\n ),\n \"new_path\": os.path.join(\n self.temp_dir,\n change[\"new_file_path\"] or \"\",\n ),\n }\n\n\nadd an assertion at the beginning of the else statement (will bring a warning of bandit if you use it):\n\nelse:\n assert change[\"new_file_path\"] is not None\n renamed_paths_dict: CustomConnectorRenameDict = {\n \"old_path\": os.path.join(\n self.temp_dir, change[\"file_path\"]\n ),\n \"new_path\": os.path.join(\n self.temp_dir,\n change[\"new_file_path\"],\n ),\n }\n\n",
"Allow me to rewrite your question in such a way that gives a proper minimal reproducible example, throws out all the irrelevant things (unrelated to the actual problem) and keeps only the essentials.\nQuestion\nIf the values in a dictionary are of the type str | None, but I know for certain that one of them is definitely a str (not None), how can I tell a static type checker? The following code produces an error with mypy:\nimport os\n\n\ntemp_dir = \"tmp\"\n\npaths: dict[str, str | None] = {}\n...\npaths[\"new_file_path\"] = \"foo\"\n...\nnew_path = os.path.join(temp_dir, paths[\"new_file_path\"])\n\nThe error:\n\nArgument 2 to \"join\" has incompatible type \"Optional[str]\"; expected \"str\" [arg-type]\n\nAnswer\nYou tell the type checker to expect the value corresponding to the key \"new_file_path\" to be a str:\n...\npaths[\"new_file_path\"] = \"foo\"\n...\nassert paths[\"new_file_path\"] is not None\nnew_path = os.path.join(temp_dir, paths[\"new_file_path\"])\n\nAlternatively:\n...\nassert isinstance(paths[\"new_file_path\"], str)\nnew_path = os.path.join(temp_dir, paths[\"new_file_path\"])\n\nIf you don't want to write that extra type guard, you can always use a type: ignore, but you should always try and make those as narrow as possible by using the correct error code to silence:\nnew_path = os.path.join(temp_dir, paths[\"new_file_path\"]) # type: ignore[arg-type]\n\nBut I would not go that route. The assertion has the added benefit of also giving you a clean and immediately obvious error, if you make a mistake somewhere and the new_file_path value happens to be None.\nI would also absolutely not go the route of short-circuiting with paths[\"new_file_path\"] or \"some string\". This is even more dangerous because it may introduce silent bugs into your code since you said that you expect the new_file_path value to be a string. If you make a mistake, the code would give you a path to tmp/some string without raising an error.\n\nPS\nThanks to @SUTerliakov for pointing out that assertions about specific dictionary values are not entirely safe. If you want to be really precise and safe, you should use an intermediary variable for this:\n...\nnew_file_path = paths[\"new_file_path\"]\nassert new_file_path is not None # isinstance(new_file_path, str)\nnew_path = os.path.join(temp_dir, new_file_path)\n\nFor the sake of completeness, you could also use typing.cast like this:\nfrom typing import cast\n...\nnew_path = os.path.join(temp_dir, cast(str, paths[\"new_file_path\"]))\n\nBut this has essentially the same effect as a well placed and specific type: ignore, so I would still recommend the assert.\n"
] | [
2,
2
] | [] | [] | [
"mypy",
"python",
"type_hinting"
] | stackoverflow_0074625904_mypy_python_type_hinting.txt |
Q:
xlsxwriter - Why shorter strings occupy the same size as twice larger strings?
I'm writing data into xlsx with xlsxwriter. There are columns business_unit, creator_login_sap, etc. There are 130,000 records in the xlsx in total. business_unit and creator_login_sap are strings: business_unit has a constant length of 4 chars, while creator_login_sap has an average length of 10 chars.
import xlsxwriter
import io
output = io.BytesIO()
wb = xlsxwriter.Workbook(output)
ws = wb.add_worksheet()
columns = ['business_unit', 'creator_login_sap', ...]
data = [('BU01', 'ALNUDOVAN00'), ...]
for col_idx, column in enumerate(columns):
ws.write(0, col_idx, column)
for row_idx, row in enumerate(data, 1):
for col_idx, value in enumerate(row):
ws.write(row_idx, col_idx, value)
When I was trying to reduce the file size, I noticed that the business_unit and creator_login_sap columns weigh almost the same (~450 KB). This fact confused me.
Why does this happen? Is there maybe a way to make shorter strings occupy less space?
A:
The data is already compressed. xlsx is a ZIP package containing XML files. 130K rows in 450KB is less than 4 bytes per row. A text file with the same data would be 1.8MB. That's an impressive compression to 25% of the original size.
That said, it may be possible to reduce size even farther. You can test this by opening the file in Excel and saving it to a different file. If the result is smaller, there's room for improvement. Excel, the application, uses shared strings extensively to ensure the file is as small as possible. Instead of storing possibly repetitive strings in cells, it stores them in a Shared String resource and stores a reference to the shared value in the cell itself.
xlsxwriter already use Shared Strings to reduce the size. Other libraries don't do that, resulting in bigger files.
If you want to reduce the amount of RAM used at the expense of compression size, you can use the the constant_memory mode. This is explained in Working with Memory and Performance. This mode uses less memory by flushing each row and not using shared strings. Another restriction is that it doesn't allow you to modify data after it's written though, which results in formatting restrictions.
wb = xlsxwriter.Workbook(output,{'constant_memory': True})
...
From the docs:
The trade-off when using 'constant_memory' mode is that you won’t be able to take advantage of any new features that manipulate cell data after it is written. Currently the add_table() method doesn’t work in this mode and merge_range() and set_row() only work for the current row.
Please don't "optimize" without reason
I'm currently dealing with files containing 2K rows and 1M empty cells. Somehow, somewhere, someone tried to "optimize" something or other and ended up producing a 5MB file that Pandas has to process fully even though there's almost no data. Resaving such a 10MB file with Excel produces a 50KB file.
So think of the consumers of that file before rushing to "optimize" anything
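You can see the ZIP structure (including xl/sharedStrings.xml) for yourself; a quick sketch, with report.xlsx as a placeholder path:
import zipfile

with zipfile.ZipFile("report.xlsx") as zf:
    for info in zf.infolist():
        # uncompressed vs. compressed size of each member of the package
        print(f"{info.filename}: {info.file_size} B (compressed: {info.compress_size} B)")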
| xlsxwriter - Why shorter strings occupy the same size as twice larger strings? | I'm writing data into xlsx with xlsxwriter. There are columns business_unit, creator_login_sap, etc. Total records in xlsx 130 000. business_unit and creator_login_sap are strings. business_unit has constant length of 4 chars. creator_login_sap has average length of 10 chars.
import xlsxwriter
import io
output = io.BytesIO()
wb = xlsxwriter.Workbook(output)
ws = wb.add_worksheet()
columns = ['business_unit', 'creator_login_sap', ...]
data = [('BU01', 'ALNUDOVAN00'), ...]
for col_idx, column in enumerate(columns):
ws.write(0, col_idx, column)
for row_idx, row in enumerate(data, 1):
for col_idx, value in enumerate(row):
ws.write(row_idx, col_idx, value)
When I was trying to reduce file size I noticed that business_unit and creator_login_sap column weighs almost equal (~450 Kb). This fact confused me.
Why this happens? Maybe there is a way when shorter strings occupy less memory?
| [
"The data is already compressed. xlsx is a ZIP package containing XML files. 130K rows in 450KB is less than 4 bytes per row. A text file with the same data would be 1.8MB. That's an impressive compression to 25% of the original size.\nThat said, it may be possible to reduce size even farther. You can test this by opening the file in Excel and saving it to a different file. If the result is smaller, there's room for improvement. Excel, the application, uses shared strings extensively to ensure the file is as small as possible. Instead of storing possibly repetitive strings in cells, it stores them in a Shared String resource and stores a reference to the shared value in the cell itself.\nxlsxwriter already use Shared Strings to reduce the size. Other libraries don't do that, resulting in bigger files.\nIf you want to reduce the amount of RAM used at the expense of compression size, you can use the the constant_memory mode. This is explained in Working with Memory and Performance. This mode uses less memory by flushing each row and not using shared strings. Another restriction is that it doesn't allow you to modify data after it's written though, which results in formatting restrictions.\nwb = xlsxwriter.Workbook(output,{'constant_memory': True})\n...\n\nFrom the docs:\n\nThe trade-off when using 'constant_memory' mode is that you won’t be able to take advantage of any new features that manipulate cell data after it is written. Currently the add_table() method doesn’t work in this mode and merge_range() and set_row() only work for the current row.\n\nPlease don't \"optimize\" without reason\nI'm currently dealing with files containing 2K rows and 1M empty cells. Somehow, somewhere, someone tried to \"optimize\" something or other and ended up producing a 5MB file that Pandas has to process fully even though there's almost no data. Resaving such a 10MB file with Excel produces a 50KB file.\nSo think of the consumers of that file before rushing to \"optimize\" anything\n"
] | [
3
] | [] | [] | [
"python",
"xlsxwriter"
] | stackoverflow_0074625551_python_xlsxwriter.txt |
Q:
Does GCP have an API call to check resource availability?
We keep getting ZONE_RESOURCE_POOL_EXHAUSTED when we try to deploy VMs in the zones of the us-central1 region. Apparently, GCP doesn't have enough VMs to fill the request.
We tried other regions one by one (us-east1, us-east4, etc.); all returned the same error until we finally found that us-east5 has VMs available, and we're now temporarily using it.
Is there an API call to check GCP resource availability so we can directly deploy VMs in that zone?
For example,
# call
r = get_resources(machine_type, zone)
# returns:
[{
'service': 'compute_engine',
'machine_type': '2-standard4',
'available': True
'stock': 4560
}, ...
]
Note
Machine type we're using e2-custom-2-4096 (Custom E2 family machine).
The desired API call would check if GCP itself has resources in that zone or region, not the project quota!
A:
There is no such dashboard, API method, or feature that exposes resource availability in GCP.
But if you face issues like ZONE_RESOURCE_POOL_EXHAUSTED while creating resources, please follow the guidelines in the official document:
Troubleshooting errors that you might encounter while creating or updating VMs
| Does GCP have an API call to check resource availability? | We keep getting ZONE_RESOURCE_POOL_EXHAUSTED when we try to deploy VMs in the zones of the us-central1 region. Apparently, GCP doesn't have enough VMs to fill the request.
We tried other regions one by one us-east1, us-east4, etc. all returned the same error until finally found that us-east5 have VMs available and we're now temporarily using it.
Is there an API call to check GCP resource availability so we can directly deploy VMs in that zone?
For example,
# call
r = get_resources(machine_type, zone)
# returns:
[{
'service': 'compute_engine',
'machine_type': '2-standard4',
'available': True
'stock': 4560
}, ...
]
Note
Machine type we're using e2-custom-2-4096 (Custom E2 family machine).
The desired API call would check if GCP itself has resources in that zone or region, not the project quota!
| [
"No such dashboards, API method (or) Feature to give the resource availability in GCP.\nBut while creating resources if you face any issues like ZONE_RESOURCE_POOL_EXHAUSTED, please follow the guidelines mentioned in the official document for\nTroubleshooting errors that you might encounter while creating or updating VMs\n"
] | [
1
] | [] | [] | [
"gcloud",
"google_api_python_client",
"google_cloud_platform",
"google_compute_engine",
"python"
] | stackoverflow_0074624340_gcloud_google_api_python_client_google_cloud_platform_google_compute_engine_python.txt |
Q:
Tweepy (API V2) - Convert Response into dictionary
I want to get the information about the people followed by the Twitter account "POTUS" in a dictionary. My code:
import tweepy, json
client = tweepy.Client(bearer_token=x)
id = client.get_user(username="POTUS").data.id
users = client.get_users_following(id=id, user_fields=['created_at','description','entities','id', 'location', 'name', 'pinned_tweet_id', 'profile_image_url','protected','public_metrics','url','username','verified','withheld'], expansions=['pinned_tweet_id'], max_results=13)
This query returns the type "Response", which in turn stores the type "User":
Response(data=[<User id=7563792 name=U.S. Men's National Soccer Team username=USMNT>, <User id=1352064843432472578 name=White House COVID-19 Response Team username=WHCOVIDResponse>, <User id=1351302423273472012 name=Kate Bedingfield username=WHCommsDir>, <User id=1351293685493878786 name=Susan Rice username=AmbRice46>, ..., <User id=1323730225067339784 name=The White House username=WhiteHouse>], includes={}, errors=[], meta={'result_count': 13})
I've tried ._json and .json(), but neither worked.
Does anyone have any idea how I can convert this response into a dictionary object to work with?
Thanks in advance
A:
Found the solution! Adding return_type=dict to the client will return everything as a dictionary!
client = tweepy.Client(bearer_token=x, return_type=dict)
However, you then have to change the line to get the User ID a bit:
id = client.get_user(username="POTUS")['data']['id']
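With return_type=dict, the get_users_following response then comes back as a plain dict as well; a short sketch of iterating it (the extra user_fields from the question are omitted for brevity):
users = client.get_users_following(id=id, max_results=13)

# the raw v2 payload keeps the followed accounts under the "data" key
for user in users.get("data", []):
    print(user["id"], user["name"], user["username"])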
A:
You can do
previous_cursor, next_cursor = None, 0
while previous_cursor != next_cursor:
followed_data = api.get_friend_ids(username = "POTUS", cursor = next_cursor)
previous_cursor, next_cursor = next_cursor, followed_data["next_cursor"]
followed_ids = followed_data["id"] #this is a list
# do something with followed_ids like writing them to a file
to get the user ids of the followed accounts.
If you want the usernames and not the ids, you can do something very similar with api.get_friends() but this returns fewer items at a time so if you plan to follow those accounts, using the ids will probably be quicker.
| Tweepy (API V2) - Convert Response into dictionary | I want to get the information about the people followed by the Twitter account "POTUS" in a dictionary. My code:
import tweepy, json
client = tweepy.Client(bearer_token=x)
id = client.get_user(username="POTUS").data.id
users = client.get_users_following(id=id, user_fields=['created_at','description','entities','id', 'location', 'name', 'pinned_tweet_id', 'profile_image_url','protected','public_metrics','url','username','verified','withheld'], expansions=['pinned_tweet_id'], max_results=13)
This query returns the type "Response", which in turn stores the type "User":
Response(data=[<User id=7563792 name=U.S. Men's National Soccer Team username=USMNT>, <User id=1352064843432472578 name=White House COVID-19 Response Team username=WHCOVIDResponse>, <User id=1351302423273472012 name=Kate Bedingfield username=WHCommsDir>, <User id=1351293685493878786 name=Susan Rice username=AmbRice46>, ..., <User id=1323730225067339784 name=The White House username=WhiteHouse>], includes={}, errors=[], meta={'result_count': 13})
I've tried ._json and .json() but both didn't work.
Does anyone have any idea how I can convert this response into a dictionary object to work with?
Thanks in advance
| [
"Found the soloution! Adding return_type=dict to the client will return everything as a dictionary!\nclient = tweepy.Client(bearer_token=x, return_type=dict)\n\nHowever, you then have to change the line to get the User ID a bit:\nid = client.get_user(username=\"POTUS\")['data']['id']\n\n",
"You can do\nprevious_cursor, next_cursor = None, 0\n\nwhile previous_cursor != next_cursor:\n followed_data = api.get_friend_ids(username = \"POTUS\", cursor = next_cursor)\n previous_cursor, next_cursor = next_cursor, followed_data[\"next_cursor\"]\n followed_ids = followed_data[\"id\"] #this is a list\n # do something with followed_ids like writing them to a file\n\nto get the user ids of the followed accounts.\nIf you want the usernames and not the ids, you can do something very similar with api.get_friends() but this returns fewer items at a time so if you plan to follow those accounts, using the ids will probably be quicker.\n"
] | [
0,
0
] | [] | [] | [
"dictionary",
"python",
"tweepy",
"twitter",
"twitter_api_v2"
] | stackoverflow_0074620166_dictionary_python_tweepy_twitter_twitter_api_v2.txt |
Q:
I'm not sure if this solution for my homework is right
So, the homework is: I need to write code that lets the user enter 3 numbers (this part is done). Then my code should compare those numbers with each other (I think I know this too). But the hardest part is: if the first number is the greatest, the code should print 1st: true; if the second one is the greatest, it should print 2nd: true, and so on. Also, I can't use strings, if/else and the like; the only things I can use are operators, variables, input and typecasting.
I came up with this idea:
first = int(input('Write first number: '))
second = int(input('Write second number: '))
third = int(input ('Write third number: '))
print (f'1st: {first > second and first> third}')
print (f'2nd: {second > first and second > third}')
print (f'3rd: { third > first and third > second}')
A:
use the code below:
first = int(input('Write first number: '))
second = int(input('Write second number: '))
third = int(input ('Write third number: '))
temp=first > second and first> third and print("1st:True")
temp=second > first and second > third and print("2st:True")
temp=third > first and third > second and print("3st:True")
have fun :)
A:
Looks like a pretty specific and restrictive exercise. In your case, one could index a tuple with the condition (a ternary-operator-style trick) to eliminate the if/else statement:
(false_value, true_value)[conditional_expression]
In the example below, I am assuming you would want to output False in the case that the conditions are not met. If this is not the case, I would suggest leaving it a blank string, i.e ""
first = int(input('Write first number: '))
second = int(input('Write second number: '))
third = int(input('Write third number: '))
first_result = ("1st: False", "1st: True")[(first > second) and (first > third)]
second_result = ("2nd: False", "2nd: True")[(second > first) and (second > third)]
third_result = ("3rd: False", "3rd: True")[(third > first) and (third > second)]
print(first_result)
print(second_result)
print(third_result)
| I'm not sure if this solution for my homework is right | So, the homework is: I need to write a code that will let the user to enter 3 numbers(this part is done.). Then my code should compare those numbers with each other(I think I know this too). But the hardest part is: if the first num is greater then code should print 1st: true, if the second one is greater it should print 2nd: true and so on. Also I can't use strings, if else and others the only thing I can use are operators,variables, input and typecasting.
I came up with this idea:
first = int(input('Write first number: '))
second = int(input('Write second number: '))
third = int(input ('Write third number: '))
print (f'1st: {first > second and first> third}')
print (f'2nd: {second > first and second > third}')
print (f'3rd: { third > first and third > second}')
| [
"use the code below:\nfirst = int(input('Write first number: '))\nsecond = int(input('Write second number: '))\nthird = int(input ('Write third number: '))\ntemp=first > second and first> third and print(\"1st:True\")\ntemp=second > first and second > third and print(\"2st:True\")\ntemp=third > first and third > second and print(\"3st:True\")\n\nhave fun :)\n",
"Looks like a pretty specific and restrictive application. In your case, one could go with a ternary operator to eliminate the if else statement:\n(false_value, true_value)[conditional_expression]\n\nIn the example below, I am assuming you would want to output False in the case that the conditions are not met. If this is not the case, I would suggest leaving it a blank string, i.e \"\"\nfirst = int(input('Write first number: '))\nsecond = int(input('Write second number: '))\nthird = int(input('Write third number: '))\n\nfirst_result = (\"1st: False\", \"1st: True\")[(first > second) and (first > third)]\nsecond_result = (\"2nd: False\", \"2nd: True\")[(second > first) and (second > third)]\nthird_result = (\"3rd: False\", \"3rd: True\")[(third > first) and (third > second)]\n\nprint(first_result)\nprint(second_result)\nprint(third_result)\n\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074625870_python.txt |
Q:
Fill different pandas columns based upon a list
I want to fill multiple columns with different values.
I have a df that looks as such:
df
'A' 'B' 'C'
0 1 dog red
1 5 cat yellow
2 4 moose blue
I would like to overwrite the columns based upon list values and so would look like this:
overwrite = [0, cat, orange]
df
'A' 'B' 'C'
0 0 cat orange
1 0 cat orange
2 0 cat orange
Is there an easy way to do this?
Thanks
A:
Simply assign the values to the columns; they will be broadcast:
overwrite = [0, 'cat', 'orange']
df[['A', 'B', 'C']] = overwrite
Or maybe, if the overwrite list can be shorter than the number of columns:
df.iloc[:, :len(overwrite)] = overwrite
# or
df[df.columns[:len(overwrite)]] = overwrite
Output:
A B C
0 0 cat orange
1 0 cat orange
2 0 cat orange
| Fill different pandas columns based upon a list | I want to fill multiple columns with different values.
I have a df that looks as such:
df
'A' 'B' 'C'
0 1 dog red
1 5 cat yellow
2 4 moose blue
I would like to overwrite the columns based upon list values and so would look like this:
overwrite = [0, cat, orange]
df
'A' 'B' 'C'
0 0 cat orange
1 0 cat orange
2 0 cat orange
Is there an easy way to do this?
Thanks
| [
"Simply assign the value to the columns, they will be broadcasted:\noverwrite = [0, 'cat', 'orange']\ndf[['A', 'B', 'C']] = overwrite\n\nOr maybe, if the overwrite list can be shorter than the number of columns:\ndf.iloc[:, :len(overwrite)] = overwrite\n\n# or\ndf[df.columns[:len(overwrite)]] = overwrite\n\nOutput:\n A B C\n0 0 cat orange\n1 0 cat orange\n2 0 cat orange\n\n"
] | [
0
] | [
"Use DataFrame.assign with dictionary to overwrite the columns based upon list:\ndf = df.assign(**dict(zip(df.columns, overwrite)))\nprint (df)\n 'A' 'B' 'C'\n0 0 cat orange\n1 0 cat orange\n2 0 cat orange\n\nOr create DataFrame by constructor - values are not overwritten, but created new one with same columns and index like original DataFrame:\ndf = pd.DataFrame([overwrite], index=df.index, columns=df.columns)\nprint (df)\n 'A' 'B' 'C'\n0 0 cat orange\n1 0 cat orange\n2 0 cat orange\n\n",
"Try this...\ndf = pd.DataFrame({'A':[1,2,3],'B':['cat','dog','bird']})\noverwrite = [0,'Elk']\nfor index,row in df.iterrows():\n df.iloc[index,:] = overwrite\ndf\n\n"
] | [
-1,
-3
] | [
"pandas",
"python"
] | stackoverflow_0074626345_pandas_python.txt |
Q:
How to compute average of image using Numpy and OpenCV
For one of my projects at university, I wish to use Python to select an image based on which is more salient.
To do this I know I will first have to use OpenCv's Saliency Detection. But after the output, where I am left with an image with its saliency map, how do I compute the average saliency in the image? This would allow me to compare two images, and make a definitive decision on which is more salient.
I was advised I could use Numpy for this but unsure of how to actually implement such a thing. (I'm new to Python)
A:
You are probably overthinking this. To the computer, an image is just an integer matrix.
To get an average value, compute the mean: https://numpy.org/doc/stable/reference/generated/numpy.mean.html
a = np.array([[1, 2], [3, 4]]) # this would be your image
m = np.mean(a)
Or count all white pixel and divide by the size of the image for a binary image: https://numpy.org/doc/stable/reference/generated/numpy.count_nonzero.html
a = np.array([[0, 1, 7, 0],[3, 0, 2, 19]]) # your image here
ct = np.count_nonzero(a)
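Putting the two steps together for the actual use case, a minimal sketch might look like this (the saliency maps are assumed to already exist on disk as grayscale images; the file names are made up):
import cv2
import numpy as np
map_a = cv2.imread("saliency_a.png", cv2.IMREAD_GRAYSCALE)  # saliency map of image A
map_b = cv2.imread("saliency_b.png", cv2.IMREAD_GRAYSCALE)  # saliency map of image B
mean_a = np.mean(map_a)
mean_b = np.mean(map_b)
# higher mean saliency -> "more salient" overall, by this simple criterion
print("A" if mean_a > mean_b else "B")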
| How to compute average of image using Numpy and OpenCV | For one of my projects at university, I wish to use Python to select an image based on which is more salient.
To do this I know I will first have to use OpenCv's Saliency Detection. But after the output, where I am left with an image with its saliency map, how do I compute the average saliency in the image? This would allow me to compare two images, and make a definitive decision on which is more salient.
I was advised I could use Numpy for this but unsure of how to actually implement such a thing. (I'm new to Python)
| [
"You are probably overthinking this. To the computer, an image is just a integer matrix.\nTo get an average value, compute the mean: https://numpy.org/doc/stable/reference/generated/numpy.mean.html\na = np.array([[1, 2], [3, 4]]) # this would be your image\nm = np.mean(a)\n\nOr count all white pixel and divide by the size of the image for a binary image: https://numpy.org/doc/stable/reference/generated/numpy.count_nonzero.html\na = np.array([[0, 1, 7, 0],[3, 0, 2, 19]]) # your image here\nct = np.count_nonzero(a)\n\n"
] | [
0
] | [] | [] | [
"numpy",
"object_detection",
"opencv",
"python"
] | stackoverflow_0074625896_numpy_object_detection_opencv_python.txt |
Q:
How to limit index column width/height when displaying a pandas dataframe?
I have a dataframe that looks like this:
df = pd.DataFrame(data=list(range(0,10)),
index=pd.MultiIndex.from_product([[str(list(range(0,1000)))],list(range(0,10))],
names=["ind1","ind2"]),
columns=["col1"])
df['col2']=str(list(range(0,1000)))
Unfortunately, the display of the above dataframe looks like this:
If I try to set: pd.options.display.max_colwidth = 5, then col2 behaves and it is displayed in a single row, but ind1 doesn't behave:
Since ind1 is part of a multiindex, I don't care it occupies multiple rows, but I would like to limit itself in width. If I could prescribe for each row to also occupy at most the height of a single line, that would be great as well. I don't care that individual cells are being truncated on display, because I prefer to have to scroll less, in any direction, to see a cell.
I am aware I can create my own HTML display. That's great and all, but I think it's too complex for my use case of just wanting smaller width columns for data analysis in jupyter notebooks. Nevertheless, such a solution might help other similar use cases, if you are inclined to write one.
What I'm looking for is some setting, which I thought it's pd.options.display.max_colwidth, that limits the column width, even if it's an index. Something that would disable wrapping for long texts would probably help with the same issue as well.
I also tried to just print without the index df.style.hide_index(), in combination with pd.options.display.max_colwidth = 5, but then col2 stops behaving:
About now I run out of ideas. Any suggestions?
A:
Here is one way to do it:
import pandas as pd
df = pd.DataFrame(
data=list(range(0, 10)),
index=pd.MultiIndex.from_product(
[[str(list(range(0, 1000)))], list(range(0, 10))], names=["ind1", "ind2"]
),
columns=["col1"],
)
df["col2"] = str(list(range(0, 1000)))
In the next Jupyter cell, run:
df.style.set_properties(**{"width": "10"}).set_table_styles(
[{"selector": "th", "props": [("vertical-align", "top")]}]
)
Which outputs:
| How to limit index column width/height when displaying a pandas dataframe? | I have a dataframe that looks like this:
df = pd.DataFrame(data=list(range(0,10)),
index=pd.MultiIndex.from_product([[str(list(range(0,1000)))],list(range(0,10))],
names=["ind1","ind2"]),
columns=["col1"])
df['col2']=str(list(range(0,1000)))
Unfortunately, the display of the above dataframe looks like this:
If I try to set: pd.options.display.max_colwidth = 5, then col2 behaves and it is displayed in a single row, but ind1 doesn't behave:
Since ind1 is part of a multiindex, I don't care it occupies multiple rows, but I would like to limit itself in width. If I could prescribe for each row to also occupy at most the height of a single line, that would be great as well. I don't care that individual cells are being truncated on display, because I prefer to have to scroll less, in any direction, to see a cell.
I am aware I can create my own HTML display. That's great and all, but I think it's too complex for my use case of just wanting smaller width columns for data analysis in jupyter notebooks. Nevertheless, such a solution might help other similar use cases, if you are inclined to write one.
What I'm looking for is some setting, which I thought it's pd.options.display.max_colwidth, that limits the column width, even if it's an index. Something that would disable wrapping for long texts would probably help with the same issue as well.
I also tried to just print without the index df.style.hide_index(), in combination with pd.options.display.max_colwidth = 5, but then col2 stops behaving:
About now I run out of ideas. Any suggestions?
| [
"Here is one way to do it:\nimport pandas as pd\n\ndf = pd.DataFrame(\n data=list(range(0, 10)),\n index=pd.MultiIndex.from_product(\n [[str(list(range(0, 1000)))], list(range(0, 10))], names=[\"ind1\", \"ind2\"]\n ),\n columns=[\"col1\"],\n)\ndf[\"col2\"] = str(list(range(0, 1000)))\n\nIn the next Jupyter cell, run:\ndf.style.set_properties(**{\"width\": \"10\"}).set_table_styles(\n [{\"selector\": \"th\", \"props\": [(\"vertical-align\", \"top\")]}]\n)\n\nWhich outputs:\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"pandas",
"pandas_styles",
"python",
"visualization"
] | stackoverflow_0074509227_dataframe_pandas_pandas_styles_python_visualization.txt |
Q:
Unable to load the custom template tags in django
templatetags : myapp_extras.py
from django import template
register = template.Library()
@register.simple_tag
def my_url(value,field_name,urlencode=None):
url = '?{}={}'.format(field_name,value)
if urlencode:
querystring = urlencode.split('&')
filtered_querystring = filter(lambda p:p.split('=')[0]!=field_name,querystring)
encoded_querystring = '&'.join(filtered_querystring)
url = '{}&{}'.format(url,encoded_querystring)
return url
home.html
{% load myapp_extras %}
.
.
.
<div class="pagination">
<span class="step-links">
{% if page_obj.has_previous %}
<a href="{% my_url 1 'page' request.GET.urlencode%}">« first</a>
<a href="{% my_url page_obj.previous_page_number 'page' request.GET.urlencode%}">previous</a>
{% endif %}
<span class="current">
Page {{ page_obj.number }} of {{ page_obj.paginator.num_pages }}.
</span>
{% if page_obj.has_next %}
<a href="{% my_url page_obj.next_page_number 'page' request.GET.urlencode%}">next</a>
<a href="{% my_url page_obj.paginator.num_pages 'page' request.GET.urlencode%}">last »</a>
{% endif %}
</span>
</div>
settings.py
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'facligoapp'
]
views.py
def do_paginator(get_records_by_date,request):
paginator = Paginator(get_records_by_date,10)
page_number = request.GET.get('page', 1)
try:
page_obj = paginator.get_page(page_number)
except PageNotAnInteger:
page_obj = paginator.page(1)
except EmptyPage:
page_obj = paginator.page(paginator.num_pages)
return page_obj
:
:
if new_records_check_box_status is None and error_records_check_box_status is None:
get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date))
get_records_by_date = check_drop_down_status(get_records_by_date,drop_down_status)
get_records_by_date = do_paginator(get_records_by_date,request)
Based on my template tags, when I filter the data the URL should change. But the URL is not changing and the template tags are not working. I have also created the __init__.py in the templatetags directory. Is there any solution to change the structure of the URL the way the template tags are supposed to? When I change to the next page the URL is not changing.
A:
instead of this:
@register.simple_tag
try this:
@register.filter
And load it in template:
{% load filter_tags %}
Note: you must create an empty __init__.py file inside the templatetags directory.
After making the above changes, you need to add this tag in the settings.py file:
In settings.py file:
'libraries':{
'filter_tags': 'templatetags.filter',
}
If the templatetags directory is inside an app then you must add that app name in libraries.
'libraries':{
'filter_tags': 'your_appanme.templatetags.filter',
}
EX:
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
'libraries':{
'filter_tags': 'templatetags.filter', #added here
}
},
},
]
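Two quick sanity checks that are easy to miss, independent of whether a simple tag or a filter is used (the app name facligoapp is taken from the question's INSTALLED_APPS; the layout is a sketch):
facligoapp/
    templatetags/
        __init__.py       # must exist, an empty file is fine
        myapp_extras.py   # defines register = template.Library() and my_url
Also restart the development server after creating the templatetags package or adding a new tag module - Django only discovers template tag libraries at startup, so {% load myapp_extras %} keeps failing until the process is restarted.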
| Unable to load the custom template tags in django | templatetags : myapp_extras.py
from django import template
register = template.Library()
@register.simple_tag
def my_url(value,field_name,urlencode=None):
url = '?{}={}'.format(field_name,value)
if urlencode:
querystring = urlencode.split('&')
filtered_querystring = filter(lambda p:p.split('=')[0]!=field_name,querystring)
encoded_querystring = '&'.join(filtered_querystring)
url = '{}&{}'.format(url,encoded_querystring)
return url
home.html
{% load myapp_extras %}
.
.
.
<div class="pagination">
<span class="step-links">
{% if page_obj.has_previous %}
<a href="{% my_url 1 'page' request.GET.urlencode%}">« first</a>
<a href="{% my_url page_obj.previous_page_number 'page' request.GET.urlencode%}">previous</a>
{% endif %}
<span class="current">
Page {{ page_obj.number }} of {{ page_obj.paginator.num_pages }}.
</span>
{% if page_obj.has_next %}
<a href="{% my_url page_obj.next_page_number 'page' request.GET.urlencode%}">next</a>
<a href="{% my_url page_obj.paginator.num_pages 'page' request.GET.urlencode%}">last »</a>
{% endif %}
</span>
</div>
settings.py
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'facligoapp'
]
views.py
def do_paginator(get_records_by_date,request):
paginator = Paginator(get_records_by_date,10)
page_number = request.GET.get('page', 1)
try:
page_obj = paginator.get_page(page_number)
except PageNotAnInteger:
page_obj = paginator.page(1)
except EmptyPage:
page_obj = paginator.page(paginator.num_pages)
return page_obj
:
:
if new_records_check_box_status is None and error_records_check_box_status is None:
get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date))
get_records_by_date = check_drop_down_status(get_records_by_date,drop_down_status)
get_records_by_date = do_paginator(get_records_by_date,request)
Based on the my templates tags when I filter the datas the url should change. But the url is not changing and template tags is not working. I had created the init.py in template tags also. Is there any solution to change the structure of url as by templete tags does. When I change the next page the url is not changing.
| [
"instead of this:\[email protected]_tag\n\ntry this:\[email protected]\n\nAnd load it in template:\n{% load filter_tags %}\n\nNote: you must create empty init.py file inside templatetags directory.\nAfter making above changes, you need to add this tag in settings.py file:\nIn settings.py file:\n'libraries':{\n 'filter_tags': 'templatetags.filter',\n }\n\nIf templatetags directory inside an app then you must add that appname in libraries.\n 'libraries':{\n 'filter_tags': 'your_appanme.templatetags.filter',\n }\n\nEX:\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [os.path.join(BASE_DIR, 'templates')],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n 'libraries':{\n 'filter_tags': 'templatetags.filter', #added here\n }\n },\n },\n]\n\n"
] | [
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074626386_django_python.txt |
Q:
I am trying to merge data based on their dates
I am trying to achieve the following data frame format.
The first dataset is the results data with the date in a DateTime format whereas the second dataset is the rank_date in an object format. How do I merge the data based on their dates?
rank = rank.set_index(['rank_date']).groupby(['country_full'], group_keys=False).resample('D').first().fillna(method='ffill').reset_index()
This is the following error I get:
TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index'
Thank you in advance
A:
Probably, rank_date's type is string-object, not datetime:
rank['rank_date']=pd.to_datetime(rank['rank_date'])
rank = rank.set_index(['rank_date']).groupby(['country_full'], group_keys=False).resample('D').first().fillna(method='ffill').reset_index()
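Once both columns are real datetimes, the merge itself is straightforward. A minimal sketch (the results frame and its "date" column name are assumptions based on the question's description):
import pandas as pd
rank["rank_date"] = pd.to_datetime(rank["rank_date"])
results["date"] = pd.to_datetime(results["date"])   # "date" is an assumed column name
merged = results.merge(
    rank,
    left_on="date",          # add country key columns here if needed
    right_on="rank_date",
    how="left",
)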
| I am trying to merge data based on their dates | I am trying to achieve the following data frame format.
The first dataset is the results data with the date in a DateTime format whereas the second dataset is the rank_date in an object format. How do I merge the data based on their dates?
rank = rank.set_index(['rank_date']).groupby(['country_full'], group_keys=False).resample('D').first().fillna(method='ffill').reset_index()
This is the following error I get:
TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index'
Thank you in advance
| [
"Probably, rank_date's type is string-object, not datetime:\nrank['rank_date']=pd.to_datetime(rank['rank_date'])\nrank = rank.set_index(['rank_date']).groupby(['country_full'], group_keys=False).resample('D').first().fillna(method='ffill').reset_index()\n\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"merge",
"python"
] | stackoverflow_0074621537_dataframe_merge_python.txt |
Q:
Unsure why output is 0. Trying to count months to pay downpayment
print("Please enter you starting annual salary: ")
annual_salary = float(input())
monthly_salary = annual_salary/12
print("Please enter your portion of salary to be saved: ")
portion_saved = float(input())
print ("Please enter the cost of your dream home: ")
total_cost = float(input())
current_savings = 0
r = 0.04/12
n = 0
portion_down_payment = total_cost*int(.25)
if current_savings < portion_down_payment:
monthly_savings = monthly_salary*portion_saved
interest = monthly_savings*r
current_savings = current_savings + monthly_savings + interest
n =+ 1
else:
print(n)
The above is my code. I keep getting output = 0 but unsure why.
This is the problem statement; I am a HS student attempting OCW coursework.
Call the cost of your dream home total_cost.
Call the portion of the cost needed for a down payment portion_down_payment. For simplicity, assume that portion_down_payment = 0.25 (25%).
Call the amount that you have saved thus far current_savings. You start with a current savings of $0.
Assume that you invest your current savings wisely, with an annual return of r (in other words, at the end of each month, you receive an additional current_savings*r/12 funds to put into your savings – the 12 is because r is an annual rate). Assume that your investments earn a return of r = 0.04 (4%).
Assume your annual salary is annual_salary.
Assume you are going to dedicate a certain amount of your salary each month to saving for the down payment. Call that portion_saved. This variable should be in decimal form (i.e. 0.1 for 10%).
At the end of each month, your savings will be increased by the return on your investment, plus a percentage of your monthly salary (annual salary / 12). Write a program to calculate how many months it will take you to save up enough money for a down payment. You will want your main variables to be floats, so you should cast user inputs to floats.
Your program should ask the user to enter the following variables:
The starting annual salary (annual_salary)
The portion of salary to be saved (portion_saved)
The cost of your dream home (total_cost)
Test Case 1
Enter your annual salary: 120000
Enter the percent of your salary to save, as a decimal: .10
Enter the cost of your dream home: 1000000
Number of months: 183
A:
You have n =+ 1 but I think you mean n += 1
Also int(.25) evaluates to 0, I think you want int(total_cost*.25). As your code is, the if statement will always evaluate to False because current_savings == 0 and portion_down_payment == 0
More generally, when your code isn't working as expected, you should put in either print() or assert statements to narrow down where your code is deviating from what you expect. For example, before the if statement you could have it print the two values you are comparing.
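Putting the fixes together, and replacing the single if with a while loop so the check repeats month after month, a minimal corrected sketch might look like this (it follows the problem statement's interest rule, i.e. the return is earned on current_savings):
annual_salary = float(input("Please enter your starting annual salary: "))
portion_saved = float(input("Please enter your portion of salary to be saved: "))
total_cost = float(input("Please enter the cost of your dream home: "))
monthly_salary = annual_salary / 12
portion_down_payment = total_cost * 0.25   # not int(.25), which is 0
current_savings = 0.0
r = 0.04
n = 0
while current_savings < portion_down_payment:
    current_savings += current_savings * r / 12        # monthly investment return
    current_savings += monthly_salary * portion_saved  # monthly deposit
    n += 1                                             # not n =+ 1
print("Number of months:", n)
With the test case from the question (120000, .10, 1000000) this loop reaches the down payment after 183 months.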
| Unsure why output is 0. Trying to count months to pay downpayment | print("Please enter you starting annual salary: ")
annual_salary = float(input())
monthly_salary = annual_salary/12
print("Please enter your portion of salary to be saved: ")
portion_saved = float(input())
print ("Please enter the cost of your dream home: ")
total_cost = float(input())
current_savings = 0
r = 0.04/12
n = 0
portion_down_payment = total_cost*int(.25)
if current_savings < portion_down_payment:
monthly_savings = monthly_salary*portion_saved
interest = monthly_savings*r
current_savings = current_savings + monthly_savings + interest
n =+ 1
else:
print(n)
The above is my code. I keep getting output = 0 but unsure why.
This the problem statement, I am a HS student attempting OCW coursework.
Call the cost of your dream home total_cost.
Call the portion of the cost needed for a down payment portion_down_payment. For simplicity, assume that portion_down_payment = 0.25 (25%).
Call the amount that you have saved thus far current_savings. You start with a current savings of $0.
Assume that you invest your current savings wisely, with an annual return of r (in other words, at the end of each month, you receive an additional current_savings*r/12 funds to put into your savings – the 12 is because r is an annual rate). Assume that your investments earn a return of r = 0.04 (4%).
Assume your annual salary is annual_salary.
Assume you are going to dedicate a certain amount of your salary each month to saving for the down payment. Call that portion_saved. This variable should be in decimal form (i.e. 0.1 for 10%).
At the end of each month, your savings will be increased by the return on your investment, plus a percentage of your monthly salary (annual salary / 12). Write a program to calculate how many months it will take you to save up enough money for a down payment. You will want your main variables to be floats, so you should cast user inputs to floats.
Your program should ask the user to enter the following variables:
The starting annual salary (annual_salary)
The portion of salary to be saved (portion_saved)
The cost of your dream home (total_cost)
Test Case 1
Enter your annual salary: 120000 Enter the percent of your salary to save, as a decimal: .10 Enter the cost of your dream home: 1000000 Number of months: 183
| [
"You have n =+ 1 but I think you mean n += 1\nAlso int(.25) evaluates to 0, I think you want int(total_cost*.25). As your code is, the if statement will always evaluate to False because current_savings == 0 and portion_down_payment == 0\nMore generally, when your code isn't working as expected, you should put in either print() or assert statements to narrow down where your code is deviating from what you expect. For example, before the if statement you could have it print the two values you are comparing.\n"
] | [
1
] | [] | [] | [
"control_flow",
"if_statement",
"python"
] | stackoverflow_0074626524_control_flow_if_statement_python.txt |
Q:
How can I determine which element in a matrix is closest to a given point using numpy?
I have a matrix data of (x,y) coordinates which looks like this:
array([[3,4], [10,4], [1,3], [5,8]])
I want to write a piece of code that, given a numpy array with generic coordinates (x,y), finds the index of the row of the matrix which corresponds to the closest point to (x,y) (in terms of euclidean distance).
So far what I have done is :
point = np.asarray([x, y])
closest_pt_idx = np.argmin(np.linalg.norm(np.subtract(data, point), axis=1))
Which for some reason doesn't seem to work well. What am I doing wrong?
A:
There is probably a single-liner for this.
import numpy as np
data = np.array([[3,4],[10,4],[1,3],[5,8],[2,3]])
point = np.tile([2,3], (len(data),1))
closest_pt_idx = np.argmin(np.linalg.norm(data-point,axis=1))
print(np.linalg.norm(data-point,axis=1),closest_pt_idx)
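For reference, the np.tile step is not strictly required: broadcasting already subtracts a 1-D point from every row, so the approach from the question works as long as point is a plain length-2 array:
import numpy as np
data = np.array([[3, 4], [10, 4], [1, 3], [5, 8]])
point = np.array([2, 3])
closest_pt_idx = np.argmin(np.linalg.norm(data - point, axis=1))
print(closest_pt_idx)   # 2 -> row [1, 3] is the nearest to (2, 3)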
| How can I determine which element in a matrix is closest to a given point using numpy? | I have a matrix data of (x,y) coordinates which looks like this:
array([[3,4], [10,4], [1,3], [5,8]])
I want to write a piece of code that, given a numpy array with generic coordinates (x,y), finds the index of the row of the matrix which corresponds to the closest point to (x,y) (in terms of euclidean distance).
So far what I have done is :
point = np.asarray([x, y])
closest_pt_idx = np.argmin(np.linalg.norm(np.subtract(data, point), axis=1))
Which for some reason doesnt seem to work well. What am i doing wrong?
| [
"There is probably a single-liner for this.\nimport numpy as np\ndata = np.array([[3,4],[10,4],[1,3],[5,8],[2,3]])\npoint = np.tile([2,3], (len(data),1))\nclosest_pt_idx = np.argmin(np.linalg.norm(data-point,axis=1))\nprint(np.linalg.norm(data-point,axis=1),closest_pt_idx)\n\n"
] | [
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074626343_numpy_python.txt |
Q:
How to get an executable Python path out of an Anaconda environment?
I am trying to profile my pyopencl project with CodeXL, and in order to work with .py files I can't think of anything better than pointing it at python.exe and passing the path to the script as an argument. What makes things complicated is my use of an Anaconda virtual environment to resolve conflicts between modules: this makes it impossible to simply point CodeXL at a Python executable in some virtual environment - as far as I understand, the environment has to be activated first, and CodeXL does not support this.
A:
You can find the location of your python exe using:
where python
Since you're using Anaconda, you can also try:
where anaconda
You'll find the python exe in the parent directory of the result.
If this isn't what you need, you can find more info here.
A:
The Python executable of a conda, virtualenv or venv environment is generally located at
$ENVPATH/bin/python
EDIT: on Windows should be instead (at least for venv)
$ENVPATH\Scripts\python.exe
where $ENVPATH is the environment path. To get a list of the environments you created along with their paths you can run (from the terminal)
conda env list
Alternatively, if you are using a Python interpreter and you want to know where its executable is located, you can run
import sys
print(sys.executable)
A:
This should give you a list of all the environments and their respective paths.
conda env list
Copy the path of your desired environment, append /bin to it, and call the Python version there (either python3 or python2).
In the end, your path should look like /<your_env_path>/bin/python.
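If the profiler can only be pointed at a single executable plus arguments, another option (assuming a reasonably recent conda) is to launch through conda run, which activates the environment for you; the environment name here is just a placeholder:
conda run -n myenv python path\to\script.py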
| How to get an executable Python path out of an Anaconda environment? | I am trying to profile my pyopencl project with CodeXL, and in order to work with .py files. I can't think of anything better than pointing at Python.exe and passing path to script as an argument. What makes things complicated is my use of Anaconda virtual environment to resolve conflicts between modules, because this way it is impossible to simply point CodeXL at a python executable in some virtual environment - as far as I understand, this environment has to be activated first, and CodeXL does not support this.
| [
"You can find the location of your python exe using: \nwhere python\n\nSince you're using Anaconda, you can also try:\nwhere anaconda\n\nYou'll find the python exe in the parent directory of the result.\nIf this isn't what you need, you can find more info here.\n",
"The Python executable of a conda, virtualenv or venv environment is generally located at\n$ENVPATH/bin/python\n\nEDIT: on Windows should be instead (at least for venv)\n$ENVPATH\\Scripts\\python.exe\n\nwhere $ENVPATH is the environment path. To get a list of the environments you created along with their paths you can run (from the terminal)\nconda env list\n\nAlternatively, if you are using a Python interpreter and you want to know where its executable is located, you can run\nimport sys\nprint(sys.executable)\n\n",
"\nThis should give you a list of all the environments and their respective paths.\n\nconda env list\n\n\ncopy the path of your desired environment and append /bin to it and call the python version, can be either python3 or python2\n\nIn the end, your path should look like for python2 /<your_env_path>/bin/python and this for python3 /<your_env_path>/bin/python\n\n\n"
] | [
2,
2,
0
] | [] | [] | [
"anaconda",
"codexl",
"python",
"windows_10"
] | stackoverflow_0054921271_anaconda_codexl_python_windows_10.txt |
Q:
How can I remove this specific section of a string without also removing the 'm' in the list
This is my code now:
def extract_categories(line: str):
new_line = re.sub('[ +++$+++]', '', line)
newer_line
print(new_line)
I want it to print this
['action', 'comedy', 'crime', 'drama', 'thriller']
but it prints this:
m448hrs.19826.9022289['action','comedy','crime','drama','thriller']
This is the input I am using:
"m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']"
I need the first part removed. I tried removing exactly 'm448hrs.19826.9022289', but then the m's from 'crime' and 'drama' also disappeared. What should I do here? I'm new to Python, so any help would be appreciated.
A:
You can try breaking the string down based on the pattern and then replacing the unwanted items:
st = "m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']"
new = []
mask = st.split(' +++$+++ ')[-1][1:-1].replace("'","").replace(" ","")
new.append(mask.split(','))
print(new[0])
Gives #
['action', 'comedy', 'crime', 'drama', 'thriller']
A:
You can use index() or find() to find index of substring you want to print and slice the string from substring index to end of the string. index() will raise ValueError if the substring is not found, and find() will return -1.
line = "m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']"
line = line[line.index('['):]
print(line)
To avoid exception, use find() with conditional statement in your function:
line = "m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']"
def extract_categories(line: str):
return line[found:] if (found := line.find('[')) != -1 else 'Not found'
print(extract_categories(line))
Or:
line = "m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']"
def extract_categories(line: str):
return ('Not Found',line[(found:=line.find('[')):])[found!=-1]
print(extract_categories(line))
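If the end goal is an actual Python list rather than the bracketed text, a small sketch combining a regex with ast.literal_eval also works:
import re
from ast import literal_eval
line = "m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']"
match = re.search(r"\[.*\]", line)
categories = literal_eval(match.group(0)) if match else []
print(categories)   # ['action', 'comedy', 'crime', 'drama', 'thriller']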
| How can I remove this specific section of a string without also removing the 'm' in the list | This is my code now:
def extract_categories(line: str):
new_line = re.sub('[ +++$+++]', '', line)
newer_line
print(new_line)
I want it to print this
['action', 'comedy', 'crime', 'drama', 'thriller']
but it prints this:
m448hrs.19826.9022289['action','comedy','crime','drama','thriller']
This is the input I am using:
"m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']"
I need the first part removed, I tried removing exactly 'm448hrs.19826.9022289' but then also the m's from 'crime' and 'drama' disappeared, what should I do here? Im new to python so any help would be appreciated.
| [
"You can try breaking down the string based on pattern later you replace un wanted items as\nst = \"m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']\"\nnew = []\nmask = st.split(' +++$+++ ')[-1][1:-1].replace(\"'\",\"\").replace(\" \",\"\")\nnew.append(mask.split(','))\nprint(new[0])\n\nGives #\n['action', 'comedy', 'crime', 'drama', 'thriller']\n\n",
"You can use index() or find() to find index of substring you want to print and slice the string from substring index to end of the string. index() will raise ValueError if the substring is not found, and find() will return -1.\nline = \"m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']\"\nline = line[line.index('['):]\nprint(line)\n\nTo avoid exception, use find() with conditional statement in your function:\nline = \"m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']\"\n\ndef extract_categories(line: str):\n return line[found:] if (found := line.find('[')) != -1 else 'Not found'\n\nprint(extract_categories(line))\n\nOr:\nline = \"m4 +++$+++ 48 hrs. +++$+++ 1982 +++$+++ 6.90 +++$+++ 22289 +++$+++ ['action', 'comedy', 'crime', 'drama', 'thriller']\"\n\ndef extract_categories(line: str):\n return ('Not Found',line[(found:=line.find('[')):])[found!=-1]\n\nprint(extract_categories(line))\n\n"
] | [
0,
0
] | [] | [] | [
"list",
"python",
"string"
] | stackoverflow_0074625294_list_python_string.txt |
Q:
Is there any way to downgrade my python and all the package to 3.8?
I installed Python 3.10 on my new laptop. I have used Python 3.10 for a long time and installed a lot of packages on it, but I need to downgrade to Python 3.8 because Python 3.10 cannot support a package I need. I found this post, but if I remove the whole Python installation, it will also remove all the packages, which means I would need to reinstall every package afterwards. Is there any way to just downgrade the Python interpreter?
A:
You can use pyenv for working with multiple Python versions;
curl https://pyenv.run | bash
Look at the available versions;
pyenv install --list
Installing selected version;
pyenv install -v 3.8.1
for more details;
https://realpython.com/intro-to-pyenv/
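Once a version is installed, it can be selected per project or globally; a typical flow looks like this (the project directory is just an example):
pyenv install 3.8.1
cd ~/my-project
pyenv local 3.8.1     # writes a .python-version file for this directory
python --version      # Python 3.8.1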
| Is there any way to downgrade my python and all the package to 3.8? | I install python 3.10 in my new laptop, i used python 3.10 for a long time and i installed lot of package on it, but i need to downgrade it to python 3.8 because python 3.10 cannot support a package, and i found this post but if i remove the whole python, it will also remove all the package, that mean i need to install all the package after i do it. Is there any way to just downgrade the python interpreter?
| [
"You can use it pyenv for working with multiple python versions;\ncurl https://pyenv.run | bash\n\nLook at the available versions;\npyenv install --list\n\nInstalling selected version;\npyenv install -v 3.8.1\n\nfor more details;\nhttps://realpython.com/intro-to-pyenv/\n"
] | [
0
] | [] | [] | [
"downgrade",
"interpreter",
"python",
"python_3.x"
] | stackoverflow_0074626433_downgrade_interpreter_python_python_3.x.txt |
Q:
I can't scrape div "some text" class = "" - I think the text causes the error
How can I scrape html like (<div data-v-28872a74="" class="col-lg-10 col-md-10 col-sm-12 col-12 offset-lg-1 offset-md-1 offset-sm-0 offset-0">).
I've tried soup.find_all('div', class_ = 'col-lg-10 col-md-10 col-sm-12 col-12 offset-lg-1 offset-md-1 offset-sm-0 offset-0') but output is just [].
Actual code:
div data-v-28872a74="" class="col-lg-10 col-md-10 col-sm-12 col-12 offset-lg-1 offset-md-1 offset-sm-0 offset-0'
import requests
from bs4 import BeautifulSoup as bs
url = 'https://remart.az/yasayis-kompleksi?cities=1&districts='
result = requests.get(url)
soup = bs(result.text, 'html.parser')
code= soup.find_all('div', class_ = 'col-lg-10 col-md-10 col-sm-12 col-12 offset-lg-1 offset-md-1 offset-sm-0 offset-0')
print(code)
This second piece of code scrapes the URLs, but in the next one I see the same problem.
driver = webdriver.Chrome(r'C:\Program Files (x86)\chromedriver_win32\chromedriver.exe')
driver.get('https://remart.az/yasayis-kompleksi?cities=1&districts=')
time.sleep(3)
aze = driver.find_element(By.XPATH, '//*[@id="app"]/div[2]/div[1]/div[2]/div[6]/button')
for a in range(1,2):
aze.click()
time.sleep(1)
soup = bs(driver.page_source, "html.parser")
aezexx = soup.find_all('div', class_ = 'bitem')
for parent in aezexx:
a_tag = parent.find("a")
URRL = a_tag.attrs['href']
print(URRL)
soup = bs(driver.page_source, "html.parser")
aezexx = soup.find_all('div', class_ = 'bitem')
for parent in aezexx:
a_tag = parent.find("a")
URRL = a_tag.attrs['href']
result = requests.get(URRL)
soup = bs(result.text, 'html.parser')
are = soup.find_all("div", class_ = 'bottom-panel-descripton cut-text')
for aes in are:
azzzz = aes.find_all('p')
print(azzzz)
A:
Try:
import re
import json
import requests
import pandas as pd
from ast import literal_eval
url = "https://remart.az/yasayis-kompleksi?cities=1&districts="
html_doc = requests.get(url).text
data = re.search(r'window\.__INITIAL_STATE__ = (".*")', html_doc).group(1)
data = json.loads(literal_eval(data))
df = pd.DataFrame(data)
del df["descr"]
df["city"] = df["city"].str["name"]
df["district"] = df["district"].str["name"]
print(df.head())
Prints:
id name status company_id end_date land_area contact_person website email phones housing_count block_count floor_count apartment_count apartments_on_floor_count elevator_count address city_id district_id orient_ids lat lng underground_garage underground_garage_floor_count underground_garage_place_count objects_floor_count objects_area infr_items infr_additional_items credit interest_rate maximum_installment_period minimum_initial_deposit payment_graph mortgage mortgage_interest_rate mortgage_duration mortgage_initial_deposit partner_banks created_at updated_at seen title description keywords kupcha currency village_id metro_id recommended country_id foreign_price image_cover image_condo image_construction location_info infrastructure_info full_payment_comment credit_comment mortgage_comment documents_comment slug min_price min_price_apart city district
0 464 Golden Rose Boutique 1 296 2022-03-31 0.20 NaN None None +994 50 241 21 12 3.0 3.0 7 42.0 2 1 Necef Nerimanov küç., 1979 məhellə 1 3 None None None 0 None NaN None None None None 1 0.00 12.0 20.00 1 0 8.00 240.0 40.00 None None None 0.0 Golden Rose Boutique Golden Rose Boutique - premium menziller golden rose boutique, golden_rose_boutique, golden-rose-boutique, lalafo, korter, bina, yeniemlak, residence yaşayış kompleksi, yeniemlak, kreditlə yaşayış kompleksindəki mənzillər, yaşayış kompleksindəki mənzillərin qiyməti, yaşayış kompleksində mənzillərin alınması, kredit, ipoteka, Bakı, satış, yeni bina, yeni tikililər, mənzillər, mənzillər, otaqlar , modern, layihe 0 1 NaN 1.0 0 1 None condos/September2021/xMhblXRcyffXvVk90AaK.jpg ["condos\/September2021\/RadK9ymfGUCGzJqQsrpT.jpg","condos\/September2021\/F1JEiifMCxfUAPTFQATP.jpg","condos\/September2021\/qTRHpWZS9O3PczUBPMDr.jpg","condos\/September2021\/mzJp8VNyylp1GsPRyCWb.jpg","condos\/September2021\/59vJ3NLfluOjNmndQcj2.jpg","condos\/September2021\/hNrBMAuI04cyzxUrXHxc.jpg","condos\/September2021\/kzbi7vDumwFdvdprzS4i.jpg","condos\/September2021\/r90BM5Du1EGRzoQ6c3i5.jpg","condos\/September2021\/LVXvZeCNjOFij11wi5ag.jpg"] None <p>Mənzillərin 1 kvadrat metrinin nağd qiyməti 2450 manatdan başlayır</p> golden-rose-boutique 2450.00 237037.5000 Bakı Nərimanov r.
1 463 Central Towers 1 9 2024-12-12 None NaN srconstruction.az [email protected] *1144, 050 988 11 44 3.0 3.0 16 178.0 4-5 2 1 12 None 40.38121100 49.82461600 0 None NaN None None None None 0 None NaN None 1 1 10.00 20.0 30.00 None None None 0.0 0 1 NaN 11.0 0 1 None condos/September2021/6o8EAc6jDh99QjedN4ao.jpg ["condos\/September2021\/zpLYa1KtFyZ7O56HyifJ.jpg","condos\/September2021\/qZctiglmI8WAPnTbjxAa.jpg","condos\/September2021\/FZvCsxXwtY6IGm1Mckkr.jpg","condos\/September2021\/ZIK9JgbiPr2k7p2pPRqk.jpg","condos\/September2021\/TTevd9TqrF6Zas25WIYl.jpg","condos\/September2021\/avYtaaEoT7cBkRADL5B7.jpg","condos\/September2021\/ocFd2JG7LbstKI12uCTY.jpg","condos\/September2021\/SF1p6C9nDzCBay94aBvl.jpg","condos\/September2021\/BeAlNNHu2Om5Btw4Od3p.jpg"] [] central-towers 1500.00 122830.0000 Bakı Yasamal r.
2 462 SkyHome 1 18 2023-12-12 1.00 NaN kristalabsheron.az/az/project/index/53/skyhome [email protected] *1544 3.0 3.0 16-18 NaN None 2 1 5 None 40.38811800 49.81547200 0 None NaN None None None None 0 1.00 1.0 1.00 1 1 10.00 20.0 20.00 None None None 0.0 0 1 NaN 2.0 0 1 None condos/August2021/KLh7WNZqsWizOytX6ABU.jpg ["condos\/August2021\/YkSXADyK9Q75mjasBwvJ.jpg","condos\/August2021\/hem8X1Mhq6loKwzPTkab.jpg","condos\/August2021\/nZY36EpVaixrOBNZKm26.jpg"] None skyhome 1750.00 94675.0000 Bakı Nizami r.
3 461 Yuqa MTK 1 271 2021-08-08 None NaN resant.az [email protected] *4445, +994 50 505 13 33 1.0 1.0 16 96.0 6 2 1 3 None 40.40601900 49.86819800 0 None NaN None None None None 1 0.00 24.0 30.00 1 0 None NaN None None None None 1.0 1 1 NaN NaN 0 1 None condos/August2021/HtY9SPYpvCMy2AzSIYo6.jpg ["condos\/August2021\/rSB7TAGpKy5bWG4YlpMa.jpg","condos\/August2021\/moXSg5i7ovKaM4Mxaxog.jpg","condos\/August2021\/HfTYzb3miKSLUp3nx6ZK.jpg","condos\/August2021\/WVQErnMZWlNIm08aCZxE.jpg","condos\/August2021\/AABftVGeTNkAcLOPjk20.jpg","condos\/August2021\/oj8f2wmWDmTxK2TvuJaX.jpg","condos\/August2021\/7VVuckWYC1pEquPmKX1c.jpg","condos\/August2021\/JC2T54WRukjFNoPRp63y.jpg","condos\/August2021\/erAitfcyK2LAdzZUuYwY.jpg","condos\/August2021\/hrMR8D5hlk0EAHmk18Y4.jpg","condos\/August2021\/jVldUFH35AssuMj0ZBdx.jpg"] None yuqa-mtk 2000.00 180000.0000 Bakı Nərimanov r.
4 460 Zəfər 1 1 211 2023-09-09 None NaN zefer1.rezidens.az [email protected] +994 50 292 11 11, +994 55 292 11 11, +994 70 292 11 11 1.0 1.0 14 65.0 5 2 Bakıxanov qəsəbəsi, S. Mehmandarov küçəsi, 5 1 7 None 40.38286700 49.96533800 0 None NaN None None None None 1 0.00 36.0 50.00 1 0 None NaN None None None None 1.0 Zəfər 1 Az mənzilli bina - Zəfər 1 layihəsi zəfər 1, zəfər_1, zəfər-1, biznes klass kompleksi, lalafo, korter, bina, yeniemlak, residence yaşayış kompleksi, yeniemlak, kreditlə yaşayış kompleksindəki mənzillər, yaşayış kompleksindəki mənzillərin qiyməti, yaşayış kompleksində mənzillərin alınması, kredit, ipoteka, Bakı, satış, yeni bina, yeni tikililər, mənzillər, mənzillər, otaqlar , modern, layihe 0 1 NaN 15.0 0 1 None condos/August2021/7FKhwnY5qCd4W3owxKV1.jpg ["condos\/August2021\/8XZe2oTNyYEOUR6xDmx9.jpg","condos\/August2021\/WHfgTRvFM2lyYuPtvdSy.jpg","condos\/August2021\/ypBFEfcJLzjirVYs3QtD.jpg","condos\/August2021\/QrNYHSt3BvB7uRZ8SBFV.jpg","condos\/August2021\/QjXVTS03mGrSAwbJbknM.jpg","condos\/August2021\/uKtJRJoB9H2bQuWYqMcO.jpg","condos\/August2021\/pp7xqqrGoLznhxZ8pPJu.jpg"] None <p>Baxış istiqamətindən və mərtəbədən asılı olmayaraq qiymətlər 1,250 manatdan başlayır</p>\n<p> </p> zefer-1 0.00 0.0000 Bakı Sabunçu r.
| I can't scrape div ''some text" class = "" I think text cause to error | How can I scrape html like (<div data-v-28872a74="" class="col-lg-10 col-md-10 col-sm-12 col-12 offset-lg-1 offset-md-1 offset-sm-0 offset-0">).
I've tried soup.find_all('div', class_ = 'col-lg-10 col-md-10 col-sm-12 col-12 offset-lg-1 offset-md-1 offset-sm-0 offset-0') but output is just [].
Actually code:
div data-v-28872a74="" class="col-lg-10 col-md-10 col-sm-12 col-12 offset-lg-1 offset-md-1 offset-sm-0 offset-0'
import requests
from bs4 import BeautifulSoup as bs
url = 'https://remart.az/yasayis-kompleksi?cities=1&districts='
result = requests.get(url)
soup = bs(result.text, 'html.parser')
code= soup.find_all('div', class_ = 'col-lg-10 col-md-10 col-sm-12 col-12 offset-lg-1 offset-md-1 offset-sm-0 offset-0')
print(code)
This second code scrape the urls but in next one I see the same problem.
driver = webdriver.Chrome(r'C:\Program Files (x86)\chromedriver_win32\chromedriver.exe')
driver.get('https://remart.az/yasayis-kompleksi?cities=1&districts=')
time.sleep(3)
aze = driver.find_element(By.XPATH, '//*[@id="app"]/div[2]/div[1]/div[2]/div[6]/button')
for a in range(1,2):
aze.click()
time.sleep(1)
soup = bs(driver.page_source, "html.parser")
aezexx = soup.find_all('div', class_ = 'bitem')
for parent in aezexx:
a_tag = parent.find("a")
URRL = a_tag.attrs['href']
print(URRL)
soup = bs(driver.page_source, "html.parser")
aezexx = soup.find_all('div', class_ = 'bitem')
for parent in aezexx:
a_tag = parent.find("a")
URRL = a_tag.attrs['href']
result = requests.get(URRL)
soup = bs(result.text, 'html.parser')
are = soup.find_all("div", class_ = 'bottom-panel-descripton cut-text')
for aes in are:
azzzz = aes.find_all('p')
print(azzzz)
| [
"Try:\nimport re\nimport json\nimport requests\nimport pandas as pd\nfrom ast import literal_eval\n\nurl = \"https://remart.az/yasayis-kompleksi?cities=1&districts=\"\nhtml_doc = requests.get(url).text\n\ndata = re.search(r'window\\.__INITIAL_STATE__ = (\".*\")', html_doc).group(1)\ndata = json.loads(literal_eval(data))\n\ndf = pd.DataFrame(data)\ndel df[\"descr\"]\ndf[\"city\"] = df[\"city\"].str[\"name\"]\ndf[\"district\"] = df[\"district\"].str[\"name\"]\n\nprint(df.head())\n\nPrints:\n id name status company_id end_date land_area contact_person website email phones housing_count block_count floor_count apartment_count apartments_on_floor_count elevator_count address city_id district_id orient_ids lat lng underground_garage underground_garage_floor_count underground_garage_place_count objects_floor_count objects_area infr_items infr_additional_items credit interest_rate maximum_installment_period minimum_initial_deposit payment_graph mortgage mortgage_interest_rate mortgage_duration mortgage_initial_deposit partner_banks created_at updated_at seen title description keywords kupcha currency village_id metro_id recommended country_id foreign_price image_cover image_condo image_construction location_info infrastructure_info full_payment_comment credit_comment mortgage_comment documents_comment slug min_price min_price_apart city district\n0 464 Golden Rose Boutique 1 296 2022-03-31 0.20 NaN None None +994 50 241 21 12 3.0 3.0 7 42.0 2 1 Necef Nerimanov küç., 1979 məhellə 1 3 None None None 0 None NaN None None None None 1 0.00 12.0 20.00 1 0 8.00 240.0 40.00 None None None 0.0 Golden Rose Boutique Golden Rose Boutique - premium menziller golden rose boutique, golden_rose_boutique, golden-rose-boutique, lalafo, korter, bina, yeniemlak, residence yaşayış kompleksi, yeniemlak, kreditlə yaşayış kompleksindəki mənzillər, yaşayış kompleksindəki mənzillərin qiyməti, yaşayış kompleksində mənzillərin alınması, kredit, ipoteka, Bakı, satış, yeni bina, yeni tikililər, mənzillər, mənzillər, otaqlar , modern, layihe 0 1 NaN 1.0 0 1 None condos/September2021/xMhblXRcyffXvVk90AaK.jpg [\"condos\\/September2021\\/RadK9ymfGUCGzJqQsrpT.jpg\",\"condos\\/September2021\\/F1JEiifMCxfUAPTFQATP.jpg\",\"condos\\/September2021\\/qTRHpWZS9O3PczUBPMDr.jpg\",\"condos\\/September2021\\/mzJp8VNyylp1GsPRyCWb.jpg\",\"condos\\/September2021\\/59vJ3NLfluOjNmndQcj2.jpg\",\"condos\\/September2021\\/hNrBMAuI04cyzxUrXHxc.jpg\",\"condos\\/September2021\\/kzbi7vDumwFdvdprzS4i.jpg\",\"condos\\/September2021\\/r90BM5Du1EGRzoQ6c3i5.jpg\",\"condos\\/September2021\\/LVXvZeCNjOFij11wi5ag.jpg\"] None <p>Mənzillərin 1 kvadrat metrinin nağd qiyməti 2450 manatdan başlayır</p> golden-rose-boutique 2450.00 237037.5000 Bakı Nərimanov r.\n1 463 Central Towers 1 9 2024-12-12 None NaN srconstruction.az [email protected] *1144, 050 988 11 44 3.0 3.0 16 178.0 4-5 2 1 12 None 40.38121100 49.82461600 0 None NaN None None None None 0 None NaN None 1 1 10.00 20.0 30.00 None None None 0.0 0 1 NaN 11.0 0 1 None condos/September2021/6o8EAc6jDh99QjedN4ao.jpg [\"condos\\/September2021\\/zpLYa1KtFyZ7O56HyifJ.jpg\",\"condos\\/September2021\\/qZctiglmI8WAPnTbjxAa.jpg\",\"condos\\/September2021\\/FZvCsxXwtY6IGm1Mckkr.jpg\",\"condos\\/September2021\\/ZIK9JgbiPr2k7p2pPRqk.jpg\",\"condos\\/September2021\\/TTevd9TqrF6Zas25WIYl.jpg\",\"condos\\/September2021\\/avYtaaEoT7cBkRADL5B7.jpg\",\"condos\\/September2021\\/ocFd2JG7LbstKI12uCTY.jpg\",\"condos\\/September2021\\/SF1p6C9nDzCBay94aBvl.jpg\",\"condos\\/September2021\\/BeAlNNHu2Om5Btw4Od3p.jpg\"] [] central-towers 
1500.00 122830.0000 Bakı Yasamal r.\n2 462 SkyHome 1 18 2023-12-12 1.00 NaN kristalabsheron.az/az/project/index/53/skyhome [email protected] *1544 3.0 3.0 16-18 NaN None 2 1 5 None 40.38811800 49.81547200 0 None NaN None None None None 0 1.00 1.0 1.00 1 1 10.00 20.0 20.00 None None None 0.0 0 1 NaN 2.0 0 1 None condos/August2021/KLh7WNZqsWizOytX6ABU.jpg [\"condos\\/August2021\\/YkSXADyK9Q75mjasBwvJ.jpg\",\"condos\\/August2021\\/hem8X1Mhq6loKwzPTkab.jpg\",\"condos\\/August2021\\/nZY36EpVaixrOBNZKm26.jpg\"] None skyhome 1750.00 94675.0000 Bakı Nizami r.\n3 461 Yuqa MTK 1 271 2021-08-08 None NaN resant.az [email protected] *4445, +994 50 505 13 33 1.0 1.0 16 96.0 6 2 1 3 None 40.40601900 49.86819800 0 None NaN None None None None 1 0.00 24.0 30.00 1 0 None NaN None None None None 1.0 1 1 NaN NaN 0 1 None condos/August2021/HtY9SPYpvCMy2AzSIYo6.jpg [\"condos\\/August2021\\/rSB7TAGpKy5bWG4YlpMa.jpg\",\"condos\\/August2021\\/moXSg5i7ovKaM4Mxaxog.jpg\",\"condos\\/August2021\\/HfTYzb3miKSLUp3nx6ZK.jpg\",\"condos\\/August2021\\/WVQErnMZWlNIm08aCZxE.jpg\",\"condos\\/August2021\\/AABftVGeTNkAcLOPjk20.jpg\",\"condos\\/August2021\\/oj8f2wmWDmTxK2TvuJaX.jpg\",\"condos\\/August2021\\/7VVuckWYC1pEquPmKX1c.jpg\",\"condos\\/August2021\\/JC2T54WRukjFNoPRp63y.jpg\",\"condos\\/August2021\\/erAitfcyK2LAdzZUuYwY.jpg\",\"condos\\/August2021\\/hrMR8D5hlk0EAHmk18Y4.jpg\",\"condos\\/August2021\\/jVldUFH35AssuMj0ZBdx.jpg\"] None yuqa-mtk 2000.00 180000.0000 Bakı Nərimanov r.\n4 460 Zəfər 1 1 211 2023-09-09 None NaN zefer1.rezidens.az [email protected] +994 50 292 11 11, +994 55 292 11 11, +994 70 292 11 11 1.0 1.0 14 65.0 5 2 Bakıxanov qəsəbəsi, S. Mehmandarov küçəsi, 5 1 7 None 40.38286700 49.96533800 0 None NaN None None None None 1 0.00 36.0 50.00 1 0 None NaN None None None None 1.0 Zəfər 1 Az mənzilli bina - Zəfər 1 layihəsi zəfər 1, zəfər_1, zəfər-1, biznes klass kompleksi, lalafo, korter, bina, yeniemlak, residence yaşayış kompleksi, yeniemlak, kreditlə yaşayış kompleksindəki mənzillər, yaşayış kompleksindəki mənzillərin qiyməti, yaşayış kompleksində mənzillərin alınması, kredit, ipoteka, Bakı, satış, yeni bina, yeni tikililər, mənzillər, mənzillər, otaqlar , modern, layihe 0 1 NaN 15.0 0 1 None condos/August2021/7FKhwnY5qCd4W3owxKV1.jpg [\"condos\\/August2021\\/8XZe2oTNyYEOUR6xDmx9.jpg\",\"condos\\/August2021\\/WHfgTRvFM2lyYuPtvdSy.jpg\",\"condos\\/August2021\\/ypBFEfcJLzjirVYs3QtD.jpg\",\"condos\\/August2021\\/QrNYHSt3BvB7uRZ8SBFV.jpg\",\"condos\\/August2021\\/QjXVTS03mGrSAwbJbknM.jpg\",\"condos\\/August2021\\/uKtJRJoB9H2bQuWYqMcO.jpg\",\"condos\\/August2021\\/pp7xqqrGoLznhxZ8pPJu.jpg\"] None <p>Baxış istiqamətindən və mərtəbədən asılı olmayaraq qiymətlər 1,250 manatdan başlayır</p>\\n<p> </p> zefer-1 0.00 0.0000 Bakı Sabunçu r.\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"html",
"python",
"web_scraping"
] | stackoverflow_0074624370_beautifulsoup_html_python_web_scraping.txt |
Q:
How to find the change of text based on a unique value on another column in an excel file using Python
I have an Excel file containing three columns, as shown below:
ID    Name              Date
117   Laspringe         2019-04-08
117   Laspringe (FT)    2020-06-16
117   Laspringe (Ftp)   2020-07-24
999   Angelo            2020-04-15
999   Angelo(FT)        2021-03-05
999   Angelo(Ftp)       2021-09-13
999   Angelo            2022-02-20
I want to find out, based on each ID, which names were changed from the original name and then changed back to the same original name. For example, Angelo was changed to Angelo(FT), then Angelo(Ftp), and then changed back to the original Angelo.
Laspringe, on the other hand, was not changed back to the original name.
Is it possible to find out which of the IDs have changed the name back to the original using Python?
Expecting the result to be like,
ID
999
A:
A simple way might be to check if the Name has any duplicate per group:
s = df.duplicated(['ID', 'Name']).groupby(df['ID']).any()
out = s[s].index.tolist()
Output: [999]
If you can have duplicates on successive dates (A -> A -> B shouldn't be a match):
s = (df
.sort_values(by='Date')
.groupby('ID')['Name']
.agg(lambda s: s[s.ne(s.shift())].duplicated().any())
)
out = s[s].index.tolist()
The two code will behave differently on this input:
ID Name Date
0 117 Laspringe 2019-04-08
1 117 Laspringe 2019-04-09 # duplicated but no intermediate name
2 117 Laspringe (FT) 2020-06-16
3 117 Laspringe (Ftp) 2020-07-24
4 999 Angelo 2020-04-15
5 999 Angelo(FT) 2021-03-05
6 999 Angelo(Ftp) 2021-09-13
7 999 Angelo 2022-02-29
A:
You can iterate over the columns in an excel sheet using openpyxl. In this case I've used a defaultdict to build a list of names for each id, and then the final check is that the first and last item in each list are the same.
import openpyxl, collections
ws = openpyxl.load_workbook('Book1.xlsx').active
name_dict = collections.defaultdict(list)
ids, names = ([cell.value for cell in col] for col in ws.iter_cols(1,2))
for id_, name in zip(ids[1:],names[1:]): # [1:] to ignore the header row
name_dict[id_].append(name)
print(*[k for k,v in name_dict.items() if v[0]==v[-1]])
| How to find the change of text based on a unique value on another column in an excel file using Python | I have a excel file containing three columns as shown below,
ID
Name
Date
117
Laspringe
2019-04-08
117
Laspringe (FT)
2020-06-16
117
Laspringe (Ftp)
2020-07-24
999
Angelo
2020-04-15
999
Angelo(FT)
2021-03-05
999
Angelo(Ftp)
2021-09-13
999
Angelo
2022-02-20
I wanted to find out that based on each ID which has the name changed from original name and changed back to the same original name. For example Angelo is changed to Angelo(FT), Angelo(Ftp) and changed back to original Angelo.
Whereas Laspringe is not changed back to the original name.
Is it possible to find out which of the ID's have changed the name back to original using python ??
Expecting the result to be like,
ID
999
| [
"A simple way might be to check if the Name has any duplicate per group:\ns = df.duplicated(['ID', 'Name']).groupby(df['ID']).any()\nout = s[s].index.tolist()\n\nOutput: [999]\nIf you can have duplicates on successive dates (A -> A -> B shouldn't be a match):\ns = (df\n .sort_values(by='Date')\n .groupby('ID')['Name']\n .agg(lambda s: s[s.ne(s.shift())].duplicated().any())\n)\nout = s[s].index.tolist()\n\nThe two code will behave differently on this input:\n ID Name Date\n0 117 Laspringe 2019-04-08\n1 117 Laspringe 2019-04-09 # duplicated but no intermediate name\n2 117 Laspringe (FT) 2020-06-16\n3 117 Laspringe (Ftp) 2020-07-24\n4 999 Angelo 2020-04-15\n5 999 Angelo(FT) 2021-03-05\n6 999 Angelo(Ftp) 2021-09-13\n7 999 Angelo 2022-02-29\n\n",
"You can iterate over the columns in an excel sheet using openpyxl. In this case I've used a defaultdict to build a list of names for each id, and then the final check is that the first and last item in each list are the same.\nimport openpyxl, collections\n\nws = openpyxl.load_workbook('Book1.xlsx').active\nname_dict = collections.defaultdict(list)\nids, names = ([cell.value for cell in col] for col in ws.iter_cols(1,2))\nfor id_, name in zip(ids[1:],names[1:]): # [1:] to ignore the header row\n name_dict[id_].append(name)\nprint(*[k for k,v in name_dict.items() if v[0]==v[-1]])\n\n"
] | [
2,
0
] | [] | [] | [
"csv",
"pandas",
"python",
"python_3.x"
] | stackoverflow_0074626179_csv_pandas_python_python_3.x.txt |
Q:
How to create a 2D array from 1D with the algorithm specified in the description?
Good afternoon,
I need to create a 2D array from a 1D array, according to the following rules:
The 2d array must not contain
[["A1", "A1"], ["A2", "A2"], ["A3", "A3"], ["A4", "A4"]...]
The array should not repeat pairs; these count as the same for me:
[["A1", "A2"], ["A2", "A1"], ....]
For example
Input array
A ["A1", "A2", "A3", "A4"]
Output array (what my code below currently produces)
B [['A1' 'A2'] ['A1' 'A3']['A1' 'A4']['A2' 'A1']['A2' 'A3']['A2' 'A4']['A3' 'A1'] ['A3' 'A2'] ['A3' 'A4']['A4' 'A1'] ['A4' 'A2']['A4' 'A3']]
I need
[['A1' 'A2']['A1' 'A3']['A1' 'A4']['A2' 'A3']['A2' 'A4'] ['A3' 'A4']
import numpy as np
x = ("A1", "A2", "A3", "A4")
arr = []
for i in range(0, len(x)):
for j in range(0, len(x)):
if x[i] != x[j]:
arr.append((x[i], x[j]))
mylist = np.unique(arr, axis=0)
print(mylist)
how to do it?
Thanks in advance.
A:
A simple if statement to check if the tuple already exists should be all you need:
import numpy as np
x = ("A1", "A2", "A3", "A4")
arr = []
for i in range(0, len(x)):
for j in range(0, len(x)):
if x[i] != x[j]:
if not (x[j], x[i]) in arr: // If the pair already exists, it would be the
//flipped version of it
arr.append((x[i], x[j]))
mylist = np.unique(arr, axis=0)
print(mylist)
A:
Python's standard library has a function that does exactly this, itertools.combinations.
from itertools import combinations
print( list(combinations(['A1', 'A2', 'A3', 'A4'], 2)) )
# [('A1', 'A2'), ('A1', 'A3'), ('A1', 'A4'), ('A2', 'A3'), ('A2', 'A4'), ('A3', 'A4')]
You can also write your own, using nested loops to iterate on the array:
def all_pairs(arr):
for i, x in enumerate(arr):
for y in arr[i+1:]:
yield (x, y)
print( list(all_pairs(['A1', 'A2', 'A3', 'A4'])) )
# [('A1', 'A2'), ('A1', 'A3'), ('A1', 'A4'), ('A2', 'A3'), ('A2', 'A4'), ('A3', 'A4')]
| How to create a 2D array from 1D with the algorithm specified in the description? | Good afternoon,
I need to create a 2D array from 1D , according to the following rules:\
The 2d array must not contain
[["A1", "A1"], ["A2", "A2"], ["A3", "A3"], ["A4", "A4"]...]
The array should not repeat, it's same for me
[["A1", "A2"], ["A2", "A1"], ....]\
For example
Input array
A ["A1", "A2", "A3", "A4"]
Output array
B [['A1' 'A2'] ['A1' 'A3']['A1' 'A4']['A2' 'A1']['A2' 'A3']['A2' 'A4']['A3' 'A1'] ['A3' 'A2'] ['A3' 'A4']['A4' 'A1'] ['A4' 'A2']['A4' 'A3']]
I need
[['A1' 'A2']['A1' 'A3']['A1' 'A4']['A2' 'A3']['A2' 'A4'] ['A3' 'A4']
import numpy as np
x = ("A1", "A2", "A3", "A4")
arr = []
for i in range(0, len(x)):
for j in range(0, len(x)):
if x[i] != x[j]:
arr.append((x[i], x[j]))
mylist = np.unique(arr, axis=0)
print(mylist)
how to do it?
Thanks in advance.
| [
"A simple if statement to check if the tuple already exists should be all you need:\n import numpy as np\n \n x = (\"A1\", \"A2\", \"A3\", \"A4\")\n \n arr = []\n for i in range(0, len(x)):\n for j in range(0, len(x)):\n if x[i] != x[j]:\n if not (x[j], x[i]) in arr: // If the pair already exists, it would be the\n //flipped version of it\n arr.append((x[i], x[j]))\n \n mylist = np.unique(arr, axis=0)\n print(mylist)\n\n",
"Python's standard library has a function that does exactly this, itertools.combinations.\nfrom itertools import combinations\n\nprint( list(combinations(['A1', 'A2', 'A3', 'A4'], 2)) )\n# [('A1', 'A2'), ('A1', 'A3'), ('A1', 'A4'), ('A2', 'A3'), ('A2', 'A4'), ('A3', 'A4')]\n\nYou can also write your own, using nested loops to iterate on the array:\ndef all_pairs(arr):\n for i, x in enumerate(arr):\n for y in arr[i+1:]:\n yield (x, y)\n\nprint( list(all_pairs(['A1', 'A2', 'A3', 'A4'])) )\n# [('A1', 'A2'), ('A1', 'A3'), ('A1', 'A4'), ('A2', 'A3'), ('A2', 'A4'), ('A3', 'A4')]\n\n"
] | [
2,
2
] | [] | [] | [
"algorithm",
"arrays",
"numpy",
"python"
] | stackoverflow_0074626544_algorithm_arrays_numpy_python.txt |
Q:
How do I print the string of a tag that has multiple nested <strong> tags?
firstHeader = mclarenHTML.find_all(re.compile('^h[2]'))[0] #finding header titles
print(firstHeader)
Output
<h2><strong><strong>1950-1953: </strong>Formula 1 begins: the super-charger years</strong></h2>
How do I get the string "1950-1953: Formula 1 begins: the super-charger years"?
I tried using .string but it returns None.
A:
Use .text:
from bs4 import BeautifulSoup
soup = BeautifulSoup(
"<h2><strong><strong>1950-1953: </strong>Formula 1 begins: the super-charger years</strong></h2>",
"html.parser",
)
header = soup.h2
print(header.text)
Prints:
1950-1953: Formula 1 begins: the super-charger years
Or use .get_text() - you can then use the strip= and separator= parameters:
print(header.get_text(strip=True, separator=" "))
Prints:
1950-1953: Formula 1 begins: the super-charger years
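As a side note on why .string returned None in the question: BeautifulSoup defines .string only when a tag (directly or through a single chain of children) contains exactly one string; the nested <strong> tags give the header several children, so .string falls back to None. A small sketch illustrating this with the same markup (an editorial example, not part of the original answer):
from bs4 import BeautifulSoup

html = "<h2><strong><strong>1950-1953: </strong>Formula 1 begins: the super-charger years</strong></h2>"
header = BeautifulSoup(html, "html.parser").h2

# .string is None because the tag contains more than one text fragment
print(header.string)  # None

# .stripped_strings yields each fragment with surrounding whitespace removed
print(list(header.stripped_strings))
# ['1950-1953:', 'Formula 1 begins: the super-charger years']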
| How do I print the string of tag that has multiple ? | firstHeader = mclarenHTML.find_all(re.compile('^h[2]'))[0] #finding header titles
print(firstHeader)
Output
<h2><strong><strong>1950-1953: </strong>Formula 1 begins: the super-charger years</strong></h2>
How do i get the string "1950-1953:Formula 1 begins: the super-charger years"?
Tried using .string but it returns none
| [
"Use .text:\nfrom bs4 import BeautifulSoup\n\nsoup = BeautifulSoup(\n \"<h2><strong><strong>1950-1953: </strong>Formula 1 begins: the super-charger years</strong></h2>\",\n \"html.parser\",\n)\n\nheader = soup.h2\n\nprint(header.text)\n\nPrints:\n1950-1953: Formula 1 begins: the super-charger years\n\n\nOr use .get_text() - you can use then strip= and separator= parameters:\nprint(header.get_text(strip=True, separator=\" \"))\n\nPrints:\n1950-1953: Formula 1 begins: the super-charger years\n\n"
] | [
1
] | [] | [] | [
"beautifulsoup",
"jupyter_notebook",
"python",
"web_scraping"
] | stackoverflow_0074626624_beautifulsoup_jupyter_notebook_python_web_scraping.txt |
Q:
How to make dynamic imports in Python?
I have come across the following problem with the following code:
import MODULE as sem
import MODULE as mv

def find_group_day(enclave, day):
    source = sem
    # EXTRA CODE
    if num_week_year == sem.num_week_year:
        source = f"PATH/{mv.year}.py"
    # EXTRA CODE
    x = list(source.__dict__.items())
    for i in range(len(x)):
        # EXTRA CODE
If I set the variable source to a specific module that was imported previously, the script works as expected: it can iterate through the module's contents and pick out specific variables with the getattribute() function. However, when the condition is True, I get an error on the x = list(source.__dict__.items()) line, saying that a str object has no __dict__.
Why this error is raised is quite obvious, so my question is: how can I make this sort of "dynamic import"? I need to access a module that varies by year. The file "2022.py" has some dicts I need to access, but I also need to change which file is used whenever the year changes. In 2023, I need to access "2023.py", and so on.
If I do import "2022.py" it works fine until we get to 2023, when I would need to change the import; but as I said, I need it to be dynamic, without having to change the code every time.
I have checked all the documentation about imports in Python, but either I did not find anything or I did not quite understand how to do it.
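A common way to handle this kind of year-based lookup is importlib.import_module, which takes the module name as a string built at runtime. A minimal sketch, assuming the year files (2022.py, 2023.py, ...) live in a package named data and that the current year comes from mv.year as in the question (both names are placeholders, not confirmed details):
import importlib

def load_year_module(year):
    # import_module accepts the module name as a string, so it also works for
    # names like "2022" that the plain `import` statement would reject
    return importlib.import_module(f"data.{year}")  # e.g. data/2022.py

source = load_year_module(2022)
x = list(source.__dict__.items())
for name, value in x:
    print(name, value)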
| How to make dynamic imports in Python? | I have come across the following problem with the following code:
`
import MODULE as sem
import MODULE as mv
def find_group_day(enclave, day):
source = sem
EXTRA CODE
if num_week_year == sem.num_week_year:
source = f"PATH/{mv.year}.py"
EXTRA CODE
x = list(source.__dict__.items())
for i in range(len(x)):
EXTRA CODE
`
If I specify the variable source to be a specific module which has been imported previously, the script works as expected, being able to iterate through its contents to get the specific variables with the getattribute() function. Nevertheless, provided the condition is True, I get an error on the x = list(source.__dict__.items()) line, that returns that a String object has no dicts in it.
Why this error is returned is quite obvious, so my question is, how can I make this sort of "dynamic import". I need to access a variable module, defined by a year. File "2022.py" has some dicts I need to access to, but I also need to change this file whenever the year changes. In 2023, I need to access "2023.py" and so on.
If I do import "2022.py" it will work fine, until we get to 2023, when I would need to change the import, but as I said, I need it to be dynamic, not needing to change the code everytime.
I have checked all documentation about imports in python, but either I did not find anything, either I did not quite understand how to do it.
| [] | [] | [
"exec(...) runs specific code, where you can put variables. Example:\nmodule = 're'\nexec(f'import {module}')\n\nhttps://www.w3schools.com/python/ref_func_exec.asp\n"
] | [
-1
] | [
"dynamic",
"getattribute",
"import",
"python"
] | stackoverflow_0074626475_dynamic_getattribute_import_python.txt |
Q:
Installing BeautifulSoup4
I am running into problems installing BeautifulSoup4. This is the code I am using in a Jupyter notebook to import BeautifulSoup:
from selenium import webdriver
import beautifulsoup4
import pandas as pd
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In [12], line 2
1 from selenium import webdriver
----> 2 import beautifulsoup4
3 import pandas as pd
ModuleNotFoundError: No module named 'beautifulsoup4'
When pip installing in terminal I get following output which states that beautiful soup should be installed:
(CodingFolder) user ~ % pip install beautifulsoup4
Requirement already satisfied: beautifulsoup4 in ./opt/anaconda3/envs/CodingFolder/lib/python3.9/site-packages (4.11.1)
Requirement already satisfied: soupsieve>1.2 in ./opt/anaconda3/envs/CodingFolder/lib/python3.9/site-packages (from beautifulsoup4) (2.3.2.post1)
What am I missing?
A:
Install with:
$ pip install beautifulsoup4
and then you should be using this import statement:
from bs4 import BeautifulSoup
not:
import beautifulsoup4
Installing and importing BeautifulSoup.
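If the import still fails after fixing the name, it is often because the Jupyter kernel runs a different Python environment than the pip you used in the terminal. A quick sanity check you can run in a notebook cell (a generic sketch, nothing project-specific):
import sys

# shows which interpreter the notebook kernel is actually using
print(sys.executable)

# installing with that exact interpreter puts the package in the kernel's environment
# (uncomment in a notebook cell; the leading "!" is Jupyter shell syntax)
# !{sys.executable} -m pip install beautifulsoup4

from bs4 import BeautifulSoup
print(BeautifulSoup("<p>ok</p>", "html.parser").text)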
| Installing BeautifulSoup4 | I am running into problems installing BeautifulSoup4. This is the code I am using in a Jupiter notebook to import beautifulsoup
from selenium import webdriver
import beautifulsoup4
import pandas as pd
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In [12], line 2
1 from selenium import webdriver
----> 2 import beautifulsoup4
3 import pandas as pd
ModuleNotFoundError: No module named 'beautifulsoup4'
When pip installing in terminal I get following output which states that beautiful soup should be installed:
(CodingFolder) user ~ % pip install beautifulsoup4
Requirement already satisfied: beautifulsoup4 in ./opt/anaconda3/envs/CodingFolder/lib/python3.9/site-packages (4.11.1)
Requirement already satisfied: soupsieve>1.2 in ./opt/anaconda3/envs/CodingFolder/lib/python3.9/site-packages (from beautifulsoup4) (2.3.2.post1)
What am I missing ?
| [
"Install with:\n$ pip install beautifulsoup4\n\nand then you should be using this import statement:\nfrom bs4 import BeautifulSoup\n\nnot:\nimport beautifulsoup4\n\nInstalling and importing BeautifulSoup.\n"
] | [
1
] | [
"To Install write\npip install beautifulsoup4\n\nand then import as\nfrom bs4 import BeautifulSoup\n\nFor more information refer https://www.crummy.com/software/BeautifulSoup/bs4/doc/\n"
] | [
-1
] | [
"beautifulsoup",
"jupyter",
"python"
] | stackoverflow_0074626656_beautifulsoup_jupyter_python.txt |
Q:
Django - AppRegistryNotReady("Models aren't loaded yet.") using cities_light library
I have installed the cities_light library in Django and populated the db with the cities as instructed in the docs. I added the app in INSTALLED_APPS and I have been able to pull the data in this simple view. All cities load as expected:
def index(request):
    cities = City.objects.all()
    context = {
        'cities': cities
    }
    return render(request, 'templates/index.html', context)
However, I am trying to create a model which has City as a foreign key, but when I run the app or try to make the migrations I get
'django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.'.
from cities_light.admin import City
from django.db import models
class Home(models.Model):
    location = models.ForeignKey(City, on_delete=models.CASCADE)
I suspect I might need to override the model. Would that be the case?
A:
In your settings.py file, add the code below at the end of the file:
SOUTH_MIGRATION_MODULES = {
'cities_light': 'cities_light.south_migrations',
}
I think you did not add the above code to your settings.py file; that's why you got that error.
Data update
Finally, populate your database with the command:
./manage.py cities_light
A:
The same error appears after adding the south migration modules. I already had the database populated and I used it successfully for my view.
Here is the settings.py content:
from pathlib import Path
BASE_DIR = Path(__file__).resolve().parent.parent
SECRET_KEY = 'django-insecure-n=_c6^ui9$e&t+jsy(&eewc%n1rux91z3f94jwrwrq91w7^c43'
DEBUG = True
ALLOWED_HOSTS = []
INSTALLED_APPS = [
'cities_light',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'CBV_practice.web',
'CBV_practice.auth_app',
]
CITIES_LIGHT_INCLUDE_COUNTRIES = ['BG']
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'CBV_practice.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [BASE_DIR / 'templates']
,
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'CBV_practice.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'cbv_practice_db_try_2',
'USER': 'postgres-user',
'PASSWORD': 'password',
'HOST': '127.0.0.1',
'PORT': '5432'
}
}
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
STATIC_URL = 'static/'
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
SOUTH_MIGRATION_MODULES = {
'cities_light': 'cities_light.south_migrations',
}
The whole error:
Traceback (most recent call last):
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\manage.py", line 22, in <module>
main()
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\django\core\management\__init__.py", line 446, in execute_from_command_line
utility.execute()
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\django\core\management\__init__.py", line 420, in execute
django.setup()
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\django\apps\registry.py", line 116, in populate
app_config.import_models()
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\django\apps\config.py", line 269, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\marti\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\CBV_practice\web\models.py", line 1, in <module>
from cities_light.admin import City
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\cities_light\admin.py", line 7, in <module>
from . import forms
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\cities_light\forms.py", line 5, in <module>
Country, Region, SubRegion, City = get_cities_models()
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\cities_light\loading.py", line 17, in get_cities_models
return [get_cities_model(model_name) for model_name in model_names]
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\cities_light\loading.py", line 17, in <listcomp>
return [get_cities_model(model_name) for model_name in model_names]
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\cities_light\loading.py", line 12, in get_cities_model
return get_model(CITIES_LIGHT_APP_NAME, model_name, *args, **kwargs)
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\django\apps\registry.py", line 201, in get_model
self.check_models_ready()
File "C:\Users\marti\PycharmProjects\WebDevelopmentCourseDjango\WEB FRAMEWORK\CBV_practice\venv\lib\site-packages\django\apps\registry.py", line 143, in check_models_ready
raise AppRegistryNotReady("Models aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
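The traceback above points at the actual trigger: models.py imports City from cities_light.admin, which loads cities_light.forms while the app registry is still being populated. A minimal sketch of the usual workaround is to import from cities_light.models instead, or to reference the model lazily by its "app_label.ModelName" string so nothing extra is imported at model-loading time ('cities_light.City' is the expected label for this package, but verify it against the installed version):
from django.db import models

class Home(models.Model):
    # lazy string reference: Django resolves "cities_light.City" only after all
    # app models are registered, so cities_light.admin is never imported here
    location = models.ForeignKey('cities_light.City', on_delete=models.CASCADE)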
| Django - AppRegistryNotReady("Models aren't loaded yet.") using cities_light library | I have installed the cities_light library in Django and populated the db with the cities as instructed in the docs. I added the app in INSTALLED_APPS and I have been able to pull the data in this simple view. All cities load as expected:
def index(request):
cities = City.objects.all()
context = {
'cities': cities
}
return render(request,'templates/index.html',context)
However, I am trying to create a model which has City as a foreign key, but when I run the app or try to make the migrations I get
'django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.'.
from cities_light.admin import City
from django.db import models
class Home(models.Model):
location = models.ForeignKey(City, on_delete=models.CASCADE)
I suspect I might need to override the model. Would that be the case?
| [
"In settings.py file add below code at the end of file:\nSOUTH_MIGRATION_MODULES = {\n 'cities_light': 'cities_light.south_migrations',\n}\n\nI think you did not add above code in settings.py file that's why you got that error.\nData update\nFinally, populate your database with command:\n./manage.py cities_light\n\n",
"The same error appears after adding the south migration modules. I already had the database populated and I used it successfully for my view.\nHere is the settings.py content:\nfrom pathlib import Path\n\nBASE_DIR = Path(__file__).resolve().parent.parent\n\nSECRET_KEY = 'django-insecure-n=_c6^ui9$e&t+jsy(&eewc%n1rux91z3f94jwrwrq91w7^c43'\n\nDEBUG = True\n\nALLOWED_HOSTS = []\n\nINSTALLED_APPS = [\n 'cities_light',\n\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n\n 'CBV_practice.web',\n 'CBV_practice.auth_app',\n\n]\n\nCITIES_LIGHT_INCLUDE_COUNTRIES = ['BG']\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'CBV_practice.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [BASE_DIR / 'templates']\n ,\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'CBV_practice.wsgi.application'\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.postgresql',\n 'NAME': 'cbv_practice_db_try_2',\n 'USER': 'postgres-user',\n 'PASSWORD': 'password',\n 'HOST': '127.0.0.1',\n 'PORT': '5432'\n }\n}\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_TZ = True\n\nSTATIC_URL = 'static/'\n\nDEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'\n\nSOUTH_MIGRATION_MODULES = {\n 'cities_light': 'cities_light.south_migrations',\n}\n\nThe whole error:\nTraceback (most recent call last):\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\manage.py\", line 22, in <module>\n main()\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\manage.py\", line 18, in main\n execute_from_command_line(sys.argv)\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\django\\core\\management\\__init__.py\", line 446, in execute_from_command_line\n utility.execute()\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\django\\core\\management\\__init__.py\", line 420, in execute\n django.setup()\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\django\\__init__.py\", line 24, in setup\n apps.populate(settings.INSTALLED_APPS)\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB 
FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\django\\apps\\registry.py\", line 116, in populate\n app_config.import_models()\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\django\\apps\\config.py\", line 269, in import_models\n self.models_module = import_module(models_module_name)\n File \"C:\\Users\\marti\\AppData\\Local\\Programs\\Python\\Python39\\lib\\importlib\\__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n File \"<frozen importlib._bootstrap_external>\", line 790, in exec_module\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\CBV_practice\\web\\models.py\", line 1, in <module>\n from cities_light.admin import City\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\cities_light\\admin.py\", line 7, in <module>\n from . import forms\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\cities_light\\forms.py\", line 5, in <module>\n Country, Region, SubRegion, City = get_cities_models()\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\cities_light\\loading.py\", line 17, in get_cities_models\n return [get_cities_model(model_name) for model_name in model_names]\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\cities_light\\loading.py\", line 17, in <listcomp>\n return [get_cities_model(model_name) for model_name in model_names]\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\cities_light\\loading.py\", line 12, in get_cities_model\n return get_model(CITIES_LIGHT_APP_NAME, model_name, *args, **kwargs)\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\django\\apps\\registry.py\", line 201, in get_model\n self.check_models_ready()\n File \"C:\\Users\\marti\\PycharmProjects\\WebDevelopmentCourseDjango\\WEB FRAMEWORK\\CBV_practice\\venv\\lib\\site-packages\\django\\apps\\registry.py\", line 143, in check_models_ready\n raise AppRegistryNotReady(\"Models aren't loaded yet.\")\ndjango.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.\n\n"
] | [
0,
0
] | [] | [] | [
"django",
"model",
"pip",
"python"
] | stackoverflow_0074625993_django_model_pip_python.txt |
Q:
Google OAuth error 400: redirect_uri_mismatch in Python
First time using OAuth here, and I am stuck. I am building a web app that needs to make authorized calls to the YouTube Data API. I am testing the OAuth flow from my local computer.
I am stuck receiving Error 400: redirect_uri_mismatch when I try to run my Google OAuth flow in Python. The error occurs when I access the link generated by flow.run_console()
Here is my code:
os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"
client_secrets_file="./client_secret.json"
scopes = ["https://www.googleapis.com/auth/youtube.readonly"]
flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
client_secrets_file, scopes)
flow.redirect_uri = "http://127.0.0.1:8080" # Authorized in my client ID
credentials = flow.run_console()
This code returns the message:
Please visit this URL to authorize this application: ***google oauth url ***
Enter the authorization code:
Visiting the link results in the redirect_uri_mismatch error described above.
I tried setting the Authorized Redirect URI in my OAuth Client ID to http://127.0.0.1:8080 since I am testing from my local machine. I also set flow.redirect_uri to http://127.0.0.1:8080 in Python. Using http://127.0.0.1:8080 is currently my only option since the front end has not been set up yet.
I expected the code to authorize my request, since the Authorized URI matches the redirect_uri. But I am still receiving the error.
I have had no issues running the flow from Google's OAuth Playground, if that means anything.
Any help is appreciated, thank you.
A:
Change redirect_uri to http://127.0.0.1/ or http://localhost/. I have faced a similar issue before with Google Drive API, and removing the port number worked for me.
A:
The library seems to have a bug.
I know it is not so good, but in this case the hack is:
flow._OOB_REDIRECT_URI = "http://127.0.0.1:8080"
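A less hacky alternative, if it fits the use case: InstalledAppFlow.run_local_server() starts a temporary local web server, handles the matching http://localhost:<port>/ redirect itself, and returns the credentials. A sketch reusing the question's client secrets and scopes (depending on the client type, http://localhost:8080/ may still need to be listed among the authorized redirect URIs):
import google_auth_oauthlib.flow

client_secrets_file = "./client_secret.json"
scopes = ["https://www.googleapis.com/auth/youtube.readonly"]

flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
    client_secrets_file, scopes)

# opens the browser, handles the redirect locally, and exchanges the code for credentials
credentials = flow.run_local_server(port=8080)
print(credentials.token)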
| Google OAuth error 400: redirect_uri_mismatch in Python | first time using OAuth here and I am stuck. I am building a web app that needs to make authorized calls to the YouTube Data API. I am testing the OAuth flow from my local computer.
I am stuck receiving Error 400: redirect_uri_mismatch when I try to run my Google OAuth flow in Python. The error occurs when I access the link generated by flow.run_console()
Here is my code:
os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"
client_secrets_file="./client_secret.json"
scopes = ["https://www.googleapis.com/auth/youtube.readonly"]
flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
client_secrets_file, scopes)
flow.redirect_uri = "http://127.0.0.1:8080" # Authorized in my client ID
credentials = flow.run_console()
This code returns the message:
Please visit this URL to authorize this application: ***google oauth url ***
Enter the authorization code:
Visiting the link results in the following error:
I tried setting the Authorized Redirect URI in my OAuth Client ID to http://127.0.0.1:8080 since I am testing from my local machine. I also set flow.redirect_uri to http://127.0.0.1:8080 in Python. Using http://127.0.0.1:8080 is currently my only option since the front end has not been set up yet.
I expected the code to authorize my request, since the Authorized URI matches the redirect_uri. But I am still receiving the error.
I have had no issues running the flow from Google's OAuth Playground, if that means anything.
Any help is appreciated, thank you.
| [
"Change redirect_uri to http://127.0.0.1/ or http://localhost/. I have faced a similar issue before with Google Drive API, and removing the port number worked for me.\n",
"The library seems to have a bug.\nI know it is not so good but in this case the hack is\nflow._OOB_REDIRECT_URI = = \"http://127.0.0.1:8080\"\n\n"
] | [
0,
0
] | [] | [] | [
"google_api_python_client",
"google_oauth",
"oauth",
"python",
"youtube_data_api"
] | stackoverflow_0074320021_google_api_python_client_google_oauth_oauth_python_youtube_data_api.txt |
Q:
unable to create autoincrementing primary key with flask-sqlalchemy
I want my model's primary key to be an autoincrementing integer. Here is what my model looks like:
class Region(db.Model):
    __tablename__ = 'regions'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    name = db.Column(db.String(100))
    parent_id = db.Column(db.Integer, db.ForeignKey('regions.id'))
    parent = db.relationship('Region', remote_side=id, primaryjoin=('Region.parent_id==Region.id'), backref='sub-regions')
    created_at = db.Column(db.DateTime, default=db.func.now())
    deleted_at = db.Column(db.DateTime)
The above code creates my table but does not make id autoincrementing. So if in my insert query I miss the id field it gives me this error
ERROR: null value in column "id" violates not-null constraint
So I changed the id declaration to look like this
id = db.Column(db.Integer, db.Sequence('seq_reg_id', start=1, increment=1),
primary_key=True)
Still the same error. What is wrong with the code above?
A:
Nothing is wrong with the above code. In fact, you don't even need autoincrement=True or db.Sequence('seq_reg_id', start=1, increment=1), as SQLAlchemy will automatically set the first Integer PK column that's not marked as a FK as autoincrement=True.
Here, I've put together a working setup based on yours. SQLAlchemy's ORM will take care of generating ids and populating objects with them if you use the declarative-base class that you've defined to create instances of your object.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.debug = True
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://user:password@localhost/testdb'
app.config['SQLALCHEMY_ECHO'] = True
db = SQLAlchemy(app)
class Region(db.Model):
    __tablename__ = 'regions'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100))
db.drop_all()
db.create_all()
region = Region(name='Over Yonder Thar')
app.logger.info(region.id) # currently None, before persistence
db.session.add(region)
db.session.commit()
app.logger.info(region.id) # gets assigned an id of 1 after being persisted
region2 = Region(name='Yet Another Up Yar')
db.session.add(region2)
db.session.commit()
app.logger.info(region2.id) # and 2
if __name__ == '__main__':
app.run(port=9001)
A:
So I landed here with an issue that my SQLite table wasn't auto-incrementing the primary key. I have a slightly complex use case where I want to use postgres in production but sqlite for testing to make life a bit easier when continuously deploying.
It turns out SQLite doesn't like columns defined as BigIntegers, and for incrementing to work they should be set as Integers. Remarkably SQLAlchemy can handle this scenario as follows using the with_variant function. Thought this may be useful for someone:
id = db.Column(db.BigInteger().with_variant(db.Integer, "sqlite"), primary_key=True)
Further details here https://docs.sqlalchemy.org/en/13/dialects/sqlite.html
A:
I think you do not need the autoincrement once you set:
id = db.Column(db.Integer, primary_key=True, autoincrement=True)

I think that it should be:
id = db.Column(db.Integer, primary_key=True)

It will give you the uniqueness you're looking for.
A:
I had this issue declaring Composite Keys on a model class.
If you want an auto-incrementing id field for a composite key (i.e. more than one db.Column(..) definition with primary_key=True), then adding autoincrement=True fixed the issue for me.
class S3Object(db.Model):
    __tablename__ = 's3_object'

    id = db.Column(db.Integer, primary_key=True, autoincrement=True)

    # composite keys
    bucket_name = db.Column(db.String(), primary_key=True)
    key = db.Column(db.String(), primary_key=True)
So the statements above about not requiring autoincrement=True should be:
you don't even need autoincrement=True, as SQLAlchemy will automatically set the first
Integer PK column that's not marked as a FK as autoincrement=True unless you are defining a composite key with more than one primary_key=True
A:
Your id auto increments by default even without setting the autoincrement=True flag.
So there's nothing wrong with using
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
The error you're getting is as a result of attempting to populate the table with an id attribute. Your insert query shouldn't at any point contain an id attribute otherwise you'll get that error.
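As a small sketch of that point (reusing the Region model and db session from the answer further up): leave id out of the constructor and let the database assign it.
# no id passed in; the primary key is generated by the database
region = Region(name='Over Yonder Thar')
db.session.add(region)
db.session.commit()
print(region.id)  # populated after the commit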
A:
I had the same error, even after adding autoincrement=True.
The problem was I already had the migration created. So I downgraded to the previous migration, deleted the migration, created the migration again and upgraded.
Then the error was gone.
Hope it helps someone stuck on this.
Wrapping up: add autoincrement=True, and ensure your migration is updated and applied.
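If the project manages migrations with Flask-Migrate/Alembic, the redo described above maps roughly onto these commands (a sketch; the exact revision files to delete depend on your migrations/versions folder):
flask db downgrade          # roll back to the previous revision
# delete the bad revision file from migrations/versions/
flask db migrate -m "add autoincrementing id"
flask db upgrade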
A:
You cannot add "autoincrement" flag in column definition, moreover add "__table__args" attribute just after __table__name.
Something like this:
__tablename__ = 'table-name'
__table_args__ = {'sqlite_autoincrement': True} -> This adds autoincrement to your primary key.
Try it, I hope this work for you ;) !
| unable to create autoincrementing primary key with flask-sqlalchemy | I want my model's primary key to be an autoincrementing integer. Here is how my model looks like
class Region(db.Model):
__tablename__ = 'regions'
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
name = db.Column(db.String(100))
parent_id = db.Column(db.Integer, db.ForeignKey('regions.id'))
parent = db.relationship('Region', remote_side=id, primaryjoin=('Region.parent_id==Region.id'), backref='sub-regions')
created_at = db.Column(db.DateTime, default=db.func.now())
deleted_at = db.Column(db.DateTime)
The above code creates my table but does not make id autoincrementing. So if in my insert query I miss the id field it gives me this error
ERROR: null value in column "id" violates not-null constraint
So I changed the id declaration to look like this
id = db.Column(db.Integer, db.Sequence('seq_reg_id', start=1, increment=1),
primary_key=True)
Still the same error. What is wrong with the code above?
| [
"Nothing is wrong with the above code. In fact, you don't even need autoincrement=True or db.Sequence('seq_reg_id', start=1, increment=1), as SQLAlchemy will automatically set the first Integer PK column that's not marked as a FK as autoincrement=True.\nHere, I've put together a working setup based on yours. SQLAlechemy's ORM will take care of generating id's and populating objects with them if you use the Declarative Base based class that you've defined to create instances of your object.\nfrom flask import Flask\nfrom flask.ext.sqlalchemy import SQLAlchemy\n\napp = Flask(__name__)\napp.debug = True\napp.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://user:password@localhost/testdb'\napp.config['SQLALCHEMY_ECHO'] = True\ndb = SQLAlchemy(app)\n\nclass Region(db.Model):\n __tablename__ = 'regions'\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.String(100))\n\ndb.drop_all()\ndb.create_all()\n\nregion = Region(name='Over Yonder Thar')\napp.logger.info(region.id) # currently None, before persistence\n\ndb.session.add(region)\ndb.session.commit()\napp.logger.info(region.id) # gets assigned an id of 1 after being persisted\n\nregion2 = Region(name='Yet Another Up Yar')\ndb.session.add(region2)\ndb.session.commit()\napp.logger.info(region2.id) # and 2\n\nif __name__ == '__main__':\n app.run(port=9001)\n\n",
"So I landed here with an issue that my SQLite table wasn't auto-incrementing the primary key. I have a slightly complex use case where I want to use postgres in production but sqlite for testing to make life a bit easier when continuously deploying.\nIt turns out SQLite doesn't like columns defined as BigIntegers, and for incrementing to work they should be set as Integers. Remarkably SQLAlchemy can handle this scenario as follows using the with_variant function. Thought this may be useful for someone:\nid = db.Column(db.BigInteger().with_variant(db.Integer, \"sqlite\"), primary_key=True)\n\nFurther details here https://docs.sqlalchemy.org/en/13/dialects/sqlite.html\n",
"I think you do not need the autoincrement once you set ,\nid = db.Column(db.Integer , primary_key=True , autoincrement=True)\n\nI think that it should be ,\nid = db.Column(db.Integer , primary_key=True)\n\nit will give you the uniqueness your looking for .\n",
"I had this issue declaring Composite Keys on a model class.\nIf you are wanting an auto-incrementing id field for a composite key (ie. more than 1 db.Column(..) definition with primary_key=True, then adding autoincrement=True fixed the issue for me.\nclass S3Object(db.Model):\n __tablename__ = 's3_object'\n\n id = db.Column(db.Integer, primary_key=True, autoincrement=True)\n\n # composite keys\n bucket_name = db.Column(db.String(), primary_key=True)\n key = db.Column(db.String(), primary_key=True)\n\nSo the statements above about not requiring autoincrement=True should be :\n\nyou don't even need autoincrement=True, as SQLAlchemy will automatically set the first\nInteger PK column that's not marked as a FK as autoincrement=True unless you are defining a composite key with more than one primary_key=True\n\n",
"Your id auto increments by default even without setting the autoincrement=True flag.\nSo there's nothing wrong with using\nid = db.Column(db.Integer, primary_key=True, autoincrement=True)\n\nThe error you're getting is as a result of attempting to populate the table with an id attribute. Your insert query shouldn't at any point contain an id attribute otherwise you'll get that error.\n",
"I had the same error, even after adding autoincrement=True.\nThe problem was I already had the migration created. So I downgraded to the previous migration, deleted the migration, created the migration again and upgraded.\nThen the error was gone.\nHope it helps someone stuck on this.\nWrapping off: Add autoincrement=True, and ensure your migration is updated and applied.\n",
"You cannot add \"autoincrement\" flag in column definition, moreover add \"__table__args\" attribute just after __table__name.\nSomething like this:\n\n __tablename__ = 'table-name'\n __table_args__ = {'sqlite_autoincrement': True} -> This adds autoincrement to your primary key.\n\n\nTry it, I hope this work for you ;) !\n"
] | [
78,
15,
13,
8,
4,
2,
1
] | [
"In my case, I just added the id as external parameter, without relying on sqlalchemy\n",
"Try this code out, it worked for me.\nWithin the __init__ function don't specify the id, so when you create a new \"User\" object SQLAlchemy will automatically generate an id number for you uniquely.\nfrom flask import Flask\nfrom flask_sqlalchemy import SQLAlchemy\n\napp = Flask(__name__)\n\napp.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///users.sqlite3'\napp.config['SQLAlCHEMY_TRACK_MODIFICATIONS'] = False\n\ndb = SQLAlchemy(app)\n\nclass User(db.Model):\n _id = db.Column(db.Integer, primary_key = True, autoincrement = True)\n username = db.Column(db.String(80), unique = True, nullable = False)\n email = db.Column(db.String(120), unique = True, nullable = False)\n\ndef __init__(self, username, email):\n self.username = username\n self.email = email\n\nThis line of code will create our intended table inside our database.\nwith app.app_context():\n db.create_all()\n\n\nadmin = User(username = 'another admin', email='[email protected]')\nguest = User(username = 'another guest', email='[email protected]')\n\nThis code below will push our data into our table.\nwith app.app_context():\n db.session.add(admin)\n db.session.add(guest)\n db.session.commit()\n\n"
] | [
-1,
-1
] | [
"flask",
"flask_sqlalchemy",
"postgresql",
"python",
"sqlalchemy"
] | stackoverflow_0020848300_flask_flask_sqlalchemy_postgresql_python_sqlalchemy.txt |
Q:
How to create a function that searches a list for a value contained in a variable called key and prints the array position of the key?
Write a function called find that will take a list of numbers, my_list, along with one other number, key. Have it search the list for the value contained in key. Each time your function finds the key value, print the array position of the key. You will need to juggle three variables, one for the list, one for the key, and one for the position of where you are in the list.
Copy/paste this code to test it:
my_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]
find(my_list, 12)
find(my_list, 91)
find(my_list, 80)
check for this output:
Found 12 at position 11
Found 12 at position 13
Found 91 at position 5
Use a for loop with an index variable and a range. Inside the loop use an if statement. The function can be written in about four lines of code.
I tried this:
def find(my_list, key):
    index = 0
    for element in my_list:
        if key == element:
            print(index)
        index += 1
my_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]
find(my_list, 5)
But nothing really happened, no error, no result.
I've been struggling with this problem for a while now; some help is really appreciated!
A:
The function you have written should work if it is properly indented:
my_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]
def find(my_list, key):
    index = 0
    for element in my_list:
        if key == element:
            print(index)
        index += 1
find(my_list, 12)
>>> 11
>>> 13
find(my_list, 91)
>>> 5
find(my_list, 80)
>>>
However, since they ask you to use a for loop with index variable and a range, I suggest the following:
def find(my_list, key):
    for i in range(len(my_list)):
        if my_list[i] == key:
            print('Found', key, 'in position', i)
find(my_list, 12)
>>> Found 12 in position 11
>>> Found 12 in position 13
find(my_list, 91)
>>> Found 91 in position 5
find(my_list, 80)
>>>
A:
enumerate():- this function adds a counter (the index) to each value of the iterable.
Code:-
def find(my_list,num):
    lis = []
    for index, value in enumerate(my_list):
        if value == num:
            lis.append(index)
    return lis if len(lis) > 0 else "In list there is no number " + str(num)
my_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]
# Testcase1
print(find(my_list, 12))
#Testcase2
print(find(my_list, 91))
#Testcase3
print(find(my_list, 80))
Output:-
[11, 13]
[5]
In list there is no number 80
# List Comprehension [One liner]:-
def find(my_list,num):
    return [index for index, value in enumerate(my_list) if value == num]
my_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]
# Testcase1
print(find(my_list, 12))
#Testcase2
print(find(my_list, 91))
#Testcase3
print(find(my_list, 80))
Output:-
[11, 13]
[5]
[]
| How to create a function that search a list for a value that can be contained in a variable called key and print the array position of the key? | Write a function called find that will take a list of numbers, my_list, along with one other number, key. Have it search the list for the value contained in key. Each time your function finds the key value, print the array position of the key. You will need to juggle three variables, one for the list, one for the key, and one for the position of where you are in the list.
Copy/paste this code to test it:
my_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]
find(my_list, 12)
find(my_list, 91)
find(my_list, 80)
check for this output:
Found 12 at position 11
Found 12 at position 13
Found 91 at position 5
Use a for loop with an index variable and a range. Inside the loop use an if statement. The function can be written in about four lines of code.
I tried this:
def find(my_list, key):
index = 0
for element in my_list:
if key == element:
print(index)
index += 1
my_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]
find(my_list, 5)
But nothing really happened, no error, no result.
I've been struggling with this problem for while now, some help is really appreciated!
| [
"The function you have written should work if it is properly indented:\nmy_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]\n\n\ndef find(my_list, key):\n index = 0\n for element in my_list:\n if key == element:\n print(index)\n index += 1\n\nfind(my_list, 12)\n>>> 11\n>>> 13\nfind(my_list, 91) \n>>> 5\nfind(my_list, 80)\n>>>\n\nHowever, since they ask you to use a for loop with index variable and a range, I suggest the following:\ndef find(my_list, key):\n for i in range(len(my_list)):\n if my_list[i] == key:\n print('Found', key, 'in position', i)\n\nfind(my_list, 12)\n>>> Found 12 in position 11\n>>> Found 12 in position 13\nfind(my_list, 91)\n>>> Found 91 in position 5\nfind(my_list, 80)\n>>>\n\n",
"enumerate():- function adds a counter as the key of the enumerate object.\nCode:-\ndef find(my_list,num):\n lis=[]\n for index,value in enumerate(my_list):\n if value==num:\n lis.append(index)\n return lis if len(lis)>0 else \"In list there is no number \"+str(num)\n\n\nmy_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]\n# Testcase1\nprint(find(my_list, 12)) \n#Testcase2\nprint(find(my_list, 91)) \n#Testcase3\nprint(find(my_list, 80))\n\nOutput:-\n[11, 13]\n[5]\nIn list there is no number 80\n\n# List Comprehension [One liner]:-\ndef find(my_list,num):\n return [index for index,value in enumerate(my_list) if value==num]\n \nmy_list = [36, 31, 79, 96, 36, 91, 77, 33, 19, 3, 34, 12, 70, 12, 54, 98, 86, 11, 17, 17]\n# Testcase1\nprint(find(my_list, 12)) \n#Testcase2\nprint(find(my_list, 91)) \n#Testcase3\nprint(find(my_list, 80))\n\nOutput:-\n[11, 13]\n[5]\n[]\n\n"
] | [
0,
0
] | [] | [] | [
"for_loop",
"python"
] | stackoverflow_0074622631_for_loop_python.txt |
Q:
Getting an error while importing the fiona module
I have already installed Fiona using the command
pip3 install Fiona
Now in my .py file I'm trying to import Fiona using
import fiona
it gave me this error:
SBCs-MacBook-Pro:gis-python sbc$ python practice.py
Traceback (most recent call last):
File "/Users/sbc/Desktop/project_tudip/upl_tudip/gis-python/practice.py", line 3, in <module>
import fiona
File "/Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/__init__.py", line 86, in <module>
from fiona.collection import BytesCollection, Collection
File "/Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/collection.py", line 11, in <module>
from fiona.ogrext import Iterator, ItemsIterator, KeysIterator
ImportError: dlopen(/Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/ogrext.cpython-39-darwin.so, 2): Symbol not found: ____chkstk_darwin
Referenced from: /Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/.dylibs/liblz4.1.9.3.dylib (which was built for Mac OS X 11.0)
Expected in: /usr/lib/libSystem.B.dylib
in /Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/.dylibs/liblz4.1.9.3.dylib
The same code runs on my other laptop, but on this one I'm not able to run my code.
The configuration of the laptop on which the error occurs is:
macOS High Sierra
MacBook Pro (13-inch, Early 2011)
Processor 2.3 GHz Intel Core i5
Memory 16 GB 1600 MHz DDR3
In my env I already have GDAL and the other requirements installed.
A:
This error arises because MacOS High Sierra (10.13.6) doesn't have ____chkstk_darwin function.
You can locally force a specific Fiona version with pip, in particular the last one that supports High Sierra:
pip install fiona==1.6.4
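If pinning such an old Fiona is not practical, another route that often helps on older macOS versions is installing Fiona (and GDAL) from conda-forge, whose macOS builds generally target an older deployment SDK. A sketch of the commands (the environment name is arbitrary, and this is a general workaround rather than something verified on High Sierra specifically):
conda create -n gisenv python=3.9
conda activate gisenv
conda install -c conda-forge fiona gdal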
| while importing fiona module getting error | I have already install the Fiona using the command
pip3 install Fiona
Now in my .py file I'm trying to import Fiona using
import fiona
it gave me this error:
SBCs-MacBook-Pro:gis-python sbc$ python practice.py
Traceback (most recent call last):
File "/Users/sbc/Desktop/project_tudip/upl_tudip/gis-python/practice.py", line 3, in <module>
import fiona
File "/Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/__init__.py", line 86, in <module>
from fiona.collection import BytesCollection, Collection
File "/Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/collection.py", line 11, in <module>
from fiona.ogrext import Iterator, ItemsIterator, KeysIterator
ImportError: dlopen(/Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/ogrext.cpython-39-darwin.so, 2): Symbol not found: ____chkstk_darwin
Referenced from: /Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/.dylibs/liblz4.1.9.3.dylib (which was built for Mac OS X 11.0)
Expected in: /usr/lib/libSystem.B.dylib
in /Users/sbc/opt/anaconda3/envs/uniweed/lib/python3.9/site-packages/fiona/.dylibs/liblz4.1.9.3.dylib
same code is running in my other laptop. but in This I'm not able to run my code.
configuration of laptop in which error is coming is
macOS High Sierra
MacBook Pro (13-inch, Early 2011)
Processor 2.3 GHz Intel Core i5
Memory 16 GB 1600 MHz DDR3
In my env. I have GDAL and our are already installed.
| [
"This error arises because MacOS High Sierra (10.13.6) doesn't have ____chkstk_darwin function.\nYou can locally force a specific Fiona version with pip, in particular the last one that supports High Sierra:\npip install fiona==1.6.4\n\n"
] | [
0
] | [] | [] | [
"fiona",
"python"
] | stackoverflow_0071826025_fiona_python.txt |
Q:
How to fix: Exception has occurred: ZeroDivisionError: division by zero
I'm currently working on an ML project for testing and training models, and I got this zero-division error on the following line.
p_bar.set_description('{}. Testing Data of phoneme "{}" against all models \nResult: {}/{} correct prediction;\n accuracy: {:.2f}%'.format(
    i+1, fc.get39Phon(i), count, len(test_lengths[i]), (count/len(test_lengths[i]))*100))  # LINE ERROR
I couldn't figure out why it raises the zero-division exception. How can I solve this?
Training the models:
try:
    for i in range(39):
        p_bar.set_description('{}. Training "{}" Phoneme Model'.format(i, fc.get39Phon(i)))
        models[i].fit(features[i].reshape(-1, 1), lengths[i])  # Expected 2D array, got 1D array instead; I reshaped the data as suggested
        traceback.print_stack()
        p_bar.update()
except Exception:
    print(traceback.format_exc())
Testing the models
for i in range(39):
    # --- adding missing length at end
    tfeat_len = test_features[i].shape[0]
    tlen_len = np.sum(test_lengths[i])
    if tfeat_len != tlen_len:
        test_lengths[i].append(tfeat_len - tlen_len)

predictions = []
for i in range(39):
    # for each phon data
    count = 0
    s = 0
    p_bar = tqdm(range(len(test_lengths[i])))
    p_bar.set_description('{}. Testing Data of phoneme "{}" against all models'.format(i, fc.get39Phon(i)))
    for j in test_lengths[i]:
        # test in each phon model
        max_prediction = -999999999999
        max_index = 0
        t_feat = test_features[i][s:j+s]
        for k in range(39):
            try:
                score = math.floor(models[k].score(t_feat)*1000)
                if score > max_prediction:
                    max_prediction = score
                    max_index = k
                if max_index > i:
                    break
            except:
                continue
        p_bar.update()
        count += 1 if max_index == i else 0
        s = j
    predictions.append((count, len(test_lengths[i])))
TRACEBACK
traceback (most recent call last):
File "d:\github-space\Phoneme-Recognizer-\r.py", line 392, in <module>
models[i].fit(features[i].reshape(-1,1),lengths[i])
File "C:\Python\Python39\lib\site-packages\hmmlearn\base.py", line 496, in fit
X = check_array(X)
File "C:\Users\Acer\AppData\Roaming\Python\Python39\site-packages\sklearn\utils\validation.py", line 909, in check_array
raise ValueError(
ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required.
A:
It is giving you a division by zero error because len(test_lengths[i]) in count/len(test_lengths[i])*100 is 0, and you know that a number divided by zero is undefined, so it's giving you the error.
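In code, the usual guard is to divide only when there are test samples. A minimal sketch of how the description line from the question could be written defensively (all names are taken from the question's code):
total = len(test_lengths[i])
accuracy = (count / total) * 100 if total else 0.0

p_bar.set_description(
    '{}. Testing Data of phoneme "{}" against all models\n'
    'Result: {}/{} correct predictions; accuracy: {:.2f}%'.format(
        i + 1, fc.get39Phon(i), count, total, accuracy))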
| How to fix : Exception has occurred: ZeroDivisionError division by zero | Currently working on ML project for testing and training models and I got this zero division error on this line.
p_bar.set_description('{}. Testing Data of phoneme "{}" against all models \nResult: {}/{}
correct prediction;\n accuracy: {:.2f}%'.format(
i+1,fc.get39Phon(i),count,len(test_lengths[i]),(count/len(test_lengths[i]))*100) #LINE ERROR
I couldn't figure it out why it generates the exception zero. How can i solve this?
Training the models:
try:
for i in range(39):
p_bar.set_description('{}. Training "{}" Phoneme Model'.format(i,fc.get39Phon(i)))
models[i].fit(features[i].reshape(-1,1),lengths[i] )#Expected 2D array, got 1D array instead, I reshaped the data as suggested
traceback.print_stack()
p_bar.update()
except Exception:
print(traceback.format_exc())
Testing the models
for i in range(39):
# --- adding missing length at end
tfeat_len = test_features[i].shape[0]
tlen_len = np.sum(test_lengths[i])
if tfeat_len != tlen_len:
test_lengths[i].append(tfeat_len-tlen_len)
predictions = []
for i in range(39):
#for each phon data
count = 0
s = 0
p_bar = tqdm(range(len(test_lengths[i])))
p_bar.set_description('{}. Testing Data of phoneme "{}" against all models'.format(i,fc.get39Phon(i)))
for j in test_lengths[i]:
# test in each phon model
max_prediction = -999999999999
max_index = 0
t_feat = test_features[i][s:j+s]
for k in range(39):
try:
score = math.floor(models[k].score(t_feat)*1000)
if(score > max_prediction):
max_prediction = score
max_index = k
if max_index > i:
break
except:
continue
p_bar.update()
count+= 1 if max_index == i else 0
s=j
predictions.append((count,len(test_lengths[i])))
TRACEBACK
traceback (most recent call last):
File "d:\github-space\Phoneme-Recognizer-\r.py", line 392, in <module>
models[i].fit(features[i].reshape(-1,1),lengths[i])
File "C:\Python\Python39\lib\site-packages\hmmlearn\base.py", line 496, in fit
X = check_array(X)
File "C:\Users\Acer\AppData\Roaming\Python\Python39\site-packages\sklearn\utils\validation.py", line 909, in check_array
raise ValueError(
ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required.
| [
"It is giving you a division by zero error because len(test_lengths[i]) in count/len(test_lengths[i])*100 is 0, and you know that a number divided by zero is undefined, so it's giving you the error.\n"
] | [
1
] | [] | [] | [
"hmmlearn",
"machine_learning",
"python",
"scikit_learn",
"training_data"
] | stackoverflow_0074626848_hmmlearn_machine_learning_python_scikit_learn_training_data.txt |
Q:
How to fix locking failed in pipenv?
I'm using pipenv inside a docker container. I tried installing a package and found that the installation succeeds (gets added to the Pipfile), but the locking keeps failing. Everything was fine until yesterday. Here's the error:
(app) root@7284b7892266:/usr/src/app# pipenv install scrapy-djangoitem
Installing scrapy-djangoitem…
Adding scrapy-djangoitem to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock (6d808e) out of date, updating to (27ac89)…
Locking [dev-packages] dependencies…
Building requirements...
Resolving dependencies...
✘ Locking Failed!
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 807, in <module>
main()
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 803, in main
parsed.requirements_dir, parsed.packages, parse_only=parsed.parse_only)
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 785, in _main
resolve_packages(pre, clear, verbose, system, write, requirements_dir, packages)
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 758, in resolve_packages
results = clean_results(results, resolver, project)
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 634, in clean_results
reverse_deps = project.environment.reverse_dependencies()
File "/usr/local/lib/python3.7/site-packages/pipenv/project.py", line 376, in environment
self._environment = self.get_environment(allow_global=allow_global)
File "/usr/local/lib/python3.7/site-packages/pipenv/project.py", line 366, in get_environment
environment.extend_dists(pipenv_dist)
File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 127, in extend_dists
extras = self.resolve_dist(dist, self.base_working_set)
File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 122, in resolve_dist
deps |= cls.resolve_dist(dist, working_set)
File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 121, in resolve_dist
dist = working_set.find(req)
File "/root/.local/share/virtualenvs/app-lp47FrbD/lib/python3.7/site-packages/pkg_resources/__init__.py", line 642, in find
raise VersionConflict(dist, req)
pkg_resources.VersionConflict: (importlib-metadata 2.0.0 (/root/.local/share/virtualenvs/app-lp47FrbD/lib/python3.7/site-packages), Requirement.parse('importlib-metadata<2,>=0.12; python_version < "3.8"'))
(app) root@7284b7892266:/usr/src/app#
What could be wrong?
EDIT
After removing Pipfile.lock and trying to install a package, I got:
(app) root@ef80787b5c42:/usr/src/app# pipenv install httpx
Installing httpx…
Adding httpx to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock not found, creating…
Locking [dev-packages] dependencies…
Building requirements...
Resolving dependencies...
✔ Success!
Locking [packages] dependencies…
Building requirements...
⠏ Locking...Resolving dependencies...
Traceback (most recent call last):
File "/usr/local/bin/pipenv", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/cli/command.py", line 252, in install
site_packages=state.site_packages
File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 2202, in do_install
skip_lock=skip_lock,
File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 1303, in do_init
pypi_mirror=pypi_mirror,
File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 1113, in do_lock
keep_outdated=keep_outdated
File "/usr/local/lib/python3.7/site-packages/pipenv/utils.py", line 1323, in venv_resolve_deps
c = resolve(cmd, sp)
File "/usr/local/lib/python3.7/site-packages/pipenv/utils.py", line 1136, in resolve
result = c.expect(u"\n", timeout=environments.PIPENV_INSTALL_TIMEOUT)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/delegator.py", line 215, in expect
self.subprocess.expect(pattern=pattern, timeout=timeout)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 344, in expect
timeout, searchwindowsize, async_)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 372, in expect_list
return exp.expect_loop(timeout)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/expect.py", line 181, in expect_loop
return self.timeout(e)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/expect.py", line 144, in timeout
raise exc
pexpect.exceptions.TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0x7f81e99bec90>
searcher: searcher_re:
0: re.compile('\n')
<pexpect.popen_spawn.PopenSpawn object at 0x7f81e99bec90>
searcher: searcher_re:
0: re.compile('\n')
(app) root@ef80787b5c42:/usr/src/app#
A:
Here are my debugging notes. Still not sure which package is causing the problem, but this does seem to fix it.
The error you get when you first run pipenv install with pipenv version 2020.8.13.
Traceback (most recent call last):
File "/usr/local/bin/pipenv", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pipenv/cli/command.py", line 252, in install
site_packages=state.site_packages
File "/usr/local/lib/python3.6/site-packages/pipenv/core.py", line 1928, in do_install
site_packages=site_packages,
File "/usr/local/lib/python3.6/site-packages/pipenv/core.py", line 580, in ensure_project
pypi_mirror=pypi_mirror,
File "/usr/local/lib/python3.6/site-packages/pipenv/core.py", line 512, in ensure_virtualenv
python=python, site_packages=site_packages, pypi_mirror=pypi_mirror
File "/usr/local/lib/python3.6/site-packages/pipenv/core.py", line 999, in do_create_virtualenv
project._environment.add_dist("pipenv")
File "/usr/local/lib/python3.6/site-packages/pipenv/environment.py", line 135, in add_dist
self.extend_dists(dist)
File "/usr/local/lib/python3.6/site-packages/pipenv/environment.py", line 127, in extend_dists
extras = self.resolve_dist(dist, self.base_working_set)
File "/usr/local/lib/python3.6/site-packages/pipenv/environment.py", line 122, in resolve_dist
deps |= cls.resolve_dist(dist, working_set)
File "/usr/local/lib/python3.6/site-packages/pipenv/environment.py", line 121, in resolve_dist
dist = working_set.find(req)
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 642, in find
raise VersionConflict(dist, req)
pkg_resources.VersionConflict: (importlib-metadata 2.0.0 (/usr/local/lib/python3.6/site-packages), Requirement.parse('importlib-metadata<2,>=0.12; python_version < "3.8"'))
If you run pip install -U pipenv it seems to change the importlib-metadata version:
Installing collected packages: importlib-metadata
Attempting uninstall: importlib-metadata
Found existing installation: importlib-metadata 2.0.0
Uninstalling importlib-metadata-2.0.0:
Successfully uninstalled importlib-metadata-2.0.0
Successfully installed importlib-metadata-1.7.0
Now if you run pipenv install -d --skip-lock it will finish. It seems like some other library is pulling in importlib-metadata >= 2.0, which conflicts with pipenv's own importlib-metadata<2 constraint shown in the traceback.
When I pinned the following dependencies it didn't work at first when running pipenv lock; however, after I removed the lock file (rm Pipfile.lock) it worked when I ran pipenv lock again.
virtualenv = "==20.0.31"
importlib-metadata = "==1.7.0"
A:
Try to remove Pipfile.lock before installing a package
A:
I had the same problem when creating a virtual environment using python 3.7.12. The problem is gone using python 3.8.10. On Ubuntu 20.04.4 LTS.
A:
Here are some steps we followed when we faced "locking failed":
1. make env
2. source /etc/pyenv
3. pipenv install
4. pipenv graph
5. Run the respective test case execution (it may vary based on requirements)
Hope it works!!! :)
A:
Just delete the Pipfile.lock then rerun pipenv lock.
| How to fix locking failed in pipenv? | I'm using pipenv inside a docker container. I tried installing a package and found that the installation succeeds (gets added to the Pipfile), but the locking keeps failing. Everything was fine until yesterday. Here's the error:
(app) root@7284b7892266:/usr/src/app# pipenv install scrapy-djangoitem
Installing scrapy-djangoitem…
Adding scrapy-djangoitem to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock (6d808e) out of date, updating to (27ac89)…
Locking [dev-packages] dependencies…
Building requirements...
Resolving dependencies...
✘ Locking Failed!
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 807, in <module>
main()
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 803, in main
parsed.requirements_dir, parsed.packages, parse_only=parsed.parse_only)
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 785, in _main
resolve_packages(pre, clear, verbose, system, write, requirements_dir, packages)
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 758, in resolve_packages
results = clean_results(results, resolver, project)
File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 634, in clean_results
reverse_deps = project.environment.reverse_dependencies()
File "/usr/local/lib/python3.7/site-packages/pipenv/project.py", line 376, in environment
self._environment = self.get_environment(allow_global=allow_global)
File "/usr/local/lib/python3.7/site-packages/pipenv/project.py", line 366, in get_environment
environment.extend_dists(pipenv_dist)
File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 127, in extend_dists
extras = self.resolve_dist(dist, self.base_working_set)
File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 122, in resolve_dist
deps |= cls.resolve_dist(dist, working_set)
File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 121, in resolve_dist
dist = working_set.find(req)
File "/root/.local/share/virtualenvs/app-lp47FrbD/lib/python3.7/site-packages/pkg_resources/__init__.py", line 642, in find
raise VersionConflict(dist, req)
pkg_resources.VersionConflict: (importlib-metadata 2.0.0 (/root/.local/share/virtualenvs/app-lp47FrbD/lib/python3.7/site-packages), Requirement.parse('importlib-metadata<2,>=0.12; python_version < "3.8"'))
(app) root@7284b7892266:/usr/src/app#
What could be wrong?
EDIT
After removing Pipfile.lock and trying to install a package, I got:
(app) root@ef80787b5c42:/usr/src/app# pipenv install httpx
Installing httpx…
Adding httpx to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock not found, creating…
Locking [dev-packages] dependencies…
Building requirements...
Resolving dependencies...
✔ Success!
Locking [packages] dependencies…
Building requirements...
⠏ Locking...Resolving dependencies...
Traceback (most recent call last):
File "/usr/local/bin/pipenv", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pipenv/cli/command.py", line 252, in install
site_packages=state.site_packages
File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 2202, in do_install
skip_lock=skip_lock,
File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 1303, in do_init
pypi_mirror=pypi_mirror,
File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 1113, in do_lock
keep_outdated=keep_outdated
File "/usr/local/lib/python3.7/site-packages/pipenv/utils.py", line 1323, in venv_resolve_deps
c = resolve(cmd, sp)
File "/usr/local/lib/python3.7/site-packages/pipenv/utils.py", line 1136, in resolve
result = c.expect(u"\n", timeout=environments.PIPENV_INSTALL_TIMEOUT)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/delegator.py", line 215, in expect
self.subprocess.expect(pattern=pattern, timeout=timeout)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 344, in expect
timeout, searchwindowsize, async_)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 372, in expect_list
return exp.expect_loop(timeout)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/expect.py", line 181, in expect_loop
return self.timeout(e)
File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/expect.py", line 144, in timeout
raise exc
pexpect.exceptions.TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0x7f81e99bec90>
searcher: searcher_re:
0: re.compile('\n')
<pexpect.popen_spawn.PopenSpawn object at 0x7f81e99bec90>
searcher: searcher_re:
0: re.compile('\n')
(app) root@ef80787b5c42:/usr/src/app#
| [
"Here are my debugging notes. Still not sure which package is causing the problem, but this does seem to fix it.\nThe error you get when you first run pipenv install with pipenv version 2020.8.13.\nTraceback (most recent call last):\n File \"/usr/local/bin/pipenv\", line 8, in <module>\n sys.exit(cli())\n File \"/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py\", line 829, in __call__\n return self.main(*args, **kwargs)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py\", line 782, in main\n rv = self.invoke(ctx)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py\", line 1259, in invoke\n return _process_result(sub_ctx.command.invoke(sub_ctx))\n File \"/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py\", line 1066, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py\", line 610, in invoke\n return callback(*args, **kwargs)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py\", line 73, in new_func\n return ctx.invoke(f, obj, *args, **kwargs)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py\", line 610, in invoke\n return callback(*args, **kwargs)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py\", line 21, in new_func\n return f(get_current_context(), *args, **kwargs)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/cli/command.py\", line 252, in install\n site_packages=state.site_packages\n File \"/usr/local/lib/python3.6/site-packages/pipenv/core.py\", line 1928, in do_install\n site_packages=site_packages,\n File \"/usr/local/lib/python3.6/site-packages/pipenv/core.py\", line 580, in ensure_project\n pypi_mirror=pypi_mirror,\n File \"/usr/local/lib/python3.6/site-packages/pipenv/core.py\", line 512, in ensure_virtualenv\n python=python, site_packages=site_packages, pypi_mirror=pypi_mirror\n File \"/usr/local/lib/python3.6/site-packages/pipenv/core.py\", line 999, in do_create_virtualenv\n project._environment.add_dist(\"pipenv\")\n File \"/usr/local/lib/python3.6/site-packages/pipenv/environment.py\", line 135, in add_dist\n self.extend_dists(dist)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/environment.py\", line 127, in extend_dists\n extras = self.resolve_dist(dist, self.base_working_set)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/environment.py\", line 122, in resolve_dist\n deps |= cls.resolve_dist(dist, working_set)\n File \"/usr/local/lib/python3.6/site-packages/pipenv/environment.py\", line 121, in resolve_dist\n dist = working_set.find(req)\n File \"/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py\", line 642, in find\n raise VersionConflict(dist, req)\npkg_resources.VersionConflict: (importlib-metadata 2.0.0 (/usr/local/lib/python3.6/site-packages), Requirement.parse('importlib-metadata<2,>=0.12; python_version < \"3.8\"'))\n\nIf you run pip install -U pipenv it seems to change the importlib-metadata version:\nInstalling collected packages: importlib-metadata\n Attempting uninstall: importlib-metadata\n Found existing installation: importlib-metadata 2.0.0\n Uninstalling importlib-metadata-2.0.0:\n Successfully uninstalled importlib-metadata-2.0.0\nSuccessfully installed importlib-metadata-1.7.0\n\nNow if you run pipenv install -d --skip-lock it will finish. 
It seems like a library is requiring a version >= importlib-metadata 2.0.\nWhen I pinned the following dependencies it didn't work at first when running pipenv lock, however, if I removed the lock file (rm Pipenv.lock) then it worked when I ran pipenv lock again.\nvirtualenv = \"==20.0.31\"\nimportlib-metadata = \"==1.7.0\"\n\n",
"Try to remove Pipefile.lock before installing a package\n",
"I had the same problem when creating a virtual environment using python 3.7.12. The problem is gone using python 3.8.10. On Ubuntu 20.04.4 LTS.\n",
"Here are some steps we followed, while we faced \"locking failed\"\n1.make env\n2.source /etc/pyenv\n3.pipenv install\n4.pipenv graph\n5. Shoot the respective testcase execution (it may vary based on requirements)\nHope it works!!!:)\n",
"Just delete the Pipfile.lock then rerun pipenv lock.\n"
] | [
15,
10,
0,
0,
0
] | [] | [] | [
"django",
"docker_compose",
"pipenv",
"pipenv_install",
"python"
] | stackoverflow_0064124931_django_docker_compose_pipenv_pipenv_install_python.txt |
Q:
python iterate from 0 to any Integer, positive or negative
I have to iterate from 0 to any Integer (call it x) that can be positive or negative (0 and x both included) (whether I iterate from x to 0 or from 0 to x does not matter)
I know I can use an if-else statement to first check if x is positive or negative and then use range(x+1) if x>0 or range(x, 1) if x<0 (both will work when x=0) like:
if x >= 0:
for i in range(x+1):
pass
else:
for i in range(x, 1):
pass
but I want a better way especially since I will actually be iterating over 2 Integers and this code is messy (and here also whether I iterate from y to 0 or from 0 to y does not matter)
if (x >= 0) and (y >= 0):
for i in range(x+1):
for j in range(y+1):
pass
elif (x >= 0) and (y < 0):
for i in range(x+1):
for j in range(y, 1):
pass
elif (x < 0) and (y >= 0):
for i in range(x, 1):
for j in range(y+1):
pass
else:
for i in range(x, 1):
for j in range(y, 1):
pass
A:
You can simplify it by defining a function.
get_range_args = lambda x: (0, x+1) if x > 0 else (x, 1)
for i in range(*get_range_args(x)):
for j in range(*get_range_args(y)):
pass
A:
A simple solution that requires no functions
for i in range(min(x, 0), max(x, 0) + 1):
for j in range(min(y, 0), max(y, 0) + 1):
pass
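As a quick sanity check of that version (the values of x and y are just illustrative), both signs are covered without any branching:
x, y = -3, 2
pairs = [(i, j) for i in range(min(x, 0), max(x, 0) + 1)
                for j in range(min(y, 0), max(y, 0) + 1)]
print(pairs[0], pairs[-1])  # (-3, 0) (0, 2)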
A:
Approach 1:
x = -9
y = 1
def getPosNeg(num):
if num >= 0:
return f"0, {num+1}"
return f"{num}, 1"
x_eval = eval(getPosNeg(x)) # use eval
y_eval = eval(getPosNeg(y)) # use eval
for i in range(*x_eval):
for j in range(*y_eval):
print(i, j)
Approach 2:
x = -9
y = 1
def getPosNeg(num):
if num >= 0:
return (0, num+1)
return (num, 1)
x_eval = getPosNeg(x)
y_eval = getPosNeg(y)
for i in range(*x_eval):
for j in range(*y_eval):
print(i, j)
If you only want positive integers in range, use abs():
x = -9
y = 1
for i in range(abs(x+1)):
for j in range(abs(y+1)):
print(i, j)
Docs:
eval()
| python iterate from 0 to any Integer, positive or negative | I have to iterate from 0 to any Integer (call it x) that can be positive or negative (0 and x both included) (whether I iterate from x to 0 or from 0 to x does not matter)
I know I can use an if-else statement to first check if x is positive or negative and then use range(x+1) if x>0 or range(x, 1) if x<0 (both will work when x=0) like:
if x >= 0:
for i in range(x+1):
pass
else:
for i in range(x, 1):
pass
but I want a better way especially since I will actually be iterating over 2 Integers and this code is messy (and here also whether I iterate from y to 0 or from 0 to y does not matter)
if (x >= 0) and (y >= 0):
for i in range(x+1):
for j in range(y+1):
pass
elif (x >= 0) and (y < 0):
for i in range(x+1):
for j in range(y, 1):
pass
elif (x < 0) and (y >= 0):
for i in range(x, 1):
for j in range(y+1):
pass
else:
for i in range(x, 1):
for j in range(y, 1):
pass
| [
"You can simplify it by defining a function.\nget_range_args = lambda x: (0, x+1) if x > 0 else (x, 1)\nfor i in range(*get_range_args(x)):\n for j in range(*get_range_args(y)):\n pass\n\n",
"A simple solution that requires no functions\nfor i in range(min(x, 0), max(x, 0) + 1):\n for j in range(min(y, 0), max(y, 0) + 1):\n pass\n\n",
"Approach 1:\nx = -9\ny = 1\ndef getPosNeg(num):\n if num >= 0:\n return f\"0, {num+1}\"\n return f\"{num}, 1\"\nx_eval = eval(getPosNeg(x)) # use eval\ny_eval = eval(getPosNeg(y)) # use eval\nfor i in range(*x_eval):\n for j in range(*y_eval):\n print(i, j) \n\nApproach 2:\nx = -9\ny = 1\n\ndef getPosNeg(num):\n if num >= 0:\n return (0, num+1)\n return (num, 1)\nx_eval = getPosNeg(x)\ny_eval = getPosNeg(y)\nfor i in range(*x_eval):\n for j in range(*y_eval):\n print(i, j) \n\nIf you only want positive integers in range, use abs():\nx = -9\ny = 1\n\nfor i in range(abs(x+1)):\n for j in range(abs(y+1)):\n print(i, j)\n\nDocs:\neval()\n"
] | [
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074626812_python.txt |
Q:
Calculating the difference between the first non-na value and the last na-value based on a grouped condition
I am looking to calculate the percentage increase or decrease between the first and last non-na value for the following dataset:
Year
Company
Data
2019
X
341976.00
2020
X
1.000
2021
X
282872.00
2019
Y
NaN
2020
Y
NaN
2021
Y
NaN
2019
Z
4394.00
2020
Z
173.70
2021
Z
518478.00
As I want the relative change I would expect the formula to do something like:
(last non-na value)/(first non-na value)-1
This should return something like:
Year
Company
Data
Data
2019
X
341976.00
NaN
2020
X
1.000
NaN
2021
X
282872.00
-0.17
2019
Y
NaN
NaN
2020
Y
NaN
NaN
2021
Y
NaN
NaN
2019
Z
4394.00
NaN
2020
Z
173.70
NaN
2021
Z
518478.00
11.700
I have tried to combine groupby based on the company field with the first_valid_index but haven't had any luck finding a solution. What is the most efficient way of calculating the relative change as above?
A:
If you aggregate with GroupBy.first and GroupBy.last, missing values are omitted, so it is possible to divide the values and subtract 1:
s = df.groupby('Company')['Data'].agg(['last','first']).eval('last / first').sub(1)
Then find the index values of the last non-missing value per Company:
idx = df.dropna(subset=['Data']).drop_duplicates(['Company'], keep='last').index
And map only the matched rows with Series.map:
df.loc[idx, 'Date'] = df.loc[idx, 'Company'].map(s)
print (df)
Year Company Data Date
0 2019 X 341976.0 NaN
1 2020 X 1.0 NaN
2 2021 X 282872.0 -0.172831
3 2019 Y NaN NaN
4 2020 Y NaN NaN
5 2021 Y NaN NaN
6 2019 Z 4394.0 NaN
7 2020 Z 173.7 NaN
8 2021 Z 518478.0 116.996814
A:
To find the first non-NaN value you can:
iterate from the first to the last element of the column and break as soon as a value is not np.nan, or
use the .dropna method on the dataframe and take the first element from the resulting df.
To find the last:
iterate from the last element to the first (just like above), or
use dropna and take the value from the last row (a short sketch of this variant follows below).
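A short sketch of the dropna-based variant for a single company (the company label 'Z' and the column names follow the sample data above; the printed result is approximate):
valid = df.loc[df['Company'].eq('Z'), 'Data'].dropna()
if not valid.empty:
    change = valid.iloc[-1] / valid.iloc[0] - 1  # last non-NaN over first non-NaN, minus 1
    print(change)  # roughly 116.997 for company Z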
| Calculating the difference between the first non-na value and the last na-value based on a grouped condition | I am looking to calculate the percentage increase or decrease between the first and last non-na value for the following dataset:
Year
Company
Data
2019
X
341976.00
2020
X
1.000
2021
X
282872.00
2019
Y
NaN
2020
Y
NaN
2021
Y
NaN
2019
Z
4394.00
2020
Z
173.70
2021
Z
518478.00
As I want the relative change I would expect the formula to do something like:
(last non-na value)/(first non-na value)-1
This should return something like:
Year
Company
Data
Data
2019
X
341976.00
NaN
2020
X
1.000
NaN
2021
X
282872.00
-0.17
2019
Y
NaN
NaN
2020
Y
NaN
NaN
2021
Y
NaN
NaN
2019
Z
4394.00
NaN
2020
Z
173.70
NaN
2021
Z
518478.00
11.700
I have tried to combine groupby based on the company field with the first_valid_index but havent had any luck finding a solution. What is the most efficient way of calculating the relative change as above?
| [
"If aggregate GroupBy.first and\nGroupBy.last it omit missing values, so is possible divide values and subtract 1:\ns = df.groupby('Company')['Data'].agg(['last','first']).eval('last / first').sub(1)\n\nThen found index values for last non missing values per Company:\nidx = df.dropna(subset=['Data']).drop_duplicates(['Company'], keep='last').index\n\nAnd mapping only matchded rows by Series.map:\ndf.loc[idx, 'Date'] = df.loc[idx, 'Company'].map(s)\nprint (df)\n\n Year Company Data Date\n0 2019 X 341976.0 NaN\n1 2020 X 1.0 NaN\n2 2021 X 282872.0 -0.172831\n3 2019 Y NaN NaN\n4 2020 Y NaN NaN\n5 2021 Y NaN NaN\n6 2019 Z 4394.0 NaN\n7 2020 Z 173.7 NaN\n8 2021 Z 518478.0 116.996814\n\n",
"To find first non-na value u can:\n\niterate from first to last element of column and break if u value is not np.nan,\nuse .dropna method on dataframe and gets 1st element from the result df.\n\nTo find last:\n\niterate from the last to first and (just like above),\nuse dropna and gets value from last row\n\n"
] | [
1,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074626911_pandas_python.txt |
Q:
What is the point in using PySpark over Pandas?
I've been learning Spark recently (PySpark to be more precise) and at first it seemed really useful and powerful to me. Like you can process GB of data in parallel so it can be much faster than processing it with classical tools... right? So I wanted to try by myself to be convinced.
So I downloaded a csv file of almost 1GB, ~ten millions of rows (link :https://github.com/DataTalksClub/nyc-tlc-data/releases/download/fhvhv/fhvhv_tripdata_2021-01.csv.gz) and wanted to try to process it with Spark and with Pandas to see the difference.
So the goal was just to read the file and count of many rows were there for a certain date. I tried with PySpark :
Preprocess with PySpark
and with pandas :
Preprocess with Pandas
Which obviously gives the same result, but it takes about 1 min 30 s for PySpark and only (!) about 30 s for Pandas.
I feel like I missed something but I don't know what. Why does it take much more time with PySpark? Shouldn't it be the contrary?
EDIT : I did not show my Spark configuration, but I am just using it locally so maybe this can be the explanation ?
A:
Spark is a distributed processing framework. That means that, in order to use it at its full potential, you must deploy it on a cluster of machines (called nodes): the processing is then parallelized and distributed across them. This usually happens on cloud platforms like Google Cloud or AWS. Another interesting option to check out is Databricks.
If you use it on your local machine it would run on a single node, therefore it will be just a worse version of Pandas. That's fine for learning purposes but it's not the way it is meant to be used.
For more information about how a Spark cluster works, check the documentation: https://spark.apache.org/docs/latest/cluster-overview.html
Keep in mind that this is a very deep topic, and it would take a while to understand everything properly...
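For context, a rough sketch of the kind of single-machine comparison the question describes (the file path, column name and date are assumptions, since the original code was only shown as screenshots):
import pandas as pd
from pyspark.sql import SparkSession, functions as F

# pandas: one process, everything in memory
df = pd.read_csv("fhvhv_tripdata_2021-01.csv", parse_dates=["pickup_datetime"])
print((df["pickup_datetime"].dt.date == pd.Timestamp("2021-01-15").date()).sum())

# PySpark with master("local[*]") still runs on a single machine, so JVM start-up,
# task scheduling and serialization overhead easily outweigh any parallelism on a ~1 GB file
spark = SparkSession.builder.master("local[*]").getOrCreate()
sdf = spark.read.csv("fhvhv_tripdata_2021-01.csv", header=True, inferSchema=True)
print(sdf.filter(F.to_date("pickup_datetime") == "2021-01-15").count())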
| What is the point in using PySpark over Pandas? | I've been learning Spark recently (PySpark to be more precise) and at first it seemed really useful and powerful to me. Like you can process Gb of data in parallel so it can me much faster than processing it with classical tool... right ? So I wanted to try by myself to be convinced.
So I downloaded a csv file of almost 1GB, ~ten millions of rows (link :https://github.com/DataTalksClub/nyc-tlc-data/releases/download/fhvhv/fhvhv_tripdata_2021-01.csv.gz) and wanted to try to process it with Spark and with Pandas to see the difference.
So the goal was just to read the file and count of many rows were there for a certain date. I tried with PySpark :
Preprocess with PySpark
and with pandas :
Preprocess with Pandas
Which obviously gives the same result, but it take about 1mn30 for PySpark and only (!) about 30s for Pandas.
I feel like I missed something but I don't know what. Why does it take much more time with PySpark ? Shouldn't be the contrary ?
EDIT : I did not show my Spark configuration, but I am just using it locally so maybe this can be the explanation ?
| [
"Spark is a distributed processing framework. That means that, in order to use it at it's full potential, you must deploy it on a cluster of machines (called nodes): the processing is then parallelized and distributed across them. This usually happens on cloud platforms like Google Cloud or AWS. Another interesting option to check out is Databricks.\nIf you use it on your local machine it would run on a single node, therefore it will be just a worse version of Pandas. That's fine for learning purposes but it's not the way it is meant to be used.\nFor more informations about how a Spark cluster works check the documentation: https://spark.apache.org/docs/latest/cluster-overview.html\nKeep in mind that is a very deep topic, and it would take a while to decently understand everything...\n"
] | [
2
] | [] | [] | [
"pandas",
"preprocessor",
"pyspark",
"python"
] | stackoverflow_0074626809_pandas_preprocessor_pyspark_python.txt |
Q:
Missing 'path' argument in get() call
I am trying to test my views in Django, and when I run this i get the error
from django.test import TestCase, Client
from django.urls import reverse
from foodsystem_app.models import discount,menu
import json
class TestViews(TestCase):
def test_login_GET(self):
client = Client
response = client.get(reverse('login'))
self.assertEquals(response.status_code,200)
self.assertTemplateUsed(response,'foodsystem/login.html')
response = client.get(reverse('login'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Client.get() missing 1 required positional argument: 'path'
----------------------------------------------------------------------
Ran 4 tests in 0.005s
FAILED (errors=1)
I'm not sure what I am supposed to pass as the path name. This is the code for what I am testing
def login_request(request):
if request.method == "POST":
form = AuthenticationForm(request, data=request.POST)
if form.is_valid():
username = form.cleaned_data.get('username')
password = form.cleaned_data.get('password')
user = authenticate(username=username, password=password)
if user is not None:
login(request, user)
messages.info(request, f"You are now logged in as {username}.")
return redirect("main:homepage")
else:
messages.error(request,"Invalid username or password.")
else:
messages.error(request,"Invalid username or password.")
form = AuthenticationForm()
return render(request=request, template_name="login.html", context={"login_form":form})
A:
You need to instantiate the Client class, you are currently just referencing the class directly.
client = Client()
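With that change the test method could look like this (assuming the 'login' URL name and the template path from the question actually exist in the project); note that django.test.TestCase also provides a ready-made self.client attribute, so the explicit instantiation can be dropped entirely:
def test_login_GET(self):
    client = Client()  # note the parentheses
    response = client.get(reverse('login'))
    self.assertEquals(response.status_code, 200)
    self.assertTemplateUsed(response, 'foodsystem/login.html')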
| Missing 'path' argument in get() call | I am trying to test my views in Django, and when I run this i get the error
from django.test import TestCase, Client
from django.urls import reverse
from foodsystem_app.models import discount,menu
import json
class TestViews(TestCase):
def test_login_GET(self):
client = Client
response = client.get(reverse('login'))
self.assertEquals(response.status_code,200)
self.assertTemplateUsed(response,'foodsystem/login.html')
response = client.get(reverse('login'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Client.get() missing 1 required positional argument: 'path'
----------------------------------------------------------------------
Ran 4 tests in 0.005s
FAILED (errors=1)
I'm not sure what I am supposed to pass as the path name. This is the code for what I am testing
def login_request(request):
if request.method == "POST":
form = AuthenticationForm(request, data=request.POST)
if form.is_valid():
username = form.cleaned_data.get('username')
password = form.cleaned_data.get('password')
user = authenticate(username=username, password=password)
if user is not None:
login(request, user)
messages.info(request, f"You are now logged in as {username}.")
return redirect("main:homepage")
else:
messages.error(request,"Invalid username or password.")
else:
messages.error(request,"Invalid username or password.")
form = AuthenticationForm()
return render(request=request, template_name="login.html", context={"login_form":form})
| [
"You need to instantiate the Client class, you are currently just referencing the class directly.\nclient = Client()\n\n"
] | [
2
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074626915_django_python.txt |
Q:
Checking value inside Sqlalchemy queried data
I am querying Tags table and storing its values into varialble
all_tags = Tag.query.all() # <- Query all existing tags
Output:
>>> all_tags
[<Tag>: STM32, <Tag>: Linux, <Tag>: Unix, <Tag>: Skype, <Tag>: MCU, <Tag>: CPU, <Tag>: Silk, <Tag>: WAN]
I am receiving tag values from a JSON client; then I want to skip existing tags and add only new ones to the database.
for tag in json_data['tags']:
tag = Tag(tag_name=tag)
if tag in all_tags: # <- If tag from json query does exists in table skip
pass
myPost.tags.append(tag) # < - or add it
It seems this code does not work and throws the error:
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: tags.tag_name
Please advise how I can implement this task.
A:
You can query for each tag name; if it already exists in the table, skip it, otherwise create it and append it.
for tag_name in json_data['tags']:
    tag_q = Tag.query.filter_by(tag_name=tag_name).first()
    if tag_q is not None:
        continue  # tag already exists, skip it
    myPost.tags.append(Tag(tag_name=tag_name))  # <- or add the new one
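Since the question already loads every tag into all_tags, the same check can also be done in memory without one query per tag — a sketch, assuming tag_name is the unique field:
existing = {t.tag_name for t in all_tags}
for tag_name in json_data['tags']:
    if tag_name in existing:
        continue  # already in the table, skip it
    myPost.tags.append(Tag(tag_name=tag_name))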
| Checking value inside Sqlalchemy queried data | I am querying Tags table and storing its values into varialble
all_tags = Tag.query.all() # <- Query all existing tags
Output:
>>> all_tags
[<Tag>: STM32, <Tag>: Linux, <Tag>: Unix, <Tag>: Skype, <Tag>: MCU, <Tag>: CPU, <Tag>: Silk, <Tag>: WAN]
I am receiving tag values from json client, after I want to skip existiing tags and add only news to database.
for tag in json_data['tags']:
tag = Tag(tag_name=tag)
if tag in all_tags: # <- If tag from json query does exists in table skip
pass
myPost.tags.append(tag) # < - or add it
Seems this code doesnot work and throws the error:
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: tags.tag_name
Please, advice how can I implement this task
| [
"You can query for every id, if exists skip the append operation, otherwise append it.\nfor tag in json_data['tags']:\n tag_q = Tag.query.filter_by(id=tag[\"id\"]).first()\n if tag_q is not None:\n continue\n myPost.tags.append(tag_q) # < - or add it\n\n"
] | [
0
] | [] | [] | [
"flask",
"flask_sqlalchemy",
"python"
] | stackoverflow_0074626275_flask_flask_sqlalchemy_python.txt |
Q:
CLOSED (Thank you)
Another total noob question: I am not sure why my answer is printing out as a decimal. Also, in the lab the dimes are expected to be listed first, not sure how I screwed that up? I appreciate the help!
Define a function called exact_change that takes the total change amount in cents and calculates the change using the fewest coins. The coin types are pennies, nickels, dimes, and quarters. Then write a main program that reads the total change amount as an integer input, calls exact_change(), and outputs the change, one coin type per line. Use singular and plural coin names as appropriate, like 1 penny vs. 2 pennies. Output "no change" if the input is 0 or less.
Your program must define and call the following function. The function exact_change() should return num_pennies, num_nickels, num_dimes, and num_quarters.
def exact_change(user_total)
def exact_change(user_total):
return(num_dollars, num_quarters, num_dimes, num_nickles, num_pennies)
if __name__ == '__main__':
input_val = float(input())
num_dollars = input_val // 100
rem=input_val % 100
num_quarters = rem // 25
rem = rem % 25
num_dimes = rem // 10
rem = rem % 10
num_nickles = rem // 5
rem = rem % 5
num_pennies = rem
if input_val <= 0:
print("no change")
else:
num_dollars = input_val // 100
conv_dollar = str(num_dollars)
rem = input_val % 100
if num_dollars == 1:
print(conv_dollar + ' dollar')
elif num_dollars > 1:
print(conv_dollar + ' dollars')
num_quarters = rem // 25
conv_quarter = str(num_quarters)
rem = rem % 25
if num_quarters == 1:
print(conv_quarter + ' quarter')
elif num_quarters > 1:
print(conv_quarter + ' quarters')
num_dimes = rem // 10
conv_dime = str(num_dimes)
rem = rem % 10
if num_dimes == 1:
print(conv_dime + ' dime')
elif num_dimes > 1:
print(conv_dime + ' dimes')
num_nickels = rem // 5
conv_nickel = str(num_nickels)
rem = rem % 5
if num_nickels == 1:
print(conv_nickel + ' nickel')
elif num_nickels > 1:
print(conv_nickel + ' nickels')
num_pennies = rem
conv_penny = str(num_pennies)
rem = rem % 1
if num_pennies == 1:
print(conv_penny + ' penny')
elif num_pennies > 1:
print(conv_penny + ' pennies')
1:Compare output
0 / 1
Output differs. See highlights below.
Special character legend
Input
45
Your output
1.0 quarter
2.0 dimes
Expected output
2 dimes
1 quarter
2:Compare output
1 / 1
Input
0
Your output
no change
3:Compare output
0 / 2
Output differs. See highlights below.
Special character legend
Input
156
Your output
1.0 dollar
2.0 quarters
1.0 nickel
1.0 penny
Expected output
1 penny
1 nickel
6 quarters
4:Unit test
0 / 3
exact_change(300). Should return 0, 0, 0, 12
NameError: name 'input_val' is not defined
5:Unit test
0 / 3
exact_change(141). Should return 1, 1, 1, 5
NameError: name 'input_val' is not defined
A:
I didn't run it, but it seems the code shouldn't produce "floats" on output, yet there is some room for improvement:
Your program is not calling the function exact_change, it only defines it at the top of the module, but it's never called.
Use f-string, not string concatenation and you don't have to explicitly convert to string. You can also use it for adding the "plural" ending or not (this will work for all except pennies, since they change the word, not only append "s").
e.g.
num_quarters = rem // 25
plural = "s" if num_quarters > 1 else ""
print(f"{num_quarters} quarter{plural}"
This function exact_change does not make too much sense: it has no logic and only prints. All the logic is happening under the main entry point of the program (this one -> if __name__ == "__main__"). The function should do the logic and be called instead (a rough sketch of that refactor follows after this list). Also it's good to use verbs for functions and nouns for objects, so get_exact_change, calculate_exact_change and so on make more sense (just for the future, not for your current assignment).
The initial calculations are redundant since you do them again in the else block.
input_val = float(input()) - this without any validation / try...except block is problematic. If user hits enter and does NOT input anything you'll end up with ValueError, since empty string cannot be converted to float.
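A rough sketch of that refactor, following the lab's required signature (returning num_pennies, num_nickels, num_dimes, num_quarters); the plural handling here uses a small lookup dict rather than the f-string trick above:
def exact_change(user_total):
    num_quarters, rem = divmod(user_total, 25)
    num_dimes, rem = divmod(rem, 10)
    num_nickels, num_pennies = divmod(rem, 5)
    return num_pennies, num_nickels, num_dimes, num_quarters

if __name__ == '__main__':
    total = int(input())
    if total <= 0:
        print('no change')
    else:
        plurals = {'penny': 'pennies', 'nickel': 'nickels', 'dime': 'dimes', 'quarter': 'quarters'}
        for count, name in zip(exact_change(total), ('penny', 'nickel', 'dime', 'quarter')):
            if count == 1:
                print(f'1 {name}')
            elif count > 1:
                print(f'{count} {plurals[name]}')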
A:
input_val is a float. // floors the value, but doesn't convert it to an int.
When you are about to print it you need to convert it to an int first.
conv_dime = str(int(num_dimes))
| CLOSED (Thank you) | Another total noob question: I am not sure why my answer is printing out as a decimal. Also, in the lab the dimes are expected to be listed first, not sure how I screwed that up? I appreciate the help!
Define a function called exact_change that takes the total change amount in cents and calculates the change using the fewest coins. The coin types are pennies, nickels, dimes, and quarters. Then write a main program that reads the total change amount as an integer input, calls exact_change(), and outputs the change, one coin type per line. Use singular and plural coin names as appropriate, like 1 penny vs. 2 pennies. Output "no change" if the input is 0 or less.
Your program must define and call the following function. The function exact_change() should return num_pennies, num_nickels, num_dimes, and num_quarters.
def exact_change(user_total)
def exact_change(user_total):
return(num_dollars, num_quarters, num_dimes, num_nickles, num_pennies)
if __name__ == '__main__':
input_val = float(input())
num_dollars = input_val // 100
rem=input_val % 100
num_quarters = rem // 25
rem = rem % 25
num_dimes = rem // 10
rem = rem % 10
num_nickles = rem // 5
rem = rem % 5
num_pennies = rem
if input_val <= 0:
print("no change")
else:
num_dollars = input_val // 100
conv_dollar = str(num_dollars)
rem = input_val % 100
if num_dollars == 1:
print(conv_dollar + ' dollar')
elif num_dollars > 1:
print(conv_dollar + ' dollars')
num_quarters = rem // 25
conv_quarter = str(num_quarters)
rem = rem % 25
if num_quarters == 1:
print(conv_quarter + ' quarter')
elif num_quarters > 1:
print(conv_quarter + ' quarters')
num_dimes = rem // 10
conv_dime = str(num_dimes)
rem = rem % 10
if num_dimes == 1:
print(conv_dime + ' dime')
elif num_dimes > 1:
print(conv_dime + ' dimes')
num_nickels = rem // 5
conv_nickel = str(num_nickels)
rem = rem % 5
if num_nickels == 1:
print(conv_nickel + ' nickel')
elif num_nickels > 1:
print(conv_nickel + ' nickels')
num_pennies = rem
conv_penny = str(num_pennies)
rem = rem % 1
if num_pennies == 1:
print(conv_penny + ' penny')
elif num_pennies > 1:
print(conv_penny + ' pennies')
1:Compare output
0 / 1
Output differs. See highlights below.
Special character legend
Input
45
Your output
1.0 quarter
2.0 dimes
Expected output
2 dimes
1 quarter
2:Compare output
1 / 1
Input
0
Your output
no change
3:Compare output
0 / 2
Output differs. See highlights below.
Special character legend
Input
156
Your output
1.0 dollar
2.0 quarters
1.0 nickel
1.0 penny
Expected output
1 penny
1 nickel
6 quarters
4:Unit test
0 / 3
exact_change(300). Should return 0, 0, 0, 12
NameError: name 'input_val' is not defined
5:Unit test
0 / 3
exact_change(141). Should return 1, 1, 1, 5
NameError: name 'input_val' is not defined
| [
"I didn't run it, but it seems the code shouldn't produce \"floats\" on output, yet there is some room for improvement:\n\nYour program is not calling the function exact_change, it only defines it at the top of the module, but it's never called.\n\nUse f-string, not string concatenation and you don't have to explicitly convert to string. You can also use it for adding the \"plural\" ending or not (this will work for all except pennies, since they change the word, not only append \"s\").\ne.g.\n num_quarters = rem // 25\n plural = \"s\" if num_quarters > 1 else \"\"\n print(f\"{num_quarters} quarter{plural}\"\n\n\nThis function exact_change does not make too much sense, it has no logic and only prints. All the logic is happening under main entry point of the program (this one-> if __name__=\"__main__\"). The function should do the logic and be called instead. Also it's good to use verb for functions, nouns for objects. So get_exact_change, calculate_exact_change and so on makes more sense (just for the future, not for your current assignment).\n\nThe initial calculations are redundant since you do the again in the else block.\n\ninput_val = float(input()) - this without any validation / try...except block is problematic. If user hits enter and does NOT input anything you'll end up with ValueError, since empty string cannot be converted to float.\n\n\n",
"input_val is a float. // floors the value, but doesn't convert it to an int.\nWhen you are about to print it you need to convert it to an int first.\nconv_dime = str(int(num_dimes))\n\n"
] | [
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0074623637_python.txt |
Q:
Isolate rows containing IDs in a column based on another column value, yet keeping all the records of original ID
I'd prefer to explain it graphically as it's hard for me to sum it up in the title.
Given a dataframe like this one below:
id type
1 new
2 new
2 new repeater
2 repeater
3 repeater
4 new
4 new repeater
5 new repeater
5 repeater
6 new
I would like to filter it so that it only returns the values in the id column that appear in type at least once as new, yet once this condition is fulfilled I want the remaining records belonging to that id to stay in the resulting DF. In other words, it should look as follows:
id type
1 new
2 new
2 new repeater
2 repeater
4 new
4 new repeater
6 new
A:
Use GroupBy.cummax with a boolean mask to test the first-match condition and filter with boolean indexing:
df = df[df['type'].eq('new').groupby(df['id']).cummax()]
print (df)
id type
0 1 new
1 2 new
2 2 new repeater
3 2 repeater
5 4 new
6 4 new repeater
9 6 new
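To see why this works, the intermediate mask flips to True at the first 'new' row of an id and stays True for that id's later rows, while ids that never see 'new' (3 and 5) stay False throughout — an illustrative check on the sample data:
mask = df['type'].eq('new').groupby(df['id']).cummax()
print(mask.tolist())
# [True, True, True, True, False, True, True, False, False, True]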
| Isolate rows containing IDs in a column based on another column value, yet keeping all the records of original ID | I'd prefer to explain it grafically as it's hard for me to sum it up in the title.
Given a dataframe like this one below:
id type
1 new
2 new
2 new repeater
2 repeater
3 repeater
4 new
4 new repeater
5 new repeater
5 repeater
6 new
I would like to filter it so it just returns me the values in the column id that appear in type at least as new, yet once this condition is fulfilled I want the remaining records belonging to this ID to stay in the outcoming DF. In other words, it should look like follows:
id type
1 new
2 new
2 new repeater
2 repeater
4 new
4 new repeater
6 new
| [
"Use GroupBy.cummax with bollean mask for test first match condition and filter in boolean indexing:\ndf = df[df['type'].eq('new').groupby(df['id']).cummax()]\nprint (df)\n id type\n0 1 new\n1 2 new\n2 2 new repeater\n3 2 repeater\n5 4 new\n6 4 new repeater\n9 6 new\n\n"
] | [
1
] | [] | [] | [
"jupyter_lab",
"numpy",
"pandas",
"python"
] | stackoverflow_0074627059_jupyter_lab_numpy_pandas_python.txt |
Q:
How to update cells if row name is X And if column header is Y using openpyxl
I have an excel report and I want to update cells if row name is X And if column header is Y.
I have 53 columns with date, and 102 rows with names, so it's impossible to use 53 lines of code for each column, and 102 lines of code for each row, so I need code that checks if the row's value is for example SFR BOX HBD ECO THD and column header is 2022,10,31 then update the cell in this position.
from openpyxl import load_workbook
Wb = load_workbook('file.xlsx')
Ws = wb['VD CONQUETE DC']
for rownum in range(2, Ws.max_rows):
statusCol = Ws.cell(row=rownum, column=3).value
if statusCol == 'SFR BOX HBD ECO THD':
Ws.cell(row=rownum, column='2022,10,31', value=vente1date1)
file photo
the code
A:
The screenshot of your Excel file shows the sheet "CUMUL" while your code describes "VD CONQUETE DC", but anyway, you can find below a proposition to update a value that match a given conditon. Feel free to readapt the code to fit your actual dataset.
from openpyxl import load_workbook
from datetime import datetime
# --- Loading the spreadsheet
wb = load_workbook("file.xlsx")
ws = wb["CUMUL"]
# --- Defining the filters/coordinates
date_name= "31/10/2022"
sales_name = "SFR BOX HBD ECO THD"
# --- Updating the matched value
vente1date1 = "new value"
for num_row in range(2, ws.max_row+1):
sales_row = ws.cell(row=num_row, column=2) #Assuming that the sales are located in column B (thus, column=2)
if sales_row.value == sales_name:
for num_col in range(1, ws.max_column+1):
date_col = ws.cell(row=1, column=num_col) #Assuming that the dates are located in the first row (thus, row=1)
if isinstance(date_col.value, datetime):
date_col_str = date_col.value.strftime("%d/%m/%Y") #Converting parsed date to string
if date_col_str == date_name:
val_row = ws.cell(row=num_row, column=num_col)
val_row.value= vente1date1 #Updating the cell value
wb.save("file.xlsx")
If the dates are stored as text in Excel, use this bloc for the update:
for num_row in range(2, ws.max_row+1):
sales_row = ws.cell(row=num_row, column=2) #Assuming that the sales are in column B (thus, column=2)
if sales_row.value == sales_name:
for num_col in range(1, ws.max_column+1):
date_col = ws.cell(row=1, column=num_col) #Assuming that the dates are located in the first row (thus, row=1)
if date_col.value == date_name:
val_row = ws.cell(row=num_row, column=num_col)
val_row.value= vente1date1 #Updating the cell value
NB: Make sure to always keep a backup/copy of your original Excel file before running any kind of python/openpyxl script.
| How to update cells if row name is X And if column header is Y using openpyxl | I have an excel report and I want to update cells if row name is X And if column header is Y.
I have 53 columns with date, and 102 rows with names, so it's impossible to use 53 lines of code for each column, and 102 lines of code for each row, so I need code that checks if the row's value is for example SFR BOX HBD ECO THD and column header is 2022,10,31 then update the cell in this position.
from openpyxl import load_workbook
Wb = load_workbook('file.xlsx')
Ws = wb['VD CONQUETE DC']
for rownum in range(2, Ws.max_rows):
statusCol = Ws.cell(row=rownum, column=3).value
if statusCol == 'SFR BOX HBD ECO THD':
Ws.cell(row=rownum, column='2022,10,31', value=vente1date1)
file photo
the code
| [
"The screenshot of your Excel file shows the sheet \"CUMUL\" while your code describes \"VD CONQUETE DC\", but anyway, you can find below a proposition to update a value that match a given conditon. Feel free to readapt the code to fit your actual dataset.\nfrom openpyxl import load_workbook\nfrom datetime import datetime\n\n# --- Loading the spreadsheet\nwb = load_workbook(\"file.xlsx\")\nws = wb[\"CUMUL\"]\n\n# --- Defining the filters/coordinates\ndate_name= \"31/10/2022\"\nsales_name = \"SFR BOX HBD ECO THD\"\n\n# --- Updating the matched value\nvente1date1 = \"new value\"\nfor num_row in range(2, ws.max_row+1):\n sales_row = ws.cell(row=num_row, column=2) #Assuming that the sales are located in column B (thus, column=2)\n if sales_row.value == sales_name:\n for num_col in range(1, ws.max_column+1):\n date_col = ws.cell(row=1, column=num_col) #Assuming that the dates are located in the first row (thus, row=1)\n if isinstance(date_col.value, datetime):\n date_col_str = date_col.value.strftime(\"%d/%m/%Y\") #Converting parsed date to string\n if date_col_str == date_name:\n val_row = ws.cell(row=num_row, column=num_col)\n val_row.value= vente1date1 #Updating the cell value\n \nwb.save(\"file.xlsx\")\n\nIf the dates are stored as text in Excel, use this bloc for the update:\nfor num_row in range(2, ws.max_row+1):\n sales_row = ws.cell(row=num_row, column=2) #Assuming that the sales are in column B (thus, column=2)\n if sales_row.value == sales_name:\n for num_col in range(1, ws.max_column+1):\n date_col = ws.cell(row=1, column=num_col) #Assuming that the dates are located in the first row (thus, row=1)\n if date_col.value == date_name:\n val_row = ws.cell(row=num_row, column=num_col)\n val_row.value= vente1date1 #Updating the cell value\n\nNB: Make sure to keep always a backup/copy of your original Excel file before running any kind of python/openpyxl's script.\n"
] | [
0
] | [] | [] | [
"openpyxl",
"python"
] | stackoverflow_0074626535_openpyxl_python.txt |
Q:
Python - Plot every three columns from dataframe in one figure for multiple figures
I have a dataframe with 150 columns and I want to plot every three together (the variable plus/minus the standard deviation) against the date, which means that I want to end up with 50 plots. Those 50 I want to have together in an X by X matrix (whatever is best possible).
The pandas dataframe looks like this:
I also have three dataframes with the separate variable/stdev minus/stdev plus if thats easier to plot.
What I tried so far:
for colname in df.columns:
plt.figure()
plt.plot(df["date"], df[colname])
plt.plot(df["date"], df[colname+"_minus_stdev"])
plt.plot(df["date"], df[colname+"_plus_stdev"])
plt.savefig(colname+".png")
plt.show()
But this doesn't work, and then I would have duplicate plots.
Help would be appreciated!
Best,
Lena
A:
Try putting the plt.figure() and plt.plot() outside the for loop.
e.g.
plt.figure()
for colname in df.columns:
plt.plot(df["date"], df[colname])
plt.plot(df["date"], df[colname+"_minus_stdev"])
plt.plot(df["date"], df[colname+"_plus_stdev"])
plt.savefig(colname+".png")
plt.show()
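If the goal is really one small plot per variable arranged in a grid (about 50 panels), a subplot grid is usually the cleaner route — a rough sketch, assuming the value columns are named like var, var_minus_stdev and var_plus_stdev as described in the question:
import math
import matplotlib.pyplot as plt

base_cols = [c for c in df.columns
             if c != 'date' and not c.endswith(('_minus_stdev', '_plus_stdev'))]
ncols = 5
nrows = math.ceil(len(base_cols) / ncols)
fig, axes = plt.subplots(nrows, ncols, figsize=(4 * ncols, 3 * nrows), squeeze=False)

for ax, col in zip(axes.flat, base_cols):
    ax.plot(df['date'], df[col])
    ax.plot(df['date'], df[col + '_minus_stdev'], linestyle='--')
    ax.plot(df['date'], df[col + '_plus_stdev'], linestyle='--')
    ax.set_title(col)

for ax in axes.flat[len(base_cols):]:  # hide any unused panels
    ax.set_visible(False)

fig.tight_layout()
fig.savefig('all_variables.png')
plt.show()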
| Python - Plot every three columns from dataframe in one figure for multiple figures | I have a dataframe with 150 columns and I want to plot every three together (the variable plus minus the standarddeviation) against the date, which means that I want to end up with 50 plots. Those 50 I want to have together in a X by X matrix (whats best possible).
The pandas dataframe looks like this:
I also have three dataframes with the separate variable/stdev minus/stdev plus if thats easier to plot.
What I tried so far:
for colname in df.columns:
plt.figure()
plt.plot(df["date"], df[colname])
plt.plot(df["date"], df[colname+"_minus_stdev"])
plt.plot(df["date"], df[colname+"_plus_stdev"])
plt.savefig(colname+".png")
plt.show()
But this doesn´t work and then I would have duplicate plots.
Help would be appreciated!
Best,
Lena
| [
"Try putting the plt.figure() and plt.plot() outside the for loop.\ne.g.\nplt.figure()\n\nfor colname in df.columns:\n plt.plot(df[\"date\"], df[colname])\n plt.plot(df[\"date\"], df[colname+\"_minus_stdev\"])\n plt.plot(df[\"date\"], df[colname+\"_plus_stdev\"])\n plt.savefig(colname+\".png\")\n\nplt.show()\n\n"
] | [
0
] | [] | [] | [
"matplotlib",
"pandas",
"python",
"pythonplotter"
] | stackoverflow_0074622054_matplotlib_pandas_python_pythonplotter.txt |
Q:
Why is this function returning a list when called within another function?
My function is set to return a dictionary. When called, it returns the dictionary. However, if I call the function from within another function, it returns a list.
`
def draw(self, num: int) -> dict:
drawn_dict = {}
if num > len(self.contents):
return self.contents
else:
while num >= 1:
drawn_num = self.contents.pop(random.randint(0, len(self.contents) - 1))
drawn_dict.setdefault(drawn_num, 0)
drawn_dict[drawn_num] +=1
num -= 1
return drawn_dict
def experiment(hat, expected_balls, num_balls_drawn, num_experiments):
matches = 0
full_match = 0
count = 0
print(hat.draw(num_balls_drawn))
print(hat.draw(5))
`
When I call the draw function and print the result, I get the dictionary as expected. But when the draw function is called and result is printed within the experiment function, I get a list.
A:
What is the type of self.contents? I think it is a list, and that is the answer to your question :-)
def draw(self, num: int) -> dict:
drawn_dict = {}
if num > len(self.contents):
return self.contents # <- THIS
else:
while num >= 1:
drawn_num = self.contents.pop(random.randint(0, len(self.contents) - 1))
drawn_dict.setdefault(drawn_num, 0)
drawn_dict[drawn_num] +=1
num -= 1
return drawn_dict
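If the early-exit branch should also hand back a dict, it could count the remaining balls instead of returning the raw list — a small sketch, assuming self.contents holds hashable ball names:
from collections import Counter

if num > len(self.contents):
    return dict(Counter(self.contents))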
A:
I realized the issue. I was calling the draw function before the experiment function, and by calling draw I was editing the self.contents list, which affected its length, thereby triggering the "if num > len(self.contents)" branch. So the function works as expected when I don't modify the list before actually using it!
| Why is this function returning a list when called within another function? | My function is set to return a dictionary. When called, it returns the dictionary. However, if I call the function from within another function, it returns a list.
`
def draw(self, num: int) -> dict:
drawn_dict = {}
if num > len(self.contents):
return self.contents
else:
while num >= 1:
drawn_num = self.contents.pop(random.randint(0, len(self.contents) - 1))
drawn_dict.setdefault(drawn_num, 0)
drawn_dict[drawn_num] +=1
num -= 1
return drawn_dict
def experiment(hat, expected_balls, num_balls_drawn, num_experiments):
matches = 0
full_match = 0
count = 0
print(hat.draw(num_balls_drawn))
print(hat.draw(5))
`
When I call the draw function and print the result, I get the dictionary as expected. But when the draw function is called and result is printed within the experiment function, I get a list.
| [
"What is a type of the self.contents? I thing it is the list and this is answer to your question :-)\n\n def draw(self, num: int) -> dict:\n drawn_dict = {}\n if num > len(self.contents):\n return self.contents # <- THIS\n else:\n while num >= 1:\n drawn_num = self.contents.pop(random.randint(0, len(self.contents) - 1))\n drawn_dict.setdefault(drawn_num, 0)\n drawn_dict[drawn_num] +=1\n num -= 1\n return drawn_dict\n\n",
"I realized the issue. I was calling the draw function before experiment function, and by calling draw, I was editing the self.contents list which affected its length thereby triggering the \"if num> len(self.contents)\". So function works as expected when I don't modify the list before actually using it!\n"
] | [
0,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074625995_python_python_3.x.txt |
Q:
use javascript to display django object
I want to implement below using javascript so row click it will get index and display object of this index.
in django template this is working.
<div>{{ project.0.customer_name}}</div>
<div>{{ project.1.customer_name}}</div>
but the JavaScript below is not working even though I get the correct ID.
var cell = row.getElementsByTagName("td")[0];
var id= parseInt(cell.innerHTML);
// not working
document.getElementById('lblname').innerHTML = '{{ project.id.customer_name}}';
// this is also working but what I want is dynamic based on row click
document.getElementById('lblname').innerHTML = '{{ project.1.customer_name}}';
display django object using index in javascript.
A:
You have to understand what is happening with your code:
Templates like this are processed on the server:
'{{ project.id.customer_name}}'
I believe you do not have project.id on your server side, so you get None in the above line, and the moustache tag becomes something like an empty string, and the actual JavaScript code is like this:
document.getElementById('lblname').innerHTML = '';
It is only now that JS code is executed, and you can imagine what it will do.
What you want is processing moustache tags after the id variable has been set in JS, which is not how stuff works (at least, if you don't have some crazy tool chain).
One way of achieving what you want is to provide the whole project object (or array) to JavaScript by doing the following:
<script>
const project = {{ project|safe }};
</script>
A complete Django template could look like this (I used <span>s instead of table cells:
<!doctype html>
<html>
<head>
<meta charset="utf-8" />
<title>django test</title>
</head>
<body>
{% block content %}
{% for item in project %}
<span data-id="{{ forloop.counter }}">{{ forloop.counter }}</span>
{% endfor %}
<div id="output" style="display: flex; flex-direction: column-reverse;">
</div>
<script>
const project = {{ project|safe }};
const spans = document.getElementsByTagName('span');
const output = document.getElementById('output');
const onSpanClick = (event) => {
const id = parseInt(event.target.getAttribute('data-id'), 10) - 1; // forloop.counter is 1-based, JS arrays are 0-based
const div = document.createElement('div');
div.innerHTML = project[id].customer_name;
output.appendChild(div);
}
Array.from(spans).forEach(span => {
span.addEventListener('click', onSpanClick);
})
</script>
{% endblock %}
</body>
</html>
Another way is the AJAX way: you create an API endpoint on your server side, so that a URL like example.com/api/customer_name/?id=999 responds with the name of customer id=999 when you click on some element and trigger an XMLHttpRequest with the param id=999.
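A minimal sketch of that endpoint on the Django side (the view, URL, and model names here are illustrative assumptions, not part of the original project):
# views.py - hypothetical JSON endpoint for the AJAX approach
from django.http import JsonResponse
from .models import Project  # assumed model that stores customer_name

def customer_name(request):
    pk = request.GET.get('id')
    project = Project.objects.filter(pk=pk).first()
    if project is None:
        return JsonResponse({'error': 'not found'}, status=404)
    return JsonResponse({'customer_name': project.customer_name})

# urls.py would then map it, for example:
# urlpatterns = [path('api/customer_name/', customer_name, name='customer_name')]
On the client side, the row click handler can then call fetch('/api/customer_name/?id=' + id), read the JSON response, and write customer_name into the lblname element.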
| use javascript to display django object | I want to implement below using javascript so row click it will get index and display object of this index.
in django template this is working.
<div>{{ project.0.customer_name}}</div>
<div>{{ project.1.customer_name}}</div>
but the below javascript are not working even I get the correct ID.
var cell = row.getElementsByTagName("td")[0];
var id= parseInt(cell.innerHTML);
// not working
document.getElementById('lblname').innerHTML = '{{ project.id.customer_name}}';
// this is also working but what I want is dynamic base on row click
document.getElementById('lblname').innerHTML = '{{ project.1.customer_name}}';
display django object using index in javascript.
| [
"You have to understand what is happening with your code:\nTemplates like this are processed on the server:\n'{{ project.id.customer_name}}'\n\nI believe you do not have project.id on your server side, so you get None in the above line, and the moustache tag becomes smth like an empty string, and actual JavaScript code is like this:\ndocument.getElementById('lblname').innerHTML = '';\n\nIt is only now that JS code is executed, and you can imagine what it will do.\nWhat you want is processing moustache tags after the id variable has been set in JS, which is not how stuff works (at least, if you don't have some crazy tool chain).\nOne way of achieving what you want is to provide the whole project object (or array) to JavaScript by doing the following:\n<script>\nconst project = {{ project|safe }};\n</script>\n\nA complete Django template could look like this (I used <span>s instead of table cells:\n<!doctype html>\n<html>\n<head>\n<meta charset=\"utf-8\" />\n<title>django test</title>\n</head>\n<body>\n{% block content %}\n{% for item in project %}\n<span data-id=\"{{ forloop.counter }}\">{{ forloop.counter }}</span>\n{% endfor %}\n\n<div id=\"output\" style=\"display: flex; flex-direction: column-reverse;\">\n</div>\n\n<script>\nconst project = {{ project|safe }};\nconst spans = document.getElementsByTagName('span');\nconst output = document.getElementById('output');\n\nconst onSpanClick = (event) => {\n const id = parseInt(event.target.getAttribute('data-id'), 10) - 1; // forloop.counter is 1-based, JS arrays are 0-based\n const div = document.createElement('div');\n div.innerHTML = project[id].customer_name;\n output.appendChild(div);\n}\n\nArray.from(spans).forEach(span => {\n span.addEventListener('click', onSpanClick);\n})\n\n</script>\n{% endblock %}\n</body>\n</html>\n\nAnother way is the AJAX way: you create an API endpoint on your server side, so that an URL like example.com/api/customer_name/?id=999 responds to you with the name of customer id=999 when you click on some element and trigger an XMLHttpRequest with param id=999.\n"
] | [
0
] | [] | [] | [
"django",
"javascript",
"python"
] | stackoverflow_0074624791_django_javascript_python.txt |
Q:
Pandas filter dataframe by time
This is not a duplicate of: filter pandas dataframe by time because the solution offered there doesn't address the same column type that needs to be filtered.
I have the following dataframe:
i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
ts = pd.DataFrame({'A': [1, 2, 3, 4],
'B':i})
ts['date'] = pd.to_datetime(ts['B']).dt.date
ts['time'] = pd.to_datetime(ts['B']).dt.time
ts = ts.drop('B', axis = 1)
I want to filter on just the time column and I tried this:
ts['time'].between_time('0:45', '0:15')
But it doesn't work. I get the error: TypeError: Index must be DatetimeIndex
Do you have any idea how to do this? Thanks.
A:
EDIT: Solution without B column:
If you need to filter by the time column, use Series.between:
from datetime import time
df = ts[ts['time'].between(time(0,15,0), time(0,45,0))]
print (df)
A B date time
1 2 2018-04-10 00:20:00 2018-04-10 00:20:00
2 3 2018-04-11 00:40:00 2018-04-11 00:40:00
Original solution with B column:
Create a DatetimeIndex if you need to filter with DataFrame.between_time:
df = ts.set_index('B').between_time('0:15', '0:45')
print (df)
A date time
B
2018-04-10 00:20:00 2 2018-04-10 00:20:00
2018-04-11 00:40:00 3 2018-04-11 00:40:00
Another DatetimeIndex solution, using DatetimeIndex.indexer_between_time for the positions of matched rows and selecting them with DataFrame.iloc:
df = ts.iloc[ts.set_index('B').index.indexer_between_time('0:15', '0:45')]
print (df)
A B date time
1 2 2018-04-10 00:20:00 2018-04-10 00:20:00
2 3 2018-04-11 00:40:00 2018-04-11 00:40:00
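One usage note on the Series.between solution (assuming pandas 1.3 or newer and the ts frame from the question): it accepts an inclusive argument, so you can state explicitly whether the endpoints are kept:
from datetime import time

# keep both endpoints (the default); 'left', 'right' or 'neither' are also accepted
mask = ts['time'].between(time(0, 15), time(0, 45), inclusive='both')
df = ts[mask]
print(df)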
| Pandas filter dataframe by time | This is not a duplicate of: filter pandas dataframe by time because the solution offered there doesn't address the same column type that needs to be filtered.
I have the following dataframe:
i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
ts = pd.DataFrame({'A': [1, 2, 3, 4],
'B':i})
ts['date'] = pd.to_datetime(ts['B']).dt.date
ts['time'] = pd.to_datetime(ts['B']).dt.time
ts = ts.drop('B', axis = 1)
I want to filter on just the time columns and i tried this:
ts['time'].between_time('0:45', '0:15')
But it doesn't work. I get the error: TypeError: Index must be DatetimeIndex
Do you have any idea how to do this? thanks
| [
"EDIT: Solution without B column:\nIf need filter by time column use Series.between:\nfrom datetime import time\n\ndf = ts[ts['time'].between(time(0,15,0), time(0,45,0))]\nprint (df)\n A B date time\n1 2 2018-04-10 00:20:00 2018-04-10 00:20:00\n2 3 2018-04-11 00:40:00 2018-04-11 00:40:00\n\nOriginal solution with B column:\nCreate DatetimeIndex if need filter by DataFrame.between_time:\ndf = ts.set_index('B').between_time('0:15', '0:45')\nprint (df)\n A date time\nB \n2018-04-10 00:20:00 2 2018-04-10 00:20:00\n2018-04-11 00:40:00 3 2018-04-11 00:40:00\n\nSolution again with DatetimeIndex with DatetimeIndex.indexer_between_time for positions of matched rows and selecting by DataFrame.iloc:\ndf = ts.iloc[ts.set_index('B').index.indexer_between_time('0:15', '0:45')]\nprint (df)\n A B date time\n1 2 2018-04-10 00:20:00 2018-04-10 00:20:00\n2 3 2018-04-11 00:40:00 2018-04-11 00:40:00\n\n"
] | [
1
] | [] | [] | [
"date",
"datetime",
"pandas",
"python",
"time"
] | stackoverflow_0074627123_date_datetime_pandas_python_time.txt |
Q:
Is it possible to maintain login session in selenium-python?
I use Selenium with the method below:
open chrome by using chromedriver selenium
manually login
get information of webpage
However, after doing this, Selenium seems to get the HTML code as if I were not logged in.
Is there a solution?
A:
Try this code:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.options import Options
options = Options()
# path of the chrome's profile parent directory - change this path as per your system
options.add_argument(r"user-data-dir=C:\\Users\\User\\AppData\\Local\\Google\\Chrome\\User Data")
# name of the directory - change this directory name as per your system
options.add_argument("--profile-directory=Default")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
You can find the Chrome profile directory by opening 'chrome://version/' in the Chrome browser.
Add the login code after the above code block; from the second execution onwards you will see that the account is already logged in.
Close all Chrome browser windows before running the code.
A:
Instead of storing and maintaining the login session, another easy approach would be to use the pickle library to store the cookies after logging in.
As an example to store the cookies from Instagram after logging in and then to reuse them you can use the following solution:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import pickle
# first login
driver.get('http://www.instagram.org')
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='username']"))).send_keys("_SelmanFarukYılmaz_")
driver.find_element(By.CSS_SELECTOR, "input[name='password']").send_keys("Selman_Faruk_Yılmaz")
driver.find_element(By.CSS_SELECTOR, "button[type='submit'] div").click()
pickle.dump( driver.get_cookies() , open("cookies.pkl","wb"))
driver.quit()
# future logins
driver = webdriver.Chrome(service=s, options=options)
driver.get('http://www.instagram.org')
cookies = pickle.load(open("cookies.pkl", "rb"))
for cookie in cookies:
driver.add_cookie(cookie)
driver.get('http://www.instagram.org')
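One small addition to the cookie approach (the helper name is illustrative, not part of the answer above): stored cookies eventually expire, and Selenium cookie dicts may carry an optional expiry field in epoch seconds, so stale entries can be skipped when loading:
import pickle
import time

def add_saved_cookies(driver, path="cookies.pkl"):
    # Skip cookies whose 'expiry' timestamp is already in the past,
    # so expired entries are not sent back to the site.
    now = time.time()
    with open(path, "rb") as fh:
        cookies = pickle.load(fh)
    for cookie in cookies:
        if cookie.get("expiry") and cookie["expiry"] < now:
            continue
        driver.add_cookie(cookie)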
| Is it possible to maintain login session in selenium-python? | I use Selenium below method.
open chrome by using chromedriver selenium
manually login
get information of webpage
However, after doing this, Selenium seems to get the html code when not logged in.
Is there a solution?
| [
"Try this code:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom selenium.webdriver.chrome.options import Options\n\noptions = Options()\n\n# path of the chrome's profile parent directory - change this path as per your system\noptions.add_argument(r\"user-data-dir=C:\\\\Users\\\\User\\\\AppData\\\\Local\\\\Google\\\\Chrome\\\\User Data\")\n# name of the directory - change this directory name as per your system\noptions.add_argument(\"--profile-directory=Default\")\ndriver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)\n\nYou can get the chrome profile directory by passing this command - 'chrome://version/' in chrome browser.\nAdd the code for login after the above code block, then if you execute the code for the second time onwards you can see the account is already logged in.\nBefore running the code close all the chrome browser windows and execute.\n",
"Instead of storing and maintaining the login session another easy approach would be to use pickle library to store the cookies post login.\nAs an example to store the cookies from Instagram after logging in and then to reuse them you can use the following solution:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\nimport pickle\n\n# first login\ndriver.get('http://www.instagram.org')\nWebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"input[name='username']\"))).send_keys(\"_SelmanFarukYılmaz_\")\ndriver.find_element(By.CSS_SELECTOR, \"input[name='password']\").send_keys(\"Selman_Faruk_Yılmaz\")\ndriver.find_element(By.CSS_SELECTOR, \"button[type='submit'] div\").click()\npickle.dump( driver.get_cookies() , open(\"cookies.pkl\",\"wb\"))\ndriver.quit()\n\n# future logins\ndriver = webdriver.Chrome(service=s, options=options)\ndriver.get('http://www.instagram.org')\ncookies = pickle.load(open(\"cookies.pkl\", \"rb\"))\nfor cookie in cookies:\n driver.add_cookie(cookie)\ndriver.get('http://www.instagram.org')\n\n"
] | [
0,
0
] | [] | [] | [
"html",
"python",
"selenium"
] | stackoverflow_0074624816_html_python_selenium.txt |
Q:
Python glob multiple filetypes
Is there a better way to use glob.glob in python to get a list of multiple file types such as .txt, .mdown, and .markdown? Right now I have something like this:
projectFiles1 = glob.glob( os.path.join(projectDir, '*.txt') )
projectFiles2 = glob.glob( os.path.join(projectDir, '*.mdown') )
projectFiles3 = glob.glob( os.path.join(projectDir, '*.markdown') )
A:
Maybe there is a better way, but how about:
import glob
types = ('*.pdf', '*.cpp') # the tuple of file types
files_grabbed = []
for files in types:
files_grabbed.extend(glob.glob(files))
# files_grabbed is the list of pdf and cpp files
Perhaps there is another way, so wait in case someone else comes up with a better answer.
A:
glob returns a list: why not just run it multiple times and concatenate the results?
from glob import glob
project_files = glob('*.txt') + glob('*.mdown') + glob('*.markdown')
A:
from glob import glob
files = glob('*.gif')
files.extend(glob('*.png'))
files.extend(glob('*.jpg'))
print(files)
If you need to specify a path, loop over match patterns and keep the join inside the loop for simplicity:
from os.path import join
from glob import glob
files = []
for ext in ('*.gif', '*.png', '*.jpg'):
files.extend(glob(join("path/to/dir", ext)))
print(files)
A:
So many answers that suggest globbing as many times as number of extensions, I'd prefer globbing just once instead:
from pathlib import Path
files = (p.resolve() for p in Path(path).glob("**/*") if p.suffix in {".c", ".cc", ".cpp", ".hxx", ".h"})
A:
Chain the results:
import itertools as it, glob
def multiple_file_types(*patterns):
return it.chain.from_iterable(glob.iglob(pattern) for pattern in patterns)
Then:
for filename in multiple_file_types("*.txt", "*.sql", "*.log"):
# do stuff
A:
For example, for *.mp3 and *.flac on multiple folders, you can do:
mask = r'music/*/*.[mf][pl][3a]*'
glob.glob(mask)
The idea can be extended to more file extensions, but you have to check that the combinations won't match any other unwanted file extension you may have on those folders. So, be careful with this.
To automatically combine an arbitrary list of extensions into a single glob pattern, you can do the following:
def multi_extension_glob_mask(mask_base, *extensions):
mask_ext = ['[{}]'.format(''.join(set(c))) for c in zip(*extensions)]
if not mask_ext or len(set(len(e) for e in extensions)) > 1:
mask_ext.append('*')
return mask_base + ''.join(mask_ext)
mask = multi_extension_glob_mask('music/*/*.', 'mp3', 'flac', 'wma')
print(mask) # music/*/*.[mfw][pml][a3]*
A:
With glob it is not possible. You can use only:
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any character not in seq
Use os.listdir and a regexp to check patterns:
for x in os.listdir('.'):
    if re.match(r'.*\.txt|.*\.sql', x):
        print(x)
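For the same wildcard semantics without writing a regular expression, here is a short sketch using fnmatch over os.listdir (the pattern tuple is just an example):
import os
from fnmatch import fnmatch

patterns = ('*.txt', '*.sql')
matches = [name for name in os.listdir('.')
           if any(fnmatch(name, pattern) for pattern in patterns)]
print(matches)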
A:
While Python's default glob doesn't really follow after Bash's glob, you can do this with other libraries. We can enable braces in wcmatch's glob.
>>> from wcmatch import glob
>>> glob.glob('*.{md,ini}', flags=glob.BRACE)
['LICENSE.md', 'README.md', 'tox.ini']
You can even use extended glob patterns if that is your preference:
from wcmatch import glob
>>> glob.glob('*.@(md|ini)', flags=glob.EXTGLOB)
['LICENSE.md', 'README.md', 'tox.ini']
A:
Same answer as @BPL (which is computationally efficient) but which can handle any glob pattern rather than extension:
import os
from fnmatch import fnmatch
folder = "path/to/folder/"
patterns = ("*.txt", "*.md", "*.markdown")
files = [f.path for f in os.scandir(folder) if any(fnmatch(f, p) for p in patterns)]
This solution is both efficient and convenient. It also closely matches the behavior of glob (see the documentation).
Note that this is simpler with the built-in package pathlib:
from pathlib import Path
folder = Path("/path/to/folder")
patterns = ("*.txt", "*.md", "*.markdown")
files = [f for f in folder.iterdir() if any(f.match(p) for p in patterns)]
A:
Here is one-line list-comprehension variant of Pat's answer (which also includes that you wanted to glob in a specific project directory):
import os, glob
exts = ['*.txt', '*.mdown', '*.markdown']
files = [f for ext in exts for f in glob.glob(os.path.join(project_dir, ext))]
You loop over the extensions (for ext in exts), and then for each extension you take each file matching the glob pattern (for f in glob.glob(os.path.join(project_dir, ext))).
This solution is short, and without any unnecessary for-loops, nested list-comprehensions, or functions to clutter the code. Just pure, expressive, pythonic Zen.
This solution allows you to have a custom list of exts that can be changed without having to update your code. (This is always a good practice!)
The list-comprehension is the same used in Laurent's solution (which I've voted for). But I would argue that it is usually unnecessary to factor out a single line to a separate function, which is why I'm providing this as an alternative solution.
Bonus:
If you need to search not just a single directory, but also all sub-directories, you can pass recursive=True and use the multi-directory glob symbol ** 1:
files = [f for ext in exts
for f in glob.glob(os.path.join(project_dir, '**', ext), recursive=True)]
This will invoke glob.glob('<project_dir>/**/*.txt', recursive=True) and so on for each extension.
1 Technically, the ** glob symbol simply matches one or more characters including forward-slash / (unlike the singular * glob symbol). In practice, you just need to remember that as long as you surround ** with forward slashes (path separators), it matches zero or more directories.
A:
Python 3
We can use pathlib; .glob still doesn't support globbing multiple arguments or within braces (as in POSIX shells) but we can easily filter the result.
For example, where you might ideally like to do:
# NOT VALID
Path(config_dir).glob("*.{ini,toml}")
# NOR IS
Path(config_dir).glob("*.ini", "*.toml")
you can do:
filter(lambda p: p.suffix in {".ini", ".toml"}, Path(config_dir).glob("*"))
which isn't too much worse.
A:
A one-liner, Just for the hell of it..
folder = "C:\\multi_pattern_glob_one_liner"
files = [item for sublist in [glob.glob(folder + ext) for ext in ["/*.txt", "/*.bat"]] for item in sublist]
output:
['C:\\multi_pattern_glob_one_liner\\dummy_txt.txt', 'C:\\multi_pattern_glob_one_liner\\dummy_bat.bat']
A:
files = glob.glob('*.txt')
files.extend(glob.glob('*.dat'))
A:
From the results I've obtained in empirical tests, it turns out that glob.glob isn't the best way to filter out files by their extensions. Some of the reasons are:
The globbing "language" does not allow a perfect specification of multiple extensions.
The former point can produce incorrect results depending on the file extensions.
The globbing method is empirically proven to be slower than most other methods.
Even if it seems strange, other filesystem objects can have "extensions" too, folders included.
I've tested (for correctness and efficiency in time) the following 4 different methods to filter out files by extensions and put them in a list:
from glob import glob, iglob
from re import compile, findall
from os import walk
from os.path import join as path_join  # used below as path_join(...)
def glob_with_storage(args):
elements = ''.join([f'[{i}]' for i in args.extensions])
globs = f'{args.target}/**/*{elements}'
results = glob(globs, recursive=True)
return results
def glob_with_iteration(args):
elements = ''.join([f'[{i}]' for i in args.extensions])
globs = f'{args.target}/**/*{elements}'
results = [i for i in iglob(globs, recursive=True)]
return results
def walk_with_suffixes(args):
results = []
for r, d, f in walk(args.target):
for ff in f:
for e in args.extensions:
if ff.endswith(e):
results.append(path_join(r,ff))
break
return results
def walk_with_regs(args):
reg = compile('|'.join([f'{i}$' for i in args.extensions]))
results = []
for r, d, f in walk(args.target):
for ff in f:
if len(findall(reg,ff)):
results.append(path_join(r, ff))
return results
Running the code above on my laptop, I obtained the following self-explanatory results.
Elapsed time for '7 times glob_with_storage()': 0.365023 seconds.
mean : 0.05214614
median : 0.051861
stdev : 0.001492152
min : 0.050864
max : 0.054853
Elapsed time for '7 times glob_with_iteration()': 0.360037 seconds.
mean : 0.05143386
median : 0.050864
stdev : 0.0007847381
min : 0.050864
max : 0.052859
Elapsed time for '7 times walk_with_suffixes()': 0.26529 seconds.
mean : 0.03789857
median : 0.037899
stdev : 0.0005759071
min : 0.036901
max : 0.038896
Elapsed time for '7 times walk_with_regs()': 0.290223 seconds.
mean : 0.04146043
median : 0.040891
stdev : 0.0007846776
min : 0.04089
max : 0.042885
Results sizes:
0 2451
1 2451
2 2446
3 2446
Differences between glob() and walk():
0 E:\x\y\z\venv\lib\python3.7\site-packages\Cython\Includes\numpy
1 E:\x\y\z\venv\lib\python3.7\site-packages\Cython\Utility\CppSupport.cpp
2 E:\x\y\z\venv\lib\python3.7\site-packages\future\moves\xmlrpc
3 E:\x\y\z\venv\lib\python3.7\site-packages\Cython\Includes\libcpp
4 E:\x\y\z\venv\lib\python3.7\site-packages\future\backports\xmlrpc
Elapsed time for 'main': 1.317424 seconds.
The fastest way to filter out files by extension happens to be the ugliest one: nested for loops and string comparison using the endswith() method.
Moreover, as you can see, the globbing algorithms (with the pattern E:\x\y\z\**/*[py][pyc]), even with only 2 extensions given (py and pyc), also return incorrect results.
A:
I have released Formic which implements multiple includes in a similar way to Apache Ant's FileSet and Globs.
The search can be implemented:
import formic
patterns = ["*.txt", "*.markdown", "*.mdown"]
fileset = formic.FileSet(directory=projectDir, include=patterns)
for file_name in fileset.qualified_files():
# Do something with file_name
Because the full Ant glob is implemented, you can include different directories with each pattern, so you could choose only those .txt files in one subdirectory, and the .markdown in another, for example:
patterns = [ "/unformatted/**/*.txt", "/formatted/**/*.mdown" ]
I hope this helps.
A:
After coming here for help, I made my own solution and wanted to share it. It's based on user2363986's answer, but I think this is more scalable, meaning that if you have 1000 extensions, the code will still look somewhat elegant.
from glob import glob
directoryPath = "C:\\temp\\*."
fileExtensions = [ "jpg", "jpeg", "png", "bmp", "gif" ]
listOfFiles = []
for extension in fileExtensions:
listOfFiles.extend( glob( directoryPath + extension ))
for file in listOfFiles:
print(file) # Or do other stuff
A:
This is a Python 3.4+ pathlib solution:
exts = ".pdf", ".doc", ".xls", ".csv", ".ppt"
filelist = (str(i) for i in map(pathlib.Path, os.listdir(src)) if i.suffix.lower() in exts and not i.stem.startswith("~"))
Also it ignores all file names starting with ~.
A:
Not glob, but here's another way using a list comprehension:
extensions = 'txt mdown markdown'.split()
projectFiles = [f for f in os.listdir(projectDir)
if os.path.splitext(f)[1][1:] in extensions]
A:
The following function _glob globs for multiple file extensions.
import glob
import os
def _glob(path, *exts):
"""Glob for multiple file extensions
Parameters
----------
path : str
A file name without extension, or directory name
exts : tuple
File extensions to glob for
Returns
-------
files : list
list of files matching extensions in exts in path
"""
path = os.path.join(path, "*") if os.path.isdir(path) else path + "*"
return [f for files in [glob.glob(path + ext) for ext in exts] for f in files]
files = _glob(projectDir, ".txt", ".mdown", ".markdown")
A:
From previous answer
glob('*.jpg') + glob('*.png')
Here is a shorter one,
from glob import glob
extensions = ['jpg', 'png'] # to find these filename extensions
# Method 1: loop one by one and extend to the output list
output = []
[output.extend(glob(f'*.{name}')) for name in extensions]
print(output)
# Method 2: even shorter
# loop filename extension to glob() it and flatten it to a list
output = [p for p2 in [glob(f'*.{name}') for name in extensions] for p in p2]
print(output)
A:
You can try building the list manually, comparing the extensions of the existing files with those you require.
ext_list = ['gif','jpg','jpeg','png'];
file_list = []
for file in glob.glob('*.*'):
if file.rsplit('.',1)[1] in ext_list :
file_list.append(file)
A:
import os
import glob
import operator
from functools import reduce
types = ('*.jpg', '*.png', '*.jpeg')
lazy_paths = (glob.glob(os.path.join('my_path', t)) for t in types)
paths = reduce(operator.add, lazy_paths, [])
https://docs.python.org/3.5/library/functools.html#functools.reduce
https://docs.python.org/3.5/library/operator.html#operator.add
A:
To glob multiple file types, you need to call glob() function several times in a loop. Since this function returns a list, you need to concatenate the lists.
For instance, this function do the job:
import glob
import os
def glob_filetypes(root_dir, *patterns):
return [path
for pattern in patterns
for path in glob.glob(os.path.join(root_dir, pattern))]
Simple usage:
project_dir = "path/to/project/dir"
for path in sorted(glob_filetypes(project_dir, '*.txt', '*.mdown', '*.markdown')):
print(path)
You can also use glob.iglob() to have an iterator:
Return an iterator which yields the same values as glob() without actually storing them all simultaneously.
def iglob_filetypes(root_dir, *patterns):
return (path
for pattern in patterns
for path in glob.iglob(os.path.join(root_dir, pattern)))
A:
One glob, many extensions... but imperfect solution (might match other files).
filetypes = ['tif', 'jpg']
filetypes = zip(*[list(ft) for ft in filetypes])
filetypes = ["".join(ch) for ch in filetypes]
filetypes = ["[%s]" % ch for ch in filetypes]
filetypes = "".join(filetypes) + "*"
print(filetypes)
# => [tj][ip][fg]*
glob.glob("/path/to/*.%s" % filetypes)
A:
I had the same issue and this is what I came up with
import os, sys, re
#without glob
src_dir = '/mnt/mypics/'
src_pics = []
ext = re.compile(r'.*\.({})$'.format('|'.join(['png', 'jpeg', 'jpg'])))
for root, dirnames, filenames in os.walk(src_dir):
for filename in filter(lambda name:ext.search(name),filenames):
src_pics.append(os.path.join(root, filename))
A:
Use a list of extension and iterate through
from os.path import join
from glob import glob
files = []
extensions = ['*.gif', '*.png', '*.jpg']
for ext in extensions:
files.extend(glob(join("path/to/dir", ext)))
print(files)
A:
If you use pathlib try this:
import pathlib
extensions = ['.py', '.txt']
root_dir = './test/'
files = filter(lambda p: p.suffix in extensions, pathlib.Path(root_dir).glob('**/*'))
print(list(files))
A:
This worked for me!
split('.')[-1]
The above code separates the filename suffix (*.xxx), so it can help you:
for filename in glob.glob(folder + '*.*'):
print(folder+filename)
if filename.split('.')[-1] != 'tif' and \
filename.split('.')[-1] != 'tiff' and \
filename.split('.')[-1] != 'bmp' and \
filename.split('.')[-1] != 'jpg' and \
filename.split('.')[-1] != 'jpeg' and \
filename.split('.')[-1] != 'png':
continue
# Your code
A:
You could use filter:
import os
import glob
projectFiles = filter(
    lambda x: os.path.splitext(x)[1] in [".txt", ".mdown", ".markdown"],
glob.glob(os.path.join(projectDir, "*"))
)
A:
You could also use reduce() like so:
import glob
from functools import reduce  # needed on Python 3, where reduce is not a builtin
file_types = ['*.txt', '*.mdown', '*.markdown']
project_files = reduce(lambda list1, list2: list1 + list2, (glob.glob(t) for t in file_types))
this creates a list from glob.glob() for each pattern and reduces them to a single list.
A:
Yet another solution (use glob to get paths using multiple match patterns and combine all paths into a single list using reduce and add):
import functools, glob, operator
paths = functools.reduce(operator.add, [glob.glob(pattern) for pattern in [
"path1/*.ext1",
"path2/*.ext2"]])
A:
Easiest way is using itertools.chain
from pathlib import Path
import itertools
cwd = Path.cwd()
for file in itertools.chain(
cwd.rglob("*.txt"),
cwd.rglob("*.md"),
):
print(file.name)
A:
Maybe I'm missing something but if it's just plain glob maybe you could do something like this?
projectFiles = glob.glob(os.path.join(projectDir, '*.{txt,mdown,markdown}'))
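A caveat about that pattern: the standard library glob does not expand {txt,mdown,markdown} braces, so it would be matched literally. A small sketch that expands one brace group by hand (brace_glob is a hypothetical helper, not a stdlib function):
import glob
import os
import re

def brace_glob(pattern):
    # Expand a single {a,b,c} group manually, then glob each variant.
    match = re.search(r'\{([^}]*)\}', pattern)
    if not match:
        return glob.glob(pattern)
    head, tail = pattern[:match.start()], pattern[match.end():]
    results = []
    for option in match.group(1).split(','):
        results.extend(glob.glob(head + option + tail))
    return results

projectDir = '.'  # example directory
projectFiles = brace_glob(os.path.join(projectDir, '*.{txt,mdown,markdown}'))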
| Python glob multiple filetypes | Is there a better way to use glob.glob in python to get a list of multiple file types such as .txt, .mdown, and .markdown? Right now I have something like this:
projectFiles1 = glob.glob( os.path.join(projectDir, '*.txt') )
projectFiles2 = glob.glob( os.path.join(projectDir, '*.mdown') )
projectFiles3 = glob.glob( os.path.join(projectDir, '*.markdown') )
| [
"Maybe there is a better way, but how about:\nimport glob\ntypes = ('*.pdf', '*.cpp') # the tuple of file types\nfiles_grabbed = []\nfor files in types:\n files_grabbed.extend(glob.glob(files))\n\n# files_grabbed is the list of pdf and cpp files\n\nPerhaps there is another way, so wait in case someone else comes up with a better answer.\n",
"glob returns a list: why not just run it multiple times and concatenate the results?\nfrom glob import glob\nproject_files = glob('*.txt') + glob('*.mdown') + glob('*.markdown')\n\n",
"from glob import glob\n\nfiles = glob('*.gif')\nfiles.extend(glob('*.png'))\nfiles.extend(glob('*.jpg'))\n\nprint(files)\n\nIf you need to specify a path, loop over match patterns and keep the join inside the loop for simplicity:\nfrom os.path import join\nfrom glob import glob\n\nfiles = []\nfor ext in ('*.gif', '*.png', '*.jpg'):\n files.extend(glob(join(\"path/to/dir\", ext)))\n\nprint(files)\n\n",
"So many answers that suggest globbing as many times as number of extensions, I'd prefer globbing just once instead:\nfrom pathlib import Path\n\nfiles = (p.resolve() for p in Path(path).glob(\"**/*\") if p.suffix in {\".c\", \".cc\", \".cpp\", \".hxx\", \".h\"})\n\n",
"Chain the results:\nimport itertools as it, glob\n\ndef multiple_file_types(*patterns):\n return it.chain.from_iterable(glob.iglob(pattern) for pattern in patterns)\n\nThen:\nfor filename in multiple_file_types(\"*.txt\", \"*.sql\", \"*.log\"):\n # do stuff\n\n",
"For example, for *.mp3 and *.flac on multiple folders, you can do:\nmask = r'music/*/*.[mf][pl][3a]*'\nglob.glob(mask)\n\nThe idea can be extended to more file extensions, but you have to check that the combinations won't match any other unwanted file extension you may have on those folders. So, be careful with this.\nTo automatically combine an arbitrary list of extensions into a single glob pattern, you can do the following:\ndef multi_extension_glob_mask(mask_base, *extensions):\n mask_ext = ['[{}]'.format(''.join(set(c))) for c in zip(*extensions)]\n if not mask_ext or len(set(len(e) for e in extensions)) > 1:\n mask_ext.append('*')\n return mask_base + ''.join(mask_ext)\n\nmask = multi_extension_glob_mask('music/*/*.', 'mp3', 'flac', 'wma')\nprint(mask) # music/*/*.[mfw][pml][a3]*\n\n",
"with glob it is not possible. you can use only:\n* matches everything\n? matches any single character\n[seq] matches any character in seq\n[!seq] matches any character not in seq \nuse os.listdir and a regexp to check patterns:\nfor x in os.listdir('.'):\n if re.match('.*\\.txt|.*\\.sql', x):\n print x\n\n",
"While Python's default glob doesn't really follow after Bash's glob, you can do this with other libraries. We can enable braces in wcmatch's glob.\n>>> from wcmatch import glob\n>>> glob.glob('*.{md,ini}', flags=glob.BRACE)\n['LICENSE.md', 'README.md', 'tox.ini']\n\nYou can even use extended glob patterns if that is your preference:\nfrom wcmatch import glob\n>>> glob.glob('*.@(md|ini)', flags=glob.EXTGLOB)\n['LICENSE.md', 'README.md', 'tox.ini']\n\n",
"Same answer as @BPL (which is computationally efficient) but which can handle any glob pattern rather than extension:\nimport os\nfrom fnmatch import fnmatch\n\nfolder = \"path/to/folder/\"\npatterns = (\"*.txt\", \"*.md\", \"*.markdown\")\n\nfiles = [f.path for f in os.scandir(folder) if any(fnmatch(f, p) for p in patterns)]\n\nThis solution is both efficient and convenient. It also closely matches the behavior of glob (see the documentation).\nNote that this is simpler with the built-in package pathlib:\nfrom pathlib import Path\n\nfolder = Path(\"/path/to/folder\")\npatterns = (\"*.txt\", \"*.md\", \"*.markdown\")\n\nfiles = [f for f in folder.iterdir() if any(f.match(p) for p in patterns)]\n\n",
"Here is one-line list-comprehension variant of Pat's answer (which also includes that you wanted to glob in a specific project directory):\nimport os, glob\nexts = ['*.txt', '*.mdown', '*.markdown']\nfiles = [f for ext in exts for f in glob.glob(os.path.join(project_dir, ext))]\n\nYou loop over the extensions (for ext in exts), and then for each extension you take each file matching the glob pattern (for f in glob.glob(os.path.join(project_dir, ext)).\nThis solution is short, and without any unnecessary for-loops, nested list-comprehensions, or functions to clutter the code. Just pure, expressive, pythonic Zen. \nThis solution allows you to have a custom list of exts that can be changed without having to update your code. (This is always a good practice!)\nThe list-comprehension is the same used in Laurent's solution (which I've voted for). But I would argue that it is usually unnecessary to factor out a single line to a separate function, which is why I'm providing this as an alternative solution.\nBonus: \nIf you need to search not just a single directory, but also all sub-directories, you can pass recursive=True and use the multi-directory glob symbol ** 1:\nfiles = [f for ext in exts \n for f in glob.glob(os.path.join(project_dir, '**', ext), recursive=True)]\n\nThis will invoke glob.glob('<project_dir>/**/*.txt', recursive=True) and so on for each extension.\n1 Technically, the ** glob symbol simply matches one or more characters including forward-slash / (unlike the singular * glob symbol). In practice, you just need to remember that as long as you surround ** with forward slashes (path separators), it matches zero or more directories.\n",
"Python 3\nWe can use pathlib; .glob still doesn't support globbing multiple arguments or within braces (as in POSIX shells) but we can easily filter the result.\nFor example, where you might ideally like to do:\n# NOT VALID\nPath(config_dir).glob(\"*.{ini,toml}\")\n# NOR IS\nPath(config_dir).glob(\"*.ini\", \"*.toml\")\n\nyou can do:\nfilter(lambda p: p.suffix in {\".ini\", \".toml\"}, Path(config_dir).glob(\"*\"))\n\nwhich isn't too much worse.\n",
"A one-liner, Just for the hell of it..\nfolder = \"C:\\\\multi_pattern_glob_one_liner\"\nfiles = [item for sublist in [glob.glob(folder + ext) for ext in [\"/*.txt\", \"/*.bat\"]] for item in sublist]\n\noutput:\n['C:\\\\multi_pattern_glob_one_liner\\\\dummy_txt.txt', 'C:\\\\multi_pattern_glob_one_liner\\\\dummy_bat.bat']\n\n",
"files = glob.glob('*.txt')\nfiles.extend(glob.glob('*.dat'))\n\n",
"By the results I've obtained from empirical tests, it turned out that glob.glob isn't the better way to filter out files by their extensions. Some of the reason are:\n\nThe globbing \"language\" does not allows perfect specification of multiple extension.\nThe former point results in obtaining incorrect results depending on file extensions.\nThe globbing method is empirically proven to be slower than most other methods.\nEven if it's strange even other filesystems objects can have \"extensions\", folders too.\n\nI've tested (for correcteness and efficiency in time) the following 4 different methods to filter out files by extensions and puts them in a list:\nfrom glob import glob, iglob\nfrom re import compile, findall\nfrom os import walk\n\n\ndef glob_with_storage(args):\n\n elements = ''.join([f'[{i}]' for i in args.extensions])\n globs = f'{args.target}/**/*{elements}'\n results = glob(globs, recursive=True)\n\n return results\n\n\ndef glob_with_iteration(args):\n\n elements = ''.join([f'[{i}]' for i in args.extensions])\n globs = f'{args.target}/**/*{elements}'\n results = [i for i in iglob(globs, recursive=True)]\n\n return results\n\n\ndef walk_with_suffixes(args):\n\n results = []\n for r, d, f in walk(args.target):\n for ff in f:\n for e in args.extensions:\n if ff.endswith(e):\n results.append(path_join(r,ff))\n break\n return results\n\n\ndef walk_with_regs(args):\n\n reg = compile('|'.join([f'{i}$' for i in args.extensions]))\n\n results = []\n for r, d, f in walk(args.target):\n for ff in f:\n if len(findall(reg,ff)):\n results.append(path_join(r, ff))\n\n return results\n\nBy running the code above on my laptop I obtained the following auto-explicative results.\nElapsed time for '7 times glob_with_storage()': 0.365023 seconds.\nmean : 0.05214614\nmedian : 0.051861\nstdev : 0.001492152\nmin : 0.050864\nmax : 0.054853\n\nElapsed time for '7 times glob_with_iteration()': 0.360037 seconds.\nmean : 0.05143386\nmedian : 0.050864\nstdev : 0.0007847381\nmin : 0.050864\nmax : 0.052859\n\nElapsed time for '7 times walk_with_suffixes()': 0.26529 seconds.\nmean : 0.03789857\nmedian : 0.037899\nstdev : 0.0005759071\nmin : 0.036901\nmax : 0.038896\n\nElapsed time for '7 times walk_with_regs()': 0.290223 seconds.\nmean : 0.04146043\nmedian : 0.040891\nstdev : 0.0007846776\nmin : 0.04089\nmax : 0.042885\n\nResults sizes:\n0 2451\n1 2451\n2 2446\n3 2446\n\nDifferences between glob() and walk():\n0 E:\\x\\y\\z\\venv\\lib\\python3.7\\site-packages\\Cython\\Includes\\numpy\n1 E:\\x\\y\\z\\venv\\lib\\python3.7\\site-packages\\Cython\\Utility\\CppSupport.cpp\n2 E:\\x\\y\\z\\venv\\lib\\python3.7\\site-packages\\future\\moves\\xmlrpc\n3 E:\\x\\y\\z\\venv\\lib\\python3.7\\site-packages\\Cython\\Includes\\libcpp\n4 E:\\x\\y\\z\\venv\\lib\\python3.7\\site-packages\\future\\backports\\xmlrpc\n\nElapsed time for 'main': 1.317424 seconds.\n\nThe fastest way to filter out files by extensions, happens even to be the ugliest one. Which is, nested for loops and string comparison using the endswith() method. \nMoreover, as you can see, the globbing algorithms (with the pattern E:\\x\\y\\z\\**/*[py][pyc]) even with only 2 extension given (py and pyc) returns also incorrect results.\n",
"I have released Formic which implements multiple includes in a similar way to Apache Ant's FileSet and Globs.\nThe search can be implemented:\nimport formic\npatterns = [\"*.txt\", \"*.markdown\", \"*.mdown\"]\nfileset = formic.FileSet(directory=projectDir, include=patterns)\nfor file_name in fileset.qualified_files():\n # Do something with file_name\n\nBecause the full Ant glob is implemented, you can include different directories with each pattern, so you could choose only those .txt files in one subdirectory, and the .markdown in another, for example:\npatterns = [ \"/unformatted/**/*.txt\", \"/formatted/**/*.mdown\" ]\n\nI hope this helps.\n",
"After coming here for help, I made my own solution and wanted to share it. It's based on user2363986's answer, but I think this is more scalable. Meaning, that if you have 1000 extensions, the code will still look somewhat elegant.\nfrom glob import glob\n\ndirectoryPath = \"C:\\\\temp\\\\*.\" \nfileExtensions = [ \"jpg\", \"jpeg\", \"png\", \"bmp\", \"gif\" ]\nlistOfFiles = []\n\nfor extension in fileExtensions:\n listOfFiles.extend( glob( directoryPath + extension ))\n\nfor file in listOfFiles:\n print(file) # Or do other stuff\n\n",
"This is a Python 3.4+ pathlib solution:\nexts = \".pdf\", \".doc\", \".xls\", \".csv\", \".ppt\"\nfilelist = (str(i) for i in map(pathlib.Path, os.listdir(src)) if i.suffix.lower() in exts and not i.stem.startswith(\"~\"))\n\nAlso it ignores all file names starting with ~.\n",
"Not glob, but here's another way using a list comprehension:\nextensions = 'txt mdown markdown'.split()\nprojectFiles = [f for f in os.listdir(projectDir) \n if os.path.splitext(f)[1][1:] in extensions]\n\n",
"The following function _glob globs for multiple file extensions.\nimport glob\nimport os\ndef _glob(path, *exts):\n \"\"\"Glob for multiple file extensions\n\n Parameters\n ----------\n path : str\n A file name without extension, or directory name\n exts : tuple\n File extensions to glob for\n\n Returns\n -------\n files : list\n list of files matching extensions in exts in path\n\n \"\"\"\n path = os.path.join(path, \"*\") if os.path.isdir(path) else path + \"*\"\n return [f for files in [glob.glob(path + ext) for ext in exts] for f in files]\n\nfiles = _glob(projectDir, \".txt\", \".mdown\", \".markdown\")\n\n",
"From previous answer\nglob('*.jpg') + glob('*.png')\n\nHere is a shorter one,\nfrom glob import glob\nextensions = ['jpg', 'png'] # to find these filename extensions\n\n# Method 1: loop one by one and extend to the output list\noutput = []\n[output.extend(glob(f'*.{name}')) for name in extensions]\nprint(output)\n\n# Method 2: even shorter\n# loop filename extension to glob() it and flatten it to a list\noutput = [p for p2 in [glob(f'*.{name}') for name in extensions] for p in p2]\nprint(output)\n\n",
"You can try to make a manual list comparing the extension of existing with those you require.\next_list = ['gif','jpg','jpeg','png'];\nfile_list = []\nfor file in glob.glob('*.*'):\n if file.rsplit('.',1)[1] in ext_list :\n file_list.append(file)\n\n",
"import os \nimport glob\nimport operator\nfrom functools import reduce\n\ntypes = ('*.jpg', '*.png', '*.jpeg')\nlazy_paths = (glob.glob(os.path.join('my_path', t)) for t in types)\npaths = reduce(operator.add, lazy_paths, [])\n\nhttps://docs.python.org/3.5/library/functools.html#functools.reduce\nhttps://docs.python.org/3.5/library/operator.html#operator.add\n",
"To glob multiple file types, you need to call glob() function several times in a loop. Since this function returns a list, you need to concatenate the lists.\nFor instance, this function do the job:\nimport glob\nimport os\n\n\ndef glob_filetypes(root_dir, *patterns):\n return [path\n for pattern in patterns\n for path in glob.glob(os.path.join(root_dir, pattern))]\n\nSimple usage:\nproject_dir = \"path/to/project/dir\"\nfor path in sorted(glob_filetypes(project_dir, '*.txt', '*.mdown', '*.markdown')):\n print(path)\n\nYou can also use glob.iglob() to have an iterator:\n\nReturn an iterator which yields the same values as glob() without actually storing them all simultaneously.\n\ndef iglob_filetypes(root_dir, *patterns):\n return (path\n for pattern in patterns\n for path in glob.iglob(os.path.join(root_dir, pattern)))\n\n",
"One glob, many extensions... but imperfect solution (might match other files).\nfiletypes = ['tif', 'jpg']\n\nfiletypes = zip(*[list(ft) for ft in filetypes])\nfiletypes = [\"\".join(ch) for ch in filetypes]\nfiletypes = [\"[%s]\" % ch for ch in filetypes]\nfiletypes = \"\".join(filetypes) + \"*\"\nprint(filetypes)\n# => [tj][ip][fg]*\n\nglob.glob(\"/path/to/*.%s\" % filetypes)\n\n",
"I had the same issue and this is what I came up with \nimport os, sys, re\n\n#without glob\n\nsrc_dir = '/mnt/mypics/'\nsrc_pics = []\next = re.compile('.*\\.(|{}|)$'.format('|'.join(['png', 'jpeg', 'jpg']).encode('utf-8')))\nfor root, dirnames, filenames in os.walk(src_dir):\n for filename in filter(lambda name:ext.search(name),filenames):\n src_pics.append(os.path.join(root, filename))\n\n",
"Use a list of extension and iterate through\nfrom os.path import join\nfrom glob import glob\n\nfiles = []\nextensions = ['*.gif', '*.png', '*.jpg']\nfor ext in extensions:\n files.extend(glob(join(\"path/to/dir\", ext)))\n\nprint(files)\n\n",
"If you use pathlib try this:\nimport pathlib\n\nextensions = ['.py', '.txt']\nroot_dir = './test/'\n\nfiles = filter(lambda p: p.suffix in extensions, pathlib.Path(root_dir).glob('**/*'))\n\nprint(list(files))\n\n",
"This worked for me!\nsplit('.')[-1]\n\nabove code separate the filename suffix (*.xxx) so it can help you\n for filename in glob.glob(folder + '*.*'):\n print(folder+filename)\n if filename.split('.')[-1] != 'tif' and \\\n filename.split('.')[-1] != 'tiff' and \\\n filename.split('.')[-1] != 'bmp' and \\\n filename.split('.')[-1] != 'jpg' and \\\n filename.split('.')[-1] != 'jpeg' and \\\n filename.split('.')[-1] != 'png':\n continue\n # Your code\n\n",
"You could use filter:\nimport os\nimport glob\n\nprojectFiles = filter(\n lambda x: os.path.splitext(x)[1] in [\".txt\", \".mdown\", \".markdown\"]\n glob.glob(os.path.join(projectDir, \"*\"))\n)\n\n",
"You could also use reduce() like so:\nimport glob\nfile_types = ['*.txt', '*.mdown', '*.markdown']\nproject_files = reduce(lambda list1, list2: list1 + list2, (glob.glob(t) for t in file_types))\n\nthis creates a list from glob.glob() for each pattern and reduces them to a single list.\n",
"Yet another solution (use glob to get paths using multiple match patterns and combine all paths into a single list using reduce and add):\nimport functools, glob, operator\npaths = functools.reduce(operator.add, [glob.glob(pattern) for pattern in [\n \"path1/*.ext1\",\n \"path2/*.ext2\"]])\n\n",
"Easiest way is using itertools.chain\nfrom pathlib import Path\nimport itertools\n\ncwd = Path.cwd()\n\nfor file in itertools.chain(\n cwd.rglob(\"*.txt\"),\n cwd.rglob(\"*.md\"),\n):\n print(file.name)\n\n",
"Maybe I'm missing something but if it's just plain glob maybe you could do something like this?\nprojectFiles = glob.glob(os.path.join(projectDir, '*.{txt,mdown,markdown}'))\n\n"
] | [
219,
105,
66,
64,
47,
31,
20,
16,
11,
7,
6,
5,
4,
4,
3,
3,
3,
2,
2,
2,
1,
1,
1,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0
] | [
"This Should Work:\nimport glob\nextensions = ('*.txt', '*.mdown', '*.markdown')\nfor i in extensions:\n for files in glob.glob(i):\n print (files)\n\n",
"For example:\nimport glob\nlst_img = []\nbase_dir = '/home/xy/img/'\n\n# get all the jpg file in base_dir \nlst_img += glob.glob(base_dir + '*.jpg')\nprint lst_img\n# ['/home/xy/img/2.jpg', '/home/xy/img/1.jpg']\n\n# append all the png file in base_dir to lst_img\nlst_img += glob.glob(base_dir + '*.png')\nprint lst_img\n# ['/home/xy/img/2.jpg', '/home/xy/img/1.jpg', '/home/xy/img/3.png']\n\nA function:\nimport glob\ndef get_files(base_dir='/home/xy/img/', lst_extension=['*.jpg', '*.png']):\n \"\"\"\n :param base_dir:base directory\n :param lst_extension:lst_extension: list like ['*.jpg', '*.png', ...]\n :return:file lists like ['/home/xy/img/2.jpg','/home/xy/img/3.png']\n \"\"\"\n lst_files = []\n for ext in lst_extension:\n lst_files += glob.glob(base_dir+ext)\n return lst_files\n\n",
"import glob\nimport pandas as pd\n\ndf1 = pd.DataFrame(columns=['A'])\nfor i in glob.glob('C:\\dir\\path\\*.txt'):\n df1 = df1.append({'A': i}, ignore_index=True)\nfor i in glob.glob('C:\\dir\\path\\*.mdown'):\n df1 = df1.append({'A': i}, ignore_index=True)\nfor i in glob.glob('C:\\dir\\path\\*.markdown):\n df1 = df1.append({'A': i}, ignore_index=True)\n\n",
"In one line :\nIMG_EXTS = (\".jpg\", \".jpeg\", \".jpe\", \".jfif\", \".jfi\", \".jif\",\".JPG\")\ndirectory = './'\nfiles = [ file for file in glob.glob(directory+'/*') if file.endswith(IMG_EXTS)]\n",
"import os\nimport glob\n\nprojectFiles = [i for i in glob.glob(os.path.join(projectDir,\"*\")) if os.path.splitext(i)[-1].lower() in ['.txt','.markdown','.mdown']]\n\nos.path.splitext will return filename & .extension\nfilename, .extension = os.path.splitext('filename.extension')\n\n.lower() will convert a string into lowercase\n",
"this worked for me:\nimport glob\nimages = glob.glob('*.JPG' or '*.jpg' or '*.png')\n\n"
] | [
-1,
-1,
-1,
-1,
-3,
-6
] | [
"glob",
"python"
] | stackoverflow_0004568580_glob_python.txt |
Q:
How to modify a dictionary from a csv file in raw python?
I have a file that looks like this. I want to get data from this file so that I can create a dictionary that takes the neighbourhood names as keys, with the values being ID, Population, and Low rate.
ID Neighbourhood Name Population Low rate
1 East Billin-Pickering 43567 1000
2 North Eglinton-Silverstone-Ajax 33098 9087
3 Wistle-dong Lock 25009 6754
4 Squarion-Staion 10000 8790
5 Doki doki lolo 6789 2315
The output should look something like this (IF THE DICTIONARY I AM GIVEN IS EMPTY)
{'East Billin-Pickering':
{'id': 1, 'population': 43567, 'low_rate': 1000},
'North Eglinton-Silverstone-Ajax':
{'id': 2, 'population': 33098, 'low_rate': 9087},
'Wistle-dong Lock':
{'id': 3, 'population': 25009, 'low_rate': 6754},
'Squarion-Staion':
{'id': 4, 'population': 10000, 'low_rate': 8790},
'Doki doki lolo':
{'id': 5, 'population': 6789, 'low_rate': 2315}}
The file is already open, and I am also given a dictionary that may or may not have given values in it. How would I update that dictionary using data from the file?
Can somebody give me hints? I'm so confused.
I have no idea how to start this. I know I have to loop through the file, and will have to use the strip() and split() methods at some point. I'm not sure how to actually get the values themselves and turn them into a dictionary.
def get_data(data: dict, file: TextIO) -> None:
"""
"""
opened = file.read().strip()
for line in file:
words = line.split(SEP)
income_data = tuple(opened[POP], opened[LI_COL])
data[LI_NBH_NAME_COL] = tuple(opened[ID_COL], income_data)
# Constants I'm using are:
SEP = ','
POP = 2
LI_COL = 3
ID_COL = 0
I'm trying to write a solution that does not use any imports, not even the csv import, mostly for understanding. I want to write a program that doesn't use imports so that I can better understand what's happening before I start using the csv module.
How would this work?
A:
If you can install 3rd party library you can use pandas as following:
import pandas as pd
data = pd.read_csv("test.csv", delimiter="\t") # Set delimiter and file name to your specific file
data = data.set_index("Neighbourhood Name")
final_dict = data.to_dict(orient="index")
Final dict now contains:
{
"East Billin-Pickering":{
"ID":1,
"Population":43567,
"Low rate ":1000
},
"North Eglinton-Silverstone-Ajax":{
"ID":2,
"Population":33098,
"Low rate ":9087
},
"Wistle-dong Lock":{
"ID":3,
"Population":25009,
"Low rate ":6754
},
"Squarion-Staion":{
"ID":4,
"Population":10000,
"Low rate ":8790
},
"Doki doki lolo":{
"ID":5,
"Population":6789,
"Low rate ":2315
}
}
A:
With pandas:
import pandas as pd
filename = 'myCSV.csv'
def read_csv(filename):
return pd.read_csv(filename).to_dict('records')
A:
You can use csv and DictReader. For example:
import csv
with open('input.csv', newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
print(row)
would print:
{'ID': '1', 'Neighbourhood Name': 'East Billin-Pickering', 'Population': '43567', 'Low rate': '1000'}
{'ID': '2', 'Neighbourhood Name': 'North Eglinton-Silverstone-Ajax', 'Population': '33098', 'Low rate': '9087'}
{'ID': '3', 'Neighbourhood Name': 'Wistle-dong Lock', 'Population': '25009', 'Low rate': '6754'}
{'ID': '4', 'Neighbourhood Name': 'Squarion-Staion', 'Population': '10000', 'Low rate': '8790'}
{'ID': '5', 'Neighbourhood Name': 'Doki doki lolo', 'Population': '6789', 'Low rate': '2315'}
So, with that you can construct any dictionary you want. In your specific case it could look something like:
import csv
result_dict = {}
with open('input.csv', newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
        result_dict[row["Neighbourhood Name"]] = {
            "id": row.get("ID"),
            "population": row.get("Population"),
            "low_rate": row.get("Low rate")
        }
That should give you the dictionary you wanted. Hope this helps.
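Since the question explicitly asks for a version with no imports at all, here is a minimal no-import sketch (it assumes the file is comma-separated, as the question's SEP constant suggests, and that the first line is a header):
SEP = ','

def get_data(data, file):
    """Fill `data` with {name: {'id': ..., 'population': ..., 'low_rate': ...}}."""
    lines = file.read().strip().split('\n')
    for line in lines[1:]:  # skip the header row
        parts = [part.strip() for part in line.split(SEP)]
        id_, name, population, low_rate = parts[:4]
        data[name] = {
            'id': int(id_),
            'population': int(population),
            'low_rate': int(low_rate),
        }

Calling get_data(existing_dict, open('input.csv')) then updates the dictionary in place, overwriting entries whose names already exist.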
| How to modify a dictionary from a csv file in raw python? | I have a file that looks like this, I want to get data from this file so that I can create a dictionary that takes the neighboruhood names as keys, with its values being ID, Population, and Low rate.
ID Neighbourhood Name Population Low rate
1 East Billin-Pickering 43567 1000
2 North Eglinton-Silverstone-Ajax 33098 9087
3 Wistle-dong Lock 25009 6754
4 Squarion-Staion 10000 8790
5 Doki doki lolo 6789 2315
The output should look something like this (IF THE DICTIONARY I AM GIVEN IS EMPTY)
{'East Billin-Pickering':
{'id': 1, 'population': 43567, 'low_rate': 1000,
'North Eglinton-Silverstone-Ajax':
{'id': 2, 'population': 33098, 'low_rate': 9087},
'Wistle-dong Lock':
{'id': 3, 'population': 25009, 'low_rate': 6754},
'Squarion-Staion':
{'id': 4, 'population': 10000, 'low_rate': 8790},
'Doki doki lolo':
{'id': 5, 'population': 6789, 'low_rate': 2315}}
The file is already open, and I am also given a dictionary that may or may not have given values in it. How would I update that dictionary using data from the file?
Can somebody give me hints? I'm so confused.
I have no idea on how to start this. I know I have loop through the file, and will have to use strip() and split() methods at one point. I'm not sure how to actually get the values themselves, and modify them into a dictionary.
def get_data(data: dict, file: TextIO) -> None:
"""
"""
opened = file.read().strip()
for line in file:
words = line.split(SEP)
income_data = tuple(opened[POP], opened[LI_COL])
data[LI_NBH_NAME_COL] = tuple(opened[ID_COL], income_data)
# Constants I'm using are:
SEP = ','
POP = 2
LI_COL = 3
ID_COL = 0
I'm trying to write an answer that does not use any imports, even the CSV import, mostly for understanding. I want to write a program that doesn't use imports so that I can better understand what's happening before I start importing csv.
How would this work?
| [
"If you can install 3rd party library you can use pandas as following:\nimport pandas as pd\n\ndata = pd.read_csv(\"test.csv\", delimiter=\"\\t\") # Set delimiter and file name to your specific file \ndata = data.set_index(\"Neighbourhood Name\")\nfinal_dict = data.to_dict(orient=\"index\")\n\nFinal dict now contains:\n{\n \"East Billin-Pickering\":{\n \"ID\":1,\n \"Population\":43567,\n \"Low rate \":1000\n },\n \"North Eglinton-Silverstone-Ajax\":{\n \"ID\":2,\n \"Population\":33098,\n \"Low rate \":9087\n },\n \"Wistle-dong Lock\":{\n \"ID\":3,\n \"Population\":25009,\n \"Low rate \":6754\n },\n \"Squarion-Staion\":{\n \"ID\":4,\n \"Population\":10000,\n \"Low rate \":8790\n },\n \"Doki doki lolo\":{\n \"ID\":5,\n \"Population\":6789,\n \"Low rate \":2315\n }\n}\n\n",
"With pandas:\nimport pandas as pd\n\nfilename = 'myCSV.csv'\n\ndef read_csv(filename):\n return pd.read_csv(filename).to_dict('records')\n\n",
"You can use csv and DictReader. For example:\nimport csv\n\nwith open('input.csv', newline='') as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n print(row)\n\nwould print:\n{'ID': '1', 'Neighbourhood Name': 'East Billin-Pickering', 'Population': '43567', 'Low rate': '1000'}\n{'ID': '2', 'Neighbourhood Name': 'North Eglinton-Silverstone-Ajax', 'Population': '33098', 'Low rate': '9087'}\n{'ID': '3', 'Neighbourhood Name': 'Wistle-dong Lock', 'Population': '25009', 'Low rate': '6754'}\n{'ID': '4', 'Neighbourhood Name': 'Squarion-Staion', 'Population': '10000', 'Low rate': '8790'}\n{'ID': '5', 'Neighbourhood Name': 'Doki doki lolo', 'Population': '6789', 'Low rate': '2315'}\n\nSo, with that you can construct any dictionary you want. In your specific case it could look something like:\nimport csv\n\nresult_dict = {}\n\nwith open('input.csv', newline='') as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n result_dict[row[\"Neighbourhood Name\"]] = {\n \"ID\": row.get(\"id\"),\n \"population\": row.get(\"population\"),\n \"low_rate\": row.get(\"low_rate\")\n }\n\nThat should give you the dictionary you wanted. Hope this helps.\n"
] | [
2,
1,
1
] | [] | [] | [
"csv",
"for_loop",
"python"
] | stackoverflow_0074626946_csv_for_loop_python.txt |
Q:
Google people.listDirectoryPeople() method on python returns a slightly different list everytime
My organisation uses Google G Suite and contact details of all the employees are saved on the workspace directory. I've enabled people API for my work email (since it's part of G Suite) and tried listing out all employee contact details using people.listDirectoryPeople method.
Here's what I'm doing:
service = build('people', 'v1', credentials=creds)
src = 'DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE'
results = service.people().listDirectoryPeople(
readMask='names,emailAddresses,phoneNumbers,organizations',
sources=src,
pageSize=1000
).execute()
directory_people = results.get('people', [])
## I'M SAVING THE NEXT PAGE TOKEN HERE TO USE IN THE WHILE LOOP ##
next_page_token = results.get('nextPageToken')
for i, person in enumerate(directory_people):
names = person.get('names', [])
emails = person.get('emailAddresses',[])
phones = person.get('phoneNumbers',[])
orgs = person.get('organizations',[])
### code to save contacts ###
with open('file.tsv', 'w') as f:
f.write("Name\tOrg\tPhone\tEmail\n")
f.write(f"{name}\t{org}\t{phone}\t{email}\n")
while next_page_token:
results = service.people().listDirectoryPeople(
readMask='names,emailAddresses,phoneNumbers,organizations',
sources=src,
pageSize=1000,
pageToken=next_page_token
).execute()
directory_people = results.get('people', [])
next_page_token = results.get('nextPageToken')
print(next_page_token)
for i, person in enumerate(directory_people):
names = person.get('names', [])
emails = person.get('emailAddresses',[])
phones = person.get('phoneNumbers',[])
orgs = person.get('organizations',[])
### same code to save contacts ###
with open('file.tsv', 'a+') as f:
f.write(f"{name}\t{org}\t{phone}\t{email}\n")
The subsequent pages are loaded using next_page_token in a while loop.
The problem I'm facing is that the list returned is slightly different every time. E.g. Running the script 3 times would result in 3 different lists of lengths like 20, 25, 18.
Most of the contacts are the same, but there are some which weren't there in the previous run, while some from the previous run are not present now.
Note: I've used source DIRECTORY_SOURCE_TYPE_DOMAIN_CONTACT too but it doesn't serve my purpose because the contacts I'm interested in aren't available on this source
I've also tried using people.connections().list() method but that simply returns None for my work email
Does anyone know why the method isn't returning all the contacts like it's supposed to (or at least I believe it's supposed to)?
A:
You're not getting the correct output because the with open statement is inside the for loop over directory_people, so the file is re-opened (and, with mode 'w', truncated) for every person. Open the file once, outside the loop, like this:
## I'M SAVING THE NEXT PAGE TOKEN HERE TO USE IN THE WHILE LOOP ##
next_page_token = results.get('nextPageToken')
with open('file.tsv', 'w') as f:
f.write("Name\tOrg\tPhone\tEmail\n")
for i, person in enumerate(directory_people):
name = person.get('names', [])
email = person.get('emailAddresses',[])
phone = person.get('phoneNumbers',[])
org = person.get('organizations',[])
f.write(f"{name}\t{org}\t{phone}\t{email}\n")
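For completeness, a sketch of the same idea applied to the paginated part as well, so the header is written once and every page is appended to the single open file (reusing the same service and src as above):
with open('file.tsv', 'w') as f:
    f.write("Name\tOrg\tPhone\tEmail\n")
    page_token = None
    while True:
        kwargs = dict(
            readMask='names,emailAddresses,phoneNumbers,organizations',
            sources=src,
            pageSize=1000,
        )
        if page_token:
            kwargs['pageToken'] = page_token
        results = service.people().listDirectoryPeople(**kwargs).execute()
        for person in results.get('people', []):
            name = person.get('names', [])
            email = person.get('emailAddresses', [])
            phone = person.get('phoneNumbers', [])
            org = person.get('organizations', [])
            f.write(f"{name}\t{org}\t{phone}\t{email}\n")
        page_token = results.get('nextPageToken')
        if not page_token:
            break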
| Google people.listDirectoryPeople() method on python returns a slightly different list everytime | My organisation uses Google G Suite and contact details of all the employees are saved on the workspace directory. I've enabled people API for my work email (since it's part of G Suite) and tried listing out all employee contact details using people.listDirectoryPeople method.
Here's what I'm doing:
service = build('people', 'v1', credentials=creds)
src = 'DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE'
results = service.people().listDirectoryPeople(
readMask='names,emailAddresses,phoneNumbers,organizations',
sources=src,
pageSize=1000
).execute()
directory_people = results.get('people', [])
## I'M SAVING THE NEXT PAGE TOKEN HERE TO USE IN THE WHILE LOOP ##
next_page_token = results.get('nextPageToken')
for i, person in enumerate(directory_people):
names = person.get('names', [])
emails = person.get('emailAddresses',[])
phones = person.get('phoneNumbers',[])
orgs = person.get('organizations',[])
### code to save contacts ###
with open('file.tsv', 'w') as f:
f.write("Name\tOrg\tPhone\tEmail\n")
f.write(f"{name}\t{org}\t{phone}\t{email}\n")
while next_page_token:
results = service.people().listDirectoryPeople(
readMask='names,emailAddresses,phoneNumbers,organizations',
sources=src,
pageSize=1000,
pageToken=next_page_token
).execute()
directory_people = results.get('people', [])
next_page_token = results.get('nextPageToken')
print(next_page_token)
for i, person in enumerate(directory_people):
names = person.get('names', [])
emails = person.get('emailAddresses',[])
phones = person.get('phoneNumbers',[])
orgs = person.get('organizations',[])
### same code to save contacts ###
with open('file.tsv', 'a+') as f:
f.write(f"{name}\t{org}\t{phone}\t{email}\n")
The subsequent pages are loaded using next_page_token in a while loop.
The problem I'm facing is that the list returned is slightly different every time. E.g. Running the script 3 times would result in 3 different lists of lengths like 20, 25, 18.
Most of the contacts are the same, but there are some which weren't there in the previous run, while some from the previous run are not present now.
Note: I've used source DIRECTORY_SOURCE_TYPE_DOMAIN_CONTACT too but it doesn't serve my purpose because the contacts I'm interested in aren't available on this source
I've also tried using people.connections().list() method but that simply returns None for my work email
Does anyone know why the method isn't returning all the contacts like it's supposed to (or at least I believe it's supposed to)?
| [
"Not getting correct list because with open statement is inside the directory_people list for loop, move this out of loop like this:\n## I'M SAVING THE NEXT PAGE TOKEN HERE TO USE IN THE WHILE LOOP ##\nnext_page_token = results.get('nextPageToken')\n\nwith open('file.tsv', 'w') as f:\n f.write(\"Name\\tOrg\\tPhone\\tEmail\\n\")\n for i, person in enumerate(directory_people):\n name = person.get('names', [])\n email = person.get('emailAddresses',[])\n phone = person.get('phoneNumbers',[])\n org = person.get('organizations',[])\n f.write(f\"{name}\\t{org}\\t{phone}\\t{email}\\n\")\n\n"
] | [
0
] | [] | [] | [
"google_people_api",
"python"
] | stackoverflow_0067301671_google_people_api_python.txt |
Q:
Encoding Image to Base64 to MongoDB
Below is the Python code that I am using to try to get this done.
I am trying to take an image and upload it to my MongoDB as base64. The issue is that whenever I try to put it into MongoDB it is giving me a different string.
I added the line of code to output enc_file to a text document, and that is the correct Base64 which can then be converted back to an image. The issue is that I am getting the output in the image below in my MongoDB Database.
import os
import base64
import pymongo
def checkImage(file_name):
if file_name.lower().endswith(('.png', '.jpg', '.jpeg', '.tiff', '.bmp', '.gif')):
return True
return False
def checkFile(file_name):
if(os.path.exists(file_name)):
return True
return False
def convert64(file_name):
image_file = open(file_name, "rb")
bs64_str = base64.b64encode(image_file.read())
return bs64_str
conn_str = "--"
connection = pymongo.MongoClient(conn_str, serverSelectionTimeoutMS=5000)
db = connection.test
file_meta = db.file_meta
def main():
while(True):
file_name = input("Enter the image name to upload: ")
# check if the file exists or not in our folder
if checkFile(file_name):
# verify that the file is an image file
if checkImage(file_name):
# print(convert64(file_name))
enc_file = convert64(file_name)
coll = db.testcollection
with open('base64.txt', 'wb') as f:
f.write(enc_file)
coll.insert_one({"filename": file_name, "file": enc_file, "description": "test"})
break;
else:
print("Please enter a valid image file")
main()
I am expecting the output from the text document to be the same output that is inserted into my Mongo Database.
A:
I just ran into this now as well.
You are turning the image into base64, and the binary value stored in MongoDB gets base64-encoded again when it is displayed, so what you are seeing is base64 of a base64. Storing the raw bytes as BSON Binary avoids the double encoding:
import bson
from bson import Binary
with open(image_location, "rb") as img_file:
my_string = Binary(img_file.read())
my_collection.insert_one({"_id": bson_id_image, "Image": my_string })
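Reading the image back out is just the reverse, since Binary behaves like bytes when the document is fetched (a sketch using the same hypothetical names):
doc = my_collection.find_one({"_id": bson_id_image})
with open("restored_image.jpg", "wb") as out_file:
    out_file.write(doc["Image"])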
| Encoding Image to Base64 to MongoDB | Below is the Python code that am using to try to get this done.
I am trying to take an image and upload that to my MongoDB as base64. This issue is that whenever I try to put it into MongoDB it is giving me a different string.
I added the line of code to output enc_file to a text document, and that is the correct Base64 which can then be converted back to an image. The issue is that I am getting the output in the image below in my MongoDB Database.
import os
import base64
import pymongo
def checkImage(file_name):
if file_name.lower().endswith(('.png', '.jpg', '.jpeg', '.tiff', '.bmp', '.gif')):
return True
return False
def checkFile(file_name):
if(os.path.exists(file_name)):
return True
return False
def convert64(file_name):
image_file = open(file_name, "rb")
bs64_str = base64.b64encode(image_file.read())
return bs64_str
conn_str = "--"
connection = pymongo.MongoClient(conn_str, serverSelectionTimeoutMS=5000)
db = connection.test
file_meta = db.file_meta
def main():
while(True):
file_name = input("Enter the image name to upload: ")
# check if the file exists or not in our folder
if checkFile(file_name):
# verify that the file is an image file
if checkImage(file_name):
# print(convert64(file_name))
enc_file = convert64(file_name)
coll = db.testcollection
with open('base64.txt', 'wb') as f:
f.write(enc_file)
coll.insert_one({"filename": file_name, "file": enc_file, "description": "test"})
break;
else:
print("Please enter a valid image file")
main()
I am expecting the output from the text document to be the same output that is inserted into my Mongo Database.
| [
"I just ran into this now as well.\nYou are turning the image into base64 and it seems mongodb does it as well, that is why you are seeing a different string -- you are getting a base64 of a base64.\nimport bson\nfrom bson import Binary\n\nwith open(image_location, \"rb\") as img_file:\n my_string = Binary(img_file.read())\n\nmy_collection.insert_one({\"_id\": bson_id_image, \"Image\": my_string })\n\n"
] | [
0
] | [] | [] | [
"base64",
"mongodb",
"python"
] | stackoverflow_0074281617_base64_mongodb_python.txt |
Q:
How to make code sleep without using modules
I'm currently in a bit of a predicament, I'm trying to make a micro python program that has a small time delay for readability, but cannot use any imports. I would simply install the module onto the machine I'm working on, but the program is designed for the Casio-FX9860GIII Calculator.
My first thought was to use a long calculation that takes the calculator a while to process, hence making the program "sleep" for a short period of time, but I've had no success with that. Does anyone have any ideas?
A:
Managed to figure it out: I just used a for loop, and did some testing with the Time function to get the timings right for my system. The code looks like this:
def systemSleep(s):
i = 0
for i in range(0, s*45100000):
3 * 3
3 * 3
3 * 3
| How to make code sleep without using modules | I'm currently in a bit of a predicament, I'm trying to make a micro python program that has a small time delay for readability, but cannot use any imports. I would simply install the module onto the machine I'm working on, but the program is designed for the Casio-FX9860GIII Calculator.
My first thought was to use a long calculation that takes the calculator a while to process, hence making the program "sleep" for a short period of time, but I've had no success with that. Does anyone have any ideas?
| [
"managed to figure it out, just used a for loop and did some testing using the Time function to get the timings right for my system, code looks like this\ndef systemSleep(s):\n i = 0\n for i in range(0, s*45100000):\n 3 * 3\n 3 * 3\n 3 * 3\n\n"
] | [
0
] | [] | [] | [
"micropython",
"python",
"sleep",
"time"
] | stackoverflow_0074627174_micropython_python_sleep_time.txt |
Q:
AttributeError: 'list' object has no attribute 'fit'
I am very new to python and I encountered this error saying:
AttributeError: 'list' object has no attribute 'fit'
from the following code:
models = [[GMMHMM(n_components=3,n_mix=2,verbose=False,n_iter=10) for i in range(39)]]
p_bar = tqdm(range(39))
#### ---- Training the models ----
for i in range(39):
p_bar.set_description('{}. Training "{}" Phoneme Model'.format(i,fc.get39Phon(i)))
models[i].fit(features[i],lengths[i])
p_bar.update()
How can I solve this?
I tried removing the extra bracket form
models = [[GMMHMM(n_components=3,n_mix=2,verbose=False,n_iter=10) for i in range(39)]]
and got this new error:
ValueError: Expected 2D array, got 1D array instead
| AttributeError: 'list' object has no attribute 'fit' | I am very new to python and I encountered this error saying:
AttributeError: 'list' object has no attribute 'fit'
from the following code:
models = [[GMMHMM(n_components=3,n_mix=2,verbose=False,n_iter=10) for i in range(39)]]
p_bar = tqdm(range(39))
#### ---- Training the models ----
for i in range(39):
p_bar.set_description('{}. Training "{}" Phoneme Model'.format(i,fc.get39Phon(i)))
models[i].fit(features[i],lengths[i])
p_bar.update()
How can I solve this?
I tried removing the extra bracket form
models = [[GMMHMM(n_components=3,n_mix=2,verbose=False,n_iter=10) for i in range(39)]]
and got this new error:
ValueError: Expected 2D array, got 1D array instead
| [] | [] | [
"In your code, models is a list of lists, because you have double brackets. Change the first line to:\nmodels = [GMMHMM(n_components=3,n_mix=2,verbose=False,n_iter=10) for i in range(39)]\n\n... and this sould work.\nP.S.: Please try to use reproducible code when asking quenstions.\n"
] | [
-1
] | [
"python"
] | stackoverflow_0074627222_python.txt |
Q:
Groupby then sum doesn't work when running on large df
I'm trying to tidy up a long and messy CSV file a bit, but my method only seems to work when I split the raw data into several smaller files. Just wondering if anyone can see what goes wrong here?
The original file looks like this, except there are 600+ rows:
Code Item Size Location Available
DD2 Cap Blue S NY 3
DD2 Cap Blue S NY 6
DD2 Cap Blue S CA 18
DD2 Cap Blue S PA 20
DD3 Cap Blue L CA 5
DA5 Tee Red S NY 1
DA7 Tee White S PA 203
DA7 Tee White S PA 204
I would like to turn it into:
Code Item Size Location Available
DD2 Cap Blue S NY 9
CA 18
PA 20
DD3 Cap Blue L CA 5
DA5 Tee Red S NY 1
DA7 Tee White S PA 407
, so that I can then use the pivot_table function to make it tidy.
The method I'm using is
df2 = df.groupby(['Code', 'Item', 'Size', 'Location'])['Available'].sum()
print(df2)
However, pandas merges the values in 'Available' as the numbers are plain text, i.e. the result looks like
Code Item Size Location Available
DD2 Cap Blue S NY 36
CA 18
PA 20
DD3 Cap Blue L CA 5
DA5 Tee Red S NY 1
DA7 Tee White S PA 203204
What I can't get my head around is, if I split the data, say I only take 20 rows out and run the command, it would work perfectly.
I'm very new to python and pandas, any help is appreciated. Thanks in advance.
A:
Change the data type of the Available column to numeric before grouping, e.g. by:
df["Available"] = df["Available"].astype(float)
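Since the full file only works when split, some rows in the big file probably contain values that can't be parsed as numbers, which forces the whole column to strings. A sketch for finding and handling those rows before grouping:
df["Available"] = pd.to_numeric(df["Available"], errors="coerce")
print(df[df["Available"].isna()])  # inspect any rows that failed to parse

df2 = df.groupby(['Code', 'Item', 'Size', 'Location'])['Available'].sum()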
| Groupby then sum doesn't work when running on large df | I'm trying to tidy up a long and messy csv file a bit, but my method doesn't seem to work until I tried splitting the raw data into several files. Just wondering if anyone can see what goes wrong here?
The original file looks like this, except there are 600+ rows:
Code Item Size Location Available
DD2 Cap Blue S NY 3
DD2 Cap Blue S NY 6
DD2 Cap Blue S CA 18
DD2 Cap Blue S PA 20
DD3 Cap Blue L CA 5
DA5 Tee Red S NY 1
DA7 Tee White S PA 203
DA7 Tee White S PA 204
I would like to turn it into:
Code Item Size Location Available
DD2 Cap Blue S NY 9
CA 18
PA 20
DD3 Cap Blue L CA 5
DA5 Tee Red S NY 1
DA7 Tee White S PA 407
, so that I can then use the pivot_table function to make it tidy.
The method I'm using is
df2 = df.groupby(['Code', 'Item', 'Size', 'Location'])['Available'].sum()
print(df2)
However, pandas merges the values in 'Available' as the numbers are plain text, i.e. the result looks like
Code Item Size Location Available
DD2 Cap Blue S NY 36
CA 18
PA 20
DD3 Cap Blue L CA 5
DA5 Tee Red S NY 1
DA7 Tee White S PA 203204
What I can't get my head around is, if I split the data, say I only take 20 rows out and run the command, it would work perfectly.
I'm very new to python and pandas, any help is appreciated. Thanks in advance.
| [
"Change the data-type of the Available column, e.g. by:\ndf2[\"Available\"] = df2[\"Available\"].values.astype(float)\n\n"
] | [
1
] | [] | [] | [
"pandas",
"pivot_table",
"python",
"sum"
] | stackoverflow_0074627099_pandas_pivot_table_python_sum.txt |
Q:
Normalize spacy nlp vectors
I am working with an nlp model where I'd like to normalize the nlp.vocab.vectors. The documentation about spacy vectors states that it's a numpy ndarray.
I've googled a fair bit about normalizing numpy arrays as stated here, here and here.
As such I tried the following 3 approaches;
import spacy
import numpy as np
nlp = spacy.load('en_core_web_lg')
matrix = nlp.vocab.vectors # Shape (514157, 300)
# Approach 1
matrix_norm1 = matrix/np.linalg.norm(matrix)
print(matrix_norm1.shape) # Shape (514157,)
# Approach 2
#matrix_norm2 = matrix / np.sqrt(np.sum(matrix**2))
## Results in TypeError: unsupported operand type(s) for ** or pow(): 'spacy.vectors.Vectors' and 'int'
# Approach 3
matrix_norm3 = matrix / (np.mean(matrix) - np.std(matrix))
print(matrix_norm3.shape) # => Shape (514157,)
The two approaches that return a result do so, but they don't retain the dimensions (514157, 300). Any suggestions on how I can do this?
A:
nlp.vocab.vectors is a Vectors object. The numpy array is stored in nlp.vocab.vectors.data. See: https://spacy.io/api/vectors
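So a row-wise (per-vector) L2 normalization that keeps the (514157, 300) shape could look like this sketch:
import numpy as np
import spacy

nlp = spacy.load('en_core_web_lg')
matrix = nlp.vocab.vectors.data           # plain numpy array, shape (514157, 300)

norms = np.linalg.norm(matrix, axis=1, keepdims=True)
norms[norms == 0] = 1.0                   # avoid dividing all-zero rows by zero
matrix_norm = matrix / norms
print(matrix_norm.shape)                  # (514157, 300)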
| Normalize spacy nlp vectors | I am working with an nlp model where I'd like to normalize the nlp.vocab.vectors. From the documentation about spacy vectors it states that it's an numpy ndarray.
I've googled a fair bit about normalizing numpy arrays as stated here, here and here.
As such I tried the following 3 approaches;
import spacy
import numpy as np
nlp = spacy.load('en_core_web_lg')
matrix = nlp.vocab.vectors # Shape (514157, 300)
# Approach 1
matrix_norm1 = matrix/np.linalg.norm(matrix)
print(matrix_norm1.shape) # Shape (514157,)
# Approach 2
#matrix_norm2 = matrix / np.sqrt(np.sum(matrix**2))
## Results in TypeError: unsupported operand type(s) for ** or pow(): 'spacy.vectors.Vectors' and 'int'
# Approach 3
matrix_norm3 = matrix / (np.mean(matrix) - np.std(matrix))
print(matrix_norm3.shape) # => Shape (514157,)
The two approaches that returns a result does so but it doesn't retain the dimensions (514157, 300). Any suggestions on how I can do this?
| [
"nlp.vocab.vectors is a Vectors object. The numpy array is stored in nlp.vocab.vectors.data. See: https://spacy.io/api/vectors\n"
] | [
1
] | [] | [] | [
"arrays",
"normalize",
"numpy",
"python",
"spacy"
] | stackoverflow_0074626626_arrays_normalize_numpy_python_spacy.txt |
Q:
pandas rolling apply with NaNs
I can't understand the behaviour of pandas.rolling.apply with np.prod and NaNs. E.g.
import pandas as pd
import numpy as np
df = pd.DataFrame({'B': [1, 1, 2, np.nan, 4], 'C': [1, 2, 3, 4, 5]}, index=pd.date_range('2013-01-01', '2013-01-05'))
Gives this dataframe:
B C
2013-01-01 1.0 1
2013-01-02 1.0 2
2013-01-03 2.0 3
2013-01-04 NaN 4
2013-01-05 4.0 5
If I apply the numpy np.prod function to a 3 day rolling window with raw=False and min_periods=1 it works as expected, ignoring the NaNs.
df.rolling('3D', min_periods=1).apply(np.prod, raw=False)
B C
2013-01-01 1.0 1.0
2013-01-02 1.0 2.0
2013-01-03 2.0 6.0
2013-01-04 2.0 24.0
2013-01-05 8.0 60.0
However with raw=True I get NaNs in column B:
df.rolling('3D', min_periods=1).apply(np.prod, raw=True)
B C
2013-01-01 1.0 1.0
2013-01-02 1.0 2.0
2013-01-03 2.0 6.0
2013-01-04 NaN 24.0
2013-01-05 NaN 60.0
I'd like to use raw=True for speed, but I don't understand this behavior? Can someone explain what's going on?
A:
It's very simple. You can try this code
import pandas as pd
import numpy as np
def foo(x):
return np.prod(x, where=~np.isnan(x))
if __name__ == '__main__':
df = pd.DataFrame({'B': [1, 1, 2, np.nan, 4], 'C': [1, 2, 3, 4, 5]},
index=pd.date_range('2013-01-01', '2013-01-05'))
res = df.rolling('3D', min_periods=1).apply(foo, raw=True)
print(res)
B C
2013-01-01 1.0 1.0
2013-01-02 1.0 2.0
2013-01-03 2.0 6.0
2013-01-04 2.0 24.0
2013-01-05 8.0 60.0
A:
Thanks to @padu and @bui for contributing comments/answers to lead me to the answer I was looking for, namely explaining the different behaviors.
As the documentation points out, when calling rolling apply with raw=False, each window is converted to a pandas.Series before being passed to np.prod. With raw=True each window is converted to a numpy array.
The key observation then is that np.prod behaves differently on a Series compared to an ndarray, ignoring the NaN in the Series case, and this is why we get different behaviors:
np.prod(np.array([1, 2, np.nan, 3])) gives nan
np.prod(pd.Series([1, 2, np.nan, 3])) gives 6.0
It's not clear to me why the NaN is ignored for the Series, but as @bui points out, you can ignore the NaNs for the ndarray case by setting the where keyword to np.prod.
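If keeping raw=True for speed is the goal, one more option (a small sketch, not from the answers above) is np.nanprod, which treats NaN as 1 and therefore works directly on the raw ndarray windows:
df.rolling('3D', min_periods=1).apply(np.nanprod, raw=True)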
| pandas rolling apply with NaNs | I can't understand the behaviour of pandas.rolling.apply with np.prod and NaNs. E.g.
import pandas as pd
import numpy as np
df = pd.DataFrame({'B': [1, 1, 2, np.nan, 4], 'C': [1, 2, 3, 4, 5]}, index=pd.date_range('2013-01-01', '2013-01-05'))
Gives this dataframe:
B C
2013-01-01 1.0 1
2013-01-02 1.0 2
2013-01-03 2.0 3
2013-01-04 NaN 4
2013-01-05 4.0 5
If I apply the numpy np.prod function to a 3 day rolling window with raw=False and min_periods=1 it works as expected, ignoring the NaNs.
df.rolling('3D', min_periods=1).apply(np.prod, raw=False)
B C
2013-01-01 1.0 1.0
2013-01-02 1.0 2.0
2013-01-03 2.0 6.0
2013-01-04 2.0 24.0
2013-01-05 8.0 60.0
However with raw=True I get NaNs in column B:
df.rolling('3D', min_periods=1).apply(np.prod, raw=True)
B C
2013-01-01 1.0 1.0
2013-01-02 1.0 2.0
2013-01-03 2.0 6.0
2013-01-04 NaN 24.0
2013-01-05 NaN 60.0
I'd like to use raw=True for speed, but I don't understand this behavior? Can someone explain what's going on?
| [
"It's very simple. You can try this code\nimport pandas as pd\nimport numpy as np\n\n\ndef foo(x):\n return np.prod(x, where=~np.isnan(x))\n\n\nif __name__ == '__main__':\n df = pd.DataFrame({'B': [1, 1, 2, np.nan, 4], 'C': [1, 2, 3, 4, 5]},\n index=pd.date_range('2013-01-01', '2013-01-05'))\n res = df.rolling('3D', min_periods=1).apply(foo, raw=True)\n \n print(res)\n\n\n B C\n2013-01-01 1.0 1.0\n2013-01-02 1.0 2.0\n2013-01-03 2.0 6.0\n2013-01-04 2.0 24.0\n2013-01-05 8.0 60.0\n\n\n",
"Thanks to @padu and @bui for contributing comments/answers to lead me to the answer I was looking for, namely explaining the different behaviors.\nAs the documentation points out, when calling rolling apply with raw=False, each window is converted to a pandas.Series before being passed to np.prod. With raw=True each window is converted to a numpy array.\nThe key observation then is that np.prod behaves differently on a Series compared to an ndarray, ignoring the NaN in the Series case, and this is why we get different behaviors:\nnp.prod(np.array([1, 2, np.nan, 3])) gives nan\nnp.prod(pd.Series([1, 2, np.nan, 3])) gives 6.0\nIt's not clear to me why the NaN is ignored for the Series, but as @bui points out, you can ignore the NaNs for the ndarray case by setting the where keyword to np.prod.\n"
] | [
1,
0
] | [] | [] | [
"pandas",
"python",
"rolling_computation"
] | stackoverflow_0074621552_pandas_python_rolling_computation.txt |
Q:
How can I embed buttons to my message using discord.py?
I am making a discord bot using discord.py (with slash commands), but I am stuck on embedding buttons to my message. I can send the messages fine but once I try to put embeds there is always an error.
I've tried using:
from discord_components import Button
But here's the error message:
from discord_components import Button
ModuleNotFoundError: No module named 'discord_components'
I've looked into many SO questions but most of the answers don't work (ModuleNotFoundError) or do not support slash commands
Note: if it helps, I'm using replit as my IDE.
A:
Your import might be wrong, try this.
from discord.ui import Button, View
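A minimal sketch of how the pieces fit together in discord.py 2.x (the class name, label and reply text here are made up):
import discord
from discord.ui import View

class ExampleView(View):
    # in discord.py 2.x the callback receives the interaction first, then the button
    @discord.ui.button(label="Click me", style=discord.ButtonStyle.primary)
    async def click_me(self, interaction: discord.Interaction, button: discord.ui.Button):
        await interaction.response.send_message("Button pressed!", ephemeral=True)

Then, inside a slash command, pass the view along with the message:
await interaction.response.send_message("Here are some buttons:", view=ExampleView())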
| How can I embed buttons to my message using discord.py? | I am making a discord bot using discord.py (with slash commands), but I am stuck on embedding buttons to my message. I can send the messages fine but once I try to put embeds there is always an error.
I've tried using:
from discord_components import Button
But here's the error message:
from discord_components import Button
ModuleNotFoundError: No module named 'discord_components'
I've looked into many SO questions but most of the answers don't work (ModuleNotFoundError) or do not support slash commands
Note: if it helps, I'm using replit as my IDE.
| [
"Your import might be wrong, try this.\nfrom discord.ui import Button, View\n\n"
] | [
0
] | [] | [] | [
"discord.py",
"discord_buttons",
"modulenotfounderror",
"python"
] | stackoverflow_0074394187_discord.py_discord_buttons_modulenotfounderror_python.txt |
Q:
Using a signal as an input to a function adds noise to the signal in Python
I have a signal X,
t,X = genS(f,T,L) that looks like this:
plt.plot(t,X)
Clearly it's a very clean signal with no noise. On the next line, I use this signal as input into a function. If I then plot the same signal again...
[p,d] = bopS(X,R,T,I,fs)
plt.plot(t,X)
There is nothing else done in the code between generating and using the signal, there is not even any modification of X inside bopS, I simply call it for a calculation. Any idea what is going on here?
bopS function
def bopS(s,R,T,I,fs):
s2 = s
s1 = s2 + np.random.normal(0,0.1*max(s2),len(s2))
d = (R+T)/(I*fs)
s1 = np.roll(s1,d)
return s1,d
A:
If you could provide the details of genS & bopS, it would help; without knowing what these functions do, no one will be able to help.
Are these functions from a library? Which library? If not, share the function code.
EDIT:
I believe the issue is with the assignment at the top of bopS
s2 = s
https://docs.python.org/3/library/copy.html
That assignment doesn't copy the data at all: s2 is just another reference to the same object, so anything that happens to s2 also happens to s, and in this example the function is adding noise. To resolve this, use the code below.
import copy # at top of code
def bopS(s,R,T,I,fs):
    s2 = copy.deepcopy(s) # changed this from s2 = s, which only created another reference to the same object
s1 = s2 + np.random.normal(0,0.1*max(s2),len(s2))
d = (R+T)/(I*fs)
s1 = np.roll(s1,d)
return s1,d
let me know if it resolved your issue.
| Using a signal as an input to a function adds noise to the signal in Python | I have a signal X,
t,X = genS(f,T,L)that looks like this:
plt.plot(t,X)
Clearly it's a very clean signal with no noise. On the next line, I use this signal as input into a function. If I then plot the same signal again...
[p,d] = bopS(X,R,T,I,fs)
plt.plot(t,X)
There is nothing else done in the code between generating and using the signal, there is not even any modification of X inside bopS, I simply call it for a calculation. Any idea what is going on here?
bopS function
def bopS(s,R,T,I,fs):
s2 = s
s1 = s2 + np.random.normal(0,0.1*max(s2),len(s2))
d = (R+T)/(I*fs)
s1 = np.roll(s1,d)
return s1,d
| [
"If you could provide the details of genS & bopS, it would help. With not knowing what these functions do then no one will be able to help.\nAre these functions from a library? What library? If not share the function code.\nEDIT:\nI believe the issue is with you creating a \"shallow\" copy of the list in bopS\ns2 = s\nhttps://docs.python.org/3/library/copy.html\nwhich means that s2 is still linked to s, anything that happens to s2 will happen to s and in this example the funciton is adding noise. To resolve this issue use the following below code.\nimport copy # at top of code\n\n\ndef bopS(s,R,T,I,fs):\n s2 = copy.deepcopy(s) #changed this form s2 = s which was a shallow copy of the list meaning it was still linked.\n s1 = s2 + np.random.normal(0,0.1*max(s2),len(s2))\n d = (R+T)/(I*fs)\n s1 = np.roll(s1,d)\n\n return s1,d\n\nlet me know if it resolved your issue.\n"
] | [
1
] | [] | [] | [
"function",
"noise",
"python",
"signals",
"variables"
] | stackoverflow_0074627204_function_noise_python_signals_variables.txt |
Q:
Default MaxPoolingOp only supports NHWC on device type CPU
I tried to run a prediction on a SegNet model, but when the predict function its call I received an error.
I tried also to run the prediction with the with tf.device('/cpu:0'):, but I received the same error
if __name__ == '__main__':
# path to the model
model = tf.keras.models.load_model('segnet_weightsONNXbackToKeras3.h5')
model.compile(loss='categorical_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
model.summary()
input_shape = [None, 360, 480, 3]
output_shape = [None, 352, 480, 20]
img = cv2.imread('test4.jpg')
input_image = img
img = cv2.resize(img, (input_shape[2], input_shape[1]))
img = np.reshape(img, [1, input_shape[1], input_shape[2], input_shape[3]])
if normalize:
img = img.astype('float32') / 255
model.summary()
classes = model.predict(img)[0]
colors = []
for i in range(output_shape[3]):
colors.append(generate_color())
maxMatrix = np.amax(classes, axis=2)
prediction = np.zeros((output_shape[1], output_shape[2], 3), dtype=np.uint8)
2019-10-25 19:32:03.126831: E tensorflow/core/common_runtime/executor.cc:642] Executor failed to create kernel. Invalid argument: Default MaxPoolingOp only supports NHWC on device type CPU
[[{{node model/LAYER_7/MaxPool}}]]
Traceback (most recent call last):
File "../mold_segmentation_h5VM.py", line 62, in <module>
classes = model.predict(img)[0]
File "..\anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 909, in predict
use_multiprocessing=use_multiprocessing)
File "..\anaconda3\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[node model/LAYER_7/MaxPool (defined at D:\EB-AI\tools\anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py:1751) ]] [Op:__inference_distributed_function_4421]
Function call stack:
distributed_function
A:
Without test4.jpg it's difficult to test solutions. However, the error Default MaxPoolingOp only supports NHWC on device type CPU
means that the model can only accept inputs of the form n_examples x height x width x channels.
I think your cv2.resize and subsequent np.reshape lines are not outputting the image in the correct format. Try printing out the shape of the image before you call model.predict(), and make sure it's in the format n_examples x height x width x channels.
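One way to narrow it down (a sketch reusing the question's variables): print the input shape just before predict, and, since the model went through an ONNX conversion, also check whether any layers ended up configured as channels_first, which the CPU pooling kernels don't support:
print(img.shape)  # should be (1, 360, 480, 3), i.e. NHWC / channels_last

for layer in model.layers:
    if hasattr(layer, "data_format"):
        print(layer.name, layer.data_format)  # look out for any "channels_first" here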
A:
I had an error "AvgPoolingOp only supports NHWC on device type CPU".
In this case it was useful to run:
pip install intel-tensorflow instead of regular tensorflow
A:
This works for me.
pip install intel-tensorflow
| Default MaxPoolingOp only supports NHWC on device type CPU | I tried to run a prediction on a SegNet model, but when the predict function its call I received an error.
I tried also to run the prediction with the with tf.device('/cpu:0'):, but I received the same error
if __name__ == '__main__':
# path to the model
model = tf.keras.models.load_model('segnet_weightsONNXbackToKeras3.h5')
model.compile(loss='categorical_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
model.summary()
input_shape = [None, 360, 480, 3]
output_shape = [None, 352, 480, 20]
img = cv2.imread('test4.jpg')
input_image = img
img = cv2.resize(img, (input_shape[2], input_shape[1]))
img = np.reshape(img, [1, input_shape[1], input_shape[2], input_shape[3]])
if normalize:
img = img.astype('float32') / 255
model.summary()
classes = model.predict(img)[0]
colors = []
for i in range(output_shape[3]):
colors.append(generate_color())
maxMatrix = np.amax(classes, axis=2)
prediction = np.zeros((output_shape[1], output_shape[2], 3), dtype=np.uint8)
2019-10-25 19:32:03.126831: E tensorflow/core/common_runtime/executor.cc:642] Executor failed to create kernel. Invalid argument: Default MaxPoolingOp only supports NHWC on device type CPU
[[{{node model/LAYER_7/MaxPool}}]]
Traceback (most recent call last):
File "../mold_segmentation_h5VM.py", line 62, in <module>
classes = model.predict(img)[0]
File "..\anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 909, in predict
use_multiprocessing=use_multiprocessing)
File "..\anaconda3\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[node model/LAYER_7/MaxPool (defined at D:\EB-AI\tools\anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py:1751) ]] [Op:__inference_distributed_function_4421]
Function call stack:
distributed_function
| [
"Without test4.jpg it's difficult to test solutions. However, the error Default MaxPoolingOp only supports NHWC on device type CPU\nmeans that the model only can accept inputs of the form n_examples x height x width x channels.\nI think your cv2.resize and subsequent np.reshape lines are not outputting the image in the correct format. Try printing out the shape of the image before you call model.predict(), and make sure it's in the format n_examples x height x width x channels.\n",
"I had an error \"AvgPoolingOp only supports NHWC on device type CPU\".\nIn this case was useful:\npip install intel-tensorflow instead of regular tensorflow\n",
"This works for me.\npip install intel-tensorflow\n"
] | [
7,
1,
0
] | [] | [] | [
"keras",
"python",
"tensorflow"
] | stackoverflow_0058562582_keras_python_tensorflow.txt |
Q:
how to pass a context variable from a variable inside of an if statement?
Inside an if statement I have check_order, which I need as a context variable for my template, but I'm getting this traceback: local variable 'check_order' referenced before assignment. How do I pass it as a context variable without having to repeat the code outside of the if statement?
View
if request.method == "POST":
if request.user.is_authenticated:
customer = request.user.customer
check_order = OrderItem.objects.filter(order__customer=customer)
if check_order:
if form.is_valid():
#does logic
else:
messages.error(request, f"Failed")
else:
return redirect()
context = {"check_order": check_order}
A:
This is happening because check_order is only assigned inside a branch of the if statement. If that branch doesn't run, the name is never bound by the time the context line references it, so Python throws an error letting you know you're using it before it is defined.
You can read more about Python scope here: https://realpython.com/python-scope-legb-rule/.
The following code will address your issue:
# Declare check_order with no value but in the same scope it is referenced
check_order = None
if request.method == "POST":
if request.user.is_authenticated:
customer = request.user.customer
check_order = OrderItem.objects.filter(order__customer=customer)
if check_order:
if form.is_valid():
#does logic
else:
messages.error(request, f"Failed")
else:
return redirect()
context = {"check_order": check_order}
| how to pass a context variable from a variable inside of an if statement? | Inside of an if statement I've check_order that I need to have as a context variable for my template, I'm getting this traceback: local variable 'check_order' referenced before assignment. How do I have it as a context variable without having to repeat the code to have it outside of the if statement?
View
if request.method == "POST":
if request.user.is_authenticated:
customer = request.user.customer
check_order = OrderItem.objects.filter(order__customer=customer)
if check_order:
if form.is_valid():
#does logic
else:
messages.error(request, f"Failed")
else:
return redirect()
context = {"check_order": check_order}
| [
"This is happening because of variable scoping. check_order is declared within a branch of an if statement, but referenced outside of that branch - it's not in scope, so Python is throwing an error letting you know that you're using it before it is defined.\nYou can read more about Python scope here: https://realpython.com/python-scope-legb-rule/.\nThe following code will address your issue:\n# Declare check_order with no value but in the same scope it is referenced\ncheck_order = None\n\nif request.method == \"POST\":\n if request.user.is_authenticated:\n customer = request.user.customer\n check_order = OrderItem.objects.filter(order__customer=customer)\n if check_order:\n if form.is_valid():\n #does logic\n else:\n messages.error(request, f\"Failed\")\n else:\n return redirect()\n\ncontext = {\"check_order\": check_order}\n\n"
] | [
3
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074626698_django_python.txt |
Q:
How to correctly sort time values in a diagram in Python?
I am a beginner in Python, and to start off I want to make some simple data visualizations.
To be precise I would like to plot a diagram with the runtimes of movies.
Here's how my code is looking right now:
# import matplotlib
import matplotlib.pyplot as plt
# movie names
x=['titanic','ironman','avengers','sholay','thor','caption america','dabang','bajarangi bhaijaan']
# movie runtime
y=['2:32:23','2:23:5','2:6:45','3:10:23','2:3:23','1:23:5','2:16:42','2:10:23']
# use scatter plot for better visualisation
plt.scatter(x,y,marker='*',s=200,color='r')
# use this if you want to show in bar graph
#plt.bar(x,y,color='r')
plt.xlabel('movie_name',color='c')
plt.ylabel('movie_runtime',color='c')
# make grid
plt.grid(True,color='y')
# use for better show /tilt x axis values / movies names
plt.gcf().autofmt_xdate()
# show out graph
plt.show()
The problem is that in the resulting plot the time values aren't sorted properly.
The shortest movie isn't the lowest on the y-axis, instead it's just the first movie in the list that is the lowest. How can I change this?
A:
For example by converting the string to datetime (assuming no movie is longer than 23h, 59m and 59s) and setting a formatter for it:
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
x=['titanic','ironman','avengers','sholay','thor','caption america','dabang','bajarangi bhaijaan']
y=['2:32:23','2:23:5','2:6:45','3:10:23','2:3:23','1:23:5','2:16:42','2:10:23']
# to datetime data type
y=[datetime.strptime(t, "%H:%M:%S") for t in y]
plt.scatter(x,y,marker='*',s=200,color='r')
plt.xlabel('movie_name',color='c')
plt.ylabel('movie_runtime',color='c')
# set y-axis / time formatter
plt.gca().yaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
plt.gcf().autofmt_xdate()
plt.grid(True, color='y')
plt.show()
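Since movie runtimes are really durations rather than clock times, another option (a sketch reusing x and plt from above) is to convert them to minutes and plot plain numbers, which also sorts correctly on the y-axis:
runtimes = ['2:32:23','2:23:5','2:6:45','3:10:23','2:3:23','1:23:5','2:16:42','2:10:23']

def to_minutes(t):
    h, m, s = (int(part) for part in t.split(':'))
    return h * 60 + m + s / 60

y_minutes = [to_minutes(t) for t in runtimes]

plt.scatter(x, y_minutes, marker='*', s=200, color='r')
plt.ylabel('movie_runtime (minutes)', color='c')
plt.show()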
| How to correcntly sort time values in a diagram in Python? | I am a beginner in Python and to start of I want to make some simple data visualizations.
To be precise I would like to plot a diagram with the runtimes of movies.
Here's how my code is looking right now:
# import matplotlib
import matplotlib.pyplot as plt
# movie names
x=['titanic','ironman','avengers','sholay','thor','caption america','dabang','bajarangi bhaijaan']
# movie runtime
y=['2:32:23','2:23:5','2:6:45','3:10:23','2:3:23','1:23:5','2:16:42','2:10:23']
# use scatter plot for better visualisation
plt.scatter(x,y,marker='*',s=200,color='r')
# use this if you want to show in bar graph
#plt.bar(x,y,color='r')
plt.xlabel('movie_name',color='c')
plt.ylabel('movie_runtime',color='c')
# make grid
plt.grid(True,color='y')
# use for better show /tilt x axis values / movies names
plt.gcf().autofmt_xdate()
# show out graph
plt.show()
The problem is now in the plot, the time values aren't sorted properly.
The shortest movie isn't the lowest on the y-axis, instead it's just the first movie in the list that is the lowest. How can I change this?
| [
"For example by converting the string to datetime (assuming no movie is longer than 23h, 59m and 59s) and setting a formatter for it:\nfrom datetime import datetime\n\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\nx=['titanic','ironman','avengers','sholay','thor','caption america','dabang','bajarangi bhaijaan']\ny=['2:32:23','2:23:5','2:6:45','3:10:23','2:3:23','1:23:5','2:16:42','2:10:23']\n\n# to datetime data type\ny=[datetime.strptime(t, \"%H:%M:%S\") for t in y]\n\nplt.scatter(x,y,marker='*',s=200,color='r')\n\nplt.xlabel('movie_name',color='c')\nplt.ylabel('movie_runtime',color='c')\n\n# set y-axis / time formatter\nplt.gca().yaxis.set_major_formatter(mdates.DateFormatter(\"%H:%M:%S\"))\n\nplt.gcf().autofmt_xdate()\n\nplt.grid(True, color='y')\n\nplt.show()\n\n\n"
] | [
1
] | [] | [] | [
"datetime",
"diagram",
"matplotlib",
"python"
] | stackoverflow_0074627049_datetime_diagram_matplotlib_python.txt |
Q:
1D Convolution of 2D arrays
I have 2 arrays of sets of signals, both 16x90000 arrays. In other words, 2 arrays with 16 signals in each. I want to perform matched filtering on the signals, row by row, correlating row 1 of array 1 with row 1 of array 2, and so forth. I've tried using scipy's signal.convolve2D but it is extremely slow, taking tens of seconds to convolve even a 2x90000 array. I'm not sure if I am simply implementing wrong, or if there is a more efficient way of achieving what I want. I know the arrays are long, but I feel it should still be achievable. I have a feeling convolve2d is actually convolving to a squared factor higher than I want and convolving rows by columns too but I may be misunderstanding.
My implementation:
A.shape = (16,90000) # an array of 16 signals each 90000 samples long
B.shape = (16,90000) # another array of 16 signals each 90000 samples long
corr = sig.convolve2d(A,B,mode='same')
I haven't had much coffee yet so there's every chance I'm being stupid right now.
Please no for loops.
A:
Since you need to correlate the signals row by row, the most basic solution would be:
import numpy as np
from scipy.signal import correlate
# sample inputs: A and B both have n signals of length m
n, m = 2, 5
A = np.random.randn(n, m)
B = np.random.randn(n, m)
C = np.vstack([correlate(a, b, mode="same") for a, b in zip(A, B)])
# [[-0.98455996 0.86994062 -1.1446486 -2.1751074 -0.59270322]
# [ 1.7945015 1.51317292 1.74286042 -0.57750712 -1.9178488 ]]]
One way to avoid a looped solution could be by bootlegging off a deep learning library, like PyTorch. Torch's Conv1d (though named conv, it effectively performs cross-correlation) can handle this scenario.
import torch
import torch.nn.functional as F
# Convert A and B to torch tensors
P = torch.from_numpy(A).unsqueeze(0) # (1, n, m)
Q = torch.from_numpy(B).unsqueeze(1) # (n, 1, m)
# Use conv1d --- with groups = n
def torch_correlate(A, B, n):
with torch.no_grad():
return F.conv1d(A, B, bias=None, stride=1, groups=n, padding="same").squeeze(0).numpy()
R = torch_correlate(P, Q, n)
# [[-0.98455996 0.86994062 -1.1446486 -2.1751074 -0.59270322]
# [ 1.7945015 1.51317292 1.74286042 -0.57750712 -1.9178488 ]]
However, I believe there shouldn't be any significant difference in the results, since grouping might be using some form of iteration internally as well. (Plus there is an overhead of converting from torch to numpy and back to consider).
I would suggest using the first method generally, unless you are working on really large signals, in which case you could use the PyTorch version to run it really fast on a GPU, which you won't be able to do with the regular scipy one.
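For very long rows there is also a loop-free FFT route worth sketching (not from either approach above): scipy.signal.fftconvolve with the axes argument treats axis 0 as a batch dimension, so reversing B along the time axis gives a row-by-row cross-correlation, up to a possible one-sample alignment shift for even-length rows:
from scipy.signal import fftconvolve

# each row of C_fft is fftconvolve(A[i], B[i, ::-1], mode="same"),
# i.e. the correlation of row i of A with row i of B
C_fft = fftconvolve(A, B[:, ::-1], mode="same", axes=1)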
| 1D Convolution of 2D arrays | I have 2 arrays of sets of signals, both 16x90000 arrays. In other words, 2 arrays with 16 signals in each. I want to perform matched filtering on the signals, row by row, correlating row 1 of array 1 with row 1 of array 2, and so forth. I've tried using scipy's signal.convolve2D but it is extremely slow, taking tens of seconds to convolve even a 2x90000 array. I'm not sure if I am simply implementing wrong, or if there is a more efficient way of achieving what I want. I know the arrays are long, but I feel it should still be achievable. I have a feeling convolve2d is actually convolving to a squared factor higher than I want and convolving rows by columns too but I may be misunderstanding.
My implementation:
A.shape = (16,90000) # an array of 16 signals each 90000 samples long
B.shape = (16,90000) # another array of 16 signals each 90000 samples long
corr = sig.convolve2d(A,B,mode='same')
I haven't had much coffee yet so there's every chance I'm being stupid right now.
Please no for loops.
| [
"Since you need to correlate the signals row by row, the most basic solution would be:\nimport numpy as np\nfrom scipy.signal import correlate\n\n# sample inputs: A and B both have n signals of length m\n\nn, m = 2, 5\nA = np.random.randn(n, m)\nB = np.random.randn(n, m)\n\nC = np.vstack([correlate(a, b, mode=\"same\") for a, b in zip(A, B)])\n\n# [[-0.98455996 0.86994062 -1.1446486 -2.1751074 -0.59270322]\n# [ 1.7945015 1.51317292 1.74286042 -0.57750712 -1.9178488 ]]]\n\nOne way to avoid a looped solution could be by bootlegging off a deep learning library, like PyTorch. Torch's Conv1d (though named conv, it effectively performs cross-correlation) can handle this scenario.\nimport torch\nimport torch.nn.functional as F\n\n# Convert A and B to torch tensors\nP = torch.from_numpy(A).unsqueeze(0) # (1, n, m)\nQ = torch.from_numpy(B).unsqueeze(1) # (n, 1, m)\n\n# Use conv1d --- with groups = n\ndef torch_correlate(A, B, n):\n with torch.no_grad():\n return F.conv1d(A, B, bias=None, stride=1, groups=n, padding=\"same\").squeeze(0).numpy()\n\nR = torch_correlate(P, Q, n)\n# [[-0.98455996 0.86994062 -1.1446486 -2.1751074 -0.59270322]\n# [ 1.7945015 1.51317292 1.74286042 -0.57750712 -1.9178488 ]]\n\nHowever, I believe there shouldn't be any significant difference in the results, since grouping might be using some form of iteration internally as well. (Plus there is an overhead of converting from torch to numpy and back to consider).\nI would suggest using the first method generally. Unless if you are working on really large signals, then you could theoretically use the PyTorch version to run it really fast on GPU, which you won't be able to do with the regular scipy one.\n"
] | [
0
] | [] | [] | [
"arrays",
"convolution",
"numpy",
"python",
"scipy"
] | stackoverflow_0074625948_arrays_convolution_numpy_python_scipy.txt |
Q:
Evenly spaced series of values from a list of (timestamp, value) tuples
I'm stuck on this because I'm not quite sure how to ask the question, so here's my best attempt!
I have a list of tuples which represent a temperature reading at a particular timestamp.
[
(datetime.datetime(2022, 11, 30, 8, 25, 10, 261853), 19.82),
(datetime.datetime(2022, 11, 30, 8, 27, 22, 479093), 20.01),
(datetime.datetime(2022, 11, 30, 8, 27, 36, 984757), 19.96),
(datetime.datetime(2022, 11, 30, 8, 36, 46, 651432), 21.25),
(datetime.datetime(2022, 11, 30, 8, 41, 27, 230438), 21.42),
...
(datetime.datetime(2022, 11, 30, 11, 57, 4, 689363), 17.8)
]
As you can see, the deltas between the records are all over the place - some are a few seconds apart, while others are minutes apart.
From these, I want to create a new list of tuples (or other data structure - I am happy to use NumPy or Pandas) where the timestamp value is exactly every 5 minutes, while the temperature reading is calculated as the assumed value at that timestamp given the data that is available. Something like this:
[
(datetime.datetime(2022, 11, 30, 8, 25, 0, 0), ??),
(datetime.datetime(2022, 11, 30, 8, 30, 0, 0), ??),
(datetime.datetime(2022, 11, 30, 8, 35, 0, 0), ??),
(datetime.datetime(2022, 11, 30, 8, 40, 0, 0), ??),
...
(datetime.datetime(2022, 11, 30, 11, 30, 0, 0), ??),
]
My end goal is to be able to plot this data using PIL, but not MatPlotLib as I'm on very constrained hardware. I want to plot a smooth temperature line over a given time period, given the imperfect data I have on hand.
A:
Assuming lst the input list, you can use:
import pandas as pd
out = (
pd.DataFrame(lst).set_index(0).resample('5min')
.mean().interpolate('linear')
.reset_index().to_numpy().tolist()
)
If you really want a list of tuples:
out = list(map(tuple, out))
Output:
[[Timestamp('2022-11-30 08:25:00'), 19.930000000000003],
[Timestamp('2022-11-30 08:30:00'), 20.590000000000003],
[Timestamp('2022-11-30 08:35:00'), 21.25],
[Timestamp('2022-11-30 08:40:00'), 21.42],
[Timestamp('2022-11-30 08:45:00'), 21.32717948717949],
[Timestamp('2022-11-30 08:50:00'), 21.234358974358976],
...
[Timestamp('2022-11-30 11:45:00'), 17.985641025641026],
[Timestamp('2022-11-30 11:50:00'), 17.892820512820514],
[Timestamp('2022-11-30 11:55:00'), 17.8]]
For datetime types:
out = (
pd.DataFrame(lst).set_index(0).resample('5min')
.mean().interpolate('linear')[1]
)
out = list(zip(out.index.to_pydatetime(), out))
Output:
[(datetime.datetime(2022, 11, 30, 8, 25), 19.930000000000003),
(datetime.datetime(2022, 11, 30, 8, 30), 20.590000000000003),
(datetime.datetime(2022, 11, 30, 8, 35), 21.25),
(datetime.datetime(2022, 11, 30, 8, 40), 21.42),
(datetime.datetime(2022, 11, 30, 8, 45), 21.32717948717949),
(datetime.datetime(2022, 11, 30, 8, 50), 21.234358974358976),
...
(datetime.datetime(2022, 11, 30, 11, 45), 17.985641025641026),
(datetime.datetime(2022, 11, 30, 11, 50), 17.892820512820514),
(datetime.datetime(2022, 11, 30, 11, 55), 17.8)]
Before/after resampling:
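Since the question mentions constrained hardware, here is a lighter-weight sketch of the same idea without pandas, using plain numpy linear interpolation (readings stands for the original, time-sorted list of (datetime, temperature) tuples):
import datetime
import numpy as np

times = np.array([t.timestamp() for t, _ in readings])
temps = np.array([v for _, v in readings])

# 5-minute grid in epoch seconds, starting at the previous 5-minute boundary
start = times[0] - times[0] % 300
grid = np.arange(start, times[-1] + 1, 300)

values = np.interp(grid, times, temps)   # linear interpolation onto the grid
result = [(datetime.datetime.fromtimestamp(g), v) for g, v in zip(grid, values)]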
| Evenly spaced series of values from a list of (timestamp, value) tuples | I'm stuck on this because I'm not quite sure how to ask the question, so here's my best attempt!
I have a list of tuples which represent a temperature reading at a particular timestamp.
[
(datetime.datetime(2022, 11, 30, 8, 25, 10, 261853), 19.82),
(datetime.datetime(2022, 11, 30, 8, 27, 22, 479093), 20.01),
(datetime.datetime(2022, 11, 30, 8, 27, 36, 984757), 19.96),
(datetime.datetime(2022, 11, 30, 8, 36, 46, 651432), 21.25),
(datetime.datetime(2022, 11, 30, 8, 41, 27, 230438), 21.42),
...
(datetime.datetime(2022, 11, 30, 11, 57, 4, 689363), 17.8)
]
As you can see, the deltas between the records are all over the place - some are a few seconds apart, while others are minutes apart.
From these, I want to create a new list of tuples (or other data structure - I am happy to use NumPy or Pandas) where the timestamp value is exactly every 5 minutes, while the temperature reading is calculated as the assumed value at that timestamp given the data that is available. Something like this:
[
(datetime.datetime(2022, 11, 30, 8, 25, 0, 0), ??),
(datetime.datetime(2022, 11, 30, 8, 30, 0, 0), ??),
(datetime.datetime(2022, 11, 30, 8, 35, 0, 0), ??),
(datetime.datetime(2022, 11, 30, 8, 40, 0, 0), ??),
...
(datetime.datetime(2022, 11, 30, 11, 30, 0, 0), ??),
]
My end goal is to be able to plot this data using PIL, but not MatPlotLib as I'm on very constrained hardware. I want to plot a smooth temperature line over a given time period, given the imperfect data I have on hand.
| [
"Assuming lst the input list, you can use:\nimport pandas as pd\n\nout = (\n pd.DataFrame(lst).set_index(0).resample('5min')\n .mean().interpolate('linear')\n .reset_index().to_numpy().tolist()\n)\n\nIf you really want a list of tuples:\nout = list(map(tuple, out))\n\nOutput:\n[[Timestamp('2022-11-30 08:25:00'), 19.930000000000003],\n [Timestamp('2022-11-30 08:30:00'), 20.590000000000003],\n [Timestamp('2022-11-30 08:35:00'), 21.25],\n [Timestamp('2022-11-30 08:40:00'), 21.42],\n [Timestamp('2022-11-30 08:45:00'), 21.32717948717949],\n [Timestamp('2022-11-30 08:50:00'), 21.234358974358976],\n ...\n [Timestamp('2022-11-30 11:45:00'), 17.985641025641026],\n [Timestamp('2022-11-30 11:50:00'), 17.892820512820514],\n [Timestamp('2022-11-30 11:55:00'), 17.8]]\n\nFor datetime types:\nout = (\n pd.DataFrame(lst).set_index(0).resample('5min')\n .mean().interpolate('linear')[1]\n)\n\nout = list(zip(out.index.to_pydatetime(), out))\n\nOutput:\n[(datetime.datetime(2022, 11, 30, 8, 25), 19.930000000000003),\n (datetime.datetime(2022, 11, 30, 8, 30), 20.590000000000003),\n (datetime.datetime(2022, 11, 30, 8, 35), 21.25),\n (datetime.datetime(2022, 11, 30, 8, 40), 21.42),\n (datetime.datetime(2022, 11, 30, 8, 45), 21.32717948717949),\n (datetime.datetime(2022, 11, 30, 8, 50), 21.234358974358976),\n ...\n (datetime.datetime(2022, 11, 30, 11, 45), 17.985641025641026),\n (datetime.datetime(2022, 11, 30, 11, 50), 17.892820512820514),\n (datetime.datetime(2022, 11, 30, 11, 55), 17.8)]\n\nBefore/after resampling:\n\n"
] | [
5
] | [] | [] | [
"numpy",
"pandas",
"python",
"python_imaging_library"
] | stackoverflow_0074627527_numpy_pandas_python_python_imaging_library.txt |
Q:
transform Csv file to list of lists with python?
I want to be able to turn a CSV file into a list of lists.
my csv file is like that :
['juridiction', 'audience', 'novembre'],['récapitulatif', 'information', 'important', 'octobre'],['terrain', 'entent', 'démocrate'],['porte-parole', 'tribunal', 'monastir', 'farid ben', 'déclaration', 'vendredi', 'octobre', 'télévision', 'national', 'mère', 'fillette', 'an', 'clandestinement', 'italie', 'juge', 'instruction', 'interrogatoire', 'père'],['disposition', 'décret', 'vigueur', 'premier', 'octobre'],['décret', 'loi', 'numéro', '2022', 'octobre', 'disposition', 'spécial', 'amélioration', 'efficacité', 'réalisation', 'projet', 'public', 'priver', 'jort', 'vendredi', 'octobre'],['avocat', 'rahal jallali', 'déclaration', 'vendredi', 'octobre', 'tap', 'militant', 'membre', 'section', 'bardo', 'ligue', 'droit', 'homme', 'membre', 'association', 'damj', 'saif', 'ayadi', 'jeune', 'juge', 'instruction', 'tribunal', 'instance'],...
into
list1 = [['juridiction', 'audience', 'novembre'],['récapitulatif', 'information', 'important', 'octobre'],['terrain', 'entent', 'démocrate'],['porte-parole', 'tribunal', 'monastir', 'farid ben', 'déclaration', 'vendredi', 'octobre', 'télévision', 'national', 'mère', 'fillette', 'an', 'clandestinement', 'italie', 'juge', 'instruction', 'interrogatoire', 'père'],['disposition', 'décret', 'vigueur', 'premier', 'octobre'],['décret', 'loi', 'numéro', '2022', 'octobre', 'disposition', 'spécial', 'amélioration', 'efficacité', 'réalisation', 'projet', 'public', 'priver', 'jort', 'vendredi', 'octobre'],['avocat', 'rahal jallali', 'déclaration', 'vendredi', 'octobre', 'tap', 'militant', 'membre', 'section', 'bardo', 'ligue', 'droit', 'homme', 'membre', 'association', 'damj', 'saif', 'ayadi', 'jeune', 'juge', 'instruction', 'tribunal', 'instance'],...]]
I've tried to solve this, but with no success:
import csv
from itertools import zip_longest
with open('/content/drive/MyDrive/tokens.csv') as csvfile:
rows = csv.reader(csvfile)
res = list(zip_longest(*rows))
list1 = [list(filter(None.__ne__, l)) for l in res]
print(res2)
but the output is :
[["['juridiction'"], [" 'audience'"], [" 'novembre']"], ["['récapitulatif'"], [" 'information'"], [" 'important'"], [" 'octobre']"], ["['terrain'"], [" 'entent'"], [" 'démocrate']"],...
A:
If your file really consists of only one long line, then here's a couple of options:
Use eval: You need to add the brackets for the outer list.
with open("data.csv", "r") as file:
data = eval("[" + file.read().strip() + "]")
Use json: You need to (1) add the outer brackets, and (2) replace the ' with " to make the string json compliant.
import json
with open("data.csv", "r") as file:
data = json.loads("[" + file.read().strip().replace("'", '"') + "]")
Use string manipulation: You need to (1) remove the brackets at the edges, then (2) remove the 's, then (3) .split anlong "],[", and finally (4) .split the parts along ", ".
with open("data.csv", "r") as file:
data = [
string.split(", ")
for string in file.read().strip().strip("[]").replace("'", "").split("],[")
]
(Replace data.csv with your file path.)
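If you go the eval route, ast.literal_eval from the standard library is a safer drop-in, since it only parses Python literals:
import ast

with open("data.csv", "r") as file:
    data = ast.literal_eval("[" + file.read().strip() + "]")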
| transform Csv file to list of lists with python? | I want to be able to turn csv file into a list of lists .
my csv file is like that :
['juridiction', 'audience', 'novembre'],['récapitulatif', 'information', 'important', 'octobre'],['terrain', 'entent', 'démocrate'],['porte-parole', 'tribunal', 'monastir', 'farid ben', 'déclaration', 'vendredi', 'octobre', 'télévision', 'national', 'mère', 'fillette', 'an', 'clandestinement', 'italie', 'juge', 'instruction', 'interrogatoire', 'père'],['disposition', 'décret', 'vigueur', 'premier', 'octobre'],['décret', 'loi', 'numéro', '2022', 'octobre', 'disposition', 'spécial', 'amélioration', 'efficacité', 'réalisation', 'projet', 'public', 'priver', 'jort', 'vendredi', 'octobre'],['avocat', 'rahal jallali', 'déclaration', 'vendredi', 'octobre', 'tap', 'militant', 'membre', 'section', 'bardo', 'ligue', 'droit', 'homme', 'membre', 'association', 'damj', 'saif', 'ayadi', 'jeune', 'juge', 'instruction', 'tribunal', 'instance'],...
into
list1 = [['juridiction', 'audience', 'novembre'],['récapitulatif', 'information', 'important', 'octobre'],['terrain', 'entent', 'démocrate'],['porte-parole', 'tribunal', 'monastir', 'farid ben', 'déclaration', 'vendredi', 'octobre', 'télévision', 'national', 'mère', 'fillette', 'an', 'clandestinement', 'italie', 'juge', 'instruction', 'interrogatoire', 'père'],['disposition', 'décret', 'vigueur', 'premier', 'octobre'],['décret', 'loi', 'numéro', '2022', 'octobre', 'disposition', 'spécial', 'amélioration', 'efficacité', 'réalisation', 'projet', 'public', 'priver', 'jort', 'vendredi', 'octobre'],['avocat', 'rahal jallali', 'déclaration', 'vendredi', 'octobre', 'tap', 'militant', 'membre', 'section', 'bardo', 'ligue', 'droit', 'homme', 'membre', 'association', 'damj', 'saif', 'ayadi', 'jeune', 'juge', 'instruction', 'tribunal', 'instance'],...]]
Ive try to solve this but no success :
import csv
from itertools import zip_longest
with open('/content/drive/MyDrive/tokens.csv') as csvfile:
rows = csv.reader(csvfile)
res = list(zip_longest(*rows))
list1 = [list(filter(None.__ne__, l)) for l in res]
print(res2)
but the output is :
[["['juridiction'"], [" 'audience'"], [" 'novembre']"], ["['récapitulatif'"], [" 'information'"], [" 'important'"], [" 'octobre']"], ["['terrain'"], [" 'entent'"], [" 'démocrate']"],...
| [
"If your file really consists of only one long line, then here's a couple of options:\nUse eval: You need to add the brackets for the outer list.\nwith open(\"data.csv\", \"r\") as file:\n data = eval(\"[\" + file.read().strip() + \"]\")\n\nUse json: You need to (1) add the outer brackets, and (2) replace the ' with \" to make the string json compliant.\nimport json\n\nwith open(\"data.csv\", \"r\") as file:\n data = json.loads(\"[\" + file.read().strip().replace(\"'\", '\"') + \"]\")\n\nUse string manipulation: You need to (1) remove the brackets at the edges, then (2) remove the 's, then (3) .split anlong \"],[\", and finally (4) .split the parts along \", \".\nwith open(\"data.csv\", \"r\") as file:\n data = [\n string.split(\", \")\n for string in file.read().strip().strip(\"[]\").replace(\"'\", \"\").split(\"],[\")\n ]\n\n(Replace data.csv with your file path.)\n"
] | [
0
] | [] | [] | [
"csv",
"list",
"python"
] | stackoverflow_0074626121_csv_list_python.txt |
Q:
Why Jupyter Notebook or Spyder execute way faster my Python code than the same .py called in windows shell?
My code does this:
reads a data table of about 460 000 rows × 45 columns from a CSV file.
according to a filter table gives labels to the rows.
It goes through the whole table several times during running.
In Spyder or Jupyter, the runtime is 12 seconds.
But when I run it from Windows PowerShell (python "C:\folders\xy.py") it takes 14 minutes.
The running starts immediately in both ways but in the middle where a big calculation task happens the PowerShell stops for minutes. In the Spyder there is a little delay at this point, but just 10 seconds.
My goal is to call this mainly in Shell.
Do you have an idea what can cause the problem and how to solve it?
I tried to reinstall python to have the same release in the shell too, but the result is the same.
A:
That is because when you run the script from the shell, everything has to be loaded and initialised first, whereas an IDE like Spyder has already done that work by the time you run your code. You can visit this link to learn more: Why is my Java program running 4 times faster via Eclipse than via shell?. Hope that helps.
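To pin down where the extra minutes actually go when running from PowerShell, it can also help to time the phases of the script directly. This is only a rough sketch, since the real phase boundaries depend on what xy.py does:
import time

t0 = time.perf_counter()
# ... read the ~460 000 x 45 CSV here ...
t1 = time.perf_counter()
# ... run the labelling / big calculation here ...
t2 = time.perf_counter()

print(f"load: {t1 - t0:.1f} s, calculation: {t2 - t1:.1f} s")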
| Why Jupyter Notebook or Spyder execute way faster my Python code than the same .py called in windows shell? | My code does this:
reads an about 460 000 row × 45 column datatable from CSV file.
according to a filter table gives labels to the rows.
It goes through the whole table several times during running.
In Spyder or Jupiter, the runtime is 12 seconds.
But when I run it from Windows PowerShell (python "C:\folders\xy.py") it takes 14 minutes.
The running starts immediately in both ways but in the middle where a big calculation task happens the PowerShell stops for minutes. In the Spyder there is a little delay at this point, but just 10 seconds.
My goal is to call this mainly in Shell.
Do you have an idea what can cause the problem and how to solve it?
I tried to reinstall python to have the same release in the shell too, but the result is the same.
| [
"that because when you use shell the code must call the system first, otherwise, when you use IDE like spyder it already call when the IDE start. you can visit this link to know more Why is my Java program running 4 times faster via Eclipse than via shell?. Hope that help you. sorry if my english not good\n"
] | [
0
] | [] | [] | [
"anaconda",
"command_line",
"python",
"runtime",
"shell"
] | stackoverflow_0074627309_anaconda_command_line_python_runtime_shell.txt |
Q:
‘’The environment is inconsistent, please check the package plan carefully‘’ always appears
I tried to install new packages from anaconda and this message has appeared:
(base) C:\Users\lenovo>conda install anaconda
Collecting package metadata (current_repodata.json): done
Solving environment: \
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
I tried conda install anaconda, conda update --all and conda install anaconda-clean, respectively, but the problem persists.
I CANT EVEN UNINSTALL ANACONDA DUE TO THE SAME ISSUE!
Did anyone get any progress on this?
Here are some details:
(base) C:\Users\lenovo>conda install anaconda
Collecting package metadata (current_repodata.json): done
Solving environment: \
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- defaults/win-64::anaconda==custom=py38_1
- conda-forge/win-64::astropy==5.0.2=py38h6f4d8f0_0
- https://repo.anaconda.com/pkgs/main/win-64::bkcharts==0.2=py38_0
- conda-forge/win-64::bokeh==2.4.2=py38haa244fe_0
- conda-forge/win-64::bottleneck==1.3.4=py38h6f4d8f0_0
- conda-forge/win-64::daal4py==2021.5.0=py38he5193b3_0
- conda-forge/noarch::dask==2022.3.0=pyhd8ed1ab_0
- https://repo.anaconda.com/pkgs/main/win-64::h5py==2.10.0=py38h5e291fa_0
- conda-forge/win-64::imagecodecs==2022.2.22=py38h19b08ce_0
- conda-forge/noarch::imageio==2.16.1=pyhcf75d05_0
- conda-forge/win-64::matplotlib==3.5.1=py38haa244fe_0
- conda-forge/win-64::matplotlib-base==3.5.1=py38h1f000d6_0
- https://repo.anaconda.com/pkgs/main/win-64::mkl_fft==1.1.0=py38h45dec08_0
- https://repo.anaconda.com/pkgs/main/win-64::mkl_random==1.1.1=py38h47e9c7a_0
- conda-forge/noarch::networkx==2.7.1=pyhd8ed1ab_0
- conda-forge/win-64::numba==0.55.1=py38h5858985_0
- https://repo.anaconda.com/pkgs/main/win-64::numexpr==2.7.1=py38h25d0782_0
- conda-forge/win-64::pandas==1.4.1=py38h5d928e2_0
- conda-forge/noarch::patsy==0.5.2=pyhd8ed1ab_0
- conda-forge/win-64::pyerfa==2.0.0.1=py38h6f4d8f0_1
- https://repo.anaconda.com/pkgs/main/win-64::pytables==3.6.1=py38ha5be198_0
- https://repo.anaconda.com/pkgs/main/noarch::python-jsonrpc-server==0.3.4=py_1
- https://repo.anaconda.com/pkgs/main/win-64::python-language-server==0.34.1=py38_0
- conda-forge/win-64::pywavelets==1.3.0=py38h6f4d8f0_0
- conda-forge/win-64::scikit-image==0.19.2=py38h5d928e2_0
- conda-forge/win-64::scikit-learn==1.0.2=py38hb60ee80_0
- conda-forge/win-64::scikit-learn-intelex==2021.5.0=py38haa244fe_1
- conda-forge/win-64::scipy==1.8.0=py38ha1292f7_1
- conda-forge/noarch::seaborn==0.11.2=hd8ed1ab_0
- conda-forge/noarch::seaborn-base==0.11.2=pyhd8ed1ab_0
- https://repo.anaconda.com/pkgs/main/win-64::spyder==4.1.4=py38_0
- conda-forge/win-64::statsmodels==0.13.2=py38h6f4d8f0_0
- conda-forge/noarch::tifffile==2022.3.16=pyhd8ed1ab_0
- pytorch/win-64::torchaudio==0.11.0=py38_cpu
- pytorch/win-64::torchvision==0.12.0=py38_cpu
- defaults/win-64::_anaconda_depends==2021.11=py38_0
failed with initial frozen solve. Retrying with flexible solve.
Conda Info
active environment : base
active env location : G:\anaconda3
shell level : 1
user config file : C:\Users\lenovo\.condarc
populated config files : C:\Users\lenovo\.condarc
conda version : 4.12.0
conda-build version : 3.18.11
python version : 3.8.3.final.0
virtual packages : __cuda=11.6=0
__win=0=0
__archspec=1=x86_64
base environment : G:\anaconda3 (writable)
conda av data dir : G:\anaconda3\etc\conda
conda av metadata url : None
channel URLs : http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2/noarch
package cache : G:\anaconda3\pkgs
C:\Users\lenovo\.conda\pkgs
C:\Users\lenovo\AppData\Local\conda\conda\pkgs
envs directories : G:\anaconda3\envs
C:\Users\lenovo\.conda\envs
C:\Users\lenovo\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.12.0 requests/2.27.1 CPython/3.8.3 Windows/10 Windows/10.0.22000
administrator : False
netrc file : None
offline mode : False
A:
I had a very similar problem to yours: I couldn't install fbprophet, and conda failed to solve the environment when I tried to update it. As suggested on this website and in this Stack Overflow question, I ran the command conda config --set channel_priority flexible. After that, I could run conda install anaconda; the environment still failed to solve on the first attempt, but it then downloaded / downgraded / changed the packages successfully.
| ‘’The environment is inconsistent, please check the package plan carefully‘’ always appears | I tried to install new packages from anaconda and this message has appeared:
(base) C:\Users\lenovo>conda install anaconda
Collecting package metadata (current_repodata.json): done
Solving environment: \
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
I tried with conda install anaconda,conda update --all and conda install anaconda-clean,respectively,but it persists.
I CANT EVEN UNINSTALL ANACONDA DUE TO THE SAME ISSUE!
Did anyone get any progress on this?
Here are some details:
(base) C:\Users\lenovo>conda install anaconda
Collecting package metadata (current_repodata.json): done
Solving environment: \
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- defaults/win-64::anaconda==custom=py38_1
- conda-forge/win-64::astropy==5.0.2=py38h6f4d8f0_0
- https://repo.anaconda.com/pkgs/main/win-64::bkcharts==0.2=py38_0
- conda-forge/win-64::bokeh==2.4.2=py38haa244fe_0
- conda-forge/win-64::bottleneck==1.3.4=py38h6f4d8f0_0
- conda-forge/win-64::daal4py==2021.5.0=py38he5193b3_0
- conda-forge/noarch::dask==2022.3.0=pyhd8ed1ab_0
- https://repo.anaconda.com/pkgs/main/win-64::h5py==2.10.0=py38h5e291fa_0
- conda-forge/win-64::imagecodecs==2022.2.22=py38h19b08ce_0
- conda-forge/noarch::imageio==2.16.1=pyhcf75d05_0
- conda-forge/win-64::matplotlib==3.5.1=py38haa244fe_0
- conda-forge/win-64::matplotlib-base==3.5.1=py38h1f000d6_0
- https://repo.anaconda.com/pkgs/main/win-64::mkl_fft==1.1.0=py38h45dec08_0
- https://repo.anaconda.com/pkgs/main/win-64::mkl_random==1.1.1=py38h47e9c7a_0
- conda-forge/noarch::networkx==2.7.1=pyhd8ed1ab_0
- conda-forge/win-64::numba==0.55.1=py38h5858985_0
- https://repo.anaconda.com/pkgs/main/win-64::numexpr==2.7.1=py38h25d0782_0
- conda-forge/win-64::pandas==1.4.1=py38h5d928e2_0
- conda-forge/noarch::patsy==0.5.2=pyhd8ed1ab_0
- conda-forge/win-64::pyerfa==2.0.0.1=py38h6f4d8f0_1
- https://repo.anaconda.com/pkgs/main/win-64::pytables==3.6.1=py38ha5be198_0
- https://repo.anaconda.com/pkgs/main/noarch::python-jsonrpc-server==0.3.4=py_1
- https://repo.anaconda.com/pkgs/main/win-64::python-language-server==0.34.1=py38_0
- conda-forge/win-64::pywavelets==1.3.0=py38h6f4d8f0_0
- conda-forge/win-64::scikit-image==0.19.2=py38h5d928e2_0
- conda-forge/win-64::scikit-learn==1.0.2=py38hb60ee80_0
- conda-forge/win-64::scikit-learn-intelex==2021.5.0=py38haa244fe_1
- conda-forge/win-64::scipy==1.8.0=py38ha1292f7_1
- conda-forge/noarch::seaborn==0.11.2=hd8ed1ab_0
- conda-forge/noarch::seaborn-base==0.11.2=pyhd8ed1ab_0
- https://repo.anaconda.com/pkgs/main/win-64::spyder==4.1.4=py38_0
- conda-forge/win-64::statsmodels==0.13.2=py38h6f4d8f0_0
- conda-forge/noarch::tifffile==2022.3.16=pyhd8ed1ab_0
- pytorch/win-64::torchaudio==0.11.0=py38_cpu
- pytorch/win-64::torchvision==0.12.0=py38_cpu
- defaults/win-64::_anaconda_depends==2021.11=py38_0
failed with initial frozen solve. Retrying with flexible solve.
Conda Info
active environment : base
active env location : G:\anaconda3
shell level : 1
user config file : C:\Users\lenovo\.condarc
populated config files : C:\Users\lenovo\.condarc
conda version : 4.12.0
conda-build version : 3.18.11
python version : 3.8.3.final.0
virtual packages : __cuda=11.6=0
__win=0=0
__archspec=1=x86_64
base environment : G:\anaconda3 (writable)
conda av data dir : G:\anaconda3\etc\conda
conda av metadata url : None
channel URLs : http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r/noarch
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2/win-64
http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2/noarch
package cache : G:\anaconda3\pkgs
C:\Users\lenovo\.conda\pkgs
C:\Users\lenovo\AppData\Local\conda\conda\pkgs
envs directories : G:\anaconda3\envs
C:\Users\lenovo\.conda\envs
C:\Users\lenovo\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.12.0 requests/2.27.1 CPython/3.8.3 Windows/10 Windows/10.0.22000
administrator : False
netrc file : None
offline mode : False
| [
"I had a very similar problem as you: couldn't install fbprophet and failed to solve the environment when I tried to update conda. As suggested in this website and this stackoverflow question, I tried the command conda config --set channel_priority flexible. After that, I could run conda install anaconda and the environment failed to solve at the first time, but then downloaded / downgraded / changed the packages successfully.\n"
] | [
0
] | [] | [] | [
"anaconda",
"python"
] | stackoverflow_0071599829_anaconda_python.txt |
Q:
How do I get a cloned Django project running?
When I do 'pip install -r requirements.txt', I get this message: python setup.py egg_info did not run successfully
I tried 'python3 -m pip install -U setuptools' but that didn't work.
A:
Remove psycopg2 from requirements.txt then use
'psycopg2-binary'
pip install psycopg2-binary
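After switching the requirement to psycopg2-binary, the import name stays the same, so a quick sanity check (an illustrative snippet, not part of the original answer) is:
import psycopg2  # psycopg2-binary installs the same "psycopg2" module, just with bundled libraries

print(psycopg2.__version__)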
| How do I get a cloned Django project running? | When I do 'pip install -r requirements.txt', I get this message: python setup.py egg_info did not run successfully
I tried python 'python3 -m pip install -U setuptools' but that didn't work.
| [
"Remove psycopg2 from requirements.txt then use\n'psycopg2-binary'\npip install psycopg2-binary\n\n"
] | [
0
] | [] | [] | [
"django",
"github",
"pip",
"python",
"web"
] | stackoverflow_0074627546_django_github_pip_python_web.txt |
Q:
How to AutoFilter Excel by RGB cell color with win32com in Python
Let me start by saying that I am not a very skilled programmer, so please keep your answers as simple as possible so I have a chance to understand :-)
I am trying to figure out how to use win32com to open Excel and AutoFilter a column based on cell background colour.
The VBA code for what I want to do is this:
Selection.AutoFilter
ActiveSheet.Range("$A$1:$S$613").AutoFilter Field:=2, Criteria1:=RGB(255, _
153, 0), Operator:=xlFilterCellColor
I can make it work by using a VBA color constant value for yellow
ws.Range("B:B").AutoFilter(Field=1, Criteria1=65535, Operator=8)
But I need to be able to filter by more colours than just the VBA color constant colors.
My code so far is:
from win32com.client import constants as c
excel = win32com.client.gencache.EnsureDispatch("Excel.Application")
excel.Visible = True
wb = excel.Workbooks.Open("path\to\file\filename.xlsm", False, True)
ws = wb.Worksheets("Sheet1")
ws_current.Range('B:B').AutoFilter(Field=1, Criteria1=65535, Operator=c.xlFilterCellColor)
This works to filter by the color yellow, but I need to be able to replace the Criteria1 Field with an RGB value.
Using this code:
ws_current.Range('B:B').AutoFilter(Field=1, Criteria1=RGB(255,255,0), Operator=c.xlFilterCellColor)
results in this error:
Traceback (most recent call last):
File "C:\Users\UserName\AppData\Roaming\Python\Python38\site-packages\IPython\core\interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-64-809552ca6582>", line 1, in <module>
ws_current.Range('B:B').AutoFilter(Field=1, Criteria1=RGB(255,255,0), Operator=c.xlFilterCellColor)
NameError: name 'RGB' is not defined
Thanks in advance for any insights
A:
The RGB macro comes from the Win32 API and is implemented in Python by pywin32.
In Python, if you have installed pywin32 (which you will have if you are using win32com), you can just write:
from win32api import RGB
n = RGB(255,255,0)
print(n)
which yields 65535.
So if the OP simply adds the line from win32api import RGB the original code should work.
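Putting that together with the original code, the filter call would look roughly like this (a sketch that assumes the worksheet variable is ws and that the Excel constants were loaded via gencache as in the question):
from win32api import RGB
from win32com.client import constants as c

ws.Range("B:B").AutoFilter(Field=1,
                           Criteria1=RGB(255, 153, 0),  # any RGB colour, not just the VBA colour constants
                           Operator=c.xlFilterCellColor)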
| How to AutoFilter Excel by RGB cell color with win32com in Python | Let me start by saying that I am not a very skilled programmer, so please keep your answers as simple as possible so I have a chance to understand :-)
I am trying to figure out how to use win32com to open Excel and AutoFilter a column based on cell background colour.
The VBA code for what I want to do is this:
Selection.AutoFilter
ActiveSheet.Range("$A$1:$S$613").AutoFilter Field:=2, Criteria1:=RGB(255, _
153, 0), Operator:=xlFilterCellColor
I can make it work by using a VBA color constant value for yellow
ws.Range("B:B").AutoFilter(Field=1, Criteria1=65535, Operator=8)
But I need to be able to filter by more colours than just the VBA color constant colors.
My code so far is:
from win32com.client import constants as c
excel = win32com.client.gencache.EnsureDispatch("Excel.Application")
excel.Visible = True
wb = excel.Workbooks.Open("path\to\file\filename.xlsm", False, True)
ws = wb.Worksheets("Sheet1")
ws_current.Range('B:B').AutoFilter(Field=1, Criteria1=65535, Operator=c.xlFilterCellColor)
This works to filter by the color yellow, but I need to be able to replace the Criteria1 Field with an RGB value.
Using this code:
ws_current.Range('B:B').AutoFilter(Field=1, Criteria1=RGB(255,255,0), Operator=c.xlFilterCellColor)
results in this error:
Traceback (most recent call last):
File "C:\Users\UserName\AppData\Roaming\Python\Python38\site-packages\IPython\core\interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-64-809552ca6582>", line 1, in <module>
ws_current.Range('B:B').AutoFilter(Field=1, Criteria1=RGB(255,255,0), Operator=c.xlFilterCellColor)
NameError: name 'RGB' is not defined
Thanks in advance for any insights
| [
"The RGB macro comes from the Win32 API and is implemented in Python by pywin32.\nIn Python, if you have installed pywin32 (which you will have if you are using win32com), you can just write:\nfrom win32api import RGB\n\nn = RGB(255,255,0)\nprint(n)\n\nwhich yields 65535.\nSo if the OP simply adds the line from win32api import RGB the original code should work.\n"
] | [
0
] | [] | [] | [
"autofilter",
"python",
"win32com"
] | stackoverflow_0074620403_autofilter_python_win32com.txt |
Q:
Tensorflow dataset with variable number of elements
I need a dataset structured to handle a variable number of input images (a set of images) to regress against an integer target variable.
The code I am using to source the images is like this:
import tensorflow as tf
from tensorflow import convert_to_tensor
def read_image_tf(path: str) -> tf.Tensor:
    image = tf.keras.utils.load_img(path)
    return tf.keras.utils.img_to_array(image)

def read_image_list(x, y):
    return tf.map_fn(read_image_tf, x), y
paths_list = [['image_1', 'image_2', 'image_3'], ['image_6'], ['image_4', 'image_5', 'image_8', 'image_19']]
x = tf.ragged.constant(paths_list)
y = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(lambda x,y: read_image_list(x,y))
This code breaks with TypeError (TypeError: path should be path-like or io.BytesIO, not <class 'tensorflow.python.framework.ops.Tensor'>), as it seems that the map operation is not extracting the paths correctly from the original RaggedTensor. I have also tried to use a generator with similar results. Any help would be much appreciated
A:
Maybe something like this:
import tensorflow as tf
def read_image_tf(path: str) -> tf.Tensor:
    img = tf.io.read_file(path)
    return tf.io.decode_png(img, channels=3)  # more generic: tf.io.decode_image

def read_image_list(x, y):
    return tf.map_fn(read_image_tf, x, dtype=tf.uint8), y
paths_list = [['/content/image1.png', '/content/image1.png', '/content/image1.png'], ['/content/image1.png'], ['/content/image1.png', '/content/image1.png', '/content/image1.png', '/content/image1.png']]
x = tf.ragged.constant(paths_list)
y = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(lambda x, y: read_image_list(x, y))
for x, y in dataset:
    print(x.shape, y)
(3, 100, 100, 3) tf.Tensor(1, shape=(), dtype=int32)
(1, 100, 100, 3) tf.Tensor(2, shape=(), dtype=int32)
(4, 100, 100, 3) tf.Tensor(3, shape=(), dtype=int32)
You can also convert x back to a ragged tensor if you want.
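If these variable-sized image sets then need to be batched for training, one option (not part of the original answer, and assuming TF 2.2+ where padded_shapes is inferred automatically) is padded_batch, which pads each set up to the largest one in the batch:
# pads the leading "number of images" dimension to the largest set in each batch
batched = dataset.padded_batch(2)

for x, y in batched:
    print(x.shape, y.shape)  # e.g. (2, 3, 100, 100, 3) and (2,) for the first batch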
| Tensorflow dataset with variable number of elements | I need a dataset structured to handle a variable number of input images (a set of images) to regress against an integer target variable.
The code I am using to source the images is like this:
import tensorflow as tf
from tensorflow import convert_to_tensor
def read_image_tf(path: str) -> tf.Tensor:
image = tf.keras.utils.load_img(path)
return tf.keras.utils.img_to_array(image)
def read_image_list(x, y):
return tf.map_fn(read_image_tf, x), y
paths_list = [['image_1', 'image_2', 'image_3'], ['image_6'], ['image_4', 'image_5', 'image_8', 'image_19']]
x = tf.ragged.constant(paths_list)
y = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(lambda x,y: read_image_list(x,y))
This code breaks with TypeError (TypeError: path should be path-like or io.BytesIO, not <class 'tensorflow.python.framework.ops.Tensor'>), as it seems that the map operation is not extracting the paths correctly from the original RaggedTensor. I have also tried to use a generator with similar results. Any help would be much appreciated
| [
"Maybe something like this:\nimport tensorflow as tf\n\ndef read_image_tf(path: str) -> tf.Tensor:\n img = tf.io.read_file(path)\n return tf.io.decode_png(img, channels=3) # more generic: tf.io.decode_image\n\ndef read_image_list(x, y):\n return tf.map_fn(read_image_tf, x, dtype=tf.uint8), y\n\npaths_list = [['/content/image1.png', '/content/image1.png', '/content/image1.png'], ['/content/image1.png'], ['/content/image1.png', '/content/image1.png', '/content/image1.png', '/content/image1.png']]\n\nx = tf.ragged.constant(paths_list)\ny = tf.constant([1,2,3])\n\ndataset = tf.data.Dataset.from_tensor_slices((x, y))\ndataset = dataset.map(lambda x, y: read_image_list(x, y))\n\nfor x, y in dataset:\n print(x.shape, y)\n\n(3, 100, 100, 3) tf.Tensor(1, shape=(), dtype=int32)\n(1, 100, 100, 3) tf.Tensor(2, shape=(), dtype=int32)\n(4, 100, 100, 3) tf.Tensor(3, shape=(), dtype=int32)\n\nYou can also convert x back to a ragged tensor if you want.\n"
] | [
1
] | [] | [] | [
"dataset",
"image",
"python",
"ragged_tensors",
"tensorflow"
] | stackoverflow_0074627040_dataset_image_python_ragged_tensors_tensorflow.txt |
Q:
Django issue saving data to database
The username is saved, but other information such as first_name and email is not.
from django.contrib.auth.models import User
from django.contrib.auth.password_validation import validate_password
from rest_framework import serializers

class RegisterSerializer(serializers.ModelSerializer):
    email = serializers.CharField(required=True)
    first_name = serializers.CharField(max_length=50, required=True)
    last_name = serializers.CharField(max_length=50, required=True)
    password = serializers.CharField(
        write_only=True, required=True, validators=[validate_password])
    password2 = serializers.CharField(write_only=True, required=True)
    is_admin = serializers.BooleanField(default=False)

    class Meta:
        model = User
        fields = ('username', 'first_name', 'last_name', 'email',
                  'password', 'password2', 'is_admin')

    def validate(self, attrs):
        if attrs['password'] != attrs['password2']:
            raise serializers.ValidationError(
                {"password": "Password fields didn't match."})
        return attrs

    def create(self, validated_data):
        user = User.objects.create(
            username=validated_data['username']
        )
        user.set_password(validated_data['password'])
        user.save()
        return user
I have searched online for hours, but have not managed to make much progress. If someone could elaborate on my issue and explain what I have done wrong, that would be greatly appreciated.
A:
Add all the fields that you need to create that exist in model inside create method
user = User.objects.create(
    username=validated_data['username'],
    first_name=validated_data['first_name'],
    last_name=validated_data['last_name'],
    # Add other fields here
)
A:
You should also pass the other validated data to the creation call:
def create(self, validated_data):
    user = User.objects.create(
        username=validated_data['username'],
        first_name=validated_data['first_name'],  # <-- add all the other necessary fields here in the same way
    )

    user.set_password(validated_data['password'])
    user.save()
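Building on both answers, a fuller create() might look like the sketch below (based only on the fields shown in the question; note that password2 and is_admin are not fields on Django's built-in User model, so they must be removed before constructing the user):
def create(self, validated_data):
    validated_data.pop('password2')        # only needed for the match check in validate()
    validated_data.pop('is_admin', None)   # not a field on django.contrib.auth.models.User
    password = validated_data.pop('password')
    user = User(**validated_data)          # username, first_name, last_name, email
    user.set_password(password)
    user.save()
    return user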
| Django issue saving data to database | username is saving but information such as first_name, email and etc are not.
`from django.contrib.auth.models import User
from django.contrib.auth.password_validation import validate_password
from rest_framework import serializers
class RegisterSerializer(serializers.ModelSerializer):
email = serializers.CharField(required=True)
first_name = serializers.CharField(max_length=50, required=True)
last_name = serializers.CharField(max_length=50, required=True)
password = serializers.CharField(
write_only=True, required=True, validators=[validate_password])
password2 = serializers.CharField(write_only=True, required=True)
is_admin = serializers.BooleanField(default=False)
class Meta:
model = User
fields = ('username', 'first_name', 'last_name', 'email',
'password', 'password2', 'is_admin')
def validate(self, attrs):
if attrs['password'] != attrs['password2']:
raise serializers.ValidationError(
{"password": "Password fields didn't match."})
return attrs
def create(self, validated_data):
user = User.objects.create(
username=validated_data['username']
)
user.set_password(validated_data['password'])
user.save()
return user`
i have searched online for hours, but have not managed to make much progress. if someone could elaborate on my issue and explain what I have done wrong that would be greatly appreciated
| [
"Add all the fields that you need to create that exist in model inside create method\nuser = User.objects.create(\n username=validated_data['username'], \n first_name =validated_data['first_name'],\n last_name =validated_data['last_name'], \n # Add other fields here\n)\n\n",
"You should also send other validated data's to creation line:\ndef create(self, validated_data):\n user = User.objects.create(\n username=validated_data['username'],\n first_name=validated_data['first_name'], # <-- add here to all necessary parameters like this\n )\n\n user.set_password(validated_data['password'])\n user.save()\n\n"
] | [
1,
0
] | [] | [] | [
"django",
"python",
"reactjs"
] | stackoverflow_0074627538_django_python_reactjs.txt |